CN115937396A - Image rendering method and device, terminal equipment and computer readable storage medium

Info

Publication number
CN115937396A
CN115937396A
Authority
CN
China
Prior art keywords: scene, rendered, virtual object, field, preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211678778.9A
Other languages
Chinese (zh)
Inventor
张选丞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202211678778.9A
Publication of CN115937396A
Legal status: Pending

Landscapes

  • Image Generation (AREA)

Abstract

The invention provides an image rendering method, an image rendering device, a terminal device and a computer-readable storage medium. The image rendering method comprises the following steps: acquiring a scene to be rendered, wherein the scene to be rendered is a three-dimensional scene open on a single side and at least one virtual object to be rendered is configured in it; generating a three-dimensional texture map based on distance information and color information corresponding to the object surface of each virtual object in the scene to be rendered; and rendering according to the three-dimensional texture map to obtain a scene image, wherein the scene image presents the virtual objects in the scene to be rendered with a three-dimensional effect on the single open side. The invention can provide a higher-quality picture of the interior of a virtual scene.

Description

Image rendering method and device, terminal equipment and computer readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image rendering method, an image rendering device, a terminal device, and a computer-readable storage medium.
Background
In the field of games, some virtual scenes that cannot be entered still need to be rendered in order to enhance the atmosphere. For example, in a city scene the player cannot enter every room, in which case there is no need to produce separate art assets for every room or to actually render them. Related technologies propose rendering a 3D (3-dimensional) room picture on a plane, which preserves the picture effect while saving performance consumption and labor expenditure.
At present, the core idea of the mainstream technology for rendering a 3D room on a plane is to abstract the room into the mathematical description of a cuboid: given a point on the plane and the direction from which that point is observed, one can calculate which point on which wall of the room the line of sight will hit, and finally map that point onto a 2D (2-dimension, two-dimensional) texture for sampling.
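For concreteness, the following is a minimal sketch of this prior-art idea (it is not taken from the patent, and all names in it are illustrative): the room is abstracted as an axis-aligned box, and the exit intersection of the view ray with that box determines which wall point the line of sight hits.

```python
import numpy as np

def interior_hit(p, d, room_min, room_max):
    """Point where a view ray starting at p (on or inside the room box)
    with direction d hits a wall, ceiling or floor of the axis-aligned
    box. Standard slab test, keeping only the exit distance."""
    d = d / np.linalg.norm(d)
    eps = 1e-8
    inv = 1.0 / np.where(np.abs(d) < eps, eps, d)  # avoid divide-by-zero
    t0 = (room_min - p) * inv
    t1 = (room_max - p) * inv
    t_exit = np.min(np.maximum(t0, t1))  # nearest exit over the 3 slabs
    return p + t_exit * d                # this point indexes the 2D texture

# Looking into a unit room from a point on its open front face (z = 0).
hit = interior_hit(np.array([0.5, 0.5, 0.0]),
                   np.array([0.2, -0.1, 1.0]),
                   np.zeros(3), np.ones(3))
print(hit)  # lands on the back wall (z = 1)
```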
Disclosure of Invention
In view of the above, the present invention provides an image rendering method, an image rendering apparatus, a terminal device and a computer readable storage medium, which can provide a higher quality internal picture of a virtual scene.
In a first aspect, an embodiment of the present invention provides an image rendering method, including: acquiring a scene to be rendered; the scene to be rendered is a three-dimensional scene with an open single side, and at least one virtual object to be rendered is configured in the scene to be rendered; generating a three-dimensional texture map based on distance information and color information corresponding to the object surface of each virtual object in the scene to be rendered; rendering according to the three-dimensional texture map to obtain a scene image; wherein the scene image is used for embodying a virtual object in the scene to be rendered in a three-dimensional effect on the single side surface.
In a second aspect, an embodiment of the present invention further provides an image rendering apparatus, including: the scene acquisition module is used for acquiring a scene to be rendered; the scene to be rendered is a three-dimensional scene with an open single side, and at least one virtual object to be rendered is configured in the scene to be rendered; the texture generation module is used for generating a three-dimensional texture map based on the distance information and the color information corresponding to the object surface of each virtual object in the scene to be rendered; the rendering module is used for rendering according to the three-dimensional texture map to obtain a scene image; wherein the scene image is used for embodying a virtual object in the scene to be rendered in a three-dimensional effect on the single side surface.
In a third aspect, an embodiment of the present invention further provides a terminal device, which includes a processor and a memory, where the memory stores computer-executable instructions that can be executed by the processor, and the processor executes the computer-executable instructions to implement the image rendering method provided in the first aspect.
In a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium storing computer-executable instructions, which, when invoked and executed by a processor, cause the processor to implement the image rendering method provided in the first aspect.
The image rendering method, the image rendering device, the terminal device and the computer-readable storage medium provided by the embodiments of the present invention first obtain a scene to be rendered, where the scene to be rendered is a three-dimensional scene with an open single side and at least one virtual object to be rendered is configured in the scene to be rendered; then generate a three-dimensional texture map based on distance information and color information corresponding to the object surface of each virtual object in the scene to be rendered; and finally perform rendering processing according to the three-dimensional texture map to obtain a scene image, where the scene image is used for representing the virtual objects in the scene to be rendered with a three-dimensional effect on the single side. Because the method generates the three-dimensional texture map from the distance information and the color information corresponding to the object surfaces of the virtual objects in the scene to be rendered, and renders the scene image based on that three-dimensional texture map, it can effectively solve the prior-art problem that virtual objects in a virtual scene lose depth and perspective.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic effect diagram of a planar rendering 3D room effect according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an image rendering method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a virtual scene to be rendered according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a method for generating a 3D texture map according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a hybrid voxel field provided by an embodiment of the present invention;
fig. 6 is an effect diagram of a virtual scene according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a point cloud array according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating an output texture node according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating a texture slice assembly according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a three-dimensional texture map provided in accordance with an embodiment of the present invention;
fig. 11 is a schematic diagram of a scene image according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of an image rendering apparatus according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
At present, mainstream technologies for rendering 3D room effects on a plane are based on the 2008 paper "Interior Mapping: A new technique for rendering realistic buildings". These technologies are simple to implement and low in performance consumption: only one 2D texture at a specific viewing angle and a few very cheap calculations are needed to produce a convincing perspective room. However, referring to the effect diagram of a planar rendered 3D room shown in fig. 1, the technique can only abstract a basic rectangular room; that is, only the walls, ceiling and floor have perspective, while virtual objects placed in the room, such as beds, tables, chairs and characters, lose depth and perspective and appear as if pasted onto the walls and floor. Based on this, the embodiments of the present invention provide an image rendering method, an image rendering device, a terminal device and a computer-readable storage medium, which can provide a higher-quality picture of the interior of a virtual scene.
To facilitate understanding of the present embodiment, first, an image rendering method disclosed in the embodiment of the present invention is described in detail, referring to a flowchart of the image rendering method shown in fig. 2, where the method mainly includes the following steps S202 to S206:
step S202, a scene to be rendered is obtained. The scene to be rendered is configured into a three-dimensional scene with an open single side (also called a box-packed scene), the rest five sides of the three-dimensional scene are in a closed state, for example, virtual scenes such as rooms which cannot be entered in a city scene, and at least one virtual object to be rendered, for example, virtual objects such as tables, chairs, beds, characters, ornaments and the like, are configured in the scene to be rendered. In one embodiment, an unrendered virtual scene may be prepared in advance, and an upload channel of the virtual scene may be provided for a user to upload the virtual scene through the upload channel.
Step S204, generating a three-dimensional texture map based on the distance information and the color information corresponding to the object surface of each virtual object in the scene to be rendered. The distance information is the distance between a point in the scene to be rendered and the object surface of a virtual object. In one embodiment, SDF (Signed Distance Field) values may be generated from the distance information, and color values of points on the object surface of each virtual object may be generated from the color information found through the texture mapping information (UV). These are output as a texture slice combination, which expresses the scene to be rendered as a combination of 256 slices of 256 × 256 pixels each. Finally, when the texture slice combination is imported into a specified rendering tool (such as the Unity engine), it can be converted into a three-dimensional texture map by specifying the number of slice rows and columns.
Step S206, rendering is performed according to the three-dimensional texture map to obtain a scene image. The scene image is used for representing the virtual objects in the scene to be rendered with a three-dimensional effect on the single side. In one implementation, based on the SDF values in the scene to be rendered, a ray can be stepped toward the rendered object surface until a specified number of steps is reached or the distance between the sample point and the object surface of a virtual object is smaller than a preset distance threshold; that point is determined as the target pixel point, the color value in the three-dimensional texture map is sampled at the target pixel point, and the sampled color value is the color value corresponding to the target pixel point, which is rendered to obtain the scene image. The perspective effect of the virtual objects in the resulting scene image is better.
According to the image rendering method provided by the embodiment of the invention, the three-dimensional texture mapping is generated according to the distance information and the color information corresponding to the object surface of each virtual object in the scene to be rendered, and the scene image is rendered based on the three-dimensional texture mapping, so that the problem that the virtual objects in the virtual scene lose depth and perspective in the prior art can be effectively solved.
The embodiment of the present invention addresses the case where other objects exist inside the virtual scene, so that the virtual scene (such as a room) can still express good depth and perspective. The core of the embodiment of the present invention mainly comprises two parts: the first part is texture production, whose main job is to bake the color of, and distance to, specific object surfaces into a 3D texture map, and which can be completed with Houdini; the second part is rendering, whose main job is to render with a ray marching (RayMarching) algorithm using the 3D texture map.
To facilitate understanding of the foregoing embodiments, an implementation of step S204 is provided. When producing the 3D texture map in Houdini, a virtual scene to be rendered may be prepared in advance, for example as in the schematic diagram of a virtual scene shown in fig. 3. On this basis, fig. 4 provides a flowchart of producing a 3D texture map, and the following steps 1 to 3 are performed for the virtual scene to be rendered:
step 1, determining a corresponding directed distance field value in a scene to be rendered based on distance information between a point in the scene to be rendered and an object surface of a virtual object. In a specific implementation, see the following steps 1.1 to 1.3:
step 1.1, converting a scene to be rendered into a voxel format. In one embodiment, please continue to refer to fig. 4, the scene to be rendered is denoted as InputScene, and the scene to be rendered is input to a format conversion (closed 2) node, and the scene to be rendered is converted into a Volume format. With continued reference to fig. 4, since the Volume format is dense and the VDB (voxel data structure) format is sparse, in order to improve the solution efficiency, the scene to be rendered may be converted from the Volume format to the VDB format by another format conversion (ConvertToVolume) node, and it should be noted that both the Volume format and the VDB format may be used to describe the Volume in the scene with rendering.
Step 1.2, mixing a preset first initial voxel field with the scene to be rendered in voxel format to obtain a mixed voxel field. Referring to fig. 4, first a voxel field generation (volume 3) node is used to generate a first initial voxel field (which may also be referred to as a first initial volume field); then the first initial voxel field and the VDB-format scene to be rendered are input to a blend (volume) node and blended by it, so as to obtain a mixed voxel field, as shown in the schematic diagram of fig. 5.
Step 1.3, converting the mixed voxel field into a directed distance field (SDF) through a voxel directed distance field generation (volumemesdf) node, so as to determine the directed distance field values corresponding to points in the scene to be rendered. The SDF values generated in the embodiment of the invention are scalars.
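Conceptually, steps 1.1 to 1.3 turn the scene into signed distances on a grid. The following is a minimal stand-in for what the Houdini nodes compute, assuming the scene has already been voxelized into a boolean occupancy grid; it is a sketch using SciPy's Euclidean distance transform, not the patent's actual implementation.

```python
import numpy as np
from scipy import ndimage

def occupancy_to_sdf(occ, voxel_size=1.0):
    """Signed distance field from a boolean occupancy grid.
    occ: (n, n, n) bool array, True inside any virtual object.
    Values are positive outside the objects and negative inside."""
    dist_out = ndimage.distance_transform_edt(~occ)  # empty -> nearest solid
    dist_in = ndimage.distance_transform_edt(occ)    # solid -> nearest empty
    return (dist_out - dist_in) * voxel_size

# A 64^3 scene containing one box-shaped "object".
occ = np.zeros((64, 64, 64), dtype=bool)
occ[24:40, 24:40, 24:40] = True
sdf = occupancy_to_sdf(occ)
print(sdf[32, 32, 32] < 0, sdf[0, 0, 0] > 0)  # True True
```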
Step 2, diffusing the color information corresponding to the object surface of each virtual object in the scene to be rendered into a preset point cloud array, so as to determine the corresponding color values in the scene to be rendered. The size of the point cloud array is consistent with that of the scene to be rendered. In a specific implementation, see the following steps 2.1 to 2.4:
and 2.1, configuring a specified number of points on the surface of the virtual object so as to enable the point density of the surface of the object to be greater than a preset density threshold value. Since the points on the surface of the object at the beginning are very sparse, in order to restore the chartlet effect as much as possible, a large number of points need to be placed on the surface of the object, please continue to refer to fig. 4, specifically, a specified number of points can be placed on the surface of the object of the virtual object by placing (scatter pointonsurface) nodes, so that the point density is large enough to accurately express the effect after the chartlet is converted into the scene.
Step 2.2, adding the texture mapping information of the scene to be rendered to the points on the object surface of each virtual object in the scene to be rendered. In practice, a bare point carries no color information; the color is stored in the texture of the map. Referring to fig. 4, the texture mapping information (i.e., UV) is therefore first copied onto the points through a copy (CopyUVToPoint) node.
Step 2.3, determining the color information corresponding to the points on the object surface based on the texture mapping information. Referring to fig. 4, the color information can be looked up through a color lookup (GetColorFromTexture) node using the texture mapping information of the points, producing the effect of fig. 6, which shows dense points placed on the object surface to restore the original effect of the texture map as faithfully as possible. A sketch of steps 2.1 to 2.3 follows.
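The following is a minimal sketch of steps 2.1 to 2.3, assuming the object surfaces form a UV-mapped triangle mesh; it stands in for the scatter, UV-copy and color-lookup nodes, and every name in it is illustrative.

```python
import numpy as np

def scatter_colored_points(tris, uvs, texture, n_points, seed=0):
    """Scatter n_points uniformly over a triangle mesh and color each
    point by sampling the mesh's texture at its interpolated UV.

    tris:    (T, 3, 3) triangle vertex positions
    uvs:     (T, 3, 2) per-corner UV coordinates in [0, 1]
    texture: (H, W, 3) RGB image
    """
    rng = np.random.default_rng(seed)
    # Choose triangles proportionally to area so density is uniform.
    e1, e2 = tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0]
    area = 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1)
    idx = rng.choice(len(tris), size=n_points, p=area / area.sum())
    # Uniform barycentric coordinates via the square-root trick.
    r1, r2 = rng.random(n_points), rng.random(n_points)
    u = 1.0 - np.sqrt(r1)
    v = np.sqrt(r1) * (1.0 - r2)
    bary = np.stack([u, v, 1.0 - u - v], axis=1)     # (n, 3)
    pts = np.einsum('nk,nkd->nd', bary, tris[idx])   # surface positions
    uv = np.einsum('nk,nkd->nd', bary, uvs[idx])     # interpolated UVs
    # Nearest-texel lookup (step 2.3's color search).
    h, w = texture.shape[:2]
    px = np.clip((uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    py = np.clip((uv[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    return pts, texture[py, px]
```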
Step 2.4, diffusing the color information into a preset point cloud array to determine the corresponding color values in the scene to be rendered. The color values may also be referred to as RGB (three primary colors) values. Continuing with fig. 4, a point cloud array occupying the same size and space can be produced through a point cloud creation (EmptyPointCloud) node, and the color information is diffused into the point cloud array through an AttribTransfer1 node, so that the color at each point of the point cloud array is the color of the nearest point on the scene surface; the effect is shown in fig. 7, a schematic diagram of the point cloud array. It should be noted that the Volume format and the point cloud format are not the same; what they have in common is that spatial positions can be used to index the information stored at their locations.
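A minimal stand-in for the diffusion of step 2.4, assuming the points and colors produced by the previous sketch: each cell of a regular point-cloud array takes the color of its nearest scattered surface point, which is what the attribute-transfer node accomplishes.

```python
import numpy as np
from scipy.spatial import cKDTree

def diffuse_colors_to_grid(pts, colors, n=64):
    """Fill an n^3 point-cloud array so each cell holds the color of
    the nearest scattered surface point (the attribute-transfer step).
    pts: (P, 3) surface points in [0, 1)^3; colors: (P, 3) RGB values.
    The patent's example uses n = 256; 64 keeps this sketch fast."""
    tree = cKDTree(pts)
    axis = (np.arange(n) + 0.5) / n  # cell-centered sample positions
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    _, nearest = tree.query(grid.reshape(-1, 3))
    return colors[nearest].reshape(n, n, n, 3)
```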
Step 3, storing the directed distance field values to a first channel of a preset blank canvas and storing the color values to a second channel of the blank canvas, so as to obtain the three-dimensional texture map. In a specific implementation, see steps 3.1 to 3.3 below:
and 3.1, mixing the preset second initial voxel field and the third initial voxel field to obtain a blank canvas. Wherein the second initial voxel field is a scalar field and the third initial voxel field is a vector field. In one embodiment, the SDF value of the virtual scene to be rendered is a scalar value in a Volume format, the RGB value of the scene image is three channel values stored in the point cloud array, carriers of the SDF value and the RGB value are first created, and assuming that the size of the space where the virtual scene to be rendered is 256 × 256, two empty initial voxel fields (i.e., volumes) with a size of 4096 × 1 are set, where one is a scalar field named Alpha and the other is a vector field named Cd, and the two initial voxel fields are mixed by a mixing (EmptyVolume) node to express the space as a set of slice arrays of 16 rows and 16 columns and 256 × 256, which is equivalent to obtaining a blank canvas.
Step 3.2, storing the directed distance field values to the first channel of the blank canvas and storing the color values to the second channel of the blank canvas to obtain a texture slice combination, where the texture slice combination is used for expressing the scene to be rendered. The first channel is the Alpha channel, and the second channel is the Cd channel. In one embodiment, an output texture node is assembled from a plurality of Houdini nodes; referring to the schematic diagram of an output texture node shown in fig. 8, the output texture node is packaged to convert information in 3D space into a 2D picture for output. It has three inputs: the first is the Volume acting as the "canvas", the second is the SDF Volume, and the third is the 3D point cloud array of colors. Since the latter two are made in the same process, the Volume size and the point cloud size are consistent. On this basis, continuing with fig. 4, inside the packaged OutputTexture node the color values in the point cloud array are read according to spatial position and stored into the Cd channel of the "canvas", and the scalar SDF values in the SDF Volume are read and stored into the Alpha channel of the "canvas". Finally, the "canvas" Volume holding both Cd and Alpha is exported as an image through a ROP File Output node; the image is one slice of the texture slice combination, and all slices together constitute the texture slice combination, as in the schematic diagram shown in fig. 9; that is, the scene to be rendered is expressed as a combination of 256 slices of 256 × 256 pixels. In order to tile the slices exactly and maintain a power relationship between the size of the finally generated 2D map and the size of the 3D volume, in the embodiment of the present invention a volume and point cloud size of 256 × 256 × 256 converts into a 4096 × 4096 picture; other usable sizes must also satisfy this relationship, for example a point cloud size of 64 × 64 × 64 corresponds to a map size of 512 × 512.
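The packing performed by the output texture node can be sketched as follows, assuming the SDF and the diffused colors have been sampled onto a common 256³ grid; the layout mirrors the "canvas" of 16 rows × 16 columns of 256 × 256 slices, with colors in RGB and the SDF in the alpha channel. The function name and layout convention are illustrative.

```python
import numpy as np

def pack_slices(sdf, rgb, rows=16, cols=16):
    """Pack a 256^3 volume into a 4096 x 4096 RGBA atlas: the Cd
    (color) channel goes to RGB and the scalar SDF goes to alpha.

    sdf: (256, 256, 256) scalars; rgb: (256, 256, 256, 3) colors.
    Slice z lands in grid cell (z // cols, z % cols)."""
    n = sdf.shape[2]
    assert rows * cols == n, "slice grid must cover the volume depth"
    atlas = np.zeros((rows * sdf.shape[0], cols * sdf.shape[1], 4),
                     dtype=np.float32)
    for z in range(n):
        r, c = divmod(z, cols)
        ys = slice(r * sdf.shape[0], (r + 1) * sdf.shape[0])
        xs = slice(c * sdf.shape[1], (c + 1) * sdf.shape[1])
        atlas[ys, xs, :3] = rgb[:, :, z]  # Cd channel -> RGB
        atlas[ys, xs, 3] = sdf[:, :, z]   # SDF -> Alpha
    return atlas
```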
Step 3.3, obtaining the three-dimensional texture map based on the texture slice combination. In a specific implementation, the texture slice combination can be converted into the three-dimensional texture map based on a preset number of slice rows and a preset number of slice columns. In one embodiment, the texture slice combination needs to be imported into a specified rendering tool for rendering; the specified rendering tool may be the Unity engine, which completes the rendering part of the work. In a specific implementation, the texture slice combination may be imported through the Unity engine's import tool, designated as a 3D texture at import time, with the numbers of slice rows and slice columns set, so that the 2D texture can be treated as a 3D texture again. The numbers of slice rows and columns are determined by the texture slice combination; for the texture slice combination above, which is a set of slice arrays of 16 rows and 16 columns, both numbers are 16. For illustration, the embodiment of the present invention further provides a schematic diagram of a three-dimensional texture map, shown in fig. 10; in practical applications, after the texture slice combination is correctly imported into the Unity engine, the scene can be inspected in the Unity engine's resource preview window.
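Conceptually, the slice row and column counts tell the importer how to rebuild the volume from the atlas. The sketch below shows that inverse mapping; it is illustrative, not the Unity importer's actual code.

```python
import numpy as np

def unpack_atlas(atlas, rows=16, cols=16):
    """Rebuild the (n, n, rows*cols, 4) volume from the 2D slice atlas
    given the slice row/column counts - conceptually what the engine's
    importer does when the 2D texture is re-read as a 3D texture."""
    n = atlas.shape[0] // rows  # assumes square slices
    vol = np.zeros((n, n, rows * cols, 4), dtype=atlas.dtype)
    for z in range(rows * cols):
        r, c = divmod(z, cols)
        vol[:, :, z] = atlas[r * n:(r + 1) * n, c * n:(c + 1) * n]
    return vol
```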
Further, after the texture slice combination is correctly imported into the Unity engine, the Unity engine may perform rendering processing according to the three-dimensional texture map to obtain a scene image. In one embodiment, see the following steps a to c:
step a, based on the directed distance field value in the three-dimensional texture map, progressive addition is carried out on pixel points in the three-dimensional texture map until the appointed progressive addition times are reached or the distance between the pixel points and the object surface of the virtual object is smaller than a preset distance value, and target pixel points are obtained. In one embodiment, a shader may be written, and based on a planar surface, a raymanching algorithm may be used to check the Alpha value in the three-dimensional texture map, so as to obtain how far away the shader itself is from the surface of the object, and then the shader may step forward the distance until a specified number of steps is reached or the distance from the shader is negligibly small, so as to obtain the target position (i.e., the target pixel point).
Step b, sampling the color value in the three-dimensional texture map according to the target pixel point to determine the target color value corresponding to the target pixel point. In one embodiment, the target position may be used to sample the three-dimensional texture map, and the resulting color value is the final result.
Step c, rendering based on the target color value corresponding to each target pixel point to obtain a scene image. Exemplarily, referring to the schematic diagram of a scene image shown in fig. 11, fig. 11 illustrates the planar effect under different viewing angles (viewing angle a and viewing angle b); since the walls of the virtual scene used in the above example are not colored, they appear pure black in the scene image.
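The following is a minimal CPU-side sketch of steps a to c, marching through the volume rebuilt above rather than an actual GPU 3D texture; a shader would perform the same loop with trilinear texture sampling. It assumes the SDF is stored in the same normalized units as the ray position, and all names are illustrative.

```python
import numpy as np

def ray_march(volume, origin, direction, max_steps=64, eps=1e-3):
    """March a ray through the rebuilt (n, n, n, 4) volume: step by the
    stored SDF (alpha channel) until the surface is within eps or the
    step budget runs out, then return the stored color (Cd channel).
    Nearest-voxel sampling stands in for trilinear 3D texture lookup."""
    n = volume.shape[0]
    p = np.array(origin, dtype=float)
    d = np.array(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(max_steps):
        i = np.clip((p * n).astype(int), 0, n - 1)  # position in [0,1)^3
        dist = volume[i[0], i[1], i[2], 3]          # Alpha = SDF value
        if dist < eps:                               # step a's stop test
            return volume[i[0], i[1], i[2], :3]     # step b: sample color
        p += dist * d                                # march toward surface
    return np.zeros(3)                               # miss: background
```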
In summary, the embodiments of the present invention aim to make a plane present depth and perspective similar to a 3D scene, so as to improve the rendering quality of inaccessible rooms and distant windows. Although it consumes more memory and computation than Interior Mapping, current hardware is sufficient for the embodiments of the present invention to present high-quality indoor pictures at a low polygon count without affecting the normal operation of the game.
As for the image rendering method provided in the foregoing embodiment, an embodiment of the present invention provides an image rendering apparatus, referring to a schematic structural diagram of an image rendering apparatus shown in fig. 12, the apparatus mainly includes the following components:
a scene obtaining module 1202, configured to obtain a scene to be rendered; the scene to be rendered is a three-dimensional scene with an open single side, and at least one virtual object to be rendered is configured in the scene to be rendered;
a texture generating module 1204, configured to generate a three-dimensional texture map based on distance information and color information corresponding to an object surface of each virtual object in a scene to be rendered;
the rendering module 1206 is configured to perform rendering processing according to the three-dimensional texture map to obtain a scene image; and the scene image is used for representing the virtual object in the scene to be rendered in a three-dimensional effect on the single side surface.
The image rendering device provided by the embodiment of the invention generates the three-dimensional texture mapping according to the distance information and the color information corresponding to the object surface of each virtual object in the scene to be rendered, and renders the scene image based on the three-dimensional texture mapping, so that the problem that the virtual objects in the virtual scene lose depth and perspective in the prior art can be effectively solved.
In one implementation, the texture generation module 1204 is further configured to: determining a directional distance field value corresponding to a point in the scene to be rendered based on distance information between the point in the scene to be rendered and the object surface of the virtual object; diffusing color information corresponding to the object surface of each virtual object in the scene to be rendered into a preset point cloud array to determine a corresponding color value in the scene to be rendered; the size of the point cloud array is consistent with that of a scene to be rendered; storing the directed distance field value to a first channel of a preset blank canvas and storing the color value to a second channel of the blank canvas to obtain a three-dimensional texture map; wherein the first channel is an Alpha channel and the second channel is a Cd channel.
In one implementation, the texture generation module 1204 is further configured to: converting the scene to be rendered into a voxel format; mixing a preset first initial voxel field with the to-be-rendered scene in the voxel format to obtain a mixed voxel field, wherein the value of the first initial voxel field is a first preset threshold, and a value of the mixed voxel field greater than the first preset threshold indicates that a point in the scene to be rendered is located inside the virtual object, while otherwise the point in the scene to be rendered is located outside the virtual object; and converting the mixed voxel field into a directed distance field to determine the directed distance field values corresponding to points within the scene to be rendered.
In one implementation, the texture generation module 1204 is further configured to: configuring a specified number of points on the surface of the virtual object so as to enable the density of the points on the surface of the object to be larger than a preset density threshold value; adding texture mapping information of a scene to be rendered to a point on the object surface of each virtual object in the scene to be rendered; determining color information corresponding to the point of the surface of the object based on the texture mapping information; and diffusing the color information into a preset point cloud array to determine the corresponding color value in the scene to be rendered.
In one implementation, the texture generation module 1204 is further configured to: mixing a preset second initial voxel field and a preset third initial voxel field to obtain a blank canvas; wherein the second initial voxel field is a scalar field and the third initial voxel field is a vector field; storing the directed distance field value to a first channel of a blank canvas and storing the color value to a second channel of the blank canvas to obtain a texture slice combination; wherein the texture slice combination is used for expressing a scene to be rendered.
In one implementation, the texture generation module 1204 is further configured to: converting the texture slice combination into the three-dimensional texture map based on the preset number of slice rows and slice columns.
In one embodiment, the rendering module 1206 is further operable to: step forward from pixel points in the three-dimensional texture map based on the directed distance field values in the three-dimensional texture map, until a specified number of steps is reached or the distance between the pixel point and the object surface of the virtual object is smaller than a preset distance value, so as to obtain target pixel points; sample the color value in the three-dimensional texture map according to the target pixel point to determine the target color value corresponding to the target pixel point; and render based on the target color value corresponding to each target pixel point to obtain a scene image.
The device provided by the embodiment of the present invention has the same implementation principle and the same technical effects as those of the foregoing method embodiments, and for the sake of brief description, reference may be made to corresponding contents in the foregoing method embodiments for the parts of the device embodiments that are not mentioned.
The embodiment of the invention provides a terminal device, which comprises a processor and a storage device, wherein the processor is configured to process data; the storage device has stored thereon a computer program that, when executed by the processor, performs:
an image rendering method, comprising: acquiring a scene to be rendered; the scene to be rendered is a three-dimensional scene with an open single side, and at least one virtual object to be rendered is configured in the scene to be rendered; generating a three-dimensional texture map based on distance information and color information corresponding to the object surface of each virtual object in a scene to be rendered; rendering according to the three-dimensional texture map to obtain a scene image; and the scene image is used for representing the virtual object in the scene to be rendered in a three-dimensional effect on the single side surface.
In one embodiment, generating a three-dimensional texture map based on distance information and color information corresponding to an object surface of each virtual object in a scene to be rendered comprises: determining a corresponding directed distance field value in the scene to be rendered based on distance information between a point in the scene to be rendered and the object surface of the virtual object; diffusing color information corresponding to the object surface of each virtual object in the scene to be rendered into a preset point cloud array to determine a corresponding color value in the scene to be rendered; the size of the point cloud array is consistent with that of a scene to be rendered; storing the directed distance field value to a first channel of a preset blank canvas, and storing the color value to a second channel of the blank canvas to obtain a three-dimensional texture map; wherein the first channel is an Alpha channel and the second channel is a Cd channel.
In one embodiment, the determining, based on distance information between a point within the scene to be rendered and an object surface of the virtual object, a corresponding directional distance field value within the scene to be rendered includes: converting the scene to be rendered into a voxel format; mixing a preset first initial voxel field with the scene to be rendered in the voxel format to obtain a mixed voxel field, wherein the value of the first initial voxel field is a first preset threshold, and a value of the mixed voxel field greater than the first preset threshold indicates that a point in the scene to be rendered is located inside the virtual object, while otherwise the point in the scene to be rendered is located outside the virtual object; and converting the mixed voxel field into a directed distance field to determine the directed distance field values corresponding to points within the scene to be rendered.
In one embodiment, diffusing color information corresponding to an object surface of each virtual object in a scene to be rendered into a preset point cloud array to determine a corresponding color value in the scene to be rendered, includes: configuring a specified number of points on the surface of the virtual object so as to enable the density of the points on the surface of the object to be larger than a preset density threshold value; adding texture mapping information of a scene to be rendered to points on the object surface of each virtual object in the scene to be rendered; determining color information corresponding to the point of the object surface based on the texture mapping information; and diffusing the color information into a preset point cloud array to determine the corresponding color value in the scene to be rendered.
In one embodiment, storing a directed distance field value to a first channel of a preset blank canvas and a color value to a second channel of the blank canvas to obtain a three-dimensional texture map comprises: mixing a preset second initial voxel field and a preset third initial voxel field to obtain a blank canvas; wherein the second initial voxel field is a scalar field and the third initial voxel field is a vector field; storing the directed distance field value to a first channel of a blank canvas and storing the color value to a second channel of the blank canvas to obtain a texture slice combination; wherein the texture slice combination is used for expressing a scene to be rendered.
In one embodiment, obtaining a three-dimensional texture map based on the texture slice combination comprises: converting the texture slice combination into the three-dimensional texture map based on the preset number of slice rows and slice columns.
In one embodiment, rendering according to a three-dimensional texture map to obtain a scene image includes: stepping forward from pixel points in the three-dimensional texture map based on the directed distance field values in the three-dimensional texture map, until a specified number of steps is reached or the distance between the pixel point and the object surface of the virtual object is smaller than a preset distance value, so as to obtain target pixel points; sampling a color value in the three-dimensional texture map according to the target pixel point to determine a target color value corresponding to the target pixel point; and rendering based on the target color value corresponding to each target pixel point to obtain a scene image.
The terminal device provided by the embodiment of the invention generates the three-dimensional texture mapping according to the distance information and the color information corresponding to the object surface of each virtual object in the scene to be rendered, and renders the scene image based on the three-dimensional texture mapping, so that the problem that the virtual objects in the virtual scene lose depth and perspective in the prior art can be effectively solved.
Fig. 13 is a schematic structural diagram of a terminal device according to an embodiment of the present invention, where the terminal device 100 includes: the system comprises a processor 10, a memory 11, a bus 12 and a communication interface 13, wherein the processor 10, the communication interface 13 and the memory 11 are connected through the bus 12; the processor 10 is arranged to execute executable modules, such as computer programs, stored in the memory 11.
The Memory 11 may include a Random Access Memory (RAM) and a non-volatile memory, such as at least one disk memory. The communication connection between this system's network element and at least one other network element is realized through at least one communication interface 13 (which may be wired or wireless), using the Internet, a wide area network, a local area network, a metropolitan area network, or the like.
The bus 12 may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 13, but this does not indicate only one bus or one type of bus.
The memory 11 is configured to store a program, and the processor 10 executes the program after receiving an execution instruction, and the method executed by the apparatus defined by the flow process disclosed in any of the foregoing embodiments of the present invention may be applied to the processor 10, or implemented by the processor 10.
The processor 10 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware or by instructions in the form of software in the processor 10. The processor 10 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), or another programmable logic device, discrete gate or transistor logic device, or discrete hardware component. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software modules may be located in RAM, flash memory, ROM, PROM, EPROM, registers, or another storage medium well known in the art. The storage medium is located in the memory 11, and the processor 10 reads the information in the memory 11 and completes the steps of the method in combination with its hardware.
The computer program product of the readable storage medium provided by the embodiment of the invention comprises a computer readable storage medium storing program codes, wherein the program codes comprise instructions for executing the following steps:
an image rendering method, comprising: acquiring a scene to be rendered; the scene to be rendered is a three-dimensional scene with an open single side, and at least one virtual object to be rendered is configured in the scene to be rendered; generating a three-dimensional texture map based on distance information and color information corresponding to the object surface of each virtual object in a scene to be rendered; rendering according to the three-dimensional texture map to obtain a scene image; and the scene image is used for representing the virtual object in the scene to be rendered in a three-dimensional effect on the single side surface.
In one embodiment, generating a three-dimensional texture map based on distance information and color information corresponding to an object surface of each virtual object in a scene to be rendered comprises: determining a corresponding directed distance field value in the scene to be rendered based on distance information between a point in the scene to be rendered and the object surface of the virtual object; diffusing color information corresponding to the object surface of each virtual object in the scene to be rendered into a preset point cloud array to determine a corresponding color value in the scene to be rendered; the size of the point cloud array is consistent with that of a scene to be rendered; storing the directed distance field value to a first channel of a preset blank canvas and storing the color value to a second channel of the blank canvas to obtain a three-dimensional texture map; wherein the first channel is an Alpha channel and the second channel is a Cd channel.
In one embodiment, the determining, based on distance information between a point within the scene to be rendered and an object surface of the virtual object, a corresponding directional distance field value within the scene to be rendered includes: converting the scene to be rendered into a voxel format; mixing a preset first initial voxel field with the scene to be rendered in the voxel format to obtain a mixed voxel field, wherein the value of the first initial voxel field is a first preset threshold, and a value of the mixed voxel field greater than the first preset threshold indicates that a point in the scene to be rendered is located inside the virtual object, while otherwise the point in the scene to be rendered is located outside the virtual object; and converting the mixed voxel field into a directed distance field to determine the directed distance field values corresponding to points within the scene to be rendered.
In one embodiment, diffusing color information corresponding to an object surface of each virtual object in a scene to be rendered into a preset point cloud array to determine a corresponding color value in the scene to be rendered, includes: configuring a specified number of points on the surface of the virtual object so as to enable the point density of the surface of the object to be larger than a preset density threshold value; adding texture mapping information of a scene to be rendered to a point on the object surface of each virtual object in the scene to be rendered; determining color information corresponding to the point of the surface of the object based on the texture mapping information; and diffusing the color information into a preset point cloud array to determine the corresponding color value in the scene to be rendered.
In one embodiment, storing a directed distance field value to a first channel of a preset blank canvas and a color value to a second channel of the blank canvas to obtain a three-dimensional texture map comprises: mixing a preset second initial voxel field and a preset third initial voxel field to obtain a blank canvas; wherein the second initial voxel field is a scalar field and the third initial voxel field is a vector field; storing the directed distance field value to a first channel of a blank canvas and storing the color value to a second channel of the blank canvas to obtain a texture slice combination; the texture slice combination is used for expressing a scene to be rendered; and obtaining the three-dimensional texture map based on the texture slice combination.
In one embodiment, obtaining a three-dimensional texture map based on a texture slice combination includes: converting the texture slice combination into the three-dimensional texture map based on the preset number of slice rows and the preset number of slice columns.
In one embodiment, rendering according to a three-dimensional texture map to obtain a scene image includes: stepping forward from pixel points in the three-dimensional texture map based on the directed distance field values in the three-dimensional texture map, until a specified number of steps is reached or the distance between the pixel point and the object surface of the virtual object is smaller than a preset distance value, so as to obtain target pixel points; sampling a color value in the three-dimensional texture map according to the target pixel point to determine a target color value corresponding to the target pixel point; and rendering based on the target color value corresponding to each target pixel point to obtain a scene image.
The readable storage medium provided by the embodiment of the invention generates the three-dimensional texture mapping according to the distance information and the color information corresponding to the object surface of each virtual object in the scene to be rendered, and renders the scene image based on the three-dimensional texture mapping, so that the problem that the virtual objects in the virtual scene lose depth and perspective in the prior art can be effectively solved.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, and various media capable of storing program codes.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: those skilled in the art can still make modifications or changes to the embodiments described in the foregoing embodiments, or make equivalent substitutions for some features, within the scope of the disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An image rendering method, comprising:
acquiring a scene to be rendered; the scene to be rendered is a three-dimensional scene with an open single side, and at least one virtual object to be rendered is configured in the scene to be rendered;
generating a three-dimensional texture map based on distance information and color information corresponding to the object surface of each virtual object in the scene to be rendered;
rendering according to the three-dimensional texture map to obtain a scene image; wherein the scene image is used for embodying virtual objects in the scene to be rendered in a three-dimensional effect on the single side surface.
2. The method according to claim 1, wherein the generating a three-dimensional texture map based on the distance information and the color information corresponding to the object surface of each virtual object in the scene to be rendered comprises:
determining a corresponding directed distance field value in the scene to be rendered based on distance information between a point in the scene to be rendered and the object surface of the virtual object;
diffusing color information corresponding to the object surface of each virtual object in the scene to be rendered into a preset point cloud array to determine a corresponding color value in the scene to be rendered; the size of the point cloud array is consistent with that of the scene to be rendered;
storing the directed distance field value to a first channel of a preset blank canvas, and storing the color value to a second channel of the blank canvas to obtain a three-dimensional texture map; wherein the first channel is an Alpha channel and the second channel is a Cd channel.
3. The method of claim 2, wherein determining directional distance field values corresponding to points within the scene to be rendered based on distance information between the points within the scene to be rendered and object surfaces of the virtual objects comprises:
converting the scene to be rendered into a voxel format;
mixing a preset first initial voxel field with the to-be-rendered scene in the voxel format to obtain a mixed voxel field, wherein the value of the first initial voxel field is a first preset threshold, and a value of the mixed voxel field greater than the first preset threshold indicates that a point in the scene to be rendered is located inside the virtual object, while otherwise the point in the scene to be rendered is located outside the virtual object;
converting the hybrid voxel field to a directed distance field to determine directed distance field values for point correspondences within the scene to be rendered.
4. The method of claim 2, wherein diffusing color information corresponding to the object surface of each virtual object in the scene to be rendered into a preset point cloud array to determine corresponding color values in the scene to be rendered comprises:
configuring a specified number of points on the surface of the virtual object so as to enable the point density of the surface of the object to be greater than a preset density threshold value;
adding texture mapping information of the scene to be rendered to a point on the object surface of each virtual object in the scene to be rendered;
determining color information corresponding to the points on the surface of the object based on the texture mapping information;
and diffusing the color information into a preset point cloud array to determine the corresponding color value in the scene to be rendered.
5. The method according to claim 2, wherein the storing the directed distance field values to a first channel of a preset blank canvas and the storing the color values to a second channel of the blank canvas to obtain a three-dimensional texture map comprises:
mixing a preset second initial voxel field and a preset third initial voxel field to obtain a blank canvas; wherein the second initial voxel field is a scalar field and the third initial voxel field is a vector field;
storing the directed distance field value to a first channel of the blank canvas and storing the color value to a second channel of the blank canvas to obtain a texture slice combination; wherein the texture slice combination is used for expressing the scene to be rendered;
and obtaining a three-dimensional texture map based on the texture slice combination.
6. The method of claim 5, wherein obtaining a three-dimensional texture map based on the texture slice combination comprises:
and converting the texture slice combination into a three-dimensional texture map based on the preset number of slice rows and slice columns.
7. The method of claim 1, wherein rendering the three-dimensional texture map to obtain a scene image comprises:
based on the directed distance field values in the three-dimensional texture map, stepping forward from pixel points in the three-dimensional texture map until a specified number of steps is reached or the distance between the pixel point and the object surface of the virtual object is smaller than a preset distance value, so as to obtain target pixel points;
sampling a color value in the three-dimensional texture map according to the target pixel point to determine a target color value corresponding to the target pixel point;
and rendering the target color values corresponding to each target pixel point to obtain a scene image.
8. An image rendering apparatus, characterized by comprising:
the scene acquisition module is used for acquiring a scene to be rendered; the scene to be rendered is a three-dimensional scene with an open single side, and at least one virtual object to be rendered is configured in the scene to be rendered;
the texture generation module is used for generating a three-dimensional texture map based on the distance information and the color information corresponding to the object surface of each virtual object in the scene to be rendered;
the rendering module is used for rendering according to the three-dimensional texture map to obtain a scene image; wherein the scene image is used for embodying a virtual object in the scene to be rendered in a three-dimensional effect on the single side surface.
9. A terminal device comprising a processor and a memory, the memory storing computer-executable instructions executable by the processor, the processor executing the computer-executable instructions to implement the method of any one of claims 1 to 7.
10. A computer-readable storage medium having computer-executable instructions stored thereon which, when invoked and executed by a processor, cause the processor to perform the method of any of claims 1 to 7.
CN202211678778.9A 2022-12-26 2022-12-26 Image rendering method and device, terminal equipment and computer readable storage medium Pending CN115937396A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211678778.9A CN115937396A (en) 2022-12-26 2022-12-26 Image rendering method and device, terminal equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211678778.9A CN115937396A (en) 2022-12-26 2022-12-26 Image rendering method and device, terminal equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115937396A true CN115937396A (en) 2023-04-07

Family

ID=86653892

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211678778.9A Pending CN115937396A (en) 2022-12-26 2022-12-26 Image rendering method and device, terminal equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115937396A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117095108A (en) * 2023-10-17 2023-11-21 海马云(天津)信息技术有限公司 Texture rendering method and device for virtual digital person, cloud server and storage medium
CN117095108B (en) * 2023-10-17 2024-01-23 海马云(天津)信息技术有限公司 Texture rendering method and device for virtual digital person, cloud server and storage medium

Similar Documents

Publication Publication Date Title
US11680803B2 (en) Rendering operations using sparse volumetric data
CN109427088B (en) Rendering method for simulating illumination and terminal
CN104637089B (en) Three-dimensional model data processing method and device
CN113674389B (en) Scene rendering method and device, electronic equipment and storage medium
WO2022021309A1 (en) Method and apparatus for establishing model, electronic device, and computer readable storage medium
JPH10510074A (en) Image composition
US9224233B2 (en) Blending 3D model textures by image projection
WO2015070618A1 (en) Method and device for global illumination rendering under multiple light sources
WO2024088002A1 (en) Vertex ambient occlusion value determination method and apparatus, vertex ambient occlusion value application method and apparatus, and device
US20140267260A1 (en) System, method, and computer program product for executing processes involving at least one primitive in a graphics processor, utilizing a data structure
CN112734892A (en) Real-time global illumination rendering method for virtual cable tunnel scene model
CN109155846B (en) Three-dimensional reconstruction method and device of scene, electronic equipment and storage medium
US20230033319A1 (en) Method, apparatus and device for processing shadow texture, computer-readable storage medium, and program product
CN115937396A (en) Image rendering method and device, terminal equipment and computer readable storage medium
CN114255315A (en) Rendering method, device and equipment
CN113256782A (en) Three-dimensional model generation method and device, storage medium and electronic equipment
US9704290B2 (en) Deep image identifiers
CN115222806A (en) Polygon processing method, device, equipment and computer readable storage medium
US11288774B2 (en) Image processing method and apparatus, storage medium, and electronic apparatus
CN111445567A (en) Baking method and device for dynamic object, computer equipment and storage medium
EP3437072B1 (en) System and method for rendering points without gaps
US20230186565A1 (en) Apparatus and method for generating lightweight three-dimensional model based on image
WO2023016423A1 (en) Video color gamut detection method and apparatus, and computing device, computer storage medium and computer program product
CN116485969A (en) Voxel object generation method, voxel object generation device and computer-readable storage medium
Mora et al. Lazy visibility evaluation for exact soft shadows

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination