CN114332347A - Volume cloud data storage method, device, and storage medium

Volume cloud data storage method, device, and storage medium

Info

Publication number
CN114332347A
CN114332347A (application CN202111162609.5A)
Authority
CN
China
Prior art keywords
map
field information
distance field
volume cloud
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111162609.5A
Other languages
Chinese (zh)
Inventor
陈参
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Perfect World Beijing Software Technology Development Co Ltd
Original Assignee
Perfect World Beijing Software Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Perfect World Beijing Software Technology Development Co Ltd filed Critical Perfect World Beijing Software Technology Development Co Ltd
Priority to CN202111162609.5A
Publication of CN114332347A
Legal status: Pending

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the present application provides a volume cloud data storage method, device, and storage medium. After distance field information of a target scene containing a volume cloud is obtained, the distance field information of each point near the surface of the volume cloud is stored in storage units of a first 3D map, while the distance field information of the whole target scene is compressed by a specified multiple and stored in a second 3D map. In the second 3D map, each map cell stores compressed distance field information shared by a plurality of points, and each map cell corresponding to a point near the surface of the volume cloud additionally stores the index of the matching storage unit in the first 3D map. In this way, the distance field information of points away from the surface of the volume cloud is stored in compressed form, while the distance field information of the main points of interest near the surface is stored at high precision, which greatly reduces the number of points that must be stored at high precision and thus the consumption of storage resources.

Description

Volume cloud data storage method, device, and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a volume cloud data storage method, device, and storage medium.
Background
A volumetric cloud (also called volumetric fog or volumetric mist) is typically simulated by a graphics engine to reproduce the translucent, irregular appearance of real cloud and mist when rendering a virtual scene, such as a game scene.
Existing volume cloud rendering methods generally use data structures such as octrees to store the data required for rendering the volume cloud. However, this storage approach consumes a large amount of storage resources, so an improved solution is still needed.
Disclosure of Invention
Aspects of the present disclosure provide a volume cloud data storage method, device, and storage medium, so as to reduce the storage resources consumed by the data required for rendering a volume cloud.
An embodiment of the present application provides a volume cloud data storage method, which includes: obtaining distance field information of a target scene, where the distance field information includes the minimum distance from a point in the target scene to the surface of a volume cloud in the target scene; storing the distance field information of each point near the surface of the volume cloud in storage units of a first 3D map; and compressing the distance field information of the target scene by a specified multiple and storing it in a second 3D map, where each map cell of the second 3D map stores compressed distance field information shared by a plurality of points, and each map cell corresponding to a point near the surface of the volume cloud stores the index of the matching storage unit in the first 3D map.
Further optionally, compressing the distance field information of the target scene by a specified multiple and storing it in a second 3D map includes: compressing every N×N×N points in the target scene into one map cell of the second 3D map; for any map cell, blurring the distance field information of the N×N×N points corresponding to that map cell to obtain distance field information shared by the N×N×N points; and storing the shared distance field information in a first channel of the map cell.
Further optionally, the method further comprises: if the N×N×N points are located near the surface of the volume cloud, storing the index of the storage unit corresponding to these N×N×N points in the first 3D map in a second channel and a third channel of the map cell; and storing the distance field information of each of the N×N×N points in the N×N×N voxels of the corresponding storage unit in the first 3D map.
Further optionally, saving the distance field information of each point near the surface of the volume cloud in storage units of a first 3D map includes: obtaining a 3D noise map for the points near the surface of the volume cloud; and superimposing the first 3D map and the 3D noise map to obtain an eroded first 3D map.
Further optionally, the method further comprises: in response to an instruction to render the volume cloud in the target scene, emitting a ray from the position of a virtual camera in the target scene to each of a plurality of pixel points on a screen; controlling the rays corresponding to the pixel points to step along their respective line-of-sight directions, and at each step determining the distance field information of the stepping point reached by each ray from the first 3D map and/or the second 3D map as that ray's step distance, until each ray reaches the surface of the volume cloud; determining the shape of the volume cloud in the three-dimensional space of the target scene according to the lengths of the rays; and rendering the volume cloud according to the shape of the volume cloud in the three-dimensional space.
Further optionally, determining the distance field information of the stepping point reached by each ray from the first 3D map and/or the second 3D map at each step includes: for any of the rays, when the ray reaches a stepping point, determining the target map cell corresponding to that stepping point from the second 3D map; if the index in the target map cell does not point to a storage unit in the first 3D map, using the distance field information stored in the target map cell as the distance field information of the stepping point; and if the index in the target map cell points to a target storage unit in the first 3D map, locating the target storage unit in the first 3D map according to that index and determining the distance field information of the stepping point from the target storage unit according to the coordinates of the stepping point.
Further optionally, controlling the rays corresponding to the pixel points to step along their line-of-sight directions according to the distance field information until each ray reaches the surface of the volume cloud includes: for any of the rays, stepping the ray along its line-of-sight direction by the minimum distance from the point where the virtual camera is located to the surface of the volume cloud, thereby reaching a stepping point; judging whether the stepping point is located on the surface of the volume cloud according to the minimum distance from the stepping point to the surface of the volume cloud; and if so, stopping the stepping of the ray and determining the distance between the virtual camera and the surface of the volume cloud in the ray's line-of-sight direction according to the distance between the virtual camera and the stepping point.
Further optionally, the method further comprises: and if the stepping point is not on the surface of the volume cloud, continuing to perform ray stepping along the sight line direction corresponding to the ray according to the minimum distance from the stepping point to the surface of the volume cloud until a new stepping point reached by the ray is located on the surface of the volume cloud.
Further optionally, determining the shape of the volume cloud in the three-dimensional space of the target scene according to the lengths of the rays includes: calculating depth values from the pixel points on the screen to the surface of the volume cloud according to the lengths of the rays and the angles of their respective line-of-sight directions; and determining the shape of the volume cloud in the three-dimensional space according to those depth values.
An embodiment of the present application further provides an electronic device, including a memory and a processor; the memory is configured to store one or more computer instructions, and the processor is configured to execute the one or more computer instructions to perform the steps of the volume cloud data storage method provided by the embodiments of the present application.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program which, when executed, implements the steps of the method provided in the embodiments of the present application.
Embodiments of the present application also provide a computer program product comprising a computer program/instructions which, when executed by a processor, cause the processor to implement the steps of the method provided by the embodiments of the present application.
In the volume cloud data storage method provided by the embodiments of the present application, after distance field information of a target scene containing a volume cloud is acquired, the distance field information of each point near the surface of the volume cloud is stored in storage units of a first 3D map, while the distance field information of the target scene is compressed by a specified multiple and stored in a second 3D map; in the second 3D map, each map cell stores compressed distance field information shared by a plurality of points, and each map cell corresponding to a point near the surface of the volume cloud stores the index of the matching storage unit in the first 3D map. In this way, the distance field information of points away from the surface of the volume cloud is stored in compressed form, while the distance field information of the main points of interest near the surface is stored at high precision, which greatly reduces the number of points that must be stored at high precision and thus the consumption of storage resources.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1a is a schematic flowchart of a data storage method of a volume cloud according to an exemplary embodiment of the present application;
FIG. 1b is a schematic diagram of a first 3D map provided by an exemplary embodiment of the present application;
FIG. 1c is a schematic view of a second 3D map provided in an exemplary embodiment of the present application;
FIG. 1d is a diagram illustrating the query of a storage unit in the first 3D map according to an index in the second 3D map, according to an exemplary embodiment of the present application;
fig. 2 is a schematic flowchart of a volume cloud rendering method according to another exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a virtual camera emitting rays to a plurality of pixel points on a screen according to an exemplary embodiment of the present application;
FIG. 4 is a diagram of distance field based ray stepping according to an exemplary embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
A volumetric cloud (also called volumetric fog or volumetric mist) is typically simulated by a graphics engine to reproduce the translucent, irregular appearance of real cloud and mist when rendering a virtual scene, such as a game scene.
In some schemes, the data (e.g., texture data) needed to render the volume cloud is typically stored using a data structure such as an octree. However, this storage method is complex, requires support from the underlying framework, and is difficult to use on some devices.
In view of the above technical problem, in some embodiments of the present application, a solution is provided, and the technical solutions provided by the embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 1a is a schematic flowchart of a data storage method of a volume cloud according to an exemplary embodiment of the present application, and as shown in fig. 1a, the method includes:
Step 101, obtaining distance field information of a target scene; the distance field information includes the minimum distance from a point in the target scene to the surface of a volume cloud in the target scene.
Step 102, storing the distance field information of each point near the surface of the volume cloud in the target scene in storage units of a first 3D map.
Step 103, compressing the distance field information of the target scene by a specified multiple and storing it in a second 3D map; in the second 3D map, each map cell stores compressed distance field information shared by a plurality of points, and each map cell corresponding to a point near the surface of the volume cloud stores the index of the matching storage unit in the first 3D map.
The embodiment may be implemented by an electronic device, where the electronic device may be a terminal device such as a smart phone, a tablet computer, or a computer, or may be a server device, and the embodiment is not limited. The target scene may include any virtual 3D (3-dimensional) scene obtained by scene modeling, may be a game scene (e.g., a large map scene), or may be an animation scene, and the like, which is not limited in this embodiment. The target scene includes a volume cloud.
A distance field (Signed Distance Field) is a function that computes a distance: given the position of a point, it outputs the minimum distance from that point to the surface of any object in the scene. When the point is inside the object, the output value of the distance field is negative; when the point is outside the object, the output value is positive; when the point is on the object surface, the output value is 0.
In this embodiment, the distance field information of the target scene refers to the distance field information of points in the target scene, calculated from the relative positional relationship between those points and the volume cloud. It includes the minimum distance from any point in the target scene to the surface of the volume cloud; that is, the distance field information of the target scene includes the distance field information of each of a plurality of points in the scene. A point in the target scene refers to a sampling point obtained by sampling the three-dimensional space corresponding to the scene.
The minimum distance from any point to the surface of the volume cloud may be calculated based on a distance field function: the output for a point inside the volume cloud model is negative, and the output for a point on the model surface is 0.
For points outside the volume cloud model, when computing their distance field based on the distance field function, the coordinates of each point on the surface of the volume cloud in the target scene can be determined from the position of the volume cloud in the target scene and the shape information of its model. For any position point A outside the volume cloud, the distance between A and each point on the surface can be calculated from their coordinates; this yields a plurality of distances between A and the surface, and the minimum among them is the distance field output value of point A.
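To make this computation concrete, the following is a minimal brute-force sketch (an illustration only, not the patent's implementation); surface_points and is_inside are hypothetical inputs assumed to be exported from the volume cloud model.

    import numpy as np

    def signed_distance(point, surface_points, is_inside):
        # Minimum distance from `point` to the volume cloud surface:
        # negative inside, 0 on the surface, positive outside.
        # surface_points: (M, 3) array of sampled surface coordinates.
        # is_inside(point): containment test for the cloud model.
        d = np.linalg.norm(surface_points - point, axis=1).min()
        return -d if is_inside(point) else d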
After the distance field information of the target scene is computed, the distance field information may be stored as a rendering resource file of the target scene. During subsequent rendering, a volume cloud can be rendered according to the distance field information of the target scene.
In some scenarios, when the target scene is large, or the volume cloud is a large or giant volume cloud, storing the distance field information of the target scene occupies considerable storage resources. In practice, when observing the volume cloud, information near its surface is visually the most interesting, while information away from the surface matters less. Therefore, when rendering the volume cloud, the accuracy requirement for points near the surface (the main points of interest) is high, and the requirement for points away from the surface (non-main points of interest) is low. A point near the surface of the volume cloud is a point located within a specified distance of the surface.
Based on this, in this embodiment, the distance field information of non-main points of interest in the target scene can be compressed for storage, while the distance field information of the main points of interest near the surface is stored with high accuracy. In this way, only the distance field information of points near the surface of the volume cloud needs to be stored at higher accuracy, reducing the number of points that require high-precision storage and the consumption of storage resources.
Compressed storage of distance field information can be achieved by sampling it discretely. In this embodiment, the distance field information of points in the target scene may be sampled at a set sampling precision and the sampling results stored in 3D maps. A 3D map can store distance field information of varying precision as SDF (Signed Distance Function) values; an SDF represents geometry through a distance equation that determines whether points lie inside or outside a boundary.
The distance field information of points near the surface of the volume cloud may be stored in one 3D map (i.e., the first 3D map), as shown in fig. 1b. The distance field information stored in the first 3D map is the initially calculated distance field information, sampled at a higher sampling precision and without blurring, and is therefore of higher accuracy.
The distance field information of the target scene may be compressed and stored in another 3D map (i.e., the second 3D map), as shown in fig. 1c. Compressed storage means that the distance field information of the target scene is sampled at a low sampling precision, yielding low-precision distance field information. The second 3D map includes a plurality of map cells (i.e., minimum storage units), each of which compressively stores the distance field information of a plurality of points. After compressed storage, multiple points in the target scene are downsampled to one point and share one distance field value, so the precision is low.
In this embodiment, the first 3D map is divided into a plurality of storage units of a certain volume, each composed of a plurality of voxels, as shown in fig. 1b. Each voxel stores the distance field information of one point near the surface of the volume cloud. The volume of a storage unit is tied to the compression multiple of the distance field information: if the distance field information is compressed by a factor of N, the volume of a storage unit is N×N×N, i.e., the storage unit contains N×N×N voxels.
In the first 3D map, each storage unit corresponds to a map cell in the second 3D map, and the distance field information of the points stored in the storage unit is compressed and stored in that map cell.
To facilitate the use of high-precision distance field information for points near the surface of the volume cloud, for the map cells in the second 3D map that correspond to such points, the index of the corresponding storage unit in the first 3D map can be saved in those map cells. Namely: if map cell K1 in the second 3D map compressively stores the low-precision distance field information of the points [P1] near the surface of the volume cloud, and storage unit V1 in the first 3D map stores the high-precision distance field information of the points [P1], then the index of storage unit V1 can be stored in map cell K1 for subsequent reading.
Based on the above, when the distance field information of a point in the target scene is subsequently read, if the point is not near the surface of the volume cloud, its low-precision distance field information can be read from the second 3D map. If the point is near the surface of the volume cloud, the index of the storage unit corresponding to the point can be read from the second 3D map, and the high-precision distance field information of the point can then be located in the first 3D map from that index and the point's coordinates (e.g., uv coordinates), as shown in fig. 1d.
In this embodiment, after distance field information of a target scene containing a volume cloud is obtained, the distance field information of each point near the surface of the volume cloud is stored in storage units of a first 3D map, while the distance field information of the target scene is compressed by a specified multiple and stored in a second 3D map; in the second 3D map, each map cell stores compressed distance field information shared by a plurality of points, and each map cell corresponding to a point near the surface of the volume cloud stores the index of the matching storage unit in the first 3D map. In this way, the distance field information of points away from the surface of the volume cloud is stored in compressed form, while the distance field information of the main points of interest near the surface is stored at high precision, which greatly reduces the number of points that must be stored at high precision and thus the consumption of storage resources.
In some optional embodiments, the set of points in the target scene treated as near the surface of the volume cloud may change dynamically according to virtual objects that appear near the surface.
When a virtual object in the target scene is located near the surface of the volume cloud, or dynamically passes near it, the main points of interest near the surface can be determined from the position of the virtual object and its distance to the surface, and the distance field information of those points can be sampled with high sampling accuracy.
For example, in some scenarios, when a virtual flying object (e.g., a bird, an airplane, or another flying character) in the target scene flies near the volume cloud, the main points of interest near the surface of the volume cloud may be determined from the flight path of the object and its distance to the surface. As the flight path changes dynamically, the main points of interest near the surface change with it. Based on this implementation, the sampling precision of different points in the target scene can be flexibly adjusted according to the objects in the scene, meeting diverse rendering requirements while reducing storage consumption.
The compression of the distance field information of the target scene by a factor of N will be further explained below as an example. N may take 4, 5, 6, or other values, which is not limited in this embodiment.
In some alternative embodiments, compressing the distance field information of the target scene by a factor of N compresses every N×N×N points in the target scene into one map cell in the second 3D map. For any map cell, the distance field information of the N×N×N points corresponding to that cell may be blurred to obtain distance field information shared by the N×N×N points. The blurring may, for example, take the average of the points' distance field values or select the minimum among them, which is not limited in this embodiment. The shared distance field information is then stored in a first channel (e.g., the B channel) of the map cell.
Optionally, if the N×N×N points are located near the surface of the volume cloud, the index of the storage unit corresponding to these N×N×N points in the first 3D map is saved in a second channel (e.g., the R channel) and a third channel (e.g., the G channel) of the map cell, and the distance field information of each of the N×N×N points is stored in the N×N×N voxels of that storage unit in the first 3D map.
In some embodiments, if the plurality of points corresponding to the map unit in the second 3D map are not near the surface of the volume cloud, the values of the R channel and the G channel of the map unit may be 0. If the plurality of points corresponding to the map unit in the second 3D map are near the surface of the volume cloud, the R channel of the map unit may store the row coordinates of the memory cell corresponding to the map unit in the first 3D map, and the G channel may store the column coordinates of the memory cell corresponding to the map unit in the first 3D map.
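As an illustration of this layout, the following sketch builds the two maps from a dense distance field array (assumptions: the storage units of the first 3D map are kept as a flat Python list, the blur is a simple average, and packing the storage-unit index into the R and G channels via division and remainder by 256 is a hypothetical stand-in for the row/column coordinates).

    import numpy as np

    def build_maps(field, n, near_mask):
        # field: dense (X, Y, Z) array of signed distances for the scene,
        # each dimension assumed divisible by the compression factor n.
        # near_mask: boolean array marking points near the cloud surface.
        gx, gy, gz = (s // n for s in field.shape)
        second = np.zeros((gx, gy, gz, 3), dtype=np.float32)  # R, G, B channels
        first_blocks = []  # storage units of the first 3D map (n*n*n voxels each)
        for i in range(gx):
            for j in range(gy):
                for k in range(gz):
                    sl = (slice(i*n, (i+1)*n), slice(j*n, (j+1)*n), slice(k*n, (k+1)*n))
                    block = field[sl]
                    second[i, j, k, 2] = block.mean()  # B channel: shared, blurred value
                    if near_mask[sl].any():            # near the surface: keep full precision
                        first_blocks.append(block.copy())
                        idx = len(first_blocks)          # 1-based, so 0 means "no index"
                        second[i, j, k, 0] = idx // 256  # R channel (illustrative packing)
                        second[i, j, k, 1] = idx % 256   # G channel
        return second, first_blocks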
In a subsequent query, if the R and G channels of the map cell corresponding to a point in the second 3D map are both 0, the distance field information stored in that map cell can be used as the distance field information of the point. If the R and G values are not 0, the corresponding storage unit in the first 3D map can be located from them, and the distance field information of the point can be read from that storage unit.
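A matching query function might then look as follows (a sketch under the same assumptions as build_maps above; p is given in voxel coordinates of the dense field).

    def sample_distance(second, first_blocks, p, n):
        # Two-level lookup: return the shared low-precision value from the
        # second 3D map unless the map cell carries an index into the first map.
        i, j, k = (int(c) // n for c in p)        # map cell containing point p
        r, g, b = second[i, j, k]
        if r == 0 and g == 0:                     # not near the surface
            return b
        block = first_blocks[int(r) * 256 + int(g) - 1]  # undo the 1-based packing
        vx, vy, vz = (int(c) % n for c in p)      # voxel inside the n*n*n storage unit
        return block[vx, vy, vz]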
In some embodiments, when generating the first 3D map, the smoothed distance field of points near the volume cloud surface may be sampled using bilinear filtering. For example, when sampling a 4 × 4 pixel region, if the sampling result is continuous, the value at the boundary between every two adjacent voxels may be duplicated, yielding a 5 × 5 pixel sampling result. When a voxel is queried by its index, the query index can be scaled to avoid reading a blend of that voxel with surrounding voxels; further details are omitted here.
In some optional embodiments, to further enrich the details near the surface of the volume cloud, noise may be superimposed on the first 3D map. Optionally, when the distance field information of each point near the surface of the volume cloud is stored in the storage units of the first 3D map, a 3D noise map of those points may be acquired and superimposed on the first 3D map to obtain an eroded first 3D map: eroded first 3D map = first 3D map + set coefficient × 3D noise map. The coefficient may be set as required, which is not limited in this embodiment. When a storage unit is subsequently queried by index, the query is performed in the eroded first 3D map, so the returned distance field information already carries the superimposed noise. By designing the 3D noise map, detail information of the volume cloud can be shaped, so that a more realistic volume cloud is rendered.
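A sketch of this erosion step under the block layout assumed above (the coefficient value is an arbitrary example, since the text leaves it configurable):

    def erode_first_map(first_blocks, noise_blocks, coeff=0.1):
        # Eroded first 3D map = first 3D map + set coefficient * 3D noise map.
        # noise_blocks is assumed to mirror the layout of first_blocks.
        return [block + coeff * noise
                for block, noise in zip(first_blocks, noise_blocks)]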
In some embodiments, the distance field information for the target scene can be computed based on a model of the volume cloud and a location of the volume cloud in the target scene.
The electronic device can provide an operation interface on which one or more controls are displayed, including a control for uploading a model of the volume cloud and a control for triggering a distance field conversion operation on the volume cloud. According to actual requirements, a user can upload a custom-designed volume cloud model through these controls and instruct the electronic device to perform the distance field conversion operation on the model.
In other embodiments, the electronic device can provide one or more open interfaces (APIs) through which a user can invoke the distance field conversion functionality. In response to a call to an open interface, the electronic device can obtain a custom-designed volume cloud model from the interface parameters and perform the distance field conversion operation on it.
The custom volume cloud model can be designed as required during the modeling of the target scene, or personalized by art designers, which is not limited in this embodiment.
The custom-designed volume cloud model is used for display in the target scene.
In this embodiment, when a custom-designed volume cloud is to be rendered in a target scene, the model of the volume cloud may be obtained, and the distance field information of the target scene may be calculated from the model and the position of the volume cloud in the target scene; the distance field information includes the minimum distance from a point in the target scene to the surface of the volume cloud. The distance field information can serve as a rendering resource file of the target scene, so that the volume cloud is rendered from it when the target scene is rendered. In such an embodiment, the shape of the cloud in the target scene can be specified through the volume cloud model and its distance field information, and the specified cloud shape is rendered, satisfying customized volume cloud rendering requirements. Meanwhile, the modeling of the volume cloud and the modeling of the target scene can proceed independently, which helps improve the modeling efficiency of the virtual three-dimensional space.
Fig. 2 is a schematic flowchart of a rendering method of a volume cloud according to another exemplary embodiment of the present application, and as shown in fig. 2, the method includes:
step 201, obtaining distance field information of a target scene; the distance field information includes a minimum distance of a point in the target scene to a surface of a volumetric cloud in the target scene.
Step 202, storing distance field information of each point near the surface of the volume cloud in the target scene in a storage unit of a first 3D map.
Step 203, compressing the distance field information of the target scene by a specified multiple and storing the compressed distance field information in a second 3D map; in the second 3D map, each map cell stores compressed distance field information common to a plurality of points, and an index of a storage cell in the first 3D map is stored in a map cell corresponding to a point near the surface of the volume cloud.
Step 204, in response to an instruction to render the volume cloud in the target scene, emitting a ray from the position of the virtual camera in the target scene to each of a plurality of pixel points on the screen.
Step 205, controlling the rays corresponding to the pixel points to step along their respective line-of-sight directions, and at each step determining the distance field information of the stepping point reached by each ray from the first 3D map and/or the second 3D map as that ray's step distance, until each ray reaches the surface of the volume cloud.
Step 206, determining the shape of the volume cloud in the three-dimensional space of the target scene according to the lengths of the rays.
Step 207, rendering the volume cloud according to the shape of the volume cloud in the three-dimensional space.
This embodiment may be implemented by an electronic device, which may be a terminal device such as a smart phone, a tablet computer, or a computer, or a server device; this embodiment is not limited. A component that performs distance field conversion operations and a rendering engine for rendering the virtual scene can run on the electronic device. For steps 201 to 203, reference may be made to the descriptions of the foregoing embodiments, which are not repeated here.
The instruction for rendering the volume cloud in the target scene may be issued by the rendering engine, or by an upstream application or component, which is not limited in this embodiment. In the volume cloud rendering process, a Sphere Marching algorithm can be adopted, with the distance field information serving as the step length for light stepping. First, a ray is emitted from the position of the virtual camera in the target scene to each of a plurality of pixel points on the screen; each ray simulates the light (line of sight) corresponding to its pixel point. Rays corresponding to different pixel points have different angles, i.e., different pixel points have different line-of-sight directions, as shown by rays L1, L2, and L3 in fig. 3.
When the Sphere Marching algorithm is executed, the rays corresponding to the pixel points can be controlled to step along their line-of-sight directions according to the distance field information of points in the target scene until each ray reaches the surface of the volume cloud. The distance of each step can be regarded as the radius of a sphere, and the stepping process as drawing spheres along the line-of-sight direction with that radius. The virtual camera is located in the target scene, and the minimum distance from its position (i.e., the viewpoint) to the surface of the volume cloud can be obtained from the distance field information. The first light step is made by this minimum distance from the viewpoint to the surface of the volume cloud. The point reached in the target space by each stepping operation is called a stepping point. Each time a stepping point is reached, the minimum distance from that point to the surface of the volume cloud is read from the distance field information of the target scene, and whether the ray has stepped onto the surface of the volume cloud is determined from this distance. If it has, the stepping stops; if not, stepping continues by the minimum distance from the stepping point to the surface until the ray reaches the surface of the volume cloud.
Wherein, at each step, distance field information of the step point reached by each of the plurality of rays can be determined according to the first 3D map and/or the second 3D map. Taking any one of the plurality of rays as an example, when the ray reaches any one of the stepping points, the target map unit corresponding to the stepping point can be determined from the second 3D map. If the index in the target map cell does not point to a memory cell in the first 3D map, the step point can be deemed not to be near the surface of the volume cloud, and the distance field information stored in the target map cell can be used as the distance field information for the step point. That is, for a step point that is not near the surface of the volume cloud, the minimum distance of the step point to the surface of the volume cloud can be determined from the low accuracy distance field information corresponding to the step point.
Alternatively, if the index in the target map cell points to the target storage cell in the first 3D map, the step point may be considered to be near the surface of the volumetric cloud, at which point the target storage cell may be determined from the first 3D map according to the index of the target storage cell, and the distance field information for the step point may be determined from the target storage cell according to the coordinates of the step point. That is, for a step point located near the surface of the volume cloud, the minimum distance of the step point to the surface of the volume cloud can be determined from the highly accurate distance field information corresponding to the step point.
Since the distance field information represents the minimum distance of a point in the target scene to the surface of the volumetric cloud, when stepping the light according to the distance field information, even if the volumetric cloud is an irregular object, it is ensured that the light does not enter the interior of the volumetric cloud and will eventually reach the surface of the volumetric cloud. Meanwhile, light stepping is performed according to the distance field information, and a proper stepping length can be quickly determined without a large amount of calculation, so that the speed of stepping the light to the surface of the volume cloud is accelerated.
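The stepping loop described above can be sketched as follows (sample_fn stands for any distance lookup, e.g. the sample_distance sketch above; eps and max_steps are assumed numerical safeguards not specified in the text).

    import numpy as np

    def sphere_march(origin, direction, sample_fn, eps=1e-3, max_steps=128):
        # Advance the ray by the sampled minimum distance at each step
        # until it lands (numerically) on the volume cloud surface.
        direction = direction / np.linalg.norm(direction)  # unit view direction
        t = 0.0
        for _ in range(max_steps):
            p = origin + t * direction   # current stepping point
            d = sample_fn(p)             # minimum distance to the cloud surface
            if d <= eps:                 # reached the surface
                return t                 # ray length from viewpoint to surface
            t += d                       # step by the distance field value
        return None                      # no hit within the step budget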
When a plurality of rays corresponding to a plurality of pixel points are respectively stepped to the surface of the volume cloud, the shape of the volume cloud in the three-dimensional space where the target scene is located can be determined according to the respective lengths of the plurality of rays. The volume cloud may be rendered based on a shape of the volume cloud in three-dimensional space.
In this embodiment, when rendering a volume cloud, a suitable step length can be determined quickly from the distance field information of the target scene during ray stepping, which on the one hand prevents rays from stepping into the interior of the volume cloud and on the other hand accelerates stepping onto its surface. Together with the volume cloud data storage method above, this reduces the consumption of computing resources, helps improve rendering performance, and allows large scenes with huge volume clouds to run smoothly on terminal devices.
An embodiment of ray stepping based on distance field information will be exemplarily described below, taking as an example a ray corresponding to any one pixel point.
Alternatively, for any of the plurality of rays, a minimum distance from the virtual camera to the surface of the volumetric cloud may first be determined from the distance field information of the target scene based on the location of the point (i.e., the viewpoint) at which the virtual camera is located in the target scene. And then, according to the minimum distance from the point where the virtual camera is located to the surface of the volume cloud, carrying out ray stepping along the sight line direction corresponding to the ray to reach a stepping point. The sight line direction corresponding to the ray can be regarded as the connection line direction of the virtual camera and the pixel point corresponding to the ray, as shown in fig. 3.
Upon reaching the step point, a minimum distance from the step point to the surface of the volumetric cloud may be determined from the distance field information of the target scene based on the location of the step point in the target scene. Based on the minimum distance of the stepping point to the surface of the volume cloud, it can be determined whether the stepping point is located on the surface of the volume cloud. Based on the definition of the distance field, if the minimum distance from the step point to the surface of the volume cloud is greater than 0, then the step point is outside the volume cloud and does not reach the surface of the volume cloud. If the minimum distance of the step point to the surface of the volume cloud is equal to 0, then the step point is located on the surface of the volume cloud. If the minimum distance from the stepping point to the surface of the volume cloud is less than 0, the stepping point is located inside the volume cloud (this does not happen with this solution).
Therefore, when judging whether a stepping point is located on the surface of the volume cloud from its minimum distance to the surface, it suffices to check whether that minimum distance is greater than 0: if it is greater than 0, the ray has not yet stepped onto the surface; if it equals 0, the ray has stepped onto the surface of the volume cloud.
If the stepping point is located on the surface of the volume cloud, the stepping operation of the ray can be stopped, and the distance between the virtual camera and the surface of the volume cloud in the sight line direction corresponding to the ray is determined according to the distance between the virtual camera and the stepping point.
Optionally, if the stepping point is not on the surface of the volume cloud, ray stepping may continue along the ray's line-of-sight direction by the minimum distance from the stepping point to the surface, until a new stepping point reached by the ray is located on the surface of the volume cloud. Each time the ray reaches a stepping point, the distance field information of that point is used to judge whether the ray has reached the surface of the volume cloud, and hence whether stepping should continue.
For example, in some embodiments, as shown in fig. 4, on the first step, the ray steps by the minimum distance from the camera to the surface of the volume cloud, reaching the first stepping point A; the distance from point A to the surface is greater than 0, so a second step is made by the minimum distance from A to the surface, and the ray reaches the second stepping point B; the distance from B to the surface is greater than 0, so a third step is made by the minimum distance from B to the surface, reaching the third stepping point C; the distance from C to the surface is greater than 0, so a fourth step is made by the minimum distance from C to the surface, reaching the fourth stepping point D; the distance from D to the surface is greater than 0, so a fifth step is made by the minimum distance from D to the surface, reaching the fifth stepping point E; the distance from the fifth stepping point E to the surface equals 0, at which point it can be determined that the ray has stepped onto the surface of the volume cloud.
As shown in fig. 4, each step is performed by drawing a sphere having a distance field as a radius with the viewpoint or the step point as a spherical center, with the distance field of the viewpoint or the distance field of the step point as a step distance. The drawn sphere is tangent to the volume cloud and creates a new intersection point with the ray. If the new intersection is located on the surface of the volume cloud, then no further stepping is performed. And if the new intersection point is positioned outside the volume cloud, continuously drawing a sphere with the distance field of the new intersection point as the radius by taking the new intersection point as the sphere center until the next intersection point is positioned on the surface of the volume cloud.
In this embodiment, the step length used at each light step ensures that the drawn sphere is tangent to the volume cloud, so the ray cannot enter the interior of the volume cloud, and a usable step length is obtained at every step, which accelerates the stepping calculation.
In some alternative embodiments, to further enrich the details of the surface of the volume cloud, the surface represented by the distance field can be eroded using noise, and the erosion result used as the distance field sampling result during Sphere Marching, as exemplified below.
Optionally, when the rays corresponding to the pixel points are controlled to step along their line-of-sight directions according to the distance field information of the target scene, a 3D noise map of the target scene may be obtained and superimposed on the distance field information of the target scene to obtain the eroded distance field information. The rays corresponding to the pixel points are then stepped along their line-of-sight directions according to the eroded distance field information until each reaches the surface of the volume cloud.
That is, the step length of each ray is obtained from the eroded distance field information of the target scene. When the distance field information is represented with 3D maps, the eroded distance field map = the distance field map + a set coefficient × the 3D noise map. The coefficient may be set as required, which is not limited in this embodiment. By designing the 3D noise map, detail information of the volume cloud can be shaped, so that a more realistic volume cloud is rendered.
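In code terms, this amounts to wrapping the distance lookup used during Sphere Marching (a sketch; noise_fn and the coefficient are hypothetical stand-ins for the 3D noise map and the set coefficient).

    def eroded_sample(sample_fn, noise_fn, coeff=0.1):
        # eroded distance field = distance field + set coefficient * noise
        return lambda p: sample_fn(p) + coeff * noise_fn(p)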
By the distance field and ray stepping method provided above, the distance between the virtual camera and the surface of the volume cloud along the line-of-sight direction of each ray can be calculated. The rays can then be spatially transformed to obtain the shape of the volume cloud in the three-dimensional space (i.e., world space) of the target scene.
Optionally, the depth values from the pixel points on the screen to the surface of the volume cloud may be calculated from the lengths of the rays and the angles of their respective line-of-sight directions, and the shape of the volume cloud in the three-dimensional space determined from those depth values.
Take ray L0 in fig. 4 as an example. When L0 steps to point E, it reaches the surface of the volume cloud, and its length is D(L0) = OE, where O is the viewpoint of the virtual camera. If the angle between the line-of-sight direction of L0 and the vertical direction is α, the depth from pixel point P to the surface of the volume cloud is D(PE) = D(L0)·sin α.
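As a worked check of this example (a sketch; the function name is illustrative):

    import math

    def pixel_depth(ray_length, alpha):
        # Depth from pixel P to the cloud surface: D(PE) = D(L0) * sin(alpha),
        # where alpha is the angle between the ray's line-of-sight direction
        # and the vertical.
        return ray_length * math.sin(alpha)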
Based on the method, the depth information of each pixel point reaching the surface of the volume cloud can be calculated, so that the shape of the volume cloud in the three-dimensional space is determined, and the volume cloud is rendered and displayed according to the shape of the volume cloud in the three-dimensional space.
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subjects of step 201 to step 203 may be device a; for another example, the execution subject of steps 201 and 202 may be device a, and the execution subject of step 203 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 201, 202, etc., are merely used for distinguishing different operations, and the sequence numbers do not represent any execution order per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
Fig. 5 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application, where the electronic device may be used to execute the data storage method of the volume cloud described in the foregoing embodiments. As shown in fig. 5, the electronic apparatus includes: a memory 501 and a processor 502.
The memory 501 is used for storing computer programs and may be configured to store other various data to support operations on the electronic device. Examples of such data include instructions for any application or method operating on the electronic device, contact data, phonebook data, messages, pictures, first resources, and so forth.
The memory 501 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
A processor 502, coupled to the memory 501, is configured to execute the computer programs in the memory 501 to: obtain distance field information of a target scene, where the distance field information includes the minimum distance from a point in the target scene to the surface of a volume cloud in the target scene; store the distance field information of each point near the surface of the volume cloud in storage units of a first 3D map; and compress the distance field information of the target scene by a specified multiple and store it in a second 3D map, where each map cell of the second 3D map stores compressed distance field information shared by a plurality of points, and each map cell corresponding to a point near the surface of the volume cloud stores the index of the matching storage unit in the first 3D map.
Further optionally, when compressing the distance field information of the target scene by a specified multiple and storing it in the second 3D map, the processor 502 is specifically configured to: compress every N×N×N points in the target scene into one map cell of the second 3D map; for any map cell, blur the distance field information of the N×N×N points corresponding to that map cell to obtain distance field information shared by the N×N×N points; and store the shared distance field information in a first channel of the map cell.
Further optionally, the processor 502 is further configured to: if the N×N×N points are located near the surface of the volume cloud, store the index of the storage unit corresponding to these N×N×N points in the first 3D map in a second channel and a third channel of the map cell; and store the distance field information of each of the N×N×N points in the N×N×N voxels of the corresponding storage unit in the first 3D map.
Further optionally, when saving the distance field information of each point near the surface of the volume cloud in storage units of the first 3D map, the processor 502 is specifically configured to: obtain a 3D noise map of the points near the surface of the volume cloud; and superimpose the first 3D map and the 3D noise map to obtain an eroded first 3D map.
Further optionally, the processor 502 is further configured to: responding to an instruction of rendering the volume cloud in the target scene, and respectively transmitting a ray from the position of a virtual camera in the target scene to a plurality of pixel points on a screen; controlling a plurality of rays corresponding to the plurality of pixel points to step along a sight line direction respectively, and determining distance field information of step points reached by the plurality of rays respectively according to the first 3D map and/or the second 3D map as respective step distances of the plurality of rays when stepping each time until the plurality of rays reach the surface of the volume cloud respectively; determining the shape of the volume cloud in the three-dimensional space where the target scene is located according to the lengths of the rays; rendering the volume cloud according to a shape of the volume cloud in the three-dimensional space.
Further optionally, when determining the distance field information of the step points reached by the plurality of rays from the first 3D map and/or the second 3D map at each step, the processor 502 is specifically configured to: for any of the rays, when the ray reaches a step point, determining the target map cell corresponding to the step point from the second 3D map; if the index in the target map cell does not point to a storage cell in the first 3D map, taking the distance field information stored in the target map cell as the distance field information of the step point; and if the index in the target map cell points to a target storage cell in the first 3D map, determining the target storage cell from the first 3D map according to the index, and determining the distance field information of the step point from the target storage cell according to the coordinates of the step point.
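Reusing the assumed structures from the earlier sketches, this two-level lookup could read as follows (coarseIndex and pointInBlock stand for the step point's coordinates already mapped into the two 3D maps, an assumed convention):

```cpp
// Look up the distance at a step point: take the coarse value unless the cell
// carries a valid index into the first 3D map, in which case read the
// full-precision distance for the specific point inside the block.
float distanceAtStepPoint(const DistanceFieldStore& store,
                          std::size_t coarseIndex, std::size_t pointInBlock) {
    const CoarseCell& cell = store.coarse[coarseIndex];
    if (!cell.nearSurface) {
        return cell.sharedDistance; // compressed value suffices far from the cloud
    }
    const uint32_t fineIndex =
        (static_cast<uint32_t>(cell.indexHi) << 16) | cell.indexLo;
    return store.fine[fineIndex].distances[pointInBlock];
}
```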
Further optionally, when controlling, according to the distance field information, the plurality of rays corresponding to the plurality of pixel points to step along their respective line-of-sight directions until the rays reach the surface of the volume cloud, the processor 502 is specifically configured to: for any of the rays, stepping the ray along its line-of-sight direction by the minimum distance from the point where the virtual camera is located to the surface of the volume cloud, thereby reaching a step point; judging whether the step point is located on the surface of the volume cloud according to the minimum distance from the step point to the surface of the volume cloud; and if the step point is located on the surface of the volume cloud, stopping the stepping operation of the ray and determining the distance from the virtual camera to the surface of the volume cloud in the ray's line-of-sight direction according to the distance between the virtual camera and the step point.
Further optionally, the processor 502 is further configured to: if the step point is not on the surface of the volume cloud, continuing to step the ray along its line-of-sight direction by the minimum distance from the step point to the surface of the volume cloud, until a new step point reached by the ray is located on the surface of the volume cloud.
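The stepping scheme of the two paragraphs above is, in effect, sphere tracing against a distance field; a self-contained sketch (the Vec3 type, the distanceField callback, and the epsilon threshold are assumptions, not the claimed implementation) might be:

```cpp
#include <functional>

struct Vec3 { float x, y, z; };

// March one ray: each step advances by the minimum distance to the cloud
// surface, so the ray can never step into the interior of the cloud, yet it
// crosses empty space in a few large steps. Returns the distance from the
// camera to the cloud surface along this ray, or -1 if the ray misses.
float marchRay(Vec3 origin, Vec3 dir, float maxLength,
               const std::function<float(Vec3)>& distanceField) {
    const float surfaceEpsilon = 1e-3f; // "close enough" to count as the surface
    float t = 0.0f;
    while (t < maxLength) {
        Vec3 p{origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t};
        float d = distanceField(p); // looked up from the first/second 3D map
        if (d < surfaceEpsilon) return t; // step point is on the cloud surface
        t += d; // safe step length: no surface point is closer than d
    }
    return -1.0f; // ray never reached the volume cloud
}
```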
Further optionally, when determining the shape of the volume cloud in the three-dimensional space where the target scene is located according to the lengths of the rays, the processor 502 is specifically configured to: calculating the depth value from each of the plurality of pixel points on the screen to the surface of the volume cloud according to the length of the corresponding ray and the angle between that ray's line-of-sight direction and the forward direction of the virtual camera; and determining the shape of the volume cloud in the three-dimensional space according to the depth values from the plurality of pixel points on the screen to the surface of the volume cloud.
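Put concretely, each pixel's depth is the projection of its ray length onto the camera's forward axis, i.e. depth = rayLength * cos(theta); as a one-function sketch (the angle convention is an assumption):

```cpp
#include <cmath>

// Depth of the cloud surface for one pixel: the ray length projected onto the
// camera's forward direction, where angleRadians is the angle between this
// pixel's view ray and the camera forward axis.
float depthFromRayLength(float rayLength, float angleRadians) {
    return rayLength * std::cos(angleRadians);
}
```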
Further, as shown in fig. 5, the electronic device further includes: a display component 503, a communication component 504, a power component 505, an audio component 506, and other components. Fig. 5 schematically shows only some of these components, which does not mean that the electronic device includes only the components shown in fig. 5.
The display component 503 includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP), among others. If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The communication component 504 is configured to facilitate wired or wireless communication between the device in which it resides and other devices. The device in which the communication component is located may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The power component 505 provides power to the various components of the device in which it resides. The power component may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
The audio component 506 may be configured to output and/or input audio signals. For example, the audio component includes a microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may further be stored in the memory or transmitted via the communication component. In some embodiments, the audio component further comprises a speaker for outputting audio signals.
In this embodiment, after distance field information of a target scene including a volume cloud is obtained, the distance field information of each point near the surface of the volume cloud in the target scene is stored in a storage unit of a first 3D map, and the distance field information of the target scene is compressed by a specified multiple and stored in a second 3D map; in the second 3D map, each map cell stores compressed distance field information common to a plurality of points, and an index of a storage cell in the first 3D map is stored in a map cell corresponding to a point near the surface of the volume cloud. Therefore, the distance field information of the points which are not near the surface of the volume cloud in the target scene can be compressed and stored, and the distance field information of the main interest points near the surface of the volume cloud can be stored with high precision, so that the number of the points which need to be stored with high precision is greatly reduced, and the consumption of storage resources is reduced.
Meanwhile, when the volume cloud is rendered in the target scene, ray stepping is performed using the distance field information of the target scene, and a suitable step length can be determined quickly; on the one hand this prevents rays from stepping into the interior of the volume cloud, and on the other hand it speeds up the stepping of rays to the surface of the volume cloud. Consumption of computing resources can therefore be reduced and rendering performance improved, so that massive volume clouds in large scenes can run smoothly on terminal devices.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program which, when executed, implements the steps executable by the electronic device in the foregoing method embodiments.
Accordingly, the present application also provides a computer program product comprising a computer program/instructions which, when executed by a processor, cause the processor to implement the steps executable by the electronic device in the foregoing method embodiments.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (12)

1. A data storage method of a volume cloud is characterized by comprising the following steps:
obtaining distance field information for a target scene; the distance field information includes a minimum distance of a point in the target scene to a surface of a volumetric cloud in the target scene;
storing distance field information of respective points in the target scene near the surface of the volumetric cloud in a storage unit of a first 3D map;
compressing the distance field information of the target scene by a specified multiple and storing the compressed distance field information in a second 3D map;
in the second 3D map, each map cell stores compressed distance field information common to a plurality of points, and an index of a storage cell in the first 3D map is stored in a map cell corresponding to a point near the surface of the volume cloud.
2. The method of claim 1, wherein compressing the distance field information of the target scene by a specified multiple and storing the compressed information in a second 3D map comprises:
compressing every N×N×N points in the target scene into one map cell in the second 3D map;
for any map cell, performing blur processing on the distance field information of each of the N×N×N points corresponding to the map cell to obtain distance field information shared by the N×N×N points;
storing the distance field information common to the N×N×N points in a first channel of the map cell.
3. The method of claim 2, further comprising:
if the N×N×N points are located near the surface of the volume cloud, storing the index of the storage cell corresponding to the N×N×N points in the first 3D map in a second channel and a third channel of the map cell;
and storing the distance field information of each of the N×N×N points in the corresponding storage cell in the first 3D map using N×N×N voxels.
4. The method of claim 3, wherein storing the distance field information of each of the points in the target scene that are near the surface of the volumetric cloud in a storage unit of the first 3D map comprises:
obtaining a 3D noise map for the points located near the surface of the volumetric cloud;
and superimposing the first 3D map and the 3D noise map to obtain an eroded first 3D map.
5. The method of claim 1, further comprising:
in response to an instruction to render the volume cloud in the target scene, emitting a ray from the position of a virtual camera in the target scene toward each of a plurality of pixel points on a screen;
controlling the plurality of rays corresponding to the plurality of pixel points to step along their respective line-of-sight directions, and at each step determining, from the first 3D map and/or the second 3D map, the distance field information of the step point reached by each ray and using it as that ray's step distance, until each of the rays reaches the surface of the volume cloud;
determining the shape of the volume cloud in the three-dimensional space where the target scene is located according to the lengths of the rays;
rendering the volume cloud according to a shape of the volume cloud in the three-dimensional space.
6. The method of claim 5, wherein determining distance field information for a step point reached by each of the plurality of rays from the first 3D map and/or the second 3D map at each step comprises:
for any of the plurality of rays, when the ray reaches a step point, determining the target map cell corresponding to the step point from the second 3D map;
if the index in the target map cell does not point to a storage cell in the first 3D map, then taking the distance field information stored in the target map cell as the distance field information for the step point;
if the index in the target map cell points to a target storage cell in the first 3D map, determining the target storage cell from the first 3D map according to the index;
distance field information for the step point is determined from the target storage cell based on the coordinates of the step point.
7. The method of claim 5, wherein controlling the plurality of rays corresponding to the plurality of pixel points to step along a line of sight direction, respectively, based on the distance field information until the plurality of rays reach a surface of the volumetric cloud, respectively, comprises:
for any of the rays, stepping the ray along its corresponding line-of-sight direction by the minimum distance from the point where the virtual camera is located to the surface of the volume cloud, thereby reaching a step point;
judging whether the step point is located on the surface of the volume cloud according to the minimum distance from the step point to the surface of the volume cloud;
and if the step point is located on the surface of the volume cloud, stopping the stepping operation of the ray, and determining the distance from the virtual camera to the surface of the volume cloud in the line-of-sight direction corresponding to the ray according to the distance between the virtual camera and the step point.
8. The method of claim 7, further comprising:
and if the step point is not on the surface of the volume cloud, continuing to step the ray along its corresponding line-of-sight direction by the minimum distance from the step point to the surface of the volume cloud, until a new step point reached by the ray is located on the surface of the volume cloud.
9. The method of any one of claims 5-8, wherein determining the shape of the volumetric cloud in the three-dimensional space in which the target scene is located based on the lengths of the plurality of rays comprises:
calculating depth values from the plurality of pixel points on the screen to the surface of the volume cloud according to the lengths of the rays and the angles between their respective line-of-sight directions and the forward direction of the virtual camera;
determining the shape of the volume cloud in the three-dimensional space according to the depth values from the plurality of pixel points on the screen to the surface of the volume cloud.
10. An electronic device, comprising: a memory, a central processing unit, and a graphics processor;
the memory is configured to store one or more computer instructions;
the central processing unit is configured to execute the one or more computer instructions to: invoke the graphics processor to perform the steps in the method of any one of claims 1-9.
11. A computer-readable storage medium storing a computer program which, when executed, performs the steps of the method of any one of claims 1-9.
12. A computer program product comprising a computer program/instructions which, when executed by a processor, cause the processor to carry out the steps of the method of any one of claims 1-9.
CN202111162609.5A 2021-09-30 2021-09-30 Volume cloud data storage method and device and storage medium Pending CN114332347A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111162609.5A CN114332347A (en) 2021-09-30 2021-09-30 Volume cloud data storage method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111162609.5A CN114332347A (en) 2021-09-30 2021-09-30 Volume cloud data storage method and device and storage medium

Publications (1)

Publication Number Publication Date
CN114332347A true CN114332347A (en) 2022-04-12

Family

ID=81044760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111162609.5A Pending CN114332347A (en) 2021-09-30 2021-09-30 Volume cloud data storage method and device and storage medium

Country Status (1)

Country Link
CN (1) CN114332347A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination