WO2023088059A1 - Storage method, apparatus, device and storage medium for three-dimensional model visibility data - Google Patents


Info

Publication number
WO2023088059A1
WO2023088059A1 (PCT/CN2022/127985; CN2022127985W)
Authority
WO
WIPO (PCT)
Prior art keywords
visibility data
target
value
sampling point
data
Prior art date
Application number
PCT/CN2022/127985
Other languages
English (en)
French (fr)
Inventor
夏飞 (Xia Fei)
Original Assignee
Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Publication of WO2023088059A1 publication Critical patent/WO2023088059A1/zh
Priority to US18/207,577 priority Critical patent/US20230326129A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/10 - Geometric effects
    • G06T 15/40 - Hidden part removal
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 - General purpose image data processing
    • G06T 1/0007 - Image acquisition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/10 - Geometric effects
    • G06T 15/20 - Perspective computation
    • G06T 15/205 - Image-based rendering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4023 - Scaling of whole images or parts thereof based on decimating pixels or lines of pixels, or based on inserting pixels or lines of pixels
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/60 - Type of objects
    • G06V 20/64 - Three-dimensional objects

Definitions

  • The embodiments of the present application relate to the technical fields of computers and the Internet, and in particular to a method, apparatus, device, and storage medium for storing 3D model visibility data.
  • When rendering a 3D model, a computer device needs to calculate the visibility data of each point on the 3D model and store it for use during the model rendering process.
  • When calculating the visibility data of any point on the 3D model, the computer device emits multiple rays (for example, 720) with the point as the center of a sphere, and obtains intersection data for each ray according to whether the ray intersects with an environment object and the intersection distance; the visibility data of the point includes the intersection data of each ray emitted from it.
  • Because the visibility data of even a single point on the 3D model includes the intersection data of a large number of rays, the total data volume of the 3D model's visibility data is excessive, which hinders storage and calculation and seriously reduces the rendering efficiency of the 3D model.
  • Embodiments of the present application provide a method, apparatus, device, and storage medium for storing 3D model visibility data. The technical solution is as follows:
  • a method for storing 3D model visibility data is provided, the method is executed by a computer device, and the method includes:
  • determining the visibility data of each vertex of the three-dimensional model with the goal of converging the value of a first error function; wherein the visibility data of each vertex is used for interpolation to obtain restored values of the visibility data of each of the sampling points;
  • the first error function is used to measure the degree of difference between the restored values and the original values of the visibility data of the sampling points, and the rate of change of the restored values; the number of vertices is less than the number of sampling points;
  • Visibility data of each vertex of the three-dimensional model is stored.
  • a storage device for visibility data of a 3D model includes:
  • a data acquisition module configured to acquire original values of visibility data of multiple sampling points of the three-dimensional model; wherein the visibility data is used to represent the visibility of the sampling points, and the sampling points are at the pixel level;
  • a data determination module configured to determine the visibility data of each vertex of the three-dimensional model with the goal of converging the value of the first error function; wherein the visibility data of each vertex is used for interpolation to obtain restored values of the visibility data of each of the sampling points; the first error function is used to measure the degree of difference between the restored values and the original values of the visibility data of the sampling points, and the rate of change of the restored values; the number of vertices is less than the number of sampling points;
  • a data storage module configured to store the visibility data of each vertex of the 3D model.
  • A computer device is provided; the computer device includes a processor and a memory, a computer program is stored in the memory, and the computer program is loaded and executed by the processor to implement the above-mentioned method for storing the visibility data of the 3D model.
  • a computer-readable storage medium is provided, and a computer program is stored in the storage medium, and the computer program is loaded and executed by a processor to implement the above method for storing visibility data of a three-dimensional model.
  • A computer program product includes a computer program stored in a computer-readable storage medium; a processor reads the computer program from the computer-readable storage medium and executes it to implement the above-mentioned method for storing 3D model visibility data.
  • In the technical solution provided by the embodiments of the present application, the restored values of the visibility data of the sampling points, obtained by interpolating the visibility data of the vertices, differ minimally from the original values; that is, when the first error function converges, the final result of the visibility data of each vertex of the 3D model is obtained. Only the visibility data of the vertices is stored, rather than the visibility data of a large number of sampling points, which substantially reduces the storage space required, relieves the storage pressure of the 3D model's visibility data, and improves the rendering efficiency of the 3D model.
  • In addition, when the first error function is designed, on the one hand it measures the difference between the restored values and the original values of the sampling points' visibility data; its convergence makes this difference as small as possible, so that the rendered model surface deviates minimally from the original model surface, ensuring the accuracy of 3D model rendering. On the other hand, it measures the rate of change of the restored values; its convergence makes the restored visibility data continuous, thereby enhancing the visual effect of the model surface rendered using the visibility data.
  • Fig. 1 is a schematic diagram of a scheme implementation environment provided by an embodiment of the present application.
  • Fig. 2 is a flowchart of a method for storing 3D model visibility data provided by an embodiment of the present application.
  • Fig. 3 is a schematic diagram of a rendered model provided by an embodiment of the present application.
  • Fig. 4 is a schematic diagram of an interpolation function provided by an embodiment of the present application.
  • Fig. 5 is a schematic diagram of an interpolation function provided by another embodiment of the present application.
  • Fig. 6 is a flowchart of a method for storing 3D model visibility data provided by another embodiment of the present application.
  • Fig. 7 is a schematic diagram of sampling point visibility data provided by an embodiment of the present application.
  • Fig. 8 is a schematic diagram of sampling point visibility data provided by another embodiment of the present application.
  • Fig. 9 is a schematic diagram of the direction of the central axis provided by an embodiment of the present application.
  • Fig. 10 is a schematic diagram of a target cone provided by an embodiment of the present application.
  • Fig. 11 is a schematic diagram of a projected target cone provided by an embodiment of the present application.
  • Fig. 12 is a schematic diagram of the coordinate system of the optimal direction provided by an embodiment of the present application.
  • Fig. 13 is a schematic diagram of target cones with different opening angles provided by an embodiment of the present application.
  • Fig. 14 is a block diagram of a storage device for 3D model visibility data provided by an embodiment of the present application.
  • Fig. 15 is a block diagram of a storage device for 3D model visibility data provided by another embodiment of the present application.
  • Fig. 16 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • FIG. 1 shows a schematic diagram of a solution implementation environment provided by an embodiment of the present application.
  • the solution implementation environment may include: a terminal 10 and a server 20 .
  • The terminal 10 may be, for example, a mobile phone, a tablet computer, a PC (personal computer), a wearable device, a vehicle-mounted terminal device, a VR (virtual reality) device, an AR (augmented reality) device, a smart TV, etc.
  • a client running a target application program can be installed in the terminal 10 .
  • the target application program may be an application program that needs to use 3D model visibility data for model rendering, such as a game application program, a 3D map program, a social networking application program, an interactive entertainment application program, etc., which is not limited in this application.
  • The game application program may include, for example, shooting games, battle chess (tactics) games, and MOBA (Multiplayer Online Battle Arena) games, which are not limited in this application.
  • the client of the target application may render the 3D model based on the visibility data of the 3D model to make the 3D model more realistic.
  • the visibility data is obtained by sampling each pixel in the 3D model, for example, a computer device (such as the terminal 10 or server 20) samples each pixel in the 3D model online or offline to obtain the 3D model visibility data.
  • The visibility data is used to represent the visibility of the entire 3D model; visibility describes the brightness and darkness of each part of the model in a scene (such as a virtual scene), so that the 3D model in the scene better obeys optical laws, thereby enhancing its sense of reality in the image and improving the user's visual experience.
  • the server 20 may be an independent physical server, a server cluster or a distributed system composed of multiple physical servers, or a cloud server providing cloud computing services.
  • the server 20 may be the background server of the above-mentioned target application, and is used to provide background services for the client of the target application.
  • the visibility data of the above-mentioned 3D model may be stored in the server 20, and the server 20 provides the corresponding visibility data to the terminal 10 when the terminal 10 needs to render the 3D model.
  • the server 20 can be used to sample the 3D model, obtain and store the visibility data of the 3D model.
  • the server 20 receives and stores the visibility data from the terminal 10, which is not limited in this application.
  • the visibility data of the above three-dimensional model may also be stored in the terminal 10 .
  • Communication between the terminal 10 and the server 20 may be performed through a network, for example, the network may be a wired network or a wireless network.
  • The above-mentioned target application program can provide a scene, which is a three-dimensional scene containing a three-dimensional model; the three-dimensional model can be a three-dimensional character model, a three-dimensional pet model, a three-dimensional vehicle model, etc., and can be dynamic, that is, able to animate in the scene, such as moving or performing various other actions.
  • the 3D model can also be static, for example, the 3D model can be a 3D building model, a 3D plant model, and the like.
  • the above-mentioned scene may be called a virtual scene
  • the above-mentioned three-dimensional model may be a virtual character, a virtual building, a virtual pet, etc. in the game.
  • A virtual scene is a scene displayed (or provided) when a client of a target application (such as a game application) runs on a terminal; it refers to a scene created for virtual characters to perform activities (such as a game competition), for example virtual houses, virtual islands, virtual maps, etc.
  • the virtual scene can be a simulation scene of the real world, a half-simulation and half-fictional scene, or a purely fictitious scene.
  • the virtual scene may be a three-dimensional virtual scene.
  • different virtual scenes may be displayed (or provided) in different time periods.
  • the three-dimensional model can be any model in the virtual scene, for example, the three-dimensional model can be a character model, an animal model, a building model, a landform model, and the like.
  • The above-mentioned client can adopt the technical solution provided by the embodiments of this application to acquire and store the visibility data of the 3D model, and when rendering the 3D model, the client can obtain the visibility data of the 3D model from the stored data and use it.
  • FIG. 2 shows a flowchart of a method for storing 3D model visibility data provided by an embodiment of the present application.
  • the execution subject of each step of the method may be a computer device, for example, the computer device may be the terminal 10 or the server 20 in the solution implementation environment shown in FIG. 1 .
  • the method may include at least one of the following steps (210-230):
  • Step 210 acquiring original values of visibility data of multiple sampling points of the 3D model; wherein, the visibility data is used to represent the visibility of the sampling points, and the sampling points are at the pixel level.
  • sampling points are points on the surface of the three-dimensional model. Multiple sampling points can be obtained by sampling the points on the surface of the three-dimensional model, and then the visibility data of the multiple sampling points can be obtained by sampling.
  • the sampling point is a pixel-level sampling point, that is, all pixel points on the model can be sampled, and the sampling point can be any point on the surface of the three-dimensional model.
  • The embodiment of the present application does not limit the size of a pixel (also referred to as a voxel in a three-dimensional image); for example, it may be 0.1-1 mm.
  • the visibility data directly obtained by sampling the sampling points is the original value
  • the original value refers to the real visibility data of the sampling points on the surface of the three-dimensional model.
  • the three-dimensional model is the same as that described in the above-mentioned embodiments, and will not be repeated here.
  • The visibility data is used to indicate the visibility of the sampling point, that is, whether the sampling point can be observed from each direction; after recording and integrating the data for these directions, the visibility data of the sampling point is obtained.
  • the visibility data may also be used to represent the occlusion information of the sampling point by objects around the sampling point, and the visibility data is spherically distributed relative to the sampling point.
  • Step 220, with convergence of the value of the first error function as the goal, determine the visibility data of each vertex of the 3D model; wherein the visibility data of each vertex is used for interpolation to obtain the restored value of the visibility data of each sampling point, and the first error function is used to measure the difference between the restored values and the original values of the visibility data of the sampling points, and the rate of change of the restored values.
  • the number of vertices is less than the number of sampling points.
  • the aforementioned vertices are points on the surface mesh of the 3D model, and the surface mesh of the 3D model is composed of multiple patches, and a polygon formed by multiple vertices is called a patch.
  • A patch is a data structure used for modeling the above-mentioned 3D model; it is the data structure obtained after meshing the surface of the 3D model. The shape of each patch is arbitrary and can be a triangle or another polygon; a patch in the shape of a triangle is called a triangular patch.
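As an illustration only (not part of the patent text), the patch and adjacency notions above can be sketched as a minimal triangle mesh; all names here are hypothetical:

```python
import numpy as np

# Hypothetical minimal triangle mesh: vertex positions plus triangular
# patches, each patch an index triple into the vertex array.
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],
])
patches = np.array([
    [0, 1, 2],   # first triangular patch
    [1, 3, 2],   # adjacent patch sharing the edge (1, 2)
])

def shared_edge(t, u):
    """Return the common edge of two patches, or None if they are not adjacent."""
    common = set(t) & set(u)
    if len(common) != 2:
        return None
    return tuple(sorted(int(v) for v in common))
```

Here `shared_edge(patches[0], patches[1])` returns `(1, 2)`, the edge the two patches share; this is the adjacency relation the second sub-function later relies on.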
  • The sampling point can be any point in the 3D model, and a vertex is a vertex of a patch obtained after the 3D model is meshed; therefore, there must be a corresponding sampling point at each vertex position, and the visibility data of a vertex can be determined from the original value of the visibility data of the sampling point corresponding to that vertex position.
  • By interpolating the visibility data of the vertices, the restored value (i.e., the estimated value) of the visibility data of each sampling point in the patches formed by the vertices of the three-dimensional model is obtained; this restored value may or may not be the same as the original value of the visibility data.
  • The number of sampling points in the patches formed by the vertices of the 3D model is greater than the number of vertices of the 3D model, and those sampling points include the vertices of the model.
  • the first error function is used to measure the degree of difference between the restored value of the visibility data of the sampling point and the original value, and the rate of change of the restored value of the visibility data of the sampling point.
  • the degree of difference represents the magnitude relationship between the restored value and the original value of the visibility data of the sampling point.
  • the rate of change represents the continuity of the restored values of the visibility data of adjacent sampling points, and the lower the rate of change, the higher the continuity of the restored values of the visibility data of adjacent sampling points.
  • the final result of the visibility data of each vertex of the 3D model is determined with the goal of convergence of the value of the first error function.
  • the present application does not limit the way of determining the value of the first error function. The specific form of the first error function will be described in the following embodiments.
  • step 220 may include the following steps (221-222):
  • Step 221, constructing a first error function based on the restored values and the original values of the visibility data of the sampling points; wherein the value of the first error function is positively correlated with the degree of difference between the restored values and the original values of the visibility data of the sampling points, and is also positively correlated with the rate of change of the restored values of the visibility data at the sampling points.
  • The first error function can be recorded as E(x), which represents the difference between the restored values and the original values of the visibility data of the sampling points; its specific formula is as follows:
  • E(x) = ∫_S (f(p) − g(p))² dp + λ · Σ_{(t,u)} ‖∇g_t − ∇g_u‖²
  • where f(p) is the original value of the visibility data, g(p) is the restored value of the visibility data, p is any sampling point on the surface of the 3D model, S is the surface of the 3D model, λ is the weight parameter, and the sum runs over groups (t, u) of adjacent patches, with ∇g_t denoting the rate of change of the restored values on patch t.
  • step 221 may include the following steps (221a-221c):
  • Step 221a, based on the difference between the restored values and the original values of the visibility data of the sampling points, construct the first sub-function; wherein the value of the first sub-function is positively correlated with that difference.
  • the first sub-function represents the degree of difference between the restored value and the original value of the visibility data of each sampling point.
  • the first sub-function may be the sum of the difference between the restored value of the visibility data of the sampling point and the original value, or may be expressed as the sum of the squares of the difference between the restored value of the visibility data of the sampling point and the original value. This is not limited.
  • the first sub-function E1(x) is the sum of the squares of the difference between the restored value of the visibility data of the sampling point and the original value, and its specific formula is as follows:
  • E1(x) = ∫_S (f(p) − g(p))² dp.
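A discrete numerical stand-in for the first sub-function, shown only as an illustration: the integral over the surface is replaced by a sum of squared differences over sample points (the function name `e1` is an assumption, not from the patent):

```python
import numpy as np

# Discrete sketch of E1(x) = integral over S of (f(p) - g(p))^2 dp:
# sum the squared differences between the original values f and the
# restored values g at a finite set of sampling points.
def e1(original, restored):
    original = np.asarray(original, dtype=float)
    restored = np.asarray(restored, dtype=float)
    return float(np.sum((original - restored) ** 2))
```

When the restored values match the originals exactly, `e1` is zero, which is the minimum the optimization drives toward.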
  • the visibility data of each vertex of the three-dimensional model may be determined with the goal of convergence of the first sub-function.
  • For example, FIG. 3 shows the rendering effect of the 3D model 31 under visibility data processed only by the first sub-function. The rendering effect has defects that cannot be ignored: faults (visible discontinuities) appear on the surface of the 3D model. Therefore, the rate of change of the restored visibility values at the sampling points is also taken into consideration to solve the fault problem, which can be done through the following steps.
  • Step 221b, constructing a second sub-function based on the difference between the rates of change corresponding to at least one group of adjacent patches on the 3D model; wherein the rate of change corresponding to a target patch on the 3D model refers to the rate of change of the restored visibility values of the sampling points corresponding to that patch, and the value of the second sub-function is positively correlated with the rate of change of the restored visibility values of the sampling points.
  • the second sub-function is used to represent the degree of difference between the change rates corresponding to each group of adjacent facets on the three-dimensional model, and the adjacent facets may refer to two facets with a common edge.
  • the second sub-function can be the sum of the differences between the rates of change corresponding to each group of adjacent patches, or the sum of the squares of the absolute values of the differences between the rates of change corresponding to each group of adjacent patches , which is not limited in this application.
  • In some embodiments, the second sub-function E2(x) is the sum of the squares of the absolute values of the differences between the rates of change corresponding to each group of adjacent patches, and the specific formula is as follows:
  • E2(x) = Σ_{(t,u)} ‖∇g_t − ∇g_u‖²
  • where t and u represent any group of adjacent patches, and ∇g_t denotes the rate of change of the restored visibility values on patch t.
  • Two patches can be considered as a set of adjacent patches if they have a common edge.
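The adjacent-patch penalty above can be sketched numerically; for simplicity the rate of change per patch is a scalar here (the patent's rates are gradients), and `e2` and `patch_rates` are illustrative names:

```python
# Discrete sketch of the second sub-function: for each pair of adjacent
# patches (those sharing a common edge), accumulate the squared difference
# of their change rates.
def e2(patch_rates, adjacent_pairs):
    return sum((patch_rates[t] - patch_rates[u]) ** 2
               for t, u in adjacent_pairs)

# Three patches where patches 0-1 and 1-2 share edges: patches 0 and 1
# have equal rates (no penalty); patches 1 and 2 differ by 0.3.
rates = {0: 0.2, 1: 0.2, 2: 0.5}
value = e2(rates, [(0, 1), (1, 2)])
```

`value` is about 0.09 (0 from the first pair plus 0.3 squared from the second), so only discontinuous neighbors are penalized.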
  • Step 221c construct a first error function based on the first sub-function and the second sub-function.
  • Based on the first sub-function and the second sub-function, the first error function E(x) is determined, which can be expressed as follows:
  • E(x) = E1(x) + λ · E2(x)
  • where λ is the weight parameter.
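A hedged sketch of the combined error E(x) = E1(x) + λ·E2(x): the data-fit term plus a weighted smoothness term over adjacent patches. `lam` stands for the weight parameter λ; all other names are illustrative assumptions:

```python
import numpy as np

# Combined first error function: squared fit error at sample points plus
# a weighted penalty on rate-of-change differences across adjacent patches.
def first_error(original, restored, patch_rates, adjacent_pairs, lam=0.1):
    fit = float(np.sum((np.asarray(original, dtype=float)
                        - np.asarray(restored, dtype=float)) ** 2))
    smooth = sum((patch_rates[t] - patch_rates[u]) ** 2
                 for t, u in adjacent_pairs)
    return fit + lam * smooth
```

The weight λ trades off fidelity to the original values against continuity of the restored surface.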
  • Step 222 with the goal of minimizing the value of the first error function, determine the visibility data of each vertex of the 3D model.
  • When the value of the first error function is smallest, the difference between the restored value and the original value of the visibility data at each sampling point is smallest, and the restored values closest to the original values are obtained, ensuring the accuracy of the recovered data and improving the rendering accuracy of the 3D model. At the same time, the difference between the rates of change of each group of adjacent patches is smallest, so adjacent patches with rates of change as small as possible are obtained; the continuity of the visibility data between adjacent patches is then high, ensuring high overall continuity of the visibility data and thus the visual effect of the rendered 3D model.
  • the minimum value of the first error function can be used as the convergence target to determine the visibility data of each vertex of the 3D model.
  • The smaller the difference between the restored value and the original value of the visibility data of a sampling point, the more faithfully the restored data represents the visibility of that sampling point.
  • In a possible implementation, if the value of the first error function decreases during measurement, the visibility data of each vertex is updated and the value of the first error function is measured again, stopping once the value begins to increase. The value obtained in the last measurement is then the minimum of the first error function, and the visibility data of each vertex at that minimum is the final visibility data of each vertex. In another possible implementation, a minimum threshold is set for the value of the first error function; when the measured value is smaller than the threshold, measurement stops, and the visibility data of each vertex at that time is determined as the final visibility data of each vertex.
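The stopping rules described above (stop when the error starts increasing, or when it falls below a threshold) can be sketched as a toy descent loop. This is an illustration only: the finite-difference gradient, step size, and `minimize` name are all assumptions, not the patent's procedure:

```python
import numpy as np

def minimize(error_fn, x0, step=0.01, tol=1e-6, max_iters=1000):
    """Toy descent loop matching the stopping rules described above."""
    x = np.asarray(x0, dtype=float)
    prev = error_fn(x)
    for _ in range(max_iters):
        # Numerical gradient (forward differences), a placeholder for the
        # closed-form gradient of the quadratic error function.
        grad = np.zeros_like(x)
        h = 1e-6
        for i in range(x.size):
            xp = x.copy()
            xp[i] += h
            grad[i] = (error_fn(xp) - prev) / h
        x_new = x - step * grad
        cur = error_fn(x_new)
        if cur > prev:
            break            # error started increasing: keep previous iterate
        x, prev = x_new, cur
        if cur < tol:
            break            # below the error threshold: accept and stop
    return x

# Example: fit two vertex values to targets under a pure squared-error objective.
target = np.array([0.2, 0.8])
result = minimize(lambda v: float(np.sum((v - target) ** 2)), np.zeros(2))
```

For this convex toy objective the loop converges to the targets; a real implementation would use the closed-form solution of the quadratic form discussed later in the description.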
  • In other possible implementations, the maximum value of the first error function can also be used as the convergence target; in that case, the first error function measures the closeness between the restored values and the original values of the sampling points' visibility data, and the higher the closeness, the smaller the difference between them.
  • Step 230 storing visibility data of each vertex of the 3D model.
  • the final result of the visibility data of each vertex of the 3D model obtained in the above steps may be stored in the vertex data of the 3D model.
  • the vertex data of the three-dimensional model includes the vertex data corresponding to each vertex, and the vertex data corresponding to each vertex includes the visibility data of the vertex.
  • the vertex data corresponding to each vertex may also include position data of the vertex, color data of the vertex, etc., which are not limited in this application.
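The per-vertex storage layout described above can be sketched as a simple record; the field names and the tuple-based visibility encoding are illustrative assumptions, not the patent's data format:

```python
from dataclasses import dataclass

# Hypothetical per-vertex record: the visibility data is stored alongside
# the usual position/color attributes, so no per-pixel visibility table
# is needed.
@dataclass
class Vertex:
    position: tuple      # (x, y, z)
    color: tuple         # (r, g, b)
    visibility: tuple    # compact visibility parameters for this vertex

v = Vertex(position=(0.0, 1.0, 0.0),
           color=(255, 255, 255),
           visibility=(0.8, 0.1, 0.5))
```

Storing visibility per vertex rather than per pixel is the source of the space savings the passage describes.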
  • In summary, in the technical solution provided by the embodiments of the present application, the restored values of the visibility data of the sampling points, obtained by interpolating the visibility data of the vertices, differ minimally from the original values; that is, when the first error function converges, the final result of the visibility data of each vertex of the 3D model is obtained, and the visibility data of each vertex is stored instead of the visibility data of a large number of sampling points, substantially reducing the storage space required, relieving the storage pressure of the 3D model's visibility data, and improving the rendering efficiency of the 3D model.
  • In addition, when the first error function is designed, on the one hand it measures the difference between the restored values and the original values of the sampling points' visibility data; its convergence makes this difference as small as possible, so that the rendered model surface deviates minimally from the original model surface, ensuring the accuracy of 3D model rendering. On the other hand, it measures the rate of change of the restored values; its convergence makes the restored visibility data continuous, thereby enhancing the visual effect of the model surface rendered using the visibility data.
  • the first error function is only composed of the first sub-function.
  • The interpolation function can be used to obtain the restored values of the visibility data of all sampling points in the target patch formed by the target vertices.
  • the target vertex may refer to a vertex corresponding to the target patch
  • the target patch may refer to any patch corresponding to the 3D model.
  • An interpolation function may be defined by discrete points and a table or file of corresponding function values; it is a function that fills in the gaps between known data points through calculation.
  • The interpolation function can also be applied to other shapes; as shown in FIG. 5, the interpolation function is applied to a hexagon 51. The specific operation method is the same as that in FIG. 4 and will not be repeated here.
  • The restored value of the visibility data of any point p in the patch, obtained by the interpolation function, is:
  • g(p) = Σ_{i=1}^{N} x_i B_i(p)
  • where x_i is the visibility data of the sampling point corresponding to vertex i of the patch, B_i(p) is the expression of the interpolation function, and N is the total number of vertices of the patch.
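For a triangular patch, the weights B_i(p) can be taken as barycentric coordinates, giving a minimal sketch of g(p) = Σ x_i·B_i(p) (the function name `restore_value` is an assumption):

```python
import numpy as np

def restore_value(bary, vertex_values):
    """g(p) = sum over i of x_i * B_i(p): interpolate vertex visibility
    values at a point given its weights B_i(p), here the barycentric
    coordinates of p inside a triangular patch."""
    bary = np.asarray(bary, dtype=float)
    vertex_values = np.asarray(vertex_values, dtype=float)
    return float(bary @ vertex_values)
```

At a vertex the restored value equals that vertex's value (one weight is 1, the rest 0); at the centroid it is the average of the three vertex values.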
  • Substituting g(p) into the first sub-function and expanding gives a quadratic form in x:
  • E1(x) = Σ_i Σ_j a_{i,j} x_i x_j − 2 Σ_i b_i x_i + c
  • where the first term is the quadratic term in x, the second term is the first-order term in x, and the third term is a constant (f(p)² is known); j denotes a vertex of the patch different from vertex i; and
  • a_{i,j} = ∫_S B_i(p) B_j(p) dp
  • b_i = ∫_S B_i(p) f(p) dp
  • c = ∫_S f(p)² dp
  • Here (a_{i,j}) forms a matrix, (b_i) a vector, and c is a calculated constant value.
  • the above-mentioned matrix A is a sparse symmetric positive definite matrix
  • the vertex visibility data corresponding to x obtained at this time is the vertex visibility data to be stored in the 3D model.
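The patent does not name a solver; since the matrix A (with entries a_{i,j}) is sparse, symmetric and positive definite, the conjugate gradient method is a standard choice for systems of the form A·x = b. A generic sketch on a small dense stand-in system:

```python
# Generic conjugate gradient sketch for an SPD system A x = b. The matrix and
# vector below are a small stand-in, not the patent's actual a_{i,j} and b_i.

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]          # residual r = b - A x, with x initialized to zero
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]  # small SPD example
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```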
• the above method (the least squares method) is prone to overfitting, so the continuity between adjacent patches needs to be considered.
• the matrix R is a regularization term, and R does not affect the computed absolute value of x.
• the regularization term is introduced to counteract the above-mentioned overfitting, so that the values obtained by the interpolation function stay close to the original values while the function remains relatively "smooth"; that is, both the interpolation function at the target vertex and its gradient at that point are constrained.
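The effect of such a regularization term can be sketched with a toy objective: a data term pulling each vertex value toward its observation, plus a smoothness penalty over adjacent vertices. The formulation and the gradient-descent solver below are illustrative stand-ins, not the patent's exact matrices:

```python
# Toy stand-in for regularized fitting: a data term pulls each vertex value
# x_i toward its observation f_i, and a smoothness term lam * (x_i - x_j)^2
# over adjacent vertices counteracts overfitting, keeping the fit "smooth".

def fit_vertex_values(f, edges, lam, steps=3000, lr=0.02):
    x = f[:]  # start from the raw observations
    for _ in range(steps):
        grad = [2.0 * (x[i] - f[i]) for i in range(len(x))]  # data term
        for i, j in edges:                                   # smoothness term
            g = 2.0 * lam * (x[i] - x[j])
            grad[i] += g
            grad[j] -= g
        x = [x[i] - lr * grad[i] for i in range(len(x))]
    return x

f = [0.0, 1.0, 0.0]       # noisy per-vertex observations
edges = [(0, 1), (1, 2)]  # adjacency between vertices
rough = fit_vertex_values(f, edges, lam=0.0)    # no regularization
smooth = fit_vertex_values(f, edges, lam=10.0)  # strong regularization
```

With lam = 0 the fit reproduces the observations exactly (overfitting the noise); with a large lam the fitted values are pulled toward each other, i.e. smoothed.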
  • the first sub-function can be updated as:
• i, j and k respectively represent the three vertices of the triangular patch.
• the expression of the global gradient of the difference function can be obtained from the barycentric values of the interpolation function.
• u_i is a direction vector, so:
• only the elements related to x_i, x_j, x_k in the first matrix are nonzero, which can be simplified as:
  • FIG. 6 shows a flowchart of a method for storing 3D model visibility data provided by another embodiment of the present application.
  • the execution subject of each step of the method may be a computer device, for example, the computer device may be the terminal 10 or the server 20 in the solution implementation environment shown in FIG. 1 .
  • the method may include at least one of the following steps (610-680):
  • Step 610 for the target sampling point of the 3D model, acquire initial visibility data of the target sampling point.
  • the target sampling point may refer to any sampling point corresponding to the 3D model.
• the initial visibility data of the target sampling point includes intersection data in multiple directions with the target sampling point as the center of the sphere. For the target direction among the multiple directions, the intersection data in that direction indicates whether the ray emitted from the target sampling point along the target direction intersects an environment object; if it does, the intersection distance is further obtained.
• the intersection distance refers to the distance between the vertex (that is, the target sampling point) and the intersection point (that is, the point where the ray along the target direction intersects the environment object).
• the target sampling point selected in FIG. 7 is a point 72 under the armpit of the character model 71; in actual operation, the selected target sampling point need only be located on the surface of the character model 71. This point is used here only to clearly illustrate the specific form of the visibility data of a target sampling point.
  • a point 72 at the armpit of the character model 71 radiates a large number of rays in various directions, which are represented by thin and thick lines in the figure. Among them, a thin line indicates that the ray in this direction does not intersect any object, and a thick line indicates that the ray in this direction intersects other objects. The length of the thick line is the intersection distance.
• the visibility data consists of the data corresponding to the thin and thick lines: the thin-line data includes the direction and a flag indicating no intersection, and the thick-line data includes the direction, a flag indicating an intersection, and the intersection distance.
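The ray-casting step above can be sketched as follows; the scene (a single sphere obstacle), the direction-sampling scheme and all names are illustrative assumptions:

```python
import math

# Illustrative sketch of gathering initial visibility data: cast rays in many
# directions from the target sampling point and record, per direction, whether
# the ray hits an environment object and at what distance.

def ray_sphere_hit(origin, direction, center, radius):
    """Distance to the first hit along a unit-length ray, or None on a miss."""
    lx = origin[0] - center[0]
    ly = origin[1] - center[1]
    lz = origin[2] - center[2]
    b = 2.0 * (direction[0] * lx + direction[1] * ly + direction[2] * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0  # nearer root (origin outside the sphere)
    return t if t >= 0 else None

def sample_visibility(point, sphere_center, sphere_radius, n_dirs=64):
    """Record (direction, hit?, distance) triples around the sampling point."""
    data = []
    golden = math.pi * (1.0 + 5.0 ** 0.5)
    for k in range(n_dirs):
        z = 1.0 - 2.0 * (k + 0.5) / n_dirs     # roughly uniform on the sphere
        s = math.sqrt(max(0.0, 1.0 - z * z))
        d = (s * math.cos(golden * k), s * math.sin(golden * k), z)
        t = ray_sphere_hit(point, d, sphere_center, sphere_radius)
        data.append((d, t is not None, t))
    return data

vis = sample_visibility((0.0, 0.0, 0.0), (0.0, 0.0, 3.0), 1.0)
```

Directions whose rays hit the sphere play the role of the thick lines (intersection plus distance), and the rest play the role of the thin lines.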
  • Step 620 determining the target cone used for fitting the initial visibility data of the target sampling point.
• the sampling point 82 on the character model 81 is used to sample the visibility data. The sphere 83 in the right figure is obtained after sampling, that is, the sphere 84 in the left figure (used to represent the initial visibility data of the sampling point 82). The area 85 of the sphere 83 is black, representing invisibility, that is, the part where the sphere 84 overlaps the character model 81; therefore, only the non-overlapping area 86 needs to be considered. The area 86 and the center of the sphere 84 form a cone, and the visibility data of the sampling point 82 can be represented by this cone, which is the target cone corresponding to the sampling point 82.
  • the target cone is used to equivalently represent the initial visibility data of the sampling point.
• the initial visibility data is a spherical distribution. From the user's perspective, only part of the spherical surface can be observed at any one time, so the sphere can be equivalently represented by a cone.
• the target cone can be represented by only three quantities: the direction of its central axis, the opening angle, and the scaling value.
• the central axis direction indicates the opening direction of the target cone, that is, the direction of the area of the sampling point's visibility data that is not covered by scene objects
• the opening angle indicates the size of the target cone, that is, the extent of the sampling point's visibility
• the scaling value represents the brightness of the target cone, that is, the proportion of the visible area within the area of the sampling point's visibility data not covered by scene objects.
• the central axis direction of the target cone is represented by 2 floating-point numbers
• the opening angle of the target cone is represented by 1 floating-point number
• the scaling value of the target cone is represented by 1 floating-point number.
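A minimal sketch of this four-float representation, assuming the two direction floats are spherical angles (the patent does not fix the encoding):

```python
import math

# Minimal sketch of the four-float cone representation: two floats for the
# central-axis direction (assumed here to be spherical angles), one for the
# opening angle, one for the scaling value. Field names are illustrative.

class VisibilityCone:
    def __init__(self, theta, phi, opening_angle, scale):
        self.theta = theta                  # polar angle of the central axis
        self.phi = phi                      # azimuthal angle of the central axis
        self.opening_angle = opening_angle  # size of the visible region
        self.scale = scale                  # brightness of the visible region

    def axis(self):
        """Decode the two stored angles into a unit direction vector."""
        return (math.sin(self.theta) * math.cos(self.phi),
                math.sin(self.theta) * math.sin(self.phi),
                math.cos(self.theta))

cone = VisibilityCone(theta=0.7, phi=1.2,
                      opening_angle=math.radians(30.0), scale=0.9)
ax = cone.axis()
```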
• a floating-point number is a digital representation of a number drawn from a specific subset of the rational numbers, used in computers to approximate real numbers. Specifically, the real number is obtained by multiplying an integer or fixed-point number (the mantissa) by an integer power of some base (usually 2 in computers).
  • This representation method is similar to scientific notation with a base of 10.
• S: sign bit, with value 0 or 1, which determines the sign of the number; 0 means positive, 1 means negative;
• M: mantissa, represented as a decimal; for example, in 8.345*10^0, 8.345 is the mantissa;
• R: base; for decimal numbers R is 10, for binary numbers R is 2;
• E: exponent, represented as an integer; for example, in 10^-1, -1 is the exponent.
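The S/M/R/E decomposition can be demonstrated with Python's `math.frexp`, which splits a float into a mantissa and a base-2 exponent:

```python
import math

# math.frexp splits any finite float x into a mantissa m and base-2 exponent e
# with x == m * 2**e and 0.5 <= |m| < 1 (base R = 2 on a binary machine).

def decompose(x):
    m, e = math.frexp(x)
    sign = 0 if m >= 0 else 1   # S: 0 means positive, 1 means negative
    return sign, abs(m), 2, e   # (S, M, R, E)

s, m, r, e = decompose(-8.345)  # -8.345 == (-1)**s * m * r**e
```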
• four floating-point numbers are used to represent a cone in space, further reducing the amount of computation on the original values of the visibility data and thereby improving the efficiency of obtaining those original values.
• the ray 91 in FIG. 9 is the central axis direction of the target cone, and this direction can be obtained from two angles (such as the two angles marked in the figure), where the two angles correspond to the above two floating-point numbers.
• point 101 is the center of the sphere (representing the original value of the visibility data), and the figure 102, cut from the sphere at the corresponding opening angle around center 101, is the target cone; the target cone 102 is a cone whose apex is the sphere center 101 and whose base is a portion of the spherical surface.
• fitting the original value of the visibility data of the target sampling point to the target cone reduces the loss of the original value, ensuring the accuracy of the visibility data as much as possible; moreover, the amount of data corresponding to the cone is far smaller than that corresponding to the sphere, further reducing the amount of visibility data and thus the storage space it requires.
• the visible part is set to 1, such as the white area 103 in the figure, and the invisible part is set to 0, such as the lined part in the figure; the intersection distances are not shown in the diagram.
  • the intersection distance can be set by the lightness and darkness of the color.
• the lightness and darkness of the target cone are related to the intersection distance: the closer the intersection distance, the darker the corresponding position of the target cone; the farther the intersection distance, the brighter the corresponding position of the cone.
• the spacing of the lines is used to represent the lightness and darkness of the target cone; the smaller the spacing of the lines, the darker the target cone.
  • the area 111 is completely visible, the areas 112 and 114 are invisible and the intersection distance is large, and the areas 113 and 115 are invisible and the intersection distance is small, wherein each area represents the average value of the visibility information in the area.
• different lightness and darkness may also be set for each sampling direction within the target cone; this application does not limit the way the brightness of the target cone is set.
  • step 620 includes the following steps (621-624):
  • Step 621 Project the initial visibility data of the target sampling point into the spherical harmonic function space to obtain the projected visibility data of the target sampling point.
  • the initial visibility data of the target sampling point is projected into the spherical harmonic function space, and the projected visibility data of the target sampling point in the spherical harmonic function space is obtained.
  • the projection visibility data can be represented by 16 floating point numbers.
  • the spherical harmonic function space refers to the space corresponding to the spherical harmonic function.
  • the spherical harmonic function is used to map each point on the spherical surface to a complex function value.
• the projected visibility data is the set of complex function values corresponding to the initial visibility data. In this way, the amount of data required to represent the visibility data can be reduced, helping to reduce the storage space required for the visibility data and thus alleviating its storage pressure.
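A hedged sketch of such a projection follows. The patent projects onto 16 spherical harmonic coefficients (bands l = 0..3); for brevity this example projects onto the first 4 real SH basis functions (bands l = 0 and l = 1) using simple midpoint quadrature over the sphere:

```python
import math

# Sketch of projecting a spherical function into a (truncated) real spherical
# harmonic basis; only bands l = 0 and l = 1 are used here for brevity.

def sh_basis(d):
    x, y, z = d
    k1 = math.sqrt(3.0 / (4.0 * math.pi))
    return [0.5 * math.sqrt(1.0 / math.pi),  # Y_0^0
            k1 * y,                          # Y_1^-1
            k1 * z,                          # Y_1^0
            k1 * x]                          # Y_1^1

def project(f, n_theta=64, n_phi=128):
    """c_i = integral of f(d) * Y_i(d) over the unit sphere."""
    coeffs = [0.0] * 4
    dt = math.pi / n_theta
    dp = 2.0 * math.pi / n_phi
    for i in range(n_theta):
        theta = (i + 0.5) * dt
        for j in range(n_phi):
            phi = (j + 0.5) * dp
            d = (math.sin(theta) * math.cos(phi),
                 math.sin(theta) * math.sin(phi),
                 math.cos(theta))
            w = math.sin(theta) * dt * dp  # solid-angle element
            fv = f(d)
            for k, y in enumerate(sh_basis(d)):
                coeffs[k] += fv * y * w
    return coeffs

# A fully visible sphere (f = 1 everywhere) projects almost entirely onto Y_0^0.
coeffs = project(lambda d: 1.0)
```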
  • Step 622 Based on the projected visibility data of the target sampling point, determine the optimal visible direction corresponding to the target sampling point.
  • the optimal visible direction refers to the central axis direction of the visible area corresponding to the target sampling point determined in the spherical harmonic function space.
  • the visible area corresponding to the target sampling point is determined, and then the optimal visible direction of the target sampling point is determined.
  • the projected visibility data of the target sampling point projected into the spherical harmonic function space is shown in FIG. 12, wherein the sphere 120 is the projected visibility data obtained by the above-mentioned projection, and the coordinate axes with arrows 121, 122, and 123 represent the initial coordinate axes in the spherical harmonic function space, and the ray 124 is the central axis direction of the visible area corresponding to the target sampling point in the spherical harmonic function space, that is, the optimal visible direction corresponding to the target sampling point.
  • the rays 125 and 126 are the other two coordinate axes of the coordinate system determined by the right-hand rule according to the optimal visible direction corresponding to the target sampling point. Among them, rays 124, 125, and 126 form a new coordinate system.
• Step 623 determining the optimal visible direction as the central axis direction of the target cone.
• the target cone is projected into the new coordinate system obtained above, in which the central axis direction of the target cone is the optimal visible direction of the target sampling point.
• the expression of the target cone is converted to obtain its projection representation in the spherical harmonic function space; in this way, the difficulty of obtaining the parameters of the target cone is reduced, thereby improving the efficiency of determining those parameters.
• Step 624 with the convergence of the value of the second error function as the goal, determine the opening angle and scaling value of the target cone; the second error function is used to measure the degree of difference between the projection representation of the target cone in the spherical harmonic function space and the projected visibility data of the target sampling point.
• the opening angle of the target cone is obtained.
• the opening angle of the target cone can be determined by traversing the projection representations of the target cone at different opening angles in combination with the value of the second error function, and the scaling value of the target cone can then be determined based on the opening angle.
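The traversal of candidate opening angles can be sketched as a grid search; `second_error` below is a stand-in quadratic with a known minimum, not the real SH-space error function from the patent:

```python
import math

# Sketch of the opening-angle search: traverse candidate opening angles,
# evaluate the second error function for each (difference between the cone's
# projection and the target projected visibility data), and keep the best.

def best_opening_angle(second_error, candidates):
    best_angle, best_err = None, math.inf
    for angle in candidates:
        err = second_error(angle)
        if err < best_err:
            best_angle, best_err = angle, err
    return best_angle

candidates = [math.radians(a) for a in range(5, 90, 5)]
true_angle = math.radians(45.0)  # stand-in optimum for the example
angle = best_opening_angle(lambda a: (a - true_angle) ** 2, candidates)
```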
• FIG. 13 shows the projected target cones at four different opening angles: 131, 132, 133 and 134 are the projected target cones at opening angles of 15, 30, 45 and 60 degrees, respectively. The uppermost area of the target cone at each of the four opening angles is the visibility data of the target cone. Normally, the other areas should be completely invisible black areas; due to numerical fluctuations during projection, some white areas appear in the black area of the target cone, but these white areas do not affect the final rendering result.
  • the projected target cone is the projection representation of the target cone in the spherical harmonic function space.
• the 4-floating-point visibility data is obtained by reducing the 16-floating-point visibility data, and the cone visibility data represented by the 4 floating-point numbers represents the visibility well, so that the data volume of the visibility data is greatly reduced while its accuracy is preserved.
• Step 630 Determine the original value of the visibility data of the target sampling point based on the target cone.
• the visibility data of the target sampling point includes the central axis direction of the target cone, the opening angle, and the scaling value.
  • the scaling value is used to represent the brightness of the visible area.
  • the brightness of the visible area is used to represent the occlusion of the visible area.
  • the original value of the visibility data includes the original value of the central axis direction of the target cone, the original value of the opening angle and the original value of the scaling value.
• the visibility data of the target sampling point consists of 4 floating-point numbers: the central axis direction, the opening angle and the scaling value.
• the central axis direction is represented by two floating-point numbers
• the opening angle and the scaling value are each represented by one floating-point number.
  • Step 640 with the value convergence of the first error function as the goal, determine the visibility data of each vertex of the 3D model; wherein, the visibility data of each vertex is used for interpolation to obtain the recovery value of the visibility data of each sampling point; the first error function It is used to measure the difference between the restored value of the visibility data of the sampling point and the original value, and the change rate of the restored value of the visibility data of the sampling point.
  • the number of vertices is less than the number of sampling points.
  • Step 650 storing visibility data of each vertex of the 3D model.
  • Steps 640 and 650 are the same as those described in the above-mentioned embodiments.
• the visibility data of each sampling point can be deduced from the visibility data of each vertex, and the rendering of the 3D model can then be realized.
• the following takes the target patch on the 3D model as an example; this embodiment of the present application may also include steps 660-680.
  • Step 660 for the target patch on the 3D model, obtain the visibility data of each vertex of the target patch.
• the target patch may refer to any patch on the 3D model, and the target patch may be a polygon composed of vertices in the 3D model, such as a triangular patch composed of three vertices. The patches fit tightly together and evenly cover the surface of the 3D model, forming its surface mesh.
  • the visibility data of each vertex of the target patch can be obtained from the vertex data of the three-dimensional model.
  • Step 670 according to the visibility data of each vertex of the target patch, determine the visibility data of the barycenter point of the target patch.
• the position of the barycenter point of the target patch and the visibility data of the barycenter point are determined. For example, the weight of each vertex in the target patch is determined according to its distance from the barycenter point, and the visibility data of the vertices are weighted by these weights to obtain the visibility data of the barycenter point.
  • Step 680 According to the visibility data of each vertex of the target patch and the visibility data of the barycenter point of the target patch, interpolate to obtain the recovery value of the visibility data of each sampling point corresponding to the target patch.
• interpolation is performed to obtain the recovery values of the visibility data of the sampling points of the target patch.
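Steps 670-680 can be sketched as follows, with illustrative names: the barycenter value is a distance-weighted average of the vertex visibility values (inverse-distance weights are one plausible reading of the weighting described above), which can then take part in the interpolation over the patch:

```python
import math

# Sketch of computing a barycenter visibility value from vertex values using
# inverse-distance weights. Names and the weighting choice are illustrative.

def barycenter(tri):
    return tuple(sum(c) / 3.0 for c in zip(*tri))

def barycenter_visibility(tri, vertex_values):
    g = barycenter(tri)
    weights = [1.0 / math.dist(v, g) for v in tri]  # closer vertex, larger weight
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, vertex_values)) / total

tri = [(0.0, 0.0), (2.0, 0.0), (1.0, math.sqrt(3.0))]  # equilateral triangle
vals = [0.3, 0.6, 0.9]  # per-vertex visibility data
g_val = barycenter_visibility(tri, vals)
```

For an equilateral triangle all three vertex-to-barycenter distances are equal, so the barycenter value reduces to the plain average of the vertex values.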
  • the 3D model is rendered according to the restored value of the visibility data of each sampling point of the 3D model.
  • the embodiment of the present application obtains the visibility data of the sampling points of the target patch by interpolating the visibility data of the vertices of the target patch and the visibility data of the center of gravity, thereby ensuring the accuracy of the rendered 3D model.
• accurate visibility data can be obtained through a simple interpolation function, reducing the complexity of visibility data recovery; the visibility data of each sampling point can be recovered from the visibility data of a small number of vertices, which further reduces the computation required for recovery, thereby improving the recovery efficiency of the visibility data and, in turn, the rendering efficiency of the 3D model.
  • rendering the 3D model in combination with accurate visibility data can further improve the rendering effect of the 3D model.
  • the visibility data of each sampling point in the target patch is obtained through interpolation based on the target vertex and center of gravity in the target patch, which ensures the accuracy of the data.
• the embodiment of the present application first fits the visibility data of the sampling points into the spherical harmonic function space, and then reduces the data to the 4 floating-point numbers of the target cone; this reduction greatly decreases the data volume and storage pressure of the visibility data while ensuring its accuracy.
• S is the scaling value of the target cone, and the remaining two symbols are the opening angle and the central-axis direction vector, respectively
• c_i(·) are the corresponding values of the 16 floating-point numbers in the spherical harmonic function
• Y_i(·) is the basis function of the spherical harmonic functions.
  • S is the scaling value of the target cone
  • the brightness of the target cone is determined by the scaling value
• all positions of the target cone are set to the same brightness, that is, the same scaling value.
• c_i(·) can be calculated by an integral tool, and the result of c_i(·) is as follows:
• the scaling value of the cone can be calculated by the bisection method; at this point, the visibility data of the target vertex represented by the cone is obtained.
• the scaling value of the target cone can be obtained from the above formula, and the above visibility data is then represented by the opening angle of the target cone, the direction of its central axis and the scaling value.
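The bisection search for the scaling value can be sketched generically; `g` below is a stand-in monotonic function, not the cone's actual spherical-harmonic coefficient expression:

```python
# Sketch of determining the scaling value by the bisection method: find S such
# that g(S) matches a target value. In the patent, S is solved so that the
# cone's spherical-harmonic projection matches the projected visibility data.

def bisect_scale(g, target, lo=0.0, hi=1.0, tol=1e-9):
    # Assumes g is monotonically increasing on [lo, hi] and brackets target.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

scale = bisect_scale(lambda s: 2.0 * s, 1.3)  # solves 2*S = 1.3 on [0, 1]
```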
  • the method may include the following steps (1-6):
  • Model structure data includes model size, surface contour, bone contour and other data.
  • Model material data includes data such as model material and surface color.
  • the model structure of the 3D model is constructed.
• the material and surface color of the 3D model can be obtained in one rendering pass.
  • the model structure of the constructed 3D model is rendered to obtain the material and surface color of the 3D model.
  • the material of the three-dimensional model includes the density, surface roughness, and surface gloss of the three-dimensional model.
• the set motion mode is the motion mode designed in advance for the 3D model when the 3D model is designed. Specifically, the motion data of the bones of the 3D model is set according to the set motion mode, and the bones move according to this motion data; in this way, the 3D model is driven to move and performs the designed actions, yielding the motion mode of the 3D model.
  • the motion mode of the 3D model is the same as the set motion mode of the 3D model.
• the 3D model that has been rendered once and has bone motion data set is placed in the game scene, lights are added, and the visibility data of the 3D model is obtained and stored through the methods in the above embodiments.
  • the 3D model drives the bone motion according to the motion data of the bone, so that the 3D model moves in the game scene.
• the acquired visibility data of the 3D model changes with the motion of the 3D model, and the visibility data of all motion states is obtained and stored.
• the rendered 3D model is obtained through a second rendering pass, in which the surface of the 3D model shows lightness and darkness, and this shading changes as the 3D model moves.
• after the first rendering pass and after the skeletal motion data is set, the 3D model is placed in the game scene, and the visibility data of the 3D model is rendered a second time using the lights in the game scene to obtain a 3D model with shading.
• if the second rendering pass is not performed, then when the 3D model is placed in the game scene, lighting is not taken into account, the surface of the 3D model shows none of the differences in brightness brought by the light, and the 3D model cannot blend well into the game scene and looks out of place.
• the shading data of the 3D model is displayed, so that the 3D model blends well into the game environment; when the 3D model moves, the shading of its surface also changes with the movement, making the 3D model more realistic in the game environment.
  • FIG. 14 shows a block diagram of a storage device for 3D model visibility data provided by an embodiment of the present application.
  • the device has the function of realizing the above-mentioned method example, and the function may be realized by hardware, or may be realized by hardware executing corresponding software.
  • the device may be a computer device, or may be set in the computer device.
  • the apparatus 1400 may include: a data acquisition module 1410 , a data determination module 1420 and a data storage module 1430 .
  • the data acquisition module 1410 is configured to acquire original values of visibility data of multiple sampling points of the 3D model; wherein, the visibility data is used to represent the visibility of the sampling points, and the sampling points are at the pixel level.
  • the data determination module 1420 is configured to determine the visibility data of each vertex of the three-dimensional model with the goal of converging the value of the first error function; wherein, the visibility data of each vertex is used for interpolation to obtain the recovery value of the visibility data of each sampling point;
  • the first error function is used to measure the degree of difference between the restored value of the visibility data of the sampling point and the original value, and the rate of change of the restored value of the visibility data of the sampling point, and the number of vertices is less than the number of sampling points.
  • the data storage module 1430 is configured to store the visibility data of each vertex of the 3D model.
  • the data determination module 1420 includes: a function construction unit 1421 and a data determination unit 1422 .
  • the function construction unit 1421 is configured to construct a first error function based on the restored value and the original value of the visibility data of the sampling point; wherein, the value of the first error function is the difference between the restored value and the original value of the visibility data of the sampling point The degree of difference is positively correlated, and the value of the first error function is positively correlated with the rate of change of the restored value of the visibility data at the sampling point.
  • the data determination unit 1422 is configured to determine the visibility data of each vertex of the 3D model with the goal of minimizing the value of the first error function.
• the function construction unit 1421 is further configured to construct a first sub-function based on the difference between the restored value and the original value of the visibility data of the sampling point, where the value of the first sub-function is positively correlated with that difference; construct a second sub-function based on the difference between the change rates corresponding to at least one group of adjacent patches on the three-dimensional model, where the change rate corresponding to the target patch refers to the rate of change of the recovery values of the visibility data of the sampling points corresponding to the target patch, and the value of the second sub-function is positively correlated with the rate of change of the recovery values of the visibility data of the sampling points; and construct the first error function based on the first sub-function and the second sub-function.
• the data acquisition module 1410 includes: a data acquisition unit 1411, a cone fitting unit 1412 and a data determination unit 1413.
  • the data acquiring unit 1411 is configured to acquire the initial visibility data of the target sampling point for the target sampling point of the 3D model; wherein, the initial visibility data of the target sampling point includes: intersection data pointing to multiple directions with the target sampling point as the vertex.
  • the cone fitting unit 1412 is configured to determine a target cone for fitting the initial visibility data of the target sampling point.
• the data determination unit 1413 is configured to determine the original value of the visibility data of the target sampling point based on the target cone.
• the visibility data of the target sampling point includes the central axis direction of the target cone, the opening angle and the scaling value, and the scaling value is used to characterize the lightness and darkness of the visible area.
• the cone fitting unit 1412 is also used to project the initial visibility data of the target sampling point into the spherical harmonic function space to obtain the projected visibility data of the target sampling point; determine, based on the projected visibility data, the optimal visible direction corresponding to the target sampling point, where the optimal visible direction refers to the central axis direction of the visible area corresponding to the target sampling point as determined in the spherical harmonic function space; determine the optimal visible direction as the central axis direction of the target cone; and determine the opening angle and scaling value of the target cone with the convergence of the value of the second error function as the goal, where the second error function is used to measure the degree of difference between the projection representation of the target cone in the spherical harmonic function space and the projected visibility data of the target sampling point.
• the visibility data of the target sampling point is represented by 4 floating-point numbers: the central axis direction of the target cone is represented by 2 floating-point numbers, the opening angle of the target cone by 1 floating-point number, and the scaling value of the target cone by 1 floating-point number.
  • the device 1400 further includes a data usage module 1440 .
• the data usage module 1440 is used to, for the target patch on the 3D model, obtain the visibility data of each vertex of the target patch from the vertex data of the 3D model; determine the visibility data of the barycenter point of the target patch according to the visibility data of each vertex of the target patch; and interpolate, according to the visibility data of each vertex of the target patch and the visibility data of the barycenter point, to obtain the recovery values of the visibility data of the sampling points corresponding to the target patch.
• when the difference between the recovery values of the visibility data of the sampling points, obtained by interpolating the visibility data of the vertices, and the original values is smallest, that is, when the first error function converges, the final visibility data of each vertex of the 3D model is obtained; storing the visibility data of the vertices instead of the visibility data of a large number of sampling points greatly reduces the storage space required, alleviates the storage pressure of the visibility data of the 3D model, and improves the rendering efficiency of the 3D model.
• when the first error function is designed, on one hand it measures the difference between the restored value and the original value of the visibility data of the sampling points; its convergence makes this difference as small as possible, so that the rendered 3D model surface differs as little as possible from the original model surface, ensuring the accuracy of 3D model rendering. On the other hand, it measures the rate of change of the restored values of the visibility data of the sampling points; its convergence makes the restored values continuous, thereby enhancing the visual effect of the model surface rendered from the visibility data.
  • the division of the above functional modules is only an example for illustration. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
  • the device provided by the above embodiment belongs to the same concept as the method embodiment; its specific implementation process is detailed in the method embodiment and is not repeated here.
  • FIG. 16 shows a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • the computer device 1600 may be any electronic device with data computing, processing and storage capabilities, such as the terminal or server introduced above, and may be used to implement the method for storing 3D model visibility data provided in the above embodiments. Specifically:
  • the computer device 1600 includes a central processing unit 1601 (such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit) or an FPGA (Field Programmable Gate Array)), a system memory 1604 including a RAM (Random-Access Memory) 1602 and a ROM (Read-Only Memory) 1603, and a system bus 1605 connecting the system memory 1604 and the central processing unit 1601.
  • the computer device 1600 also includes a basic input/output system (I/O system) 1606 that helps transmit information between the various components in the server, and a mass storage device 1607 for storing an operating system 1613, application programs 1614 and other program modules 1615.
  • the basic input/output system 1606 includes a display 1608 for displaying information and input devices 1609, such as a mouse and a keyboard, for users to input information. Both the display 1608 and the input devices 1609 are connected to the central processing unit 1601 through the input/output controller 1610 connected to the system bus 1605.
  • the basic input/output system 1606 may also include the input/output controller 1610 for receiving and processing input from a number of other devices such as a keyboard, a mouse or an electronic stylus. Similarly, the input/output controller 1610 also provides output to a display screen, a printer or another type of output device.
  • the mass storage device 1607 is connected to the central processing unit 1601 through a mass storage controller (not shown) connected to the system bus 1605 .
  • the mass storage device 1607 and its associated computer-readable media provide non-volatile storage for the computer device 1600 . That is to say, the mass storage device 1607 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM (Compact Disc Read-Only Memory, CD-ROM) drive.
  • Computer readable media may comprise computer storage media and communication media.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media include RAM, ROM, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory or other solid-state storage technologies, CD-ROM, DVD (Digital Video Disc) or other optical storage, tape cartridges, magnetic tape, magnetic disk storage or other magnetic storage devices.
  • the computer device 1600 may also operate through a remote computer connected to a network such as the Internet. That is, the computer device 1600 may be connected to the network 1612 through the network interface unit 1611 connected to the system bus 1605, or the network interface unit 1611 may be used to connect to other types of networks or remote computer systems (not shown).
  • the memory also includes a computer program, which is stored in the memory and configured to be executed by one or more processors, so as to implement the above method for storing visibility data of the 3D model.
  • a computer-readable storage medium in which a computer program is stored, and when the computer program is executed by a processor of a computer device, the above method for storing visibility data of a three-dimensional model is realized.
  • the computer-readable storage medium may include: ROM (Read-Only Memory, read-only memory), RAM (Random-Access Memory, random access memory), SSD (Solid State Drives, solid state drive) or an optical disc, etc.
  • the random access memory may include ReRAM (Resistance Random Access Memory, resistive random access memory) and DRAM (Dynamic Random Access Memory, dynamic random access memory).
  • a computer program product includes a computer program, and the computer program is stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the computer device executes the above-mentioned method for storing the visibility data of the three-dimensional model.
  • the information (including but not limited to object device information, object personal information, etc.), data (including but not limited to data used for analysis, stored data, displayed data, etc.) and signals involved in this application are all authorized by the subjects or fully authorized by all parties, and the collection, use and processing of the relevant data comply with the relevant laws, regulations and standards of the relevant countries and regions.
  • the three-dimensional models involved in this application are all obtained with full authorization.
  • the "plurality” mentioned herein refers to two or more than two.
  • "And/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, A and B exist simultaneously, or B exists alone.
  • the character "/” generally indicates that the contextual objects are an "or” relationship.
  • the numbering of the steps described herein only exemplarily shows one possible execution order among the steps. In some other embodiments, the above steps may not be executed in the numbered order; for example, two steps with different numbers may be executed at the same time, or two steps with different numbers may be executed in an order opposite to that shown in the figures, which is not limited in the embodiments of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

A method for storing visibility data of a three-dimensional model, comprising: obtaining original values of visibility data of a plurality of sampling points of a three-dimensional model, wherein the visibility data is used to represent the visibility of the sampling points and the sampling points are at pixel level; determining visibility data of each vertex of the three-dimensional model with convergence of a value of a first error function as a goal, wherein the visibility data of the vertices is used to interpolate recovered values of the visibility data of the sampling points, the first error function is used to measure the degree of difference between the recovered values and the original values of the visibility data of the sampling points as well as the rate of change of the recovered values, and the number of the vertices is smaller than the number of the sampling points; and storing the visibility data of each vertex of the three-dimensional model. An apparatus, a device and a storage medium for storing visibility data of a three-dimensional model are also provided.

Description

Method, apparatus, device and storage medium for storing visibility data of a three-dimensional model
This application claims priority to Chinese patent application No. 202111374336.0, filed on November 19, 2021 and entitled "Method, apparatus, device and storage medium for storing visibility data of a three-dimensional model", and to Chinese patent application No. 202111624049.0, filed on December 28, 2021 and entitled "Method, apparatus, device and storage medium for storing visibility data of a three-dimensional model", the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of this application relate to the field of computer and Internet technologies, and in particular to a method, apparatus, device and storage medium for storing visibility data of a three-dimensional model.
Background
When rendering a three-dimensional model, a computer device needs to compute the visibility data of each point on the model and store it for use during rendering.
In the related art, when computing the visibility data of any point on a three-dimensional model, the computer device emits a number of rays (e.g. 720 rays) with that point as the sphere center, and obtains the intersection data of each ray according to whether the ray intersects an environmental object and the intersection distance; the visibility data of the point then consists of the intersection data of all the rays emitted from it.
However, in the above related art, since the visibility data of a single point on the model already contains the intersection data of a large number of rays, the amount of visibility data of the three-dimensional model is too large, which is unfavorable for storage and computation and severely affects the rendering efficiency of the three-dimensional model.
Summary
The embodiments of this application provide a method, apparatus, device and storage medium for storing visibility data of a three-dimensional model. The technical solutions are as follows:
According to one aspect of the embodiments of this application, a method for storing visibility data of a three-dimensional model is provided, the method being executed by a computer device and comprising:
obtaining original values of visibility data of a plurality of sampling points of a three-dimensional model; wherein the visibility data is used to represent the visibility of the sampling points, and the sampling points are at pixel level;
determining visibility data of each vertex of the three-dimensional model with convergence of a value of a first error function as a goal; wherein the visibility data of the vertices is used to interpolate recovered values of the visibility data of the sampling points; the first error function is used to measure the degree of difference between the recovered values and the original values of the visibility data of the sampling points, as well as the rate of change of the recovered values of the visibility data of the sampling points; and the number of the vertices is smaller than the number of the sampling points;
storing the visibility data of each vertex of the three-dimensional model.
According to one aspect of the embodiments of this application, an apparatus for storing visibility data of a three-dimensional model is provided, the apparatus comprising:
a data obtaining module, configured to obtain original values of visibility data of a plurality of sampling points of a three-dimensional model; wherein the visibility data is used to represent the visibility of the sampling points, and the sampling points are at pixel level;
a data determining module, configured to determine visibility data of each vertex of the three-dimensional model with convergence of a value of a first error function as a goal; wherein the visibility data of the vertices is used to interpolate recovered values of the visibility data of the sampling points; the first error function is used to measure the degree of difference between the recovered values and the original values of the visibility data of the sampling points, as well as the rate of change of the recovered values of the visibility data of the sampling points; and the number of the vertices is smaller than the number of the sampling points;
a data storage module, configured to store the visibility data of each vertex of the three-dimensional model.
According to one aspect of the embodiments of this application, a computer device is provided, the computer device comprising a processor and a memory, the memory storing a computer program that is loaded and executed by the processor to implement the above method for storing visibility data of a three-dimensional model.
According to one aspect of the embodiments of this application, a computer-readable storage medium is provided, the storage medium storing a computer program that is loaded and executed by a processor to implement the above method for storing visibility data of a three-dimensional model.
According to one aspect of the embodiments of this application, a computer program product is provided, the computer program product comprising a computer program stored in a computer-readable storage medium; a processor reads the computer program from the computer-readable storage medium and executes it to implement the above method for storing visibility data of a three-dimensional model.
The technical solutions provided by the embodiments of this application may bring the following beneficial effects:
The original values of the visibility data of the sampling points of the three-dimensional model are obtained, and the first error function is then used to minimize the difference between those original values and the recovered values interpolated from the per-vertex visibility data; that is, when the first error function converges, the final visibility data of each vertex of the three-dimensional model is obtained, and the per-vertex visibility data is stored instead of the visibility data of a large number of sampling points. This greatly reduces the space required for storing visibility data, relieves the storage pressure of the visibility data of the three-dimensional model, and improves its rendering efficiency. Meanwhile, the first error function is designed, on the one hand, to measure the degree of difference between the recovered and original values of the visibility data of the sampling points: making it converge keeps this difference as small as possible, so that the rendered model surface differs as little as possible from the original surface, ensuring rendering accuracy. On the other hand, it measures the rate of change of the recovered values of the visibility data: making it converge keeps the recovered values continuous, which enhances the visual effect of the model surface rendered from the visibility data.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an implementation environment of a solution provided by one embodiment of this application;
FIG. 2 is a flowchart of a method for storing visibility data of a three-dimensional model provided by one embodiment of this application;
FIG. 3 is a schematic diagram of a rendered model provided by one embodiment of this application;
FIG. 4 is a schematic diagram of an interpolation function provided by one embodiment of this application;
FIG. 5 is a schematic diagram of an interpolation function provided by another embodiment of this application;
FIG. 6 is a flowchart of a method for storing visibility data of a three-dimensional model provided by another embodiment of this application;
FIG. 7 is a schematic diagram of sampling-point visibility data provided by one embodiment of this application;
FIG. 8 is a schematic diagram of sampling-point visibility data provided by another embodiment of this application;
FIG. 9 is a schematic diagram of a central-axis direction provided by one embodiment of this application;
FIG. 10 is a schematic diagram of a target cone provided by one embodiment of this application;
FIG. 11 is a schematic diagram of a projected target cone provided by one embodiment of this application;
FIG. 12 is a schematic diagram of a coordinate system of an optimal direction provided by one embodiment of this application;
FIG. 13 is a schematic diagram of target cones with different opening angles provided by one embodiment of this application;
FIG. 14 is a block diagram of an apparatus for storing visibility data of a three-dimensional model provided by one embodiment of this application;
FIG. 15 is a block diagram of an apparatus for storing visibility data of a three-dimensional model provided by another embodiment of this application;
FIG. 16 is a schematic structural diagram of a computer device provided by one embodiment of this application.
具体实施方式
请参考图1,其示出了本申请一个实施例提供的方案实施环境的示意图。该方案实施环境可以包括:终端10和服务器20。
终端10可以是诸如手机、平板电脑、PC(Personal Computer,个人计算机)、可穿戴设备、车载终端设备、VR(Virtual Reality,虚拟现实)设备、AR(Augmented Reality,增强现实)设备、智能电视等电子设备,本申请对此不作限定。终端10中可以安装运行有目标应用程序的客户端。例如,该目标应用程序可以是需要使用三维模型可见度数据进行模型渲染的应用程序,如游戏应用程序、三维地图程序、社交类应用程序、互动娱乐类应用程序等,本申请对此不作限定。其中,游戏应用程序可以包括射击类游戏、战旗类游戏、MOBA(Multiplayer Online Battle Arena,多人在线战术竞技游戏)类游戏等对应的应用程序,本申请对此不作限定。
在一个示例中,目标应用程序的客户端可基于三维模型的可见度数据,对三维模型进行渲染,使三维模型更加真实。可选地,可见度数据是通过对三维模型中的各个像素点进行采样得到的,如计算机设备(如终端10或服务器20)在线或离线对三维模型中的各个像素点进行采样,得到三维模型的可见度数据。该可见度数据用于表示整个三维模型的可见度,可见度可用于描述该模型的各个部位在场景(如虚拟场景)中的明暗度,使三维模型在场景中更加符合光学规律,从而增强三维模型在场景中的真实感,增强用户的视觉体验。
服务器20可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或分布式系统,还可以是提供云计算服务的云服务器。服务器20可以是上述目标应用程序的后台服务器,用于为目标应用程序的客户端提供后台服务。可选地,上述三维模型的可见度数据可存储于服务器20中,在终端10需要对三维模型进行渲染时,服务器20将对应的可见度数据提供给终端10。示例性地,服务器20可用于对三维模型进行采样,得到并存储三维模型的可见度数据。或者,服务器20接收来自终端10的可见度数据,并对其进行存储,本申请对此不作限定。可选地,上述三维模型的可见度数据也可以存储于终端10中。
终端10和服务器20之间可以通过网络进行通信,例如该网络可以是有线网络或者无线网络。
在一些实施例中,上述目标应用程序可以提供场景,且该场景是三维场景,场景中存在三维模型,该三维模型可以是三维人物模型、三维宠物模型、三维载具模型等,三维模型可以是动态的,其可以在场景中进行活动,如移动或执行各种其他操作。三维模型也可以是静态的,例如三维模型可以是三维建筑模型、三维植物模型等。
以游戏应用程序为例,上述场景可以称为虚拟场景,上述三维模型可以是游戏中的虚拟角色、虚拟建筑、虚拟宠物等。
虚拟场景是目标应用程序(如游戏应用程序)的客户端在终端上运行时显示(或提供)的场景,该虚拟场景是指营造出的供虚拟角色进行活动(如游戏竞技)的场景,如虚拟房屋、虚拟岛屿、虚拟地图等。该虚拟场景可以是对真实世界的仿真场景,也可以是半仿真半虚构的场景,还可以是纯虚构的场景。虚拟场景可以是三维虚拟场景。目标应用程序的客户端在终端上运行时,不同的时间段可以显示(或提供)不同的虚拟场景。
三维模型可以是虚拟场景中的任意模型,例如该三维模型可以是人物模型、动物模型、建筑模型、地貌模型等。在一些示例中,上述客户端可以采用本申请实施例提供的技术方案,获取该三维模型的可见度数据并将其进行存储,在对该三维模型进行渲染时,客户端可以从存储的数据中获取该三维模型的可见度数据进行使用。
请参考图2,其示出了本申请一个实施例提供的三维模型可见度数据的存储方法的流程图。该方法各步骤的执行主体可以是计算机设备,例如该计算机设备可以是图1所示方案实施环境中的终端10或服务器20。该方法可以包括如下几个步骤(210~230)中的至少一个步骤:
步骤210,获取三维模型的多个采样点的可见度数据的原始值;其中,可见度数据用于表示采样点的可见度,采样点为像素级。
上述采样点是三维模型表面上的点,可以通过对三维模型表面上的点进行采样,得到多个采样点,进而采样得到多个采样点的可见度数据。其中,采样点是像素级别的采样点,也就是对模型上的所有像素点都可以进行采样,该采样点可以是三维模型表面上的任意一点。本申请实施例对像素的大小不作限定,例如,本申请实施例中的像素(三维图像中也被称之为体素)的大小可以为0.1-1mm。在本申请实施例中,直接对采样点进行采样得到的可见度数据为原始值,该原始值是指三维模型表面采样点的真实的可见度数据。三维模型与上述实施例介绍相同,这里不再赘述。
可见度数据用于表示采样点的可见度,采样点的可见度用于表示从各个方向是否能观察 到该采样点,记录上述各个方向上的数据并进行整合后,可以得到采样点的可见度数据。示例性地,可见度数据也可用于表示采样点周围物体对该采样点的遮挡信息,则可见度数据相对于采样点为球面分布。
步骤220,以第一误差函数的取值收敛为目标,确定三维模型的各个顶点的可见度数据;其中,各个顶点的可见度数据用于插值得到各个采样点的可见度数据的恢复值;第一误差函数用于衡量采样点的可见度数据的恢复值与原始值之间的差异度,以及采样点的可见度数据的恢复值的变化率,顶点的数量少于采样点的数量。
在本申请实施例中,上述顶点是三维模型表面网格上的点,三维模型的表面网格由多个面片组成,多个顶点形成的多边形称为一个面片。面片是用于上述三维模型建模的一种数据结构,如是三维模型表面网格分割后的数据结构,每个面片的形状是任意的,可以是三角形,也可以是其他多边形。其中,三角形的面片被称为三角形面片。
由于采样点是三维模型中的任意一点,而顶点是三维模型经过网格划分后得到的面片的顶点,因此,每个顶点的位置上必然存在对应的采样点,则该顶点的可见度数据可由该顶点位置对应的采样点的可见度数据的原始值决定。
可选地,通过基于三维模型的顶点的可见度数据,插值得到由该三维模型的顶点组成的面片中的各个采样点的可见度数据的恢复值,可见度数据的恢复值为插值(即预估值),其与可见度数据的原始值可能相同,也可能不相同。可选地,该三维模型的顶点组成的面片中的采样点的数量大于该三维模型的顶点的数量,在一些实施例中,该三维模型的顶点组成的面片中的采样点包括该三维模型的顶点。本申请实施例通过第一误差函数来衡量采样点的可见度数据的恢复值与原始值之间的差异度,以及采样点的可见度数据的恢复值的变化率。其中,差异度表示采样点的可见度数据的恢复值与原始值之间的大小关系。变化率表示相邻采样点的可见度数据的恢复值的连续性,变化率越低,相邻采样点的可见度数据的恢复值的连续性越高。本申请实施例以第一误差函数的取值收敛为目标,确定三维模型的各个顶点的可见度数据的最终结果。本申请对第一误差函数的取值方式不作限定。第一误差函数的具体形式将在下文实施例中阐述。
在一个示例中,步骤220可以包括如下步骤(221~222):
步骤221,基于采样点的可见度数据的恢复值与原始值,构建第一误差函数;其中,第一误差函数的取值与采样点的可见度数据的恢复值与原始值之间的差异度呈正相关关系,第一误差函数的取值与采样点的可见度数据的恢复值的变化率呈正相关关系。
第一误差函数可记为E(x),其可用于表示采样点的可见度数据的恢复值与原始值之间的差异度,其具体公式如下:
$E(x)=\int_S\big(f(p)-g(p)\big)^2\,dp+\alpha\sum_{(t,u)}\left\|\nabla g_t-\nabla g_u\right\|^2$;
其中,f(p)是可见度数据的原始值,g(p)是可见度数据的恢复值,p是三维模型表面上的任一采样点,S是三维模型的表面,α为权重参数。
可选地,步骤221可以包括如下步骤(221a~221c):
步骤221a,基于采样点的可见度数据的恢复值与原始值之间的差值,构建第一子函数;其中,第一子函数的取值与采样点的可见度数据的恢复值与原始值之间的差异度呈正相关关系。
第一子函数表示各个采样点的可见度数据的恢复值与原始值之间的差异度。其中,第一子函数可以是采样点的可见度数据的恢复值与原始值的差值之和,也可以表示为采样点的可见度数据的恢复值与原始值的差值的平方和,本申请对此不作限定。在本申请实施例中,第一子函数E1(x)是采样点的可见度数据的恢复值与原始值的差值的平方和,其具体公式如下:
$E1(x)=\int_S\big(f(p)-g(p)\big)^2\,dp$。
在一些实施例中,可以以第一子函数收敛为目标,确定三维模型的各个顶点的可见度数 据。示例性地,如图3所示,图3示出了三维模型31在只经过第一子函数处理得到的可见度数据下的渲染效果,显而易见的,该渲染效果存在无法忽视的缺陷:在三维模型31对应的区域32和区域33中,黑白交接处出现断层,从而显得该区域比较脏,视觉效果不好。因此,本申请实施例在构建第一误差函数时,还考虑了采样点的可见度数据的恢复值的变化率,以解决该断层问题。在本申请实施例中,可以通过下述步骤解决这个问题。
步骤221b,基于三维模型上的至少一组相邻面片对应的变化率之间的差值,构建第二子函数;其中,三维模型上的目标面片对应的变化率,是指目标面片对应的各个采样点的可见度数据的恢复值的变化率;第二子函数的取值与采样点的可见度数据的恢复值的变化率呈正相关关系。
第二子函数用于表示三维模型上的各组相邻面片对应的变化率之间的差异度,相邻面片可以是指具有共同的边的两个面片。其中,第二子函数可以是各组相邻面片对应的变化率之间的差值之和,也可以是各组相邻面片对应的变化率之间的差值的绝对值的平方和,本申请对此不作限定。在本申请实施例中,第二子函数E2(x)是各组相邻面片对应的变化率之间的差值的绝对值的平方和,具体公式如下:
$E2(x)=\sum_{(t,u)}\left\|\nabla g_t-\nabla g_u\right\|^2$;
其中,t,u代表任意一组相邻面片。如果两个面片具有共同的边,那么这两个面片即可作为一组相邻面片。
步骤221c,基于第一子函数和第二子函数,构建第一误差函数。
将上述第一子函数E1(x)和第二子函数E2(x)合并,得到第一误差函数E(x)。例如,将第一子函数和第二子函数的和值确定为第一误差函数,该第一误差函数可以表示如下:
E(x)=E1(x)+E2(x)。
步骤222,以最小化第一误差函数的取值为目标,确定三维模型的各个顶点的可见度数据。
当第一误差函数的取值为最小时,各个采样点的可见度数据的恢复值与原始值之间的差异度最小,此时可以得到与可见度数据的原始值最为相近的恢复值,从而保证了恢复数据的准确性,进而提高了三维模型的渲染准确性。同时,当第一误差函数的取值为最小时,各组相邻面片对应的变化率之间的差异度最小,此时可以得到变化率尽可能小的相邻面片,使得相邻面片之间的可见度数据的连续性高,从而保证了可见度数据整体的高连续性,进而保证了渲染后的三维模型的视觉效果。
也即可以以第一误差函数的最小值为收敛目标,确定三维模型的各个顶点的可见度数据,此时采样点的可见度数据的恢复值与原始值之间的差异度越小,采样点的可见度数据的恢复值的变化率越小,得到的顶点的可见度数据就越精确且连续性越高。
示例性地,当第一误差函数的取值在测量中减少时,则继续更新各个顶点的可见度数据,以及继续对该第一误差函数的取值进行测量,直到第一误差函数的取值开始增加后停止,此时上一次测量得到的第一误差函数的取值,即为该第一误差函数的最小值,该最小值下的各个顶点的可见度数据即为各个顶点的最终可见度数据。
可选地,对第一误差函数的取值设置最小值的阈值,当测量得到的第一误差函数的取值小于该阈值时,停止测量,并将此时的各个顶点的可见度数据确定为各个顶点的最终可见度数据。
在一些可行的实施例中,还可以以第一误差函数的最大值为收敛目标,此时,第一误差函数用于衡量采样点的可见度数据的恢复值与原始值之间的接近度,接近度越高,采样点的可见度数据的恢复值与原始值之间的差异度就越小。
步骤230,存储三维模型的各个顶点的可见度数据。
可选地,可以将上述步骤得到的三维模型的各个顶点的可见度数据的最终结果存储到三维模型的顶点数据中。三维模型的顶点数据中,包括各个顶点分别对应的顶点数据,每一个 顶点对应的顶点数据中,包括该顶点的可见度数据。可选地,每一个顶点对应的顶点数据中还可以包括该顶点的位置数据、该顶点的颜色数据等,本申请对此不作限定。
综上所述,本实施例通过获取三维模型的各个采样点的可见度数据的原始值,然后通过第一误差函数,使由各个顶点的可见度数据插值得到的采样点的可见度数据的恢复值,和原始值差异度最小,也就是第一误差函数在收敛的情况下,得到三维模型的各个顶点的可见度数据的最终结果,并对各个顶点的可见度数据进行存储,而不用存储大量的采样点的可见度数据,充分减少了可见度数据存储所要使用的空间,缓解了三维模型的可见度数据的存储压力,并提升了三维模型的渲染效率。同时,第一误差函数在设计的时候,一方面用于衡量采样点的可见度数据的恢复值和原始值之间的差异度,收敛第一误差函数可以使得采样点的可见度数据的恢复值和原始值之间的差异度尽可能小,从而使得渲染后的三维模型表面和原始三维模型表面的差别尽可能的小,保证了三维模型渲染时的准确度;另一方面用于衡量采样点的可见度数据的恢复值的变化率,收敛第一误差函数可以使得可见度数据的恢复值之间连续,从而增强了三维模型通过可见度数据渲染后的模型表面的视觉效果。
下面,对第一误差函数的构建及求解过程进行举例说明。
在不考虑各个面片之间的可见度数据连续性的情况下,第一误差函数只由第一子函数构成,此时,可以基于目标顶点的可见度数据,通过插值函数求得由目标顶点围合而组成的目标面片中所有采样点的可见度数据的恢复值。其中,目标顶点可以是指目标面片对应的顶点,目标面片可以是指三维模型对应的任一面片。
其中,插值函数可以包含离散点以及对应函数值的表或文件定义,插值函数是将两个数据之间的空缺的数据通过计算而进行填充的函数。
在一些实施例中,如图4所示,图4中可知点A的值为1,点V i,V j和V k的值为0,则通过插值函数,可以得到线段AV i,AV j,AV k上各个点的值,其中,线段AV i,AV j,AV k上各个点的值是连续的,且越接近A,它的值就越接近1且不大于1,同样的,越接近V i,V j和V k,它的值就越接近0且不小于0。类似地,插值函数也可以用在其他图形中,如图5所示,图5示出了一个六边形51的插值函数的应用,具体操作方法与图4中相同,这里就不再赘述。
通过插值函数得到的面片中任意一点p的可见度数据的恢复值为:
$g(p)=\sum_{i=1}^{N}x_i\,B_i(p)$;
其中,x i是该面片的顶点i对应采样点的可见度数据,B i(p)是插值函数的表达式,N为该面片的顶点的总数量。
由上文可知,第一子函数的公式如下表示:$E(x)=\int_S\big(f(p)-g(p)\big)^2\,dp$;
代入g(p)得:
$E(x)=\int_S\Big(\sum_i x_iB_i(p)-f(p)\Big)^2\,dp$
$=\sum_i\sum_j x_ix_j\int_S B_i(p)B_j(p)\,dp-2\sum_i x_i\int_S B_i(p)f(p)\,dp+\int_S f(p)^2\,dp$;
可以看出,上述式子中第一项是关于x的二次项,第二项是关于x的一次项,第三项是常数($f(p)^2$是已知的),j为该面片中与顶点i不同的顶点,因此,用矩阵形式写出上述第一子函数的公式为:$E(x)=x^{T}Ax-2x^{T}b+c$;
其中,$A_{i,j}=\int_S B_i(p)B_j(p)\,dp$,$b_i=\int_S B_i(p)f(p)\,dp$,$c=\int_S f(p)^2\,dp$,A是矩阵,b是向量,c是计算得到的常数值。
计算E(x)的最小值和E(x)取最小值时x的取值。
根据E(x)的计算公式,上述矩阵A为一个稀疏对称正定矩阵,则上述第一子函数是一个单调递增函数,并且当它的斜率等于0时取得最小值,即当所有
$\dfrac{\partial E(x)}{\partial x_i}$
都取0时上述第一子函数得到最优解,进而可以化简得到E(x)取最小值时的计算公式的矩阵形式为:$Ax=b$。
此时得到的x对应的顶点可见度数据即为三维模型所要存储的顶点可见度数据。但上述方法(最小二乘法)容易造成过拟合问题,因此还需考虑各个相邻面片之间的连续性。
在考虑各个相邻面片之间的连续性的情况下,需要加入正则项。加入正则项后的计算公式的矩阵形式为:(A+αR)x=b;
其中,矩阵R为正则项,R不对计算出来的x的绝对值大小造成影响。正则项的引入是为了中和上述过拟合问题,使插值函数得到的值接近原值的同时,函数的形式要比较“平滑”,也就是目标顶点的插值函数和该点的插值函数的梯度要尽量一致,根据上述条件,可以将第一子函数更新为:
$E(x)=\int_S\big(f(p)-g(p)\big)^2\,dp+\alpha\sum_{(t,u)}\left\|\nabla g_t-\nabla g_u\right\|^2$;
其中,$\nabla g_t$是三角形面片t的变化率,$\nabla g_u$是三角形面片u的变化率。
根据插值函数的全局梯度差值大小,得到插值函数的全局梯度差值公式:
$\nabla g_t=x_i\nabla B_i(p)+x_j\nabla B_j(p)+x_k\nabla B_k(p)$;
其中,i、j和k分别表示三角形面片的三个顶点。
可以根据插值函数的重心值,得到差值函数的全局梯度的表现形式。
设$B_i(p)$是相对于$v_i$的重心值,则有:
$B_i(p)=\dfrac{E_i\,h_i}{2a_t}$;
其中,$a_t$是三角形面片的面积,这个式子里的变量只有$h_i$,所以$B_i(p)$是关于$h_i$的函数,$E_i$为顶点i在三角形面片中对边的长度。
进一步地,则有
$\nabla B_i(p)=\dfrac{E_i}{2a_t}\,u_i$;
其中,$u_i$是$\nabla h_i$的方向,所以:
$\nabla g_t=\dfrac{1}{2a_t}\big(E_i u_i x_i+E_j u_j x_j+E_k u_k x_k\big)$;
为了将积分表示成矩阵形式,这里把$\nabla g_t$也写成矩阵形式,从上式可知$\nabla g_t$是$x_i,x_j,x_k$的线性函数,所以可以写成:
$\nabla g_t=\dfrac{1}{2a_t}\begin{pmatrix}\cdots&E_iu_i&\cdots&E_ju_j&\cdots&E_ku_k&\cdots\end{pmatrix}x$;
其中,第一个矩阵中与$x_i,x_j,x_k$相关的元素的值为非0,简化记作:
$\nabla g_t=F_t\,x$;
根据插值函数的全局梯度的表现形式,计算得到正则项:
$\sum_{(t,u)}\left\|\nabla g_t-\nabla g_u\right\|^2=\sum_{(t,u)}x^{T}(F_t-F_u)^{T}(F_t-F_u)\,x$;
所以正则项矩阵就是:
$R=(F_t-F_u)^{T}(F_t-F_u)$;
将上式带入带有正则项E(x)取最小值时的计算公式的矩阵形式,计算得到的x对应的顶点可见度数据即为三维模型所要存储的顶点可见度数据。
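The regularized system (A + αR)x = b derived above reduces to a symmetric positive-definite linear solve. A toy dense sketch with NumPy; the matrices here are illustrative placeholders, and a production implementation would assemble sparse A and R from the mesh and use a sparse Cholesky solver:

```python
import numpy as np

def solve_vertex_visibility(A, R, b, alpha=0.1):
    """Solve (A + alpha * R) x = b for the per-vertex visibility values.
    A is the SPD interpolation Gram matrix, R the smoothness regularizer."""
    return np.linalg.solve(A + alpha * R, b)

# Toy 3-vertex system (values invented for illustration only).
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 2.0, 0.5],
              [0.0, 0.5, 2.0]])
R = np.array([[1.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 1.0]])
b = np.array([1.0, 0.8, 0.2])
x = solve_vertex_visibility(A, R, b)
```

Increasing alpha trades fidelity to the sampled values for smoother (more continuous) recovered visibility across adjacent patches.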
请参考图6,其示出了本申请另一个实施例提供的三维模型可见度数据的存储方法的流程图。该方法各步骤的执行主体可以是计算机设备,例如该计算机设备可以是图1所示方案实施环境中的终端10或服务器20。该方法可以包括如下几个步骤(610~680)中的至少一个步骤:
步骤610,对于三维模型的目标采样点,获取目标采样点的初始可见度数据。
目标采样点可以是指三维模型对应的任一采样点。目标采样点的初始可见度数据包括:以目标采样点为球心指向多个方向的相交数据。其中,对于多个方向中的目标方向,该目标方向上的相交数据用于指示从目标采样点发射的沿该目标方向的射线,是否与环境物体相交以及相交距离。在从目标采样点发射的沿目标方向的射线与环境物体相交的情况下,进一步获取相交距离,相交距离是指顶点(也即目标采样点)和相交点(也即上述沿目标方向的射线与环境物体的相交点)之间的距离。
在一些实施例中,如图7所示,图7中选取的目标采样点是人物模型71腋下一点72,但在实际操作中,选取的目标采样点只位于人物模型71的表面,此处只为清楚地介绍目标采样点的可见度数据的具体表现形式。图7中人物模型71腋下一点72向外各个方向发散出大量射线,由图中细线和粗线表示。其中,细线表示该方向的射线没有与任何物体相交,粗线表示该方向的射线与其他物体相交。粗线的长度为相交距离。可见度数据由细线和粗线对应的数据组成。其中,细线数据中包括方向和判定没有相交的数据,粗线数据中包括方向、判定相交的数据和相交距离。
显而易见的,采样点发散出的射线数量越多,所得到的采样点的可见度数据也就越精确,同时,存储该采样点的可见度数据所需要的空间也就越大。
步骤620,确定用于拟合目标采样点的初始可见度数据的目标椎体。
在一些实施中,如图8所示,对人物模型81上的采样点82进行可见度数据的采样,其中,右图中球体83是采样后得到,也就是左图中的球体84(用于表示采样点82的初始可见度数据),显而易见的是,球体83的区域85是黑色的,表示为不可见,也就是球体84与人物模型81所重叠的部分,因此,只需考虑没有重叠的区域86的可见度信息,显然,可以将区域86和球体84的球心组成一个椎体,通过椎体来对采样点82的可见度数据进行表示,该椎体即为采样点82对应的目标椎体。目标椎体用于等效表示采样点的初始可见度数据,初始可见度数据为球面分布,若以用户的角度去观察,同一时刻只能观察到该球面的一部分,进而可以将该球体等效于椎体。
目标椎体仅需三个数据即可表示,分别为目标椎体的中心轴方向、开口角度和缩放值。其中,中心轴方向用于表示目标椎体的开口方向,也就是采样点的可见度数据的未被场景物体覆盖的区域的方向,开口角度用于表示目标椎体的大小,也就是采样点的可见度数据中未被场景物体覆盖的区域的大小,缩放值用于表示目标椎体的明暗度,也就是采样点的可见度数据中未被场景物体覆盖的区域中可见区域的占比。
可选地,目标椎体的中心轴方向采用2个浮点数表示,目标椎体的开口角度采用1个浮点数表示,目标椎体的缩放值采用1个浮点数表示。
浮点数(float),是属于有理数中某特定子集的数的数字表示,在计算机中用以近似表示任意某个实数。具体地说,这个实数由一个整数或定点数(即尾数)乘以某个基数(计算机中通常是2)的整数次幂得到,这种表示方法类似于基数为10的科学计数法。浮点数是采用科学计数法来表示一个数字的,它的格式可以写成这样:V=(-1)^S*M*R^E。其中,各个变量的含义如下:
S:符号位,取值0或1,决定一个数字的符号,0表示正,1表示负;
M:尾数,用小数表示,例如8.345*10^0,8.345就是尾数;
R:基数,表示十进制数R就是10,表示二进制数R就是2;
E:指数,用整数表示,例如10^-1,-1即是指数。
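The sign/mantissa/exponent decomposition described above can be inspected directly for an IEEE-754 single-precision value; the 1/8/23-bit field split below follows the standard binary32 layout:

```python
import struct

def float_bits(v):
    """Split an IEEE-754 binary32 value into sign, biased exponent
    and mantissa fields (1, 8 and 23 bits respectively)."""
    (bits,) = struct.unpack(">I", struct.pack(">f", v))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF   # biased by 127
    mantissa = bits & 0x7FFFFF
    return sign, exponent, mantissa
```

For example, 1.0 encodes as sign 0, biased exponent 127 and mantissa 0.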
在本申请实施例中,通过4个浮点数来表示空间中的一个椎体,从而可以进一步减少可见度数据的原始值的计算量,进而提高可见度数据的原始值的获取效率。
在一些实施例中,如图9所示,图9中射线91为目标椎体的中心轴方向,可以通过两个角度(如图中角度θ和角度φ)来得到目标椎体的中心轴方向,其中,两个角度对应上述两个浮点数。
如图10所示,点101是球体(表示可见度数据的原始值)的球心,由球心101选取相应的开口角度截取到的图形102为目标椎体,目标椎体102是以球心101为顶点,底面为部分球面的椎体。将上述得到的目标采样点的可见度数据的原始值拟合到目标椎体中,可以降低可见度数据的原始值的损耗,从而尽可能地确保可见度数据的准确性,且椎体对应的数据远远小于球体对应的数据,从而进一步降低了可见度数据的数据量,有利于降低可见度数据所需占用的存储空间。
可选地,根据目标采样点的可见度数据的原始值,将其中可见的部分设置为1,如图中白色区域103,将其中不可见部分设置为0,如图中划线部分,其中,相交距离不在图中显示。可选地,相交距离可以通过颜色的明暗设置,例如,目标椎体的明暗度与相交距离呈相关,相交距离越近,目标椎体对应位置的明暗度就越暗,相交距离越远,目标椎体对应位置的明暗度就越亮。
在一些实施例中,如图11所示,采用线条的间隔来表示目标椎体的明暗度,线条的间隔越小,目标椎体的明暗度越暗。其中,区域111中是完全可见的,区域112和114中不可见且相交距离较大,区域113和115中不可见且相交距离较小,其中,各个区域表示该区域中可见度信息的平均值。可选地,可以根据目标椎体中的各个采样点的每个采样点设置不同的明暗度。本申请对目标椎体的明暗度设置方式不作限定。
可选地,步骤620包括如下步骤(621~624):
步骤621,将目标采样点的初始可见度数据投影到球谐函数空间,得到目标采样点的投影可见度数据。
将目标采样点的初始可见度数据投影到球谐函数空间中,得到目标采样点在球谐函数空间中的投影可见度数据。该投影可见度数据可以通过16个浮点数来表示。球谐函数空间是指球谐函数对应的空间,球谐函数用于将球面上的每一个点映射到一个复数函数值,投影可见度数据即为初始可见度数据对应的复数函数值集合。如此可以降低表示可见度数据所需的数据量,从而有利于降低可见度数据所需占用的存储空间,进而缓解可见度数据存储压力。
步骤622,基于目标采样点的投影可见度数据,确定目标采样点对应的最优可见方向,最优可见方向是指在球谐函数空间中确定出的目标采样点对应的可见区域的中心轴方向。
根据目标采样点投影到球谐函数空间中的投影可见度数据,确定出目标采样点对应的可见区域,再确定出目标采样点的最优可见方向。
在一些实施例中,如图12所示,图12中展示了目标采样点投影到球谐函数空间中的投影可见度数据,其中,球体120是上述投影得到的投影可见度数据,带箭头的坐标轴121,122,123分别表示球谐函数空间中的初始坐标轴,而射线124为球谐函数空间中目标采样点对应的可见区域的中心轴方向,也就是目标采样点对应的最优可见方向。射线125和126为根据目标采样点对应的最优可见方向通过右手定则确定出来的坐标系的另两个坐标轴。其中,射线124,125,126组成全新的坐标系。
步骤623,将最优可见方向,确定为目标椎体的中心轴方向。
将目标椎体投影到上述得到的全新的坐标系中,其中,目标椎体的中心轴方向为上述目标采样点的最优可见方向。示例性地,以目标采样点的最优可见方向为中心轴方向,上述全新的坐标系为基础,对目标椎体的表达式进行转换,得到目标椎体在球谐函数空间下对应的投影表征,如此,可以降低目标椎体的参数的获取难度,进而提高目标椎体的参数的确定效率。
步骤624,以第二误差函数的取值收敛为目标,确定目标椎体的开口角度和缩放值;其中,第二误差函数用于衡量目标椎体在球谐函数空间中的投影表征与目标采样点的投影可见度数据之间的差异度。
将上述得到的目标椎体投影后得到的投影表征与投影可见度数据进行比较,得到目标椎体的开口角度。可选地,可以结合第二误差函数的取值,通过遍历目标椎体在不同开口角度 下的投影表征,确定目标椎体的开口角度,进而基于该开口角度确定目标椎体的缩放值。
在一些实施例中,如图13所示,图13展示了4个不同开口角度时的投影后的目标椎体,其中131、132、133和134分别为开口角度为15度、30度、45度和60度的投影后的目标椎体,该4种开口角度下的目标椎体中的最上边的区域为目标椎体的可见度数据,正常情况下,其他区域应为全不可见的黑色区域,但由于投影时容易出现数值震荡,使目标椎体的黑色区域中出现部分白色区域,但出现的白色区域并不影响最终渲染结果。将上述4种投影后的目标椎体与目标采样点的投影可见度数据(球面分布)进行比较,将与投影可见度数据最为接近的投影后的目标椎体的开口角度确定为目标椎体的最终开口角度。投影后的目标椎体即为目标椎体在球谐函数空间中的投影表征。
本申请实施例通过将复杂的16个浮点数的可见度数据,降频得到4个浮点数的可见度数据,由该4个浮点数表示的椎体的可见度数据能很好地表示本申请实施例中的可见度数据,如此在减少了大量可见度数据的数据量的同时,也保证了可见度数据的准确性。
步骤630,基于目标椎体确定目标采样点的可见度数据的原始值,目标采样点的可见度数据包括目标椎体的中心轴方向、开口角度和缩放值,缩放值用于表征可见区域的明暗度。
其中,可见区域的明暗度用于表示可见区域的遮挡情况。可见度数据的原始值包括目标椎体的中心轴方向的原始值、开口角度的原始值和缩放值的原始值。
目标采样点的可见度数据由4个浮点数组成,分别为中心轴方向、开口角度和缩放值。其中,中心轴方向采用2个浮点数表示,开口角度和缩放值分别采用1个浮点数表示。
步骤640,以第一误差函数的取值收敛为目标,确定三维模型的各个顶点的可见度数据;其中,各个顶点的可见度数据用于插值得到各个采样点的可见度数据的恢复值;第一误差函数用于衡量采样点的可见度数据的恢复值与原始值之间的差异度,以及采样点的可见度数据的恢复值的变化率,顶点的数量少于采样点的数量。
步骤650,存储三维模型的各个顶点的可见度数据。
步骤640和650与上述实施例介绍相同,本申请实施例未说明的内容可以参考上述实施例,此处不再赘述。
可选地,在获取三维模型的各个顶点的可见度数据之后,在需要对该三维模型进行渲染的情况下,可以根据该各个顶点的可见度数据反推各个采样点的可见度数据,进而实现三维模型的渲染,下文将以三维模型上的目标面片为例进行说明,本申请实施例还可以包括步骤660-680。
步骤660,对于三维模型上的目标面片,获取目标面片的各个顶点的可见度数据。
目标面片可以是指三维模型上的任一面片,该目标面片可以是由三维模型中的顶点组成的多边形,诸如由三个顶点组成的三角形面片。各个面片紧密贴合并均匀覆盖在三维模型的表面组成三维模型的表面网格。可选地,可以从三维模型的顶点数据中,获取目标面片的各个顶点的可见度数据。
步骤670,根据目标面片的各个顶点的可见度数据,确定目标面片的重心点的可见度数据。
根据目标面片的各个顶点的可见度数据和重心点的计算公式,确定目标面片的重心点的位置和重心点的可见度数据。例如,根据目标面片中的各个顶点和重心点之间的距离,确定各个顶点分别对应的权重,根据各个顶点分别对应的权重对各个顶点的可见度数据进行加权计算,得到重心点的可见度数据。
步骤680,根据目标面片的各个顶点的可见度数据以及目标面片的重心点的可见度数据,插值得到目标面片对应的各个采样点的可见度数据的恢复值。
通过插值函数,根据目标面片的各个顶点和重心点的可见度数据,差值得到目标面片的各个采样点的可见度数据的恢复值。在获取三维模型的各个采样点的可见度数据的恢复值之后,根据三维模型的各个采样点的可见度数据的恢复值对三维模型进行渲染。
综上所述,本申请实施例通过对目标面片的顶点的可见度数据和重心的可见度数据进行插值后得到目标面片的采样点的可见度数据,保证了渲染后的三维模型的准确性。同时,由于通过简单插值函数即可得到准确的可见度数据,从而降低了可见度数据的恢复复杂性,且基于少量顶点的可见度数据即可恢复出各个采样点的可见度数据,进一步减低了可见度数据的灰度计算量,从而提高了可见度数据的恢复效率,进而提高了三维模型的渲染效率。另外,结合准确的可见度数据对三维模型进行渲染,可进一步提高三维模型的渲染效果。
另外,通过基于目标面片中的目标顶点和重心点进行插值,得到目标面片中各个采样点的可见度数据,保证了数据的准确性。
另外,本申请实施例通过将采样点的可见度数据先拟合到球谐函数空间中,然后通过数据的降频得到4个浮点数的目标椎体中,通过数据降频减少了可见度数据的数据量的同时又不会损失太多的精度,在大大减少可见度数据的存储压力的同时,还保证了可见度数据的精度。
下面,以获取目标采样点对应的目标椎体为例,对各个采样点分别对应的目标椎体的开口角度和缩放值的计算过程进行介绍说明。
其中,4个浮点数的目标椎体投影到球谐函数空间后的可见度数据的表达式为:
$V_{cone\to SH}(\omega)=S\sum_{i=0}^{15}c_i(\alpha)\,Y_i(\omega)$;
其中,S,α,ω分别是目标椎体的缩放值、开口角度和中心轴方向向量,$c_i(\alpha)$是球谐函数中16个浮点数各自对应的值,$Y_i(\omega)$是球谐函数的基函数。
其中,4个浮点数的目标椎体的可见度数据的表达式为:
$V_{cone}(x)=S\quad(0\le x\le\alpha)$;
其中,S是目标椎体的缩放值,目标椎体的明暗度由缩放值决定,目标椎体的所有明暗度设置为同一个明暗度,也就是同一个缩放值。
$c_i(\alpha)$可以通过积分工具进行计算得到,$c_i(\alpha)$的结果如下:
$c_0(\alpha)=\sqrt{\pi}\,(1-\cos\alpha)$;
$c_2(\alpha)=\dfrac{\sqrt{3\pi}}{2}\,\sin^2\alpha$;
$c_6(\alpha)=\dfrac{\sqrt{5\pi}}{2}\,\cos\alpha\,\sin^2\alpha$;
$c_{12}(\alpha)=\dfrac{\sqrt{7\pi}}{8}\,(5\cos^2\alpha-1)\,\sin^2\alpha$;
$c_{1,3,4,5,7,8,9,10,11,13,14,15}(\alpha)=0$;
将上述16个浮点值带入16个浮点数的球谐函数的可见度数据中,可以得到目标椎体投影后的可见度数据,也即目标椎体在球谐函数空间中的投影表征。
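Since a cone aligned with the z-axis has only zonal (m = 0) spherical-harmonic coefficients, 12 of the 16 values vanish. As a sanity check (not part of the patent's pipeline), the band-0 coefficient of a unit cone of opening angle α can be integrated numerically over the spherical cap; assuming a real orthonormal SH basis it should equal √π·(1 − cos α):

```python
import math

def zonal_c0(alpha, steps=100000):
    """Numerically integrate Y_00 = 1/(2*sqrt(pi)) over the spherical cap
    of half-angle alpha, i.e. c0 = ∫_cap Y_00 dω."""
    y00 = 1.0 / (2.0 * math.sqrt(math.pi))
    total, dt = 0.0, alpha / steps
    for i in range(steps):
        theta = (i + 0.5) * dt           # midpoint rule in theta
        total += y00 * 2.0 * math.pi * math.sin(theta) * dt
    return total
```

The same cap integral against the higher zonal basis functions yields the remaining nonzero coefficients.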
接下来通过最小化$V_{cone}(S,\alpha,\omega)$和$V_{SH}(\omega)$(即目标采样点的投影可见度数据)之间的差值,来得到该目标采样点的目标椎体的缩放值,也即最小化第二误差函数,该过程可以表示如下:
$E(S,\alpha)=\int_{\Omega}\big(V_{cone}(S,\alpha,\omega)-V_{SH}(\omega)\big)^2\,d\omega$;
对α和S分别求导:
$\dfrac{\partial E}{\partial\alpha}=2\int_{\Omega}S\sum_i c_i'(\alpha)Y_i(\omega)\Big(S\sum_j c_j(\alpha)Y_j(\omega)-V_{SH}(\omega)\Big)\,d\omega$;
$\dfrac{\partial E}{\partial S}=2\int_{\Omega}\sum_i c_i(\alpha)Y_i(\omega)\Big(S\sum_j c_j(\alpha)Y_j(\omega)-V_{SH}(\omega)\Big)\,d\omega$;
由球谐基函数$Y_i(\omega)$的正交规范性可得:
$E(S,\alpha)=\sum_{i}\big(S\,c_i(\alpha)-b_i\big)^2$,其中$b_i$为$V_{SH}(\omega)$的第i个球谐系数;
进而可以推出:
$\dfrac{\partial E}{\partial\alpha}=2S\sum_i c_i'(\alpha)\big(S\,c_i(\alpha)-b_i\big)$;
$\dfrac{\partial E}{\partial S}=2\sum_i c_i(\alpha)\big(S\,c_i(\alpha)-b_i\big)$;
把它们都设为0可以求得极值点条件等式为:
$\sum_i c_i'(\alpha)\big(S\,c_i(\alpha)-b_i\big)=0$;
$S=\dfrac{\sum_i b_i\,c_i(\alpha)}{\sum_i c_i(\alpha)^2}$;
可以通过二分法计算得到圆锥的缩放值,此时,得到了该目标顶点的圆锥所表示的可见度数据。示例性地,只要确定了目标椎体的开口角度之后,基于上式即可得到目标椎体的缩放值,上述可见度数据由目标椎体的开口角度、中心轴方向和缩放值表示。
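The extremum-condition equation above is solved for the cone's scaling value by bisection. A generic bisection sketch; the condition function g below is a stand-in for the actual extremum equation:

```python
def bisect(g, lo, hi, tol=1e-8):
    """Find a root of a continuous function g on [lo, hi], assuming
    g(lo) and g(hi) have opposite signs, by repeated interval halving."""
    glo = g(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if glo * g(mid) <= 0.0:
            hi = mid             # root lies in [lo, mid]
        else:
            lo, glo = mid, g(mid)  # root lies in [mid, hi]
    return 0.5 * (lo + hi)
```

Each iteration halves the bracketing interval, so the method converges linearly and needs no derivatives of the condition function.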
下面,以游戏场景中三维模型的搭建与渲染过程为例,对本申请实施例提供的技术方案在该场景中的应用进行介绍说明,该方法可以包括如下几个步骤(1~6):
1.获取三维模型的模型结构数据和模型材质数据。
模型结构数据包括模型大小、表面轮廓、骨骼轮廓等数据。模型材质数据包括模型材质、表面颜色等数据。
2.根据三维模型的模型结构数据,构建三维模型的模型结构。
根据三维模型的模型大小、表面轮廓、骨骼轮廓等数据,构建三维模型的模型结构。
3.根据三维模型的模型材质数据,一次渲染得到三维模型的材质和表面颜色。
根据三维模型的模型材质数据,对构建完成的三维模型的模型结构进行渲染,得到三维模型的材质和表面颜色。其中,三维模型的材质包括的三维模型的密度、表面粗糙程度、表面光泽等。
4.根据三维模型的设定运动方式,设置三维模型的骨骼的运动数据。
设定运动方式是在设计三维模型时预先设计好的三维模型的运动方式,具体根据三维模型的设定运动方式,设置三维模型的骨骼的运动数据,三维模型的骨骼根据上述运动数据进行运动,从而带动三维模型运动,且能够做出设计好的相应的动作,得到三维模型的运动方式。其中,三维模型的运动方式与三维模型的设定运动方式相同。
5.根据三维模型所在的游戏场景中的灯光,获取三维模型的可见度数据。
将一次渲染完成并设置好骨骼运动数据的三维模型放置在游戏场景中,并加以灯光,通过上述实施例中的方法获取三维模型的可见度数据并将其存储。其中,三维模型根据该骨骼的运动数据驱动骨骼运动,使得三维模型在游戏场景中运动,同时,获取到的三维模型的可见度数据会随着三维模型的运动而发生改变,将获取到所有的运动状态时的可见度数据进行存储。
6.根据三维模型的可见度数据,二次渲染得到渲染完成的三维模型。
根据存储的三维模型的可见度数据,二次渲染得到渲染完成的三维模型,其中,二次渲染得到的三维模型表面表现出明暗度,三维模型表面表现出的明暗度会随着三维模型的运动而发生改变。
本实施例中,通过将一次渲染完成并设置好骨骼运动数据的三维模型放置在游戏场景中,并通过游戏场景中的灯光对三维模型的可见度数据进行二次渲染,得到具有明暗度的三维模型。在不添加二次渲染时,在将三维模型放置在游戏场景中时,由于没有考虑灯光因素,三维模型的表面不会显示出灯光带来的明暗度差异,使得三维模型不能很好的融入到游戏场景中,显得格格不入。但在考虑灯光因素并对三维模型进行二次渲染后,表现出三维模型的明暗度数据,使三维模型能够很好地融入游戏环境,且三维模型运动时,三维模型表面的明暗度也会随着运动而发生改变,使得三维模型在游戏环境中更加真实。
下述为本申请装置实施例,可以用于执行本申请方法实施例。对于本申请装置实施例中未披露的细节,请参照本申请方法实施例。
请参考图14,其示出了本申请一个实施例提供的三维模型可见度数据的存储装置的框图。 该装置具有实现上述方法示例的功能,该功能可以由硬件实现,也可以由硬件执行相应的软件实现。该装置可以是计算机设备,也可以设置在计算机设备中。该装置1400可以包括:数据获取模块1410、数据确定模块1420和数据存储模块1430。
数据获取模块1410,用于获取三维模型的多个采样点的可见度数据的原始值;其中,可见度数据用于表示采样点的可见度,采样点为像素级。
数据确定模块1420,用于以第一误差函数的取值收敛为目标,确定三维模型的各个顶点的可见度数据;其中,各个顶点的可见度数据用于插值得到各个采样点的可见度数据的恢复值;第一误差函数用于衡量采样点的可见度数据的恢复值与原始值之间的差异度,以及采样点的可见度数据的恢复值的变化率,顶点的数量少于采样点的数量。
数据存储模块1430,用于存储三维模型的各个顶点的可见度数据。
在一些实施例中,如图15所示,数据确定模块1420包括:函数构建单元1421和数据确定单元1422。
函数构建单元1421,用于基于采样点的可见度数据的恢复值与原始值,构建第一误差函数;其中,第一误差函数的取值与采样点的可见度数据的恢复值与原始值之间的差异度呈正相关关系,第一误差函数的取值与采样点的可见度数据的恢复值的变化率呈正相关关系。
数据确定单元1422,用于以最小化第一误差函数的取值为目标,确定三维模型的各个顶点的可见度数据。
在一些实施例中,函数构建模块1421还用于基于采样点的可见度数据的恢复值与原始值之间的差值,构建第一子函数;其中,第一子函数的取值与采样点的可见度数据的恢复值与原始值之间的差异度呈正相关关系;基于三维模型上的至少一组相邻面片对应的变化率之间的差值,构建第二子函数;其中,三维模型上的目标面片对应的变化率,是指目标面片对应的各个采样点的可见度数据的恢复值的变化率;第二子函数的取值与采样点的可见度数据的恢复值的变化率呈正相关关系基于第一子函数和第二子函数,构建第一误差函数。
在一些实施例中,如图15所示,数据获取模块1410包括:数据获取单元1411、椎体拟合单元1412和数据确定单元1413。
数据获取单元1411,用于对于三维模型的目标采样点,获取目标采样点的初始可见度数据;其中,目标采样点的初始可见度数据包括:以目标采样点为顶点指向多个方向的相交数据。
椎体拟合单元1412,用于确定用于拟合目标采样点的初始可见度数据的目标椎体。
数据确定单元1413,用于基于目标椎体确定目标采样点的可见度数据的原始值,目标采样点的可见度数据包括目标椎体的中心轴方向、开口角度和缩放值,缩放值用于表征可见区域的明暗度。
在一些实施例中,椎体拟合单元1412还用于将目标采样点的初始可见度数据投影到球谐函数空间,得到目标采样点的投影可见度数据;基于目标采样点的投影可见度数据,确定目标采样点对应的最优可见方向,最优可见方向是指在球谐函数空间中确定出的目标采样点对应的可见区域的中心轴方向;将最优可见方向,确定为目标椎体的中心轴方向;以第二误差函数的取值收敛为目标,确定目标椎体的开口角度和缩放值;其中,第二误差函数用于衡量目标椎体在球谐函数空间中的投影表征与目标采样点的投影可见度数据之间的差异度。
在一些实施例中,目标采样点的可见度数据采用4个浮点数表示;其中,目标椎体的中心轴方向采用2个浮点数表示,目标椎体的开口角度采用1个浮点数表示,目标椎体的缩放值采用1个浮点数表示。
在一些实施例中,如图15所示,装置1400还包括数据使用模块1440。
数据使用模块1440,用于对于三维模型上的目标面片,从三维模型的顶点数据中,获取目标面片的各个顶点的可见度数据;根据目标面片的各个顶点的可见度数据,确定目标面片的重心点的可见度数据;根据目标面片的各个顶点的可见度数据以及目标面片的重心点的可 见度数据,插值得到目标面片对应的各个采样点的可见度数据的恢复值。
综上所述,本实施例通过获取三维模型的各个采样点的可见度数据的原始值,然后通过第一误差函数,使由各个顶点的可见度数据插值得到的采样点的可见度数据的恢复值,和原始值差异度最小,也就是第一误差函数在收敛的情况下,得到三维模型的各个顶点的可见度数据的最终结果,并对各个顶点的可见度数据进行存储,而不用存储大量的采样点的可见度数据,充分减少了可见度数据存储所要使用的空间,缓解了三维模型的可见度数据的存储压力,并提升了三维模型的渲染效率。同时,第一误差函数在设计的时候,一方面用于衡量采样点的可见度数据的恢复值和原始值之间的差异度,收敛第一误差函数可以使得采样点的可见度数据的恢复值和原始值之间的差异度尽可能小,从而使得渲染后的三维模型表面和原始三维模型表面的差别尽可能的小,保证了三维模型渲染时的准确度;另一方面用于衡量采样点的可见度数据的恢复值的变化率,收敛第一误差函数可以使得可见度数据的恢复值之间连续,从而增强了三维模型通过可见度数据渲染后的模型表面的视觉效果。
需要说明的是,上述实施例提供的装置,在实现其功能时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的装置与方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
请参考图16,其示出了本申请一个实施例提供的计算机设备的结构示意图。该计算机设备1600可以是任何具备数据计算、处理和存储功能的电子设备,如上文介绍的终端或服务器,其可用于实施上述实施例中提供的三维模型可见度数据的存储方法。具体来讲:
该计算机设备1600包括中央处理单元(如CPU(Central Processing Unit,中央处理器)、GPU(Graphics Processing Unit,图形处理器)和FPGA(Field Programmable Gate Array,现场可编程逻辑门阵列)等)1601、包括RAM(Random-Access Memory,随机存储存储器)1602和ROM(Read-Only Memory,只读存储器)1603的系统存储器1604,以及连接系统存储器1604和中央处理单元1601的系统总线1605。该计算机设备1600还包括帮助服务器内的各个器件之间传输信息的基本输入/输出系统(Input Output System,I/O系统)1606,和用于存储操作系统1613、应用程序1614和其他程序模块1615的大容量存储设备1607。
该基本输入/输出系统1606包括有用于显示信息的显示器1608和用于用户输入信息的诸如鼠标、键盘之类的输入设备1609。其中,该显示器1608和输入设备1609都通过连接到系统总线1605的输入输出控制器1610连接到中央处理单元1601。该基本输入/输出系统1606还可以包括输入输出控制器1610以用于接收和处理来自键盘、鼠标、或电子触控笔等多个其他设备的输入。类似地,输入输出控制器1610还提供输出到显示屏、打印机或其他类型的输出设备。
该大容量存储设备1607通过连接到系统总线1605的大容量存储控制器(未示出)连接到中央处理单元1601。该大容量存储设备1607及其相关联的计算机可读介质为计算机设备1600提供非易失性存储。也就是说,该大容量存储设备1607可以包括诸如硬盘或者CD-ROM(Compact Disc Read-Only Memory,只读光盘)驱动器之类的计算机可读介质(未示出)。
不失一般性,该计算机可读介质可以包括计算机存储介质和通信介质。计算机存储介质包括以用于存储诸如计算机可读指令、数据结构、程序模块或其他数据等信息的任何方法或技术实现的易失性和非易失性、可移动和不可移动介质。计算机存储介质包括RAM、ROM、EPROM(Erasable Programmable Read-Only Memory,可擦写可编程只读存储器)、EEPROM(Electrically Erasable Programmable Read-Only Memory,电可擦写可编程只读存储器)、闪存或其他固态存储技术,CD-ROM、DVD(Digital Video Disc,高密度数字视频光盘)或其他光学存储、磁带盒、磁带、磁盘存储或其他磁性存储设备。当然,本领域技术人员可知该计算 机存储介质不局限于上述几种。上述的系统存储器1604和大容量存储设备1607可以统称为存储器。
根据本申请实施例,该计算机设备1600还可以通过诸如因特网等网络连接到网络上的远程计算机运行。也即计算机设备1600可以通过连接在该系统总线1605上的网络接口单元1611连接到网络1612,或者说,也可以使用网络接口单元1616来连接到其他类型的网络或远程计算机系统(未示出)。
存储器还包括计算机程序,该计算机程序存储于存储器中,且经配置以由一个或者一个以上处理器执行,以实现上述三维模型可见度数据的存储方法。
在示例性实施例中,还提供了一种计算机可读存储介质,存储介质中存储有计算机程序,计算机程序在被计算机设备的处理器执行时实现上述三维模型可见度数据的存储方法。
可选地,该计算机可读存储介质可以包括:ROM(Read-Only Memory,只读存储器)、RAM(Random-Access Memory,随机存储器)、SSD(Solid State Drives,固态硬盘)或光盘等。其中,随机存取记忆体可以包括ReRAM(Resistance Random Access Memory,电阻式随机存取记忆体)和DRAM(Dynamic Random Access Memory,动态随机存取存储器)。
在示例性实施例中,还提供了一种计算机程序产品,计算机程序产品包括计算机程序,计算机程序存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质中读取计算机程序,处理器执行计算机程序,使得计算机设备执行上述三维模型可见度数据的存储方法。
需要说明的是,本申请所涉及的信息(包括但不限于对象设备信息、对象个人信息等)、数据(包括但不限于用于分析的数据、存储的数据、展示的数据等)以及信号,均为经对象授权或者经过各方充分授权的,且相关数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。例如,本申请中涉及到的三维模型等都是在充分授权的情况下获取的。
应当理解的是,在本文中提及的“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。另外,本文中描述的步骤编号,仅示例性示出了步骤间的一种可能的执行先后顺序,在一些其它实施例中,上述步骤也可以不按照编号顺序来执行,如两个不同编号的步骤同时执行,或者两个不同编号的步骤按照与图示相反的顺序执行,本申请实施例对此不作限定。
以上仅为本申请的示例性实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (17)

  1. A method for storing visibility data of a three-dimensional model, the method being executed by a computer device and comprising:
    obtaining original values of visibility data of a plurality of sampling points of a three-dimensional model; wherein the visibility data is used to represent the visibility of the sampling points, and the sampling points are at pixel level;
    determining visibility data of each vertex of the three-dimensional model with convergence of a value of a first error function as a goal; wherein the visibility data of the vertices is used to interpolate recovered values of the visibility data of the sampling points; the first error function is used to measure a degree of difference between the recovered values and the original values of the visibility data of the sampling points, as well as a rate of change of the recovered values of the visibility data of the sampling points; and the number of the vertices is smaller than the number of the sampling points;
    storing the visibility data of each vertex of the three-dimensional model.
  2. The method according to claim 1, wherein determining the visibility data of each vertex of the three-dimensional model with convergence of the value of the first error function as the goal comprises:
    constructing the first error function based on the recovered values and the original values of the visibility data of the sampling points; wherein the value of the first error function is positively correlated with the degree of difference between the recovered values and the original values of the visibility data of the sampling points, and the value of the first error function is positively correlated with the rate of change of the recovered values of the visibility data of the sampling points;
    determining the visibility data of each vertex of the three-dimensional model with minimization of the value of the first error function as the goal.
  3. The method according to claim 2, wherein constructing the first error function based on the recovered values and the original values of the visibility data of the sampling points comprises:
    constructing a first sub-function based on differences between the recovered values and the original values of the visibility data of the sampling points; wherein the value of the first sub-function is positively correlated with the degree of difference between the recovered values and the original values of the visibility data of the sampling points;
    constructing a second sub-function based on differences between rates of change corresponding to at least one group of adjacent patches on the three-dimensional model; wherein the rate of change corresponding to a target patch on the three-dimensional model refers to the rate of change of the recovered values of the visibility data of the sampling points corresponding to the target patch, and the value of the second sub-function is positively correlated with the rate of change of the recovered values of the visibility data of the sampling points;
    constructing the first error function based on the first sub-function and the second sub-function.
  4. The method according to claim 1, wherein obtaining the original values of the visibility data of the plurality of sampling points of the three-dimensional model comprises:
    for a target sampling point of the three-dimensional model, obtaining initial visibility data of the target sampling point; wherein the initial visibility data of the target sampling point comprises intersection data pointing in a plurality of directions with the target sampling point as the apex;
    determining a target cone for fitting the initial visibility data of the target sampling point;
    determining the original value of the visibility data of the target sampling point based on the target cone, the visibility data of the target sampling point comprising a central-axis direction, an opening angle and a scaling value of the target cone, the scaling value being used to characterize the brightness of a visible region.
  5. The method according to claim 4, wherein determining the target cone for fitting the initial visibility data of the target sampling point comprises:
    projecting the initial visibility data of the target sampling point into a spherical harmonics space to obtain projected visibility data of the target sampling point;
    determining, based on the projected visibility data of the target sampling point, an optimal visible direction corresponding to the target sampling point, the optimal visible direction referring to the central-axis direction of the visible region corresponding to the target sampling point determined in the spherical harmonics space;
    determining the optimal visible direction as the central-axis direction of the target cone;
    determining the opening angle and the scaling value of the target cone with convergence of a value of a second error function as a goal; wherein the second error function is used to measure a degree of difference between a projected representation of the target cone in the spherical harmonics space and the projected visibility data of the target sampling point.
  6. The method according to claim 4, wherein the visibility data of the target sampling point is represented by four floating-point numbers, wherein the central axis direction of the target cone is represented by two floating-point numbers, the opening angle of the target cone is represented by one floating-point number, and the scaling value of the target cone is represented by one floating-point number.
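Claim 6's four-float layout can be sketched as follows; encoding the unit axis as two spherical angles is one plausible two-float encoding (an assumption — the claim does not fix the encoding):

```python
import numpy as np

def pack_cone(axis, half_angle, scale):
    """Pack the per-sampling-point visibility cone into 4 float32s:
    the unit axis as spherical angles (theta, phi), plus the opening
    angle and the scaling value."""
    x, y, z = axis
    theta = np.arccos(np.clip(z, -1.0, 1.0))   # polar angle
    phi = np.arctan2(y, x)                     # azimuth
    return np.array([theta, phi, half_angle, scale], dtype=np.float32)

def unpack_cone(packed):
    """Inverse of pack_cone: rebuild the unit axis from (theta, phi)."""
    theta, phi, half_angle, scale = np.asarray(packed, dtype=np.float64)
    axis = np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
    return axis, half_angle, scale

axis = np.array([0.3, -0.5, 0.8])
axis /= np.linalg.norm(axis)
packed = pack_cone(axis, 0.7, 0.9)            # exactly 4 floats
axis2, half_angle2, scale2 = unpack_cone(packed)
```

Sixteen bytes per stored vertex is what makes keeping visibility at vertices rather than at pixel-level samples attractive.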
  7. The method according to any one of claims 1 to 6, wherein after the storing the visibility data of each vertex of the three-dimensional model, the method further comprises:
    for a target patch on the three-dimensional model, obtaining visibility data of each vertex of the target patch;
    determining visibility data of a barycenter point of the target patch according to the visibility data of each vertex of the target patch; and
    obtaining, through interpolation, restored values of the visibility data of the sampling points corresponding to the target patch according to the visibility data of each vertex of the target patch and the visibility data of the barycenter point of the target patch.
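One way to realize the interpolation of claim 7 — vertex values plus a derived barycenter value — is sketched below for scalar visibility. The `(1 - t)` blend toward the barycenter and the function name are illustrative choices, not the claimed scheme.

```python
import numpy as np

def restore_patch_samples(vertex_vis, bary_coords):
    """Restore per-sample visibility on one triangular patch from its
    three vertex values plus a barycenter value derived from them.
    vertex_vis: (3,) scalars; bary_coords: (n, 3) barycentric
    coordinates of the sampling points (rows sum to 1)."""
    barycenter_vis = vertex_vis.mean()          # barycenter from vertices
    restored = []
    for w in bary_coords:
        v = w @ vertex_vis                      # plain vertex blend
        t = 3.0 * w.min()                       # 1 at barycenter, 0 on edges
        restored.append((1.0 - t) * v + t * barycenter_vis)
    return np.array(restored)

verts = np.array([1.0, 0.5, 0.0])
coords = np.array([[1.0, 0.0, 0.0],             # at vertex 0
                   [0.0, 1.0, 0.0],             # at vertex 1
                   [1/3, 1/3, 1/3],             # at the barycenter
                   [0.5, 0.5, 0.0]])            # on an edge midpoint
restored = restore_patch_samples(verts, coords)
```

On the patch boundary the blend reduces to plain barycentric interpolation, so adjacent patches sharing an edge restore identical values along that edge.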
  8. An apparatus for storing visibility data of a three-dimensional model, the apparatus comprising:
    a data obtaining module, configured to obtain original values of visibility data of a plurality of sampling points of a three-dimensional model, wherein the visibility data is used for representing visibility of the sampling points, and the sampling points are at a pixel level;
    a data determining module, configured to determine visibility data of each vertex of the three-dimensional model with a goal of converging a value of a first error function, wherein the visibility data of each vertex is used for obtaining restored values of the visibility data of each of the sampling points through interpolation; the first error function is used for measuring a degree of difference between the restored values and the original values of the visibility data of the sampling points, as well as a change rate of the restored values of the visibility data of the sampling points; and a quantity of the vertices is less than a quantity of the sampling points; and
    a data storage module, configured to store the visibility data of each vertex of the three-dimensional model.
  9. The apparatus according to claim 8, wherein the data determining module comprises:
    a function constructing unit, configured to construct the first error function based on the restored values and the original values of the visibility data of the sampling points, wherein the value of the first error function is positively correlated with the degree of difference between the restored values and the original values of the visibility data of the sampling points, and the value of the first error function is positively correlated with the change rate of the restored values of the visibility data of the sampling points; and
    a data determining unit, configured to determine the visibility data of each vertex of the three-dimensional model with a goal of minimizing the value of the first error function.
  10. The apparatus according to claim 9, wherein the function constructing unit is configured to:
    construct a first sub-function based on differences between the restored values and the original values of the visibility data of the sampling points, wherein a value of the first sub-function is positively correlated with the degree of difference between the restored values and the original values of the visibility data of the sampling points;
    construct a second sub-function based on differences between change rates corresponding to at least one group of adjacent patches on the three-dimensional model, wherein a change rate corresponding to a target patch on the three-dimensional model refers to a change rate of the restored values of the visibility data of the sampling points corresponding to the target patch, and a value of the second sub-function is positively correlated with the change rate of the restored values of the visibility data of the sampling points; and
    construct the first error function based on the first sub-function and the second sub-function.
  11. The apparatus according to claim 8, wherein the data obtaining module comprises:
    a data obtaining unit, configured to obtain, for a target sampling point of the three-dimensional model, initial visibility data of the target sampling point, wherein the initial visibility data of the target sampling point comprises intersection data in a plurality of directions using the target sampling point as a vertex;
    a cone fitting unit, configured to determine a target cone for fitting the initial visibility data of the target sampling point; and
    a data determining unit, configured to determine an original value of the visibility data of the target sampling point based on the target cone, wherein the visibility data of the target sampling point comprises a central axis direction, an opening angle, and a scaling value of the target cone, and the scaling value is used for representing brightness of a visible region.
  12. The apparatus according to claim 11, wherein the cone fitting unit is configured to:
    project the initial visibility data of the target sampling point into a spherical harmonics space to obtain projected visibility data of the target sampling point;
    determine an optimal visible direction corresponding to the target sampling point based on the projected visibility data of the target sampling point, wherein the optimal visible direction refers to a central axis direction, determined in the spherical harmonics space, of a visible region corresponding to the target sampling point;
    determine the optimal visible direction as the central axis direction of the target cone; and
    determine the opening angle and the scaling value of the target cone with a goal of converging a value of a second error function, wherein the second error function is used for measuring a degree of difference between a projected representation of the target cone in the spherical harmonics space and the projected visibility data of the target sampling point.
  13. The apparatus according to claim 11, wherein the visibility data of the target sampling point is represented by four floating-point numbers, wherein the central axis direction of the target cone is represented by two floating-point numbers, the opening angle of the target cone is represented by one floating-point number, and the scaling value of the target cone is represented by one floating-point number.
  14. The apparatus according to any one of claims 8 to 13, wherein the apparatus further comprises a data using module, configured to:
    obtain, for a target patch on the three-dimensional model, visibility data of each vertex of the target patch;
    determine visibility data of a barycenter point of the target patch according to the visibility data of each vertex of the target patch; and
    obtain, through interpolation, restored values of the visibility data of the sampling points corresponding to the target patch according to the visibility data of each vertex of the target patch and the visibility data of the barycenter point of the target patch.
  15. A computer device, comprising a processor and a memory, the memory storing a computer program, the computer program being loaded and executed by the processor to implement the method according to any one of claims 1 to 7.
  16. A computer-readable storage medium, storing a computer program, the computer program being loaded and executed by a processor to implement the method according to any one of claims 1 to 7.
  17. A computer program product, comprising a computer program stored in a computer-readable storage medium, a processor reading the computer program from the computer-readable storage medium and executing the computer program to implement the method according to any one of claims 1 to 7.
PCT/CN2022/127985 2021-11-19 2022-10-27 Method and apparatus for storing visibility data of three-dimensional model, device, and storage medium WO2023088059A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/207,577 US20230326129A1 (en) 2021-11-19 2023-06-08 Method and apparatus for storing visibility data of three-dimensional model, device, and storage medium

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202111374336.0 2021-11-19
CN202111374336 2021-11-19
CN202111624049.0 2021-12-28
CN202111624049.0A CN114399421A (zh) 2021-11-19 2021-12-28 Method and apparatus for storing visibility data of three-dimensional model, device, and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/207,577 Continuation US20230326129A1 (en) 2021-11-19 2023-06-08 Method and apparatus for storing visibility data of three-dimensional model, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2023088059A1 true WO2023088059A1 (zh) 2023-05-25

Family

ID=81229452

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/127985 WO2023088059A1 (zh) 2021-11-19 2022-10-27 三维模型可见度数据的存储方法、装置、设备及存储介质

Country Status (3)

Country Link
US (1) US20230326129A1 (zh)
CN (1) CN114399421A (zh)
WO (1) WO2023088059A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114399421A (zh) 2022-04-26 Tencent Technology (Chengdu) Co., Ltd. Method and apparatus for storing visibility data of three-dimensional model, device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102270339A (zh) * 2011-07-21 2011-12-07 Tsinghua University Method and system for three-dimensional motion deblurring with spatially varying blur kernels
US9536344B1 (en) * 2007-11-30 2017-01-03 Roblox Corporation Automatic decoration of a three-dimensional model
CN108038902A (zh) * 2017-12-07 2018-05-15 Hefei University of Technology High-precision three-dimensional reconstruction method and system for depth cameras
CN112446919A (zh) * 2020-12-01 2021-03-05 Ping An Technology (Shenzhen) Co., Ltd. Object pose estimation method and apparatus, electronic device, and computer storage medium
CN114399421A (zh) * 2021-11-19 2022-04-26 Tencent Technology (Chengdu) Co., Ltd. Method and apparatus for storing visibility data of three-dimensional model, device, and storage medium


Also Published As

Publication number Publication date
US20230326129A1 (en) 2023-10-12
CN114399421A (zh) 2022-04-26

Similar Documents

Publication Publication Date Title
US9569885B2 (en) Technique for pre-computing ambient obscurance
CA2866849C (en) Method for estimating the opacity level in a scene and corresponding device
US9928643B2 (en) Hierarchical continuous level of detail for three-dimensional meshes
US9208610B2 (en) Alternate scene representations for optimizing rendering of computer graphics
US11790594B2 (en) Ray-tracing with irradiance caches
Livny et al. A GPU persistent grid mapping for terrain rendering
Mudge et al. Viewpoint quality and scene understanding
WO2023088059A1 (zh) Method and apparatus for storing visibility data of three-dimensional model, device, and storage medium
CN113345063A (zh) Deep-learning-based PBR three-dimensional reconstruction method, system, and computer storage medium
CN115205441A (zh) Image rendering method and apparatus
CN116843841B (zh) Large-scale virtual reality system based on mesh compression
CN117333637B (zh) Method, apparatus, and device for modeling and rendering three-dimensional scenes
EP4287134A1 (en) Method and system for generating polygon meshes approximating surfaces using root-finding and iteration for mesh vertex positions
CN115482326A (zh) Method and apparatus for adjusting display size of rendering target, and storage medium
CN116824082B (zh) Method, apparatus, device, storage medium, and program product for drawing virtual terrain
US11954802B2 (en) Method and system for generating polygon meshes approximating surfaces using iteration for mesh vertex positions
WO2024109006A1 (zh) Light source culling method and rendering engine
US20230394767A1 (en) Method and system for generating polygon meshes approximating surfaces using root-finding and iteration for mesh vertex positions
WO2024037116A9 (zh) Method and apparatus for rendering three-dimensional model, electronic device, and storage medium
WO2023184139A1 (en) Methods and systems for rendering three-dimensional scenes
US20230316628A1 (en) Techniques for avoiding self-intersections when rendering signed distance functions
US20240221317A1 (en) Method and system for generating polygon meshes approximating surfaces using iteration for mesh vertex positions
Chochlík Scalable multi-GPU cloud raytracing with OpenGL
Kunert et al. Evaluating Light Probe Estimation Techniques for Mobile Augmented Reality
WO2023224627A1 (en) Face-oriented geometry streaming

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22894596

Country of ref document: EP

Kind code of ref document: A1