CN117173320A - Curved surface model processing method, device, computer equipment and storage medium - Google Patents

Curved surface model processing method, device, computer equipment and storage medium

Info

Publication number
CN117173320A
Authority
CN
China
Prior art keywords
model
target
curved surface
viewpoint
surface model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311174223.5A
Other languages
Chinese (zh)
Inventor
李家辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202311174223.5A priority Critical patent/CN117173320A/en
Publication of CN117173320A publication Critical patent/CN117173320A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses a curved surface model processing method, a curved surface model processing device, computer equipment and a storage medium. A curved surface model, a depth effect model corresponding to the curved surface model and a surrounding model are obtained, the center positions of the curved surface model, the depth effect model and the surrounding model being the same; the distance information from each region block in the surrounding model to the surface of the depth effect model is calculated; a target region block is determined according to the position of the current viewpoint of a target line of sight on the curved surface model; according to the distance information corresponding to the target region block, the current viewpoint is controlled to step in the direction of the target line of sight until the region block corresponding to the stepped current viewpoint meets a preset condition, and a target viewpoint is determined according to the stepped current viewpoint; and the target viewpoint is associated with the sampling point of the target line of sight on the curved surface model. The application presents the visual effect of a model with a high patch count through a model with fewer patches, which improves the rendering speed of the computer equipment and reduces the occupation of memory and bandwidth.

Description

Curved surface model processing method, device, computer equipment and storage medium
Technical Field
The application relates to the technical field of communication, and in particular to a curved surface model processing method, a curved surface model processing device, computer equipment and a computer readable storage medium.
Background
For a planar model, parallax mapping can make the model present a sense of depth. The space is divided into a plurality of height layers according to the height map of the planar model, the actual sampling point of the line of sight is determined by stepping the line of sight, and the UV coordinates of the actual sampling point are used as the UV coordinates of the sampling point of the line of sight on the planar model, so that the planar model appears higher or lower than the actual plane and thus presents a sense of depth in the visual effect.
For a curved surface model, the stepping distance of the line of sight cannot be determined, so parallax mapping cannot be applied to the curved surface model. To present a sense of depth, a model with a higher patch count has to be constructed; the data size of such a model is large, and the computer equipment needs more time and resources to render it.
Disclosure of Invention
The embodiment of the application provides a curved surface model processing method, a curved surface model processing device, computer equipment and a storage medium.
The curved surface model processing method provided by the embodiment of the application comprises the following steps:
acquiring a curved surface model, a depth effect model and a surrounding model corresponding to the curved surface model, wherein the depth effect model and the curved surface model have depth differences, and the center positions of the curved surface model, the depth effect model and the surrounding model are the same;
dividing the surrounding model into a plurality of area blocks, and calculating distance information from each area block to the surface of the depth effect model;
determining a target region block from the plurality of region blocks according to the position of the current viewpoint of the target sight on the curved surface model;
according to the distance information corresponding to the target area block, controlling the current viewpoint to step in the direction of the target line of sight until the area block corresponding to the stepped current viewpoint meets the preset condition, and determining the target viewpoint according to the stepped current viewpoint;
and associating the target viewpoint with the sampling point of the target line of sight on the curved surface model, so as to perform texture mapping processing on the curved surface model based on the target viewpoint, so that the visual effect of the curved surface model is the same as that of the depth effect model.
Correspondingly, the embodiment of the application also provides a curved surface model processing device, which comprises:
the device comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for acquiring a curved surface model, a depth effect model corresponding to the curved surface model and a surrounding model, the depth effect model and the curved surface model have depth differences, and the central positions of the curved surface model, the depth effect model and the surrounding model are the same;
the dividing unit is used for dividing the surrounding model into a plurality of area blocks and calculating the distance information from each area block to the surface of the depth effect model;
a determining unit configured to determine a target region block among the plurality of region blocks according to a position of a current viewpoint of a target line of sight on the curved surface model;
a step unit, configured to control the current viewpoint to step in the direction of the target line of sight according to the distance information corresponding to the target region block until the region block corresponding to the current viewpoint after the step meets a preset condition, and determine the target viewpoint according to the current viewpoint after the step;
and the association unit is used for associating the target viewpoint with the sampling points of the target sight line for the curved surface model so as to carry out texture mapping processing on the curved surface model based on the target viewpoint, so that the visual effect of the curved surface model is the same as that of the depth effect model.
Correspondingly, the embodiment of the application also provides computer equipment, which comprises a memory and a processor; the memory stores a computer program, and the processor is configured to run the computer program in the memory, so as to execute any one of the curved surface model processing methods provided by the embodiments of the present application.
Accordingly, the embodiment of the application also provides a computer readable storage medium for storing a computer program, where the computer program is loaded by a processor to execute any of the curved surface model processing methods provided by the embodiment of the application.
According to the embodiment of the application, the curved surface model, the depth effect model and the surrounding model corresponding to the curved surface model are obtained, the depth effect model and the curved surface model have depth difference, and the central positions of the curved surface model, the depth effect model and the surrounding model are the same; dividing the surrounding model into a plurality of area blocks, and calculating the distance information from each area block to the surface of the depth effect model; determining a target area block in the plurality of area blocks according to the position of the current viewpoint of the target sight on the curved surface model; according to the distance information corresponding to the target area block, controlling the current viewpoint to step in the direction of the target line of sight until the area block corresponding to the stepped current viewpoint meets the preset condition, and determining the target viewpoint according to the stepped current viewpoint; and correlating the target viewpoint and the target sight line with the sampling points of the curved surface model so as to carry out texture mapping processing on the curved surface model based on the target viewpoint, so that the visual effects of the curved surface model and the depth effect model are the same.
According to the embodiment of the application, the distance information of a region block in the surrounding model indicates the stepping distance of the current viewpoint toward the depth effect model, so the current viewpoint can be controlled to step to the area near the surface of the depth effect model according to the stepping distance corresponding to the region block. The target viewpoint is the actual sampling point of the target line of sight on the curved surface model, and by taking the texture coordinates corresponding to the target viewpoint as the texture coordinates of the sampling point of the target line of sight on the curved surface model, the curved surface model can be made to present the visual effect of the depth effect model.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for processing a curved surface model according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a curve model and visual depth effects provided by an embodiment of the present application;
FIG. 3 is a schematic view of parallax mapping of a planar model according to an embodiment of the present application;
FIG. 4 is a view stepping schematic diagram provided by an embodiment of the present application;
FIG. 5 is another view stepping schematic provided by an embodiment of the present application;
fig. 6 is a schematic diagram of two viewpoints located in the same region block according to an embodiment of the present application;
FIG. 7 is a schematic diagram of stepping within the same region block according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a curved surface model processing apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The embodiment of the application provides a curved surface model processing method, a curved surface model processing device, computer equipment and a computer readable storage medium. The curved surface model processing device can be integrated in computer equipment, and the computer equipment can be a server, a terminal and other equipment.
The terminal may include a mobile phone, a wearable intelligent device, a tablet computer, a notebook computer, a personal computer (PC, personal Computer), a car-mounted computer, and the like.
The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data and artificial intelligence platforms.
The following will describe in detail. The following description of the embodiments is not intended to limit the preferred embodiments.
The present embodiment will be described from the viewpoint of a curved surface model processing apparatus, which may be integrated in a computer device, which may be a server or a terminal, or other devices.
The specific flow of the curved surface model processing method provided by the embodiment of the application can be as follows, as shown in fig. 1:
101. and acquiring a curved surface model, a depth effect model corresponding to the curved surface model and an enclosing model, wherein the depth effect model and the curved surface model have depth differences, and the central positions of the curved surface model, the depth effect model and the enclosing model are the same.
The curved surface model can be a model comprising at least one curved surface, and each curved surface can be a smooth curved surface or a smoother curved surface; the depth effect model is a model having a sense of depth, and for example, bricks, cobbles, or the like are paved, and the depth effect model has a sense of depth due to the uneven effect of the bricks and cobbles.
The visual effect presented by the curved surface model can be the same as that presented by the depth effect model. The difference is that the surface of the depth effect model is truly rugged, whereas the shape of the surface of the curved surface model is not changed by the processing, that is, the model surface is the same as before the processing, yet the processed curved surface model presents the same visual effect as the depth effect model. The curved surface model and the corresponding visual effect can be as shown in fig. 2, which shows a cross section of the curved surface model, wherein the solid line is the actual height of the curved surface model and the dotted line represents the visual height of the curved surface model.
For a planar model, a sense of depth can be presented through parallax mapping. Specifically, as shown in fig. 3, the black curve represents the height or depth that the planar model should present visually, and the sampling point of the line of sight on the planar model is T0. Because the height of the plane deviates from the curve, the viewpoint is stepped until it falls below the black curve, so that a point T3 close to the intersection of the line of sight and the black curve can be obtained. Texture coordinate sampling is performed at the point T3, and the observer sees the point T3 at the position of the point T0, so that the planar model looks higher or lower than the actual plane and presents a sense of depth in the visual effect. Through parallax mapping, a flat wall surface can visually present a brick-paving effect, or the flat wall surface of a building can present the window effect of a shop.
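For illustration, the following is a minimal sketch, in Python, of layered parallax stepping against a height map on a planar model; the function name parallax_step, the sample_height callback and the height_scale and num_layers parameters are assumptions chosen for the sketch rather than values taken from the description.

```python
import numpy as np

def parallax_step(uv0, view_dir_tangent, sample_height, height_scale=0.05, num_layers=32):
    """Step a tangent-space view ray through height layers of a planar model.

    uv0: texture coordinates of the initial sampling point T0.
    view_dir_tangent: normalized view direction in tangent space (z points out of the plane).
    sample_height: callable returning the height map value in [0, 1] at given UV (assumed).
    Returns the UV of a point close to the intersection with the height curve (the point T3).
    """
    layer_depth = 1.0 / num_layers
    # UV offset accumulated per layer while the ray crosses the full height range.
    delta_uv = (np.asarray(view_dir_tangent[:2], dtype=float)
                / view_dir_tangent[2]) * height_scale / num_layers

    uv = np.asarray(uv0, dtype=float)
    current_layer = 0.0
    surface_depth = 1.0 - sample_height(uv)
    # March layer by layer until the stepped viewpoint falls below the height curve.
    while current_layer < surface_depth:
        uv = uv - delta_uv
        surface_depth = 1.0 - sample_height(uv)
        current_layer += layer_depth
    return uv
```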
For the curved surface model, the step distance of the sight line cannot be determined, so that the parallax map cannot be applied to the curved surface model.
Through the embodiment of the application, the visual effect of a model with a high patch count can be presented by a model with a low patch count, which improves the rendering speed of the computer equipment and reduces the memory and bandwidth occupied by the model.
The surrounding model may be an object capable of containing a curved surface model, for example, a cube or other shapes, and the surrounding model may be generated according to the size of the curved surface model, that is, in an embodiment, the step of "obtaining the curved surface model, a depth effect model corresponding to the curved surface model, and the surrounding model" may specifically include:
acquiring a curved surface model and a depth effect model corresponding to the curved surface model;
determining at least two boundary coordinates according to coordinates of model vertexes of the curved surface model;
a bounding model is generated from the at least two boundary coordinates.
The boundary coordinates may be coordinates outside the curved surface model or coordinates on the curved surface model, so that the surrounding model may surround the curved surface model.
For example, the curved surface model and the depth effect model are obtained. According to the coordinates of the model vertices of the grid surfaces on the curved surface model, the maximum values and minimum values of the coordinate components on the x axis, the y axis and the z axis are determined respectively, and the maximum values and minimum values are then combined into two boundary coordinates: the maximum values of the coordinate components on the x axis, the y axis and the z axis are combined into one boundary coordinate, and the minimum values of the coordinate components on the x axis, the y axis and the z axis are combined into the other boundary coordinate. The line connecting the two boundary coordinates can be regarded as a diagonal of the surrounding model, so a cube model can be determined according to the boundary coordinates. The combined coordinates may also be offset, so that the length, width and height of the obtained surrounding model are larger than those of the curved surface model.
Alternatively, the length, width and height of the curved surface model can be determined according to the maximum and minimum values of the coordinate components on the x axis, the y axis and the z axis, and a surrounding model whose length, width and height are the same as those of the curved surface model and whose model center coincides with that of the curved surface model can then be constructed. A surrounding model whose length, width and height are larger than those of the curved surface model and whose model center coincides with that of the curved surface model can also be constructed according to the length, width and height of the curved surface model.
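A minimal sketch of this bounding-box construction, assuming the curved surface model is available as an array of vertex coordinates; the margin parameter stands in for the optional offset that makes the surrounding model larger than the curved surface model.

```python
import numpy as np

def build_surrounding_box(vertices, margin=0.0):
    """Build an axis-aligned surrounding model (bounding box) for a curved surface model.

    vertices: (N, 3) array of model vertex coordinates.
    margin: optional offset so the box is larger than the model on every side.
    Returns the two boundary coordinates (min corner, max corner) and the center;
    the line connecting the two corners is a diagonal of the surrounding model.
    """
    vertices = np.asarray(vertices, dtype=float)
    min_corner = vertices.min(axis=0) - margin   # minimum x, y, z components
    max_corner = vertices.max(axis=0) + margin   # maximum x, y, z components
    center = (min_corner + max_corner) / 2.0     # same center as the curved surface model
    return min_corner, max_corner, center
```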
The model center of the surrounding model is at the same position as that of the curved surface model, and likewise at the same position as that of the depth effect model. If the curved surface model, the depth effect model and the surrounding model are placed in the same three-dimensional space, the three models are nested in one another: the depth effect model is nested inside the curved surface model, and the curved surface model is nested inside the surrounding model. The depth effect model and the curved surface model therefore have a depth difference, and when the line of sight reaches the surface of the curved surface model, it still needs to travel further to reach the surface of the depth effect model.
102. The bounding model is divided into a plurality of region blocks and distance information of each region block to the depth effect model surface is calculated.
The distance information may represent the nearest distance from the region block to the surface of the depth effect model, and the distance information may include a directional distance, that is, the distance information includes direction information, where the distance information is greater than 0, and represents that the center of the region block is outside the depth effect model; the distance information is smaller than 0, and the center of the regional block is shown in the depth effect model; the distance information is equal to 0, indicating that the center of the region block is on the surface of the depth effect model.
Alternatively, the distance information is larger than 0, which means that the center of the region block is outside the depth effect model; the distance information is equal to 0, and the center of the regional block is on the surface of the depth effect model; the distance information is empty, indicating that the center of the region block is within the depth effect model.
For example, the surrounding model may be divided into a plurality of region blocks, the shape of the region blocks may be a cube, and the distance information of each region block to the surface of the depth effect model is obtained according to the nearest distance from the center of each region block to the surface of the depth effect model.
A signed (directed) distance field of the depth effect model can be obtained based on the distance information corresponding to each region block; this distance field can also be regarded as a signed distance field of the curved surface model and is used to determine the stepping distance of the target line of sight.
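A sketch of dividing the surrounding model into cubic region blocks and recording the signed distance from each block center to the surface of the depth effect model; the signed_distance_to_surface callback (for example an exact point-to-mesh distance query) and the grid resolution are assumptions for illustration.

```python
import numpy as np

def build_signed_distance_grid(min_corner, max_corner, resolution, signed_distance_to_surface):
    """Divide the surrounding model into resolution^3 cubic region blocks and record
    the signed distance from each block center to the depth effect model surface
    (> 0 outside the model, < 0 inside, == 0 on the surface)."""
    min_corner = np.asarray(min_corner, dtype=float)
    max_corner = np.asarray(max_corner, dtype=float)
    block_size = (max_corner - min_corner) / resolution
    grid = np.empty((resolution, resolution, resolution), dtype=float)
    for i in range(resolution):
        for j in range(resolution):
            for k in range(resolution):
                # Center of the region block with index (i, j, k).
                center = min_corner + (np.array([i, j, k]) + 0.5) * block_size
                grid[i, j, k] = signed_distance_to_surface(center)
    return grid, block_size
```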
103. And determining a target area block in the plurality of area blocks according to the position of the current viewpoint of the target sight line on the curved surface model.
The target sight line may be considered as a ray from the position of the virtual camera, the target sight line may be a ray with a specified angle, the current viewpoint may be considered as an observation point of the target sight line on the curved surface model, and the current viewpoint may be a point on the surface of the curved surface model, for example, may be a model vertex.
For example, the region block closest to the observation point can be selected according to the distance between the position of the observation point and the center positions of the region blocks, thereby obtaining the target region block.
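A sketch of locating the target region block for a given viewpoint position; because the region blocks form a regular grid inside the surrounding model, the block whose center is closest to the viewpoint can be found by index arithmetic instead of comparing distances to every block center. The parameter names follow the grid sketch above and are illustrative.

```python
import numpy as np

def find_region_block(point, min_corner, block_size, resolution):
    """Return the (i, j, k) index of the region block whose center is closest to `point`."""
    idx = np.floor((np.asarray(point, dtype=float) - np.asarray(min_corner)) / block_size)
    # Clamp so viewpoints on the boundary of the surrounding model still map to a valid block.
    return tuple(np.clip(idx, 0, resolution - 1).astype(int))
```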
104. And controlling the current viewpoint to step in the direction of the target sight line according to the distance information corresponding to the target region block until the region block corresponding to the stepped current viewpoint meets the preset condition, and determining the target viewpoint according to the stepped current viewpoint.
If the distance information is smaller than 0, the center of the region block is located in the depth effect model, and the preset condition can include that the distance information corresponding to the region block is smaller than 0; and if the distance information is null, the preset condition may include that the distance information corresponding to the region block is null, which indicates that the center of the region block is located in the depth effect model.
For example, the distance information corresponding to the target region block can be used to determine the stepping distance of the current viewpoint. The current viewpoint is then moved along the target line of sight by the stepping distance to obtain a new viewpoint, and whether the new viewpoint meets the condition is judged: if it does, the new viewpoint is taken as the target viewpoint; if not, stepping continues until a target viewpoint meeting the condition is found. Whether the new viewpoint meets the condition can be judged by judging whether the region block corresponding to the new viewpoint meets the preset condition. That is, in an embodiment, the step of "controlling the current viewpoint to step in the direction of the target line of sight according to the distance information corresponding to the target region block until the distance information of the region block corresponding to the current viewpoint meets the preset condition, to obtain the target viewpoint" may specifically include:
Determining the stepping distance of the current viewpoint according to the distance information of the target area block;
controlling the current viewpoint to step in the direction of the target sight based on the step distance to obtain candidate viewpoints;
determining a corresponding first region block according to the position of the candidate viewpoint;
and if the distance information of the first area block is smaller than zero, determining the target viewpoint according to the candidate viewpoint.
For example, the length indicated by the distance information of the target region block may be used as the step distance of the current viewpoint, and the current viewpoint may be moved in the direction of the target line of sight based on the step distance, thereby obtaining the candidate viewpoint.
Since the shape and the center position of each region block are determined, the first region block closest to the candidate viewpoint can be determined according to the position of the candidate viewpoint. If the distance information of the first region block is less than zero or is empty, the candidate viewpoint is determined as the target viewpoint.
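A sketch of the stepping loop described above, combining the grid lookup with the distance information: the current viewpoint advances along the target line of sight by the distance stored for its region block, and stepping stops once the block corresponding to the stepped viewpoint has distance information smaller than zero. The max_steps bound is an assumption added only to keep the sketch finite.

```python
import numpy as np

def march_viewpoint(start_point, view_dir, grid, min_corner, block_size, max_steps=64):
    """Step the current viewpoint along the target line of sight using block distances.

    grid: array of signed distances from region block centers to the depth effect model surface.
    Returns the candidate viewpoint once its region block lies inside the depth effect
    model (distance information < 0), or None if no such block is reached.
    """
    point = np.asarray(start_point, dtype=float)
    direction = np.asarray(view_dir, dtype=float)
    direction = direction / np.linalg.norm(direction)
    resolution = np.asarray(grid.shape)
    for _ in range(max_steps):
        # Index of the region block containing the current point (clamped to the grid).
        idx = np.floor((point - np.asarray(min_corner)) / block_size).astype(int)
        idx = np.clip(idx, 0, resolution - 1)
        distance = grid[tuple(idx)]
        if distance < 0:
            return point                        # preset condition met: block is inside the model
        point = point + direction * distance    # step by the block's distance information
    return None
```

Stepping by the locally stored signed distance in this way is the same idea as sphere tracing through a signed distance field: each step is guaranteed not to overshoot the surface of the depth effect model.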
According to the position of the target viewpoint, whether the target viewpoint is located on the surface of the depth effect model can be determined. If the target viewpoint is located on the surface of the depth effect model, the UV coordinates of the target viewpoint are determined according to the UV coordinates of the model vertices of the depth effect model; if it is not, the UV coordinates corresponding to the target viewpoint are determined to be empty, and a transparent texture is displayed at the sampling point on the curved surface model.
Alternatively, it may also be determined according to the position of the candidate viewpoint, whether the candidate viewpoint is located on the surface of the depth effect model in the positional relationship, if so, the candidate viewpoint is taken as the target viewpoint, and if not, the model vertex closest to the candidate viewpoint in the depth effect model is taken as the target viewpoint.
Optionally, if the candidate viewpoint is not on the surface of the depth effect model, after the model vertex of the depth effect model closest to the candidate viewpoint is determined, the grid surfaces of the depth effect model containing that model vertex are determined, the distance between the candidate viewpoint and each of these grid surfaces is calculated, the grid surface with the shortest distance is selected as the target grid surface, the mapping point of the candidate viewpoint on the target grid surface is determined, and the mapping point is taken as the target viewpoint.
The length of the vector from the candidate viewpoint to the mapping point is known (it is the shortest distance), the vector is perpendicular to the target grid surface, that is, it has the same direction as the normal of the target grid surface, and the mapping point lies on the target grid surface; the coordinates of the mapping point can be solved based on these conditions.
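A sketch of solving the mapping point under these conditions: the mapping point is the foot of the perpendicular from the candidate viewpoint to the target grid surface, so it is obtained by removing the component of the offset along the surface normal; the parameter names are illustrative.

```python
import numpy as np

def project_onto_grid_surface(candidate, surface_vertex, surface_normal):
    """Project the candidate viewpoint onto the target grid surface.

    candidate: candidate viewpoint coordinates.
    surface_vertex: any model vertex on the target grid surface.
    surface_normal: unit normal vector of the target grid surface.
    The vector from the mapping point to the candidate viewpoint is parallel to the
    normal, and the mapping point lies on the surface, which fixes its coordinates.
    """
    candidate = np.asarray(candidate, dtype=float)
    surface_normal = np.asarray(surface_normal, dtype=float)
    offset = candidate - np.asarray(surface_vertex, dtype=float)
    signed_distance = np.dot(offset, surface_normal)
    return candidate - signed_distance * surface_normal
```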
If the distance information of the first region block is greater than zero, which indicates that the candidate viewpoint is located outside the depth effect model, the candidate viewpoint may be controlled to continue stepping until it reaches the depth effect model, so as to obtain the target viewpoint. That is, in an embodiment, after the step of determining the corresponding first region block according to the position of the candidate viewpoint, the curved surface model processing method provided by the embodiment of the present application may specifically further include:
If the distance information of the first area block is larger than zero, updating the stepping distance according to the distance information of the first area block;
updating the candidate view point into the current view point, and returning to execute stepping on the current view point in the direction of the target line of sight based on the stepping distance to obtain the candidate view point until the current view point and the candidate view point are positioned in the same region block;
and determining the target viewpoint according to the candidate viewpoints.
For example, if the distance information of the first area block is greater than zero, the distance information of the first area block is taken as a step distance; and updating the candidate viewpoint into the current viewpoint, and returning to the execution step of controlling the current viewpoint to step in the direction of the target sight line based on the stepping distance to obtain the candidate viewpoint.
The visual height effect of the region blocks corresponding to a local part of the curved surface model may be as shown in fig. 4. The radius of the dashed circle is the distance information of the corresponding region block, the radius of the solid circle is the stepping distance, and the center of the solid circle is the current viewpoint. The current viewpoint is controlled to step by the stepping distance, that is, it is moved to the position of the candidate viewpoint shown in fig. 4; the candidate viewpoint is the intersection of the target line of sight and the solid circle. As shown in fig. 5, the first region block can be determined according to the candidate viewpoint; the first region block is the region block in which the center of the small dashed circle is located. A stepping distance can then be determined according to the distance information of the first region block, and the candidate viewpoint after the first step is controlled to step in the direction of the target line of sight to obtain the candidate viewpoint after the second step. The region block closest to the candidate viewpoint obtained by the second step (denoted as the second region block) is determined, and whether the first region block and the second region block are the same region block is judged; if they are not, the distance information of the second region block is obtained.
If the distance information of the second region block is greater than zero, a stepping distance is determined according to the distance information of the second region block, the candidate viewpoint obtained by the second step is controlled to continue stepping to obtain the candidate viewpoint after the third step, the region block corresponding to the candidate viewpoint after the third step is determined, and so on, until the region block corresponding to the candidate viewpoint after a step is the same region block as the region block corresponding to the candidate viewpoint after the previous step, as shown in fig. 6.
If the region block corresponding to the current viewpoint and the region block corresponding to the candidate viewpoint are the same region block, the target viewpoint is determined according to the candidate viewpoint. Specifically, the distance from the candidate viewpoint to each grid surface in the depth effect model is calculated according to the position information of the candidate viewpoint, and the target grid surface with the shortest distance is selected; if the shortest distance is 0, the candidate viewpoint is taken as the target viewpoint, otherwise the mapping point of the candidate viewpoint on the target grid surface is determined and taken as the target viewpoint.
Since the length of the vector from the candidate viewpoint to the mapping point is known, the vector is perpendicular to the target grid surface, that is, it has the same direction as the normal of the target grid surface, and the mapping point lies on the target grid surface, the coordinates of the mapping point can be solved based on these conditions.
Optionally, the distance between the candidate viewpoint and the depth effect model may be calculated according to the position information of the candidate viewpoint. If the distance is greater than a preset distance threshold, the candidate viewpoint continues to be controlled to step according to the distance information corresponding to the same region block, until the distance between the candidate viewpoint and the depth effect model is less than the preset distance threshold, so as to obtain the target viewpoint. That is, in an embodiment, the step of determining the target viewpoint according to the candidate viewpoint may specifically include:
and continuously controlling the candidate view point to step in the direction of the target view line according to the stepping distance corresponding to the region block corresponding to the candidate view point and the target view point until the distance between the candidate view point and the surface of the curved surface model is smaller than a preset distance threshold value, so as to obtain the target view point.
The preset distance threshold may be a preset distance, for example, 1cm, 5mm, etc.
For example, when the region blocks corresponding to the candidate viewpoint and the current viewpoint are the same region block, as shown in fig. 7, a stepping distance is determined according to the distance information of that region block, and the candidate viewpoint is controlled to step by that distance. After the step, the distance between the candidate viewpoint and the depth effect model is calculated. If the distance is greater than the preset distance threshold, stepping continues; if it is less than the preset distance threshold, the target viewpoint is determined according to the stepped candidate viewpoint. Specifically, the distance between the candidate viewpoint and each grid surface in the depth effect model can be calculated according to the position information of the candidate viewpoint, and the target grid surface with the shortest distance is selected. If the shortest distance is 0, the candidate viewpoint is taken as the target viewpoint; otherwise, the mapping point of the candidate viewpoint on the target grid surface is determined and taken as the target viewpoint. The mapping point of the candidate viewpoint on the target grid surface can be determined with reference to the description of the related content above, which is not repeated here.
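A sketch of this optional threshold-based refinement, assuming a distance_to_model query that returns the distance from a point to the surface of the depth effect model; the fixed step_distance stands in for the distance information of the shared region block, and threshold and max_steps are illustrative values.

```python
import numpy as np

def refine_candidate(candidate, view_dir, step_distance, distance_to_model,
                     threshold=0.005, max_steps=32):
    """Keep stepping the candidate viewpoint along the target line of sight until it is
    closer than `threshold` to the depth effect model surface."""
    point = np.asarray(candidate, dtype=float)
    direction = np.asarray(view_dir, dtype=float)
    direction = direction / np.linalg.norm(direction)
    for _ in range(max_steps):
        if distance_to_model(point) < threshold:
            return point   # close enough: determine the target viewpoint from this point
        point = point + direction * step_distance
    return point
```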
105. And correlating the target viewpoint and the target sight line with the sampling points of the curved surface model so as to carry out texture mapping processing on the curved surface model based on the target viewpoint, so that the visual effects of the curved surface model and the depth effect model are the same.
For example, determining a sampling point of the target sight line for the curved surface model, taking the target viewpoint as an actual sampling point of the target sight line for the curved surface model, so that texture coordinates of the actual sampling point are taken as texture coordinates of a sampling point of the target sight line on the curved surface model, and then performing texture mapping processing on the curved surface model, so that the curved surface model can display textures corresponding to the actual sampling point at the sampling point, thereby realizing that the visual effect is the same as that of the depth effect model.
The texture coordinates of the model on the depth effect model may be obtained by mapping in advance, or may be determined by mapping the target viewpoint into the texture space, so after the step of associating the target viewpoint with the target line of sight with respect to the sampling point of the surface model, the surface model processing method provided by the embodiment of the present application may specifically further include:
mapping the target viewpoint into a texture space to obtain texture coordinates corresponding to the target viewpoint;
And performing texture mapping on the curved surface model based on the texture coordinates and the sampling points of the target sight line on the curved surface model to obtain a processed curved surface model with the same visual effect as the depth effect model.
The texture space may be a space corresponding to a texture map of the depth effect model, and the target viewpoint is mapped into the texture space, so as to obtain texture coordinates corresponding to the target viewpoint.
For example, a grid surface corresponding to the target viewpoint on the depth effect model may be determined, a barycenter coordinate of the target viewpoint on the grid surface is calculated according to the vertex coordinates of the model vertices of the grid surface and the coordinates of the target viewpoint, the barycenter coordinate represents a linear combination weight of the target viewpoint between three vertices of the grid surface, a UV coordinate of the model vertices of the grid surface is obtained, and interpolation calculation is performed according to the barycenter coordinate and the UV coordinate of the model vertices of the grid surface to obtain the corresponding UV coordinate of the target viewpoint.
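A sketch of this barycentric interpolation, assuming the target viewpoint lies on (or has been mapped onto) a triangular grid surface whose vertex coordinates and texture coordinates are known.

```python
import numpy as np

def interpolate_uv(target_point, tri_positions, tri_uvs):
    """Compute the UV coordinates of the target viewpoint on a triangular grid surface.

    tri_positions: (3, 3) array with the coordinates of the three model vertices.
    tri_uvs: (3, 2) array with the texture coordinates of the same vertices.
    The barycentric coordinates are the linear combination weights of the target
    viewpoint with respect to the three vertices; the UVs are blended with them.
    """
    a, b, c = [np.asarray(v, dtype=float) for v in tri_positions]
    p = np.asarray(target_point, dtype=float)
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = np.dot(v0, v0), np.dot(v0, v1), np.dot(v1, v1)
    d20, d21 = np.dot(v2, v0), np.dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    u = 1.0 - v - w
    weights = np.array([u, v, w])        # barycentric coordinates of the target viewpoint
    return weights @ np.asarray(tri_uvs, dtype=float)
```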
The texture coordinates corresponding to the target viewpoint are taken as the texture coordinates of the sampling point of the target line of sight on the curved surface model. The texture coordinates corresponding to each model vertex on the curved surface model are determined in the same way, texture mapping is then performed, and the texture map is mapped onto the position indicated by the sampling point of the target line of sight on the curved surface model, so that the visual effect of the curved surface model after texture mapping is the same as that of the depth effect model.
In an embodiment, to determine the texture coordinates corresponding to the target viewpoint, the target viewpoint may be mapped from world space to tangent space according to a TBN matrix. The coordinate components of the coordinates in tangent space on the X axis and the Y axis correspond to the U coordinate and the V coordinate of the texture coordinates respectively, so the texture coordinates corresponding to the target viewpoint can be obtained from its coordinates in tangent space. That is, the step of "mapping the target viewpoint to the texture space to obtain texture coordinates corresponding to the target viewpoint" may specifically include:
determining a grid surface corresponding to the target viewpoint on the depth effect model and a normal vector of the grid surface;
calculating tangent vectors of the grid surface according to vertex coordinates of the model vertices on the grid surface and corresponding texture coordinates;
determining a secondary tangent vector (bitangent) according to the tangent vector and the normal vector;
converting the coordinates of the target viewpoint in world space based on the conversion matrix determined by the normal vector, the tangent vector and the secondary tangent vector, to obtain the coordinates in tangent space;
and obtaining the texture coordinates of the target viewpoint according to the coordinates in tangent space.
For example, the method specifically includes determining a grid surface corresponding to the target viewpoint on the depth effect model, determining a normal vector of the grid surface according to model data of the depth effect model, and determining vertex coordinates and texture coordinates of model vertices on the grid surface.
The tangent vector of the grid surface is calculated according to the vertex coordinates and the texture coordinates of the model vertices on the grid surface. Since the tangent vector, the normal vector and the secondary tangent vector are perpendicular to one another, the secondary tangent vector can be determined from the tangent vector and the normal vector, and the tangent vector, the secondary tangent vector and the normal vector are unit vectors on the coordinate axes of the tangent space.
The tangent vector, the secondary tangent vector and the normal vector correspond to the X axis, the Y axis and the Z axis of the tangent space respectively, and the tangent space can be regarded as a space obtained by rotating the world space, so a conversion matrix that maps points in world coordinates into the tangent space is determined based on the normal vector, the tangent vector and the secondary tangent vector.
The coordinates of the target viewpoint in world space are then converted according to the conversion matrix to obtain the coordinates of the target viewpoint in tangent space, and the texture coordinates of the target viewpoint are obtained according to the coordinates in tangent space.
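A sketch of this tangent-space conversion: the tangent and secondary tangent vectors are derived from the edge vectors and UV deltas of the grid surface, a conversion matrix is formed together with the normal vector, and the world-space coordinates of the target viewpoint are converted so that the X and Y components in tangent space give the U and V offsets. Keeping the tangent and secondary tangent unnormalized is an implementation choice made for the sketch so that these components read directly as texture-coordinate offsets; all names are illustrative.

```python
import numpy as np

def world_to_tangent_uv(target_point, tri_positions, tri_uvs):
    """Convert a world-space target viewpoint into the tangent space of a grid surface
    and read its texture coordinates from the tangent-space X/Y components."""
    p0, p1, p2 = [np.asarray(v, dtype=float) for v in tri_positions]
    uv0, uv1, uv2 = [np.asarray(v, dtype=float) for v in tri_uvs]

    # Tangent and secondary tangent (bitangent) from the edge vectors and UV deltas,
    # kept unnormalized so one unit along them corresponds to one unit of U or V.
    e1, e2 = p1 - p0, p2 - p0
    duv1, duv2 = uv1 - uv0, uv2 - uv0
    r = 1.0 / (duv1[0] * duv2[1] - duv2[0] * duv1[1])
    tangent = (e1 * duv2[1] - e2 * duv1[1]) * r
    bitangent = (e2 * duv1[0] - e1 * duv2[0]) * r
    normal = np.cross(e1, e2)
    normal = normal / np.linalg.norm(normal)     # normal vector of the grid surface

    # Conversion matrix from world-space offsets to tangent-space coordinates.
    tbn = np.column_stack([tangent, bitangent, normal])
    local = np.linalg.solve(tbn, np.asarray(target_point, dtype=float) - p0)
    # The X and Y components in tangent space are the U/V offsets from vertex 0.
    return uv0 + local[:2]
```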
From the above, according to the embodiment of the application, the curved surface model, the depth effect model corresponding to the curved surface model and the surrounding model are obtained, the depth effect model and the curved surface model have depth difference, and the central positions of the curved surface model, the depth effect model and the surrounding model are the same; dividing the surrounding model into a plurality of area blocks, and calculating the distance information from each area block to the surface of the depth effect model; determining a target area block in the plurality of area blocks according to the position of the current viewpoint of the target sight on the curved surface model; according to the distance information corresponding to the target area block, controlling the current viewpoint to step in the direction of the target line of sight until the area block corresponding to the stepped current viewpoint meets the preset condition, and determining the target viewpoint according to the stepped current viewpoint; and correlating the target viewpoint and the target sight line with the sampling points of the curved surface model so as to carry out texture mapping processing on the curved surface model based on the target viewpoint, so that the visual effects of the curved surface model and the depth effect model are the same.
According to the embodiment of the application, the distance information of a region block in the surrounding model indicates the stepping distance of the current viewpoint toward the depth effect model, so the current viewpoint can be controlled to step to the area near the surface of the depth effect model according to the stepping distance corresponding to the region block. The target viewpoint is the actual sampling point of the target line of sight on the curved surface model, and by taking the texture coordinates corresponding to the target viewpoint as the texture coordinates of the sampling point of the target line of sight on the curved surface model, the curved surface model can be made to present the visual effect of the depth effect model.
In order to facilitate better implementation of the curved surface model processing method provided by the embodiment of the application, an embodiment also provides a curved surface model processing device. The meaning of the nouns is the same as that in the curved surface model processing method, and specific implementation details can be referred to the description in the method embodiment.
The surface model processing apparatus may be integrated in a computer device, as shown in fig. 8, and the surface model processing apparatus may include: the acquisition unit 301, the division unit 302, the determination unit 303, the stepping unit 304, and the association unit 305 are specifically as follows:
(1) The obtaining unit 301 is configured to obtain a curved surface model, a depth effect model corresponding to the curved surface model, and an enclosing model, where the depth effect model has a depth difference from the curved surface model, and the center positions of the curved surface model, the depth effect model, and the enclosing model are the same.
In an embodiment, the acquisition unit 301 may include:
the model acquisition subunit is used for acquiring the curved surface model and a depth effect model corresponding to the curved surface model;
the coordinate determining subunit is used for determining at least two boundary coordinates according to the coordinates of the model vertexes of the curved surface model;
and the model generation subunit is used for generating an enclosed model according to at least two boundary coordinates.
(2) A dividing unit 302 for dividing the surrounding model into a plurality of region blocks and calculating distance information of each region block to the surface of the depth effect model.
(3) A determining unit 303 for determining a target region block among the plurality of region blocks according to a position of a current viewpoint of the target line of sight on the curved surface model.
(4) And the step unit 304 is configured to control the current viewpoint to step in the direction of the target line of sight according to the distance information corresponding to the target region block until the region block corresponding to the stepped current viewpoint meets a preset condition, and determine the target viewpoint according to the stepped current viewpoint.
In an embodiment, the step unit 304 may include:
a distance determining subunit, configured to determine a step distance of the current viewpoint according to the distance information of the target area block;
a step subunit, configured to control, based on the step distance, the current viewpoint to step in the direction of the target line of sight, so as to obtain a candidate viewpoint;
a region block determining subunit, configured to determine a corresponding first region block according to a position of the candidate viewpoint;
and the first view point determining subunit is used for determining the target view point according to the candidate view point if the distance information of the first area block is smaller than zero.
In an embodiment, the stepping unit 304 may further include:
a distance updating subunit, configured to update the step distance according to the distance information of the first area block if the distance information of the first area block is greater than zero;
a viewpoint updating subunit, configured to update the candidate viewpoint as the current viewpoint, and return to perform stepping control on the current viewpoint in the direction of the target line of sight based on the stepping distance, so as to obtain the candidate viewpoint, until the current viewpoint and the candidate viewpoint are located in the same region block;
and a second viewpoint determining sub-unit for determining a target viewpoint from the candidate viewpoints.
In an embodiment, the second view determining subunit may include:
And the circulation module is used for continuously controlling the candidate view point to step in the direction of the target sight line according to the stepping distance corresponding to the region block corresponding to the candidate view point and the target view point until the distance between the candidate view point and the surface of the curved surface model is smaller than a preset distance threshold value, so as to obtain the target view point.
(5) And the associating unit 305 is configured to associate the target viewpoint with the sampling point of the target line of sight for the curved surface model, so as to perform texture mapping processing on the curved surface model based on the target viewpoint, so that the visual effect of the curved surface model is the same as that of the depth effect model.
In an embodiment, the curved surface model processing device provided by the embodiment of the present application may further include:
the space mapping unit is used for mapping the target viewpoint into the texture space to obtain texture coordinates corresponding to the target viewpoint;
and the texture mapping unit is used for performing texture mapping on the curved surface model based on the texture coordinates and the sampling points of the target sight to the curved surface model to obtain a processed curved surface model with the same visual effect as the depth effect model.
In an embodiment, the space mapping unit may include:
the grid surface determining subunit is used for determining a grid surface corresponding to the target viewpoint on the depth effect model and a normal vector of the grid surface;
The vector calculating subunit is used for calculating the tangent vector of the grid surface according to the vertex coordinates of the model vertexes and the corresponding texture coordinates on the grid surface;
a vector determination subunit, configured to determine a secondary tangent vector according to the tangent vector and the normal vector;
the space conversion subunit is used for converting the coordinates of the target viewpoint in world space based on the conversion matrix determined by the normal vector, the tangent vector and the secondary tangent vector, to obtain the coordinates in tangent space;
and the coordinate determining subunit is used for obtaining the texture coordinates of the target viewpoint according to the coordinates in tangent space.
As can be seen from the above, the curved surface model processing device according to the embodiment of the present application obtains the curved surface model, the depth effect model and the surrounding model corresponding to the curved surface model through the obtaining unit 301, wherein the depth effect model and the curved surface model have a depth difference, and the center positions of the curved surface model, the depth effect model and the surrounding model are the same; the dividing unit 302 divides the surrounding model into a plurality of region blocks, and calculates distance information of each region block to the surface of the depth effect model; the determining unit 303 determines a target region block among the plurality of region blocks according to the position of the current viewpoint of the target line of sight on the curved surface model; the stepping unit 304 controls the current viewpoint to step in the direction of the target line of sight according to the distance information corresponding to the target region block until the region block corresponding to the stepped current viewpoint meets the preset condition, and determines the target viewpoint according to the stepped current viewpoint; the association unit 305 associates the target viewpoint and the target line of sight with respect to the sampling points of the surface model so as to perform texture mapping processing on the surface model based on the target viewpoint so that the visual effect of the surface model is the same as that of the depth effect model.
According to the embodiment of the application, the distance information of a region block in the surrounding model indicates the stepping distance of the current viewpoint toward the depth effect model, so the current viewpoint can be controlled to step to the area near the surface of the depth effect model according to the stepping distance corresponding to the region block. The target viewpoint is the actual sampling point of the target line of sight on the curved surface model, and by taking the texture coordinates corresponding to the target viewpoint as the texture coordinates of the sampling point of the target line of sight on the curved surface model, the curved surface model can be made to present the visual effect of the depth effect model.
Correspondingly, the embodiment of the application also provides computer equipment which can be a terminal. Fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 500 includes a processor 501 having one or more processing cores, a memory 502 having one or more computer readable storage media, and a computer program stored on the memory 502 and executable on the processor. The processor 501 is electrically connected to the memory 502. It will be appreciated by those skilled in the art that the computer device structure shown in the figures is not limiting of the computer device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The processor 501 is a control center of the computer device 500, connects various parts of the entire computer device 500 using various interfaces and lines, and performs various functions of the computer device 500 and processes data by running or loading software programs and/or modules stored in the memory 502, and calling data stored in the memory 502, thereby performing overall monitoring of the computer device 500.
In the embodiment of the present application, the processor 501 in the computer device 500 loads the instructions corresponding to the processes of one or more application programs into the memory 502 according to the following steps, and the processor 501 executes the application programs stored in the memory 502, so as to implement various functions:
obtaining a curved surface model, a depth effect model corresponding to the curved surface model and an enclosing model, wherein the depth effect model and the curved surface model have depth difference, and the central positions of the curved surface model, the depth effect model and the enclosing model are the same;
dividing the surrounding model into a plurality of area blocks, and calculating the distance information from each area block to the surface of the depth effect model;
determining a target area block in the plurality of area blocks according to the position of the current viewpoint of the target sight on the curved surface model;
According to the distance information corresponding to the target area block, controlling the current viewpoint to step in the direction of the target line of sight until the area block corresponding to the stepped current viewpoint meets the preset condition, and determining the target viewpoint according to the stepped current viewpoint;
and correlating the target viewpoint and the target sight line with the sampling points of the curved surface model so as to carry out texture mapping processing on the curved surface model based on the target viewpoint, so that the visual effects of the curved surface model and the depth effect model are the same.
In an embodiment, the step of "controlling the current viewpoint to step in the direction of the target line of sight according to the distance information corresponding to the target region block until the region block corresponding to the current viewpoint after the step meets the preset condition, and determining the target viewpoint according to the current viewpoint after the step" may specifically include:
determining the stepping distance of the current viewpoint according to the distance information of the target area block;
controlling the current viewpoint to step in the direction of the target sight based on the step distance to obtain candidate viewpoints;
determining a corresponding first region block according to the position of the candidate viewpoint;
and if the distance information of the first area block is smaller than zero, determining the target viewpoint according to the candidate viewpoint.
In an embodiment, after the step of determining the corresponding first region block according to the position of the candidate viewpoint, the method for processing the curved surface model according to the embodiment of the present application may further include:
If the distance information of the first area block is larger than zero, updating the stepping distance according to the distance information of the first area block;
updating the candidate viewpoint as the current viewpoint, and returning to the step of controlling the current viewpoint to step in the direction of the target line of sight based on the stepping distance to obtain a candidate viewpoint, until the current viewpoint and the candidate viewpoint are located in the same region block;
and determining the target viewpoint according to the candidate viewpoints.
In an embodiment, the step of determining the target viewpoint from the candidate viewpoints may include:
and continuing to control the candidate viewpoint to step in the direction of the target line of sight according to the stepping distance corresponding to the region block in which the candidate viewpoint is located, until the distance between the candidate viewpoint and the surface of the curved surface model is smaller than a preset distance threshold, so as to obtain the target viewpoint.
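The stepping procedure described in the above embodiments may be sketched as follows, assuming the per-block signed distances and block size produced by the previous sketch; the fixed fine step used in the refinement phase, the threshold and the iteration limits are simplifications chosen for this illustration rather than values required by the application.

    import numpy as np

    def block_of(point, grid_min, block_size, grid_dims):
        """Index of the region block of the surrounding model that contains a
        world-space position (positions outside the grid are clamped)."""
        offset = np.asarray(point, dtype=float) - np.asarray(grid_min, dtype=float)
        ijk = np.clip((offset / block_size).astype(int), 0, np.asarray(grid_dims) - 1)
        return tuple(ijk)

    def step_to_target_viewpoint(start, view_dir, distances, grid_min, block_size,
                                 grid_dims, fine_step=0.01, threshold=0.005,
                                 max_iters=64):
        """Coarse phase: advance the current viewpoint along the target line of
        sight by the signed distance stored for its region block, until that
        distance turns negative (the depth effect surface has been crossed) or the
        candidate viewpoint falls in the same block as the current one.
        Fine phase: keep stepping by a small fixed amount until the stored distance
        falls below the threshold; the result is taken as the target viewpoint."""
        view_dir = np.asarray(view_dir, dtype=float)
        view_dir = view_dir / np.linalg.norm(view_dir)
        current = np.asarray(start, dtype=float)
        for _ in range(max_iters):
            block = block_of(current, grid_min, block_size, grid_dims)
            d = distances[block]
            if d < 0:
                break                                  # surface crossed
            candidate = current + view_dir * d
            same_block = block_of(candidate, grid_min, block_size, grid_dims) == block
            current = candidate
            if same_block:
                break                                  # step no longer leaves the block
        for _ in range(max_iters):                     # fine refinement near the surface
            if distances[block_of(current, grid_min, block_size, grid_dims)] <= threshold:
                break
            current = current + view_dir * fine_step
        return current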
In an embodiment, after the step of associating the target viewpoint with the sampling point of the target line of sight on the curved surface model, the curved surface model processing method provided by the embodiment of the application further includes:
mapping the target viewpoint into a texture space to obtain texture coordinates corresponding to the target viewpoint;
and performing texture mapping on the curved surface model based on the texture coordinates and the sampling points of the target sight line on the curved surface model to obtain a processed curved surface model with the same visual effect as the depth effect model.
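Purely as an illustration of how the resulting texture coordinates may be consumed, the following sketch performs a nearest-neighbour fetch from the texture of the curved surface model at the coordinates derived from the target viewpoint; the texture layout and the clamping behaviour are assumptions of this example.

    import numpy as np

    def sample_surface_texture(texture, uv):
        """Fetch the texel of the curved surface model's texture at the texture
        coordinates derived from the target viewpoint, so that the fragment covering
        the original sampling point shows the colour of the depth effect model."""
        h, w = texture.shape[:2]
        u, v = np.clip(np.asarray(uv, dtype=float), 0.0, 1.0)
        return texture[int(v * (h - 1)), int(u * (w - 1))]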
In an embodiment, the step of mapping the target viewpoint to the texture space to obtain texture coordinates corresponding to the target viewpoint may specifically include:
determining a grid surface corresponding to the target viewpoint on the depth effect model and a normal vector of the grid surface;
calculating a tangent vector of the grid surface according to vertex coordinates of the model vertices on the grid surface and the corresponding texture coordinates;
determining a secondary tangent vector according to the tangent vector and the normal vector;
converting the coordinates of the target viewpoint in world space based on the conversion matrix determined by the normal vector, the tangent vector and the secondary tangent vector, to obtain the coordinates of the target viewpoint in tangent space;
and obtaining the texture coordinates of the target viewpoint according to the tangent space coordinates.
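The tangent-space conversion described above can be sketched for the case of a single triangular grid surface with vertex positions p0, p1, p2 and texture coordinates uv0, uv1, uv2; the choice to leave the tangent and secondary tangent unnormalized, so that tangent-space offsets can be read directly as texture-space offsets, is made for this illustration only.

    import numpy as np

    def target_viewpoint_uv(p0, p1, p2, uv0, uv1, uv2, target_point):
        """Convert the world-space target viewpoint into the tangent space of the
        grid surface it falls on and read its texture coordinates from the tangent
        and secondary-tangent components."""
        p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
        uv0, uv1, uv2 = (np.asarray(t, dtype=float) for t in (uv0, uv1, uv2))
        e1, e2 = p1 - p0, p2 - p0
        duv1, duv2 = uv1 - uv0, uv2 - uv0
        normal = np.cross(e1, e2)
        normal = normal / np.linalg.norm(normal)
        # tangent and secondary tangent from vertex coordinates and texture
        # coordinates; left unnormalized so that one unit along them corresponds
        # to one unit in texture space
        r = 1.0 / (duv1[0] * duv2[1] - duv1[1] * duv2[0])
        tangent = (e1 * duv2[1] - e2 * duv1[1]) * r
        secondary = (e2 * duv1[0] - e1 * duv2[0]) * r
        # conversion matrix whose columns are the tangent, the secondary tangent
        # and the normal; solving maps a world-space offset into tangent space
        tbn = np.column_stack([tangent, secondary, normal])
        local = np.linalg.solve(tbn, np.asarray(target_point, dtype=float) - p0)
        # the tangent-space x/y components are offsets in texture space
        return uv0 + local[:2]

In practice the normal vector may be taken from the mesh data of the depth effect model rather than recomputed from the triangle, and, as described in the embodiment, the secondary tangent vector may instead be determined from the tangent vector and the normal vector; the sketch derives it directly from the texture coordinates so that no additional rescaling is needed.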
In an embodiment, the step of "obtaining the curved surface model, the depth effect model and the surrounding model corresponding to the curved surface model" may include:
acquiring a depth effect model corresponding to the curved surface model;
determining at least two boundary coordinates according to coordinates of model vertexes of the curved surface model;
and generating the surrounding model according to the at least two boundary coordinates.
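One possible way, shown only as an illustration, to determine the at least two boundary coordinates and generate the surrounding model is an axis-aligned box built from the minimum and maximum vertex coordinates of the curved surface model; the optional padding parameter is an assumption of this sketch.

    import numpy as np

    def build_surrounding_model(vertices, padding=0.0):
        """Determine two boundary coordinates (the minimum and maximum corners) from
        the model vertices of the curved surface model and return them together with
        the eight corners of the axis-aligned surrounding model generated from them."""
        vertices = np.asarray(vertices, dtype=float)
        lo = vertices.min(axis=0) - padding
        hi = vertices.max(axis=0) + padding
        corners = np.array([[x, y, z]
                            for x in (lo[0], hi[0])
                            for y in (lo[1], hi[1])
                            for z in (lo[2], hi[2])])
        return lo, hi, corners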
From the above, according to the embodiment of the application, the curved surface model, the depth effect model corresponding to the curved surface model and the surrounding model are obtained, the depth effect model and the curved surface model have a depth difference, and the central positions of the curved surface model, the depth effect model and the surrounding model are the same; the surrounding model is divided into a plurality of area blocks, and the distance information from each area block to the surface of the depth effect model is calculated; a target area block is determined among the plurality of area blocks according to the position of the current viewpoint of the target line of sight on the curved surface model; according to the distance information corresponding to the target area block, the current viewpoint is controlled to step in the direction of the target line of sight until the area block corresponding to the stepped current viewpoint meets a preset condition, and the target viewpoint is determined according to the stepped current viewpoint; and the target viewpoint is associated with the sampling point of the target line of sight on the curved surface model, so as to perform texture mapping processing on the curved surface model based on the target viewpoint, so that the visual effect of the curved surface model is the same as that of the depth effect model.
According to the embodiment of the application, the distance information of a region block in the surrounding model indicates how far the current viewpoint can step toward the depth effect model. The current viewpoint can therefore be controlled to step to the region near the surface of the depth effect model according to the stepping distance corresponding to each region block, and the resulting target viewpoint is the actual sampling point of the target line of sight on the curved surface model. By using the texture coordinates corresponding to the target viewpoint as the texture coordinates of the sampling point of the target line of sight on the curved surface model, the curved surface model can present the visual effect of the depth effect model.
For the specific implementation of each operation above, reference may be made to the previous embodiments, and details are not described herein again.
Optionally, as shown in FIG. 9, the computer device 500 further includes: a touch display screen 503, a radio frequency circuit 504, an audio circuit 505, an input unit 506, and a power supply 507. The processor 501 is electrically connected to the touch display screen 503, the radio frequency circuit 504, the audio circuit 505, the input unit 506, and the power supply 507, respectively. Those skilled in the art will appreciate that the structure shown in FIG. 9 does not limit the computer device, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
The touch display screen 503 may be used to display a graphical user interface and receive operation instructions generated by a user acting on the graphical user interface. The touch display screen 503 may include a display panel and a touch panel. The display panel may be used to display information entered by the user or provided to the user, as well as various graphical user interfaces of the computer device, which may be composed of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a liquid crystal display (LCD, Liquid Crystal Display), an organic light-emitting diode (OLED, Organic Light-Emitting Diode), or the like. The touch panel may be used to collect touch operations by the user on or near it (such as operations by the user on or near the touch panel using a finger, a stylus, or any other suitable object or accessory) and generate corresponding operation instructions, according to which the corresponding programs are executed. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 501, and can also receive commands from the processor 501 and execute them. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, it passes the operation to the processor 501 to determine the type of touch event, and the processor 501 then provides a corresponding visual output on the display panel based on the type of touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 503 to realize the input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions. That is, the touch display screen 503 may also implement an input function as part of the input unit 506.
The radio frequency circuitry 504 may be used to transceive radio frequency signals to establish wireless communications with a network device or other computer device via wireless communications.
The audio circuit 505 may be used to provide an audio interface between the user and the computer device through a speaker, a microphone, and so on. On the one hand, the audio circuit 505 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts collected sound signals into electrical signals, which are received by the audio circuit 505 and converted into audio data, and the audio data are then output to the processor 501 for processing before being sent, for example, to another computer device via the radio frequency circuit 504, or output to the memory 502 for further processing. The audio circuit 505 may also include an earphone jack to provide communication between a peripheral earphone and the computer device.
The input unit 506 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 507 is used to power the various components of the computer device 500. Alternatively, the power supply 507 may be logically connected to the processor 501 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system. The power supply 507 may also include any one or more of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other components.
Although not shown in fig. 9, the computer device 500 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which will not be described herein.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer readable storage medium having stored therein a plurality of computer programs that can be loaded by a processor to perform the steps in any of the curved surface model processing methods provided by the embodiments of the present application. For example, the computer program may perform the steps of:
obtaining a curved surface model, a depth effect model corresponding to the curved surface model and a surrounding model, wherein the depth effect model and the curved surface model have a depth difference, and the central positions of the curved surface model, the depth effect model and the surrounding model are the same;
dividing the surrounding model into a plurality of area blocks, and calculating the distance information from each area block to the surface of the depth effect model;
determining a target area block in the plurality of area blocks according to the position of the current viewpoint of the target sight on the curved surface model;
according to the distance information corresponding to the target area block, controlling the current viewpoint to step in the direction of the target line of sight until the area block corresponding to the stepped current viewpoint meets the preset condition, and determining the target viewpoint according to the stepped current viewpoint;
and associating the target viewpoint with the sampling point of the target line of sight on the curved surface model, so as to perform texture mapping processing on the curved surface model based on the target viewpoint, so that the visual effect of the curved surface model is the same as that of the depth effect model.
In an embodiment, the step of "controlling the current viewpoint to step in the direction of the target line of sight according to the distance information corresponding to the target region block until the region block corresponding to the current viewpoint after the step meets the preset condition, and determining the target viewpoint according to the current viewpoint after the step" may specifically include:
Determining the stepping distance of the current viewpoint according to the distance information of the target area block;
controlling the current viewpoint to step in the direction of the target sight based on the step distance to obtain candidate viewpoints;
determining a corresponding first region block according to the position of the candidate viewpoint;
and if the distance information of the first area block is smaller than zero, determining the target viewpoint according to the candidate viewpoint.
In an embodiment, after the step of determining the corresponding first region block according to the position of the candidate viewpoint, the method for processing the curved surface model according to the embodiment of the present application may further include:
if the distance information of the first area block is larger than zero, updating the stepping distance according to the distance information of the first area block;
updating the candidate viewpoint as the current viewpoint, and returning to the step of controlling the current viewpoint to step in the direction of the target line of sight based on the stepping distance to obtain a candidate viewpoint, until the current viewpoint and the candidate viewpoint are located in the same region block;
and determining the target viewpoint according to the candidate viewpoints.
In an embodiment, the step of determining the target viewpoint from the candidate viewpoints may include:
and continuing to control the candidate viewpoint to step in the direction of the target line of sight according to the stepping distance corresponding to the region block in which the candidate viewpoint is located, until the distance between the candidate viewpoint and the surface of the curved surface model is smaller than a preset distance threshold, so as to obtain the target viewpoint.
In an embodiment, after the step of associating the target viewpoint with the sampling point of the target line of sight on the curved surface model, the curved surface model processing method provided by the embodiment of the application further includes:
mapping the target viewpoint into a texture space to obtain texture coordinates corresponding to the target viewpoint;
and performing texture mapping on the curved surface model based on the texture coordinates and the sampling points of the target sight line on the curved surface model to obtain a processed curved surface model with the same visual effect as the depth effect model.
In an embodiment, the step of mapping the target viewpoint to the texture space to obtain texture coordinates corresponding to the target viewpoint may specifically include:
determining a grid surface corresponding to the target viewpoint on the depth effect model and a normal vector of the grid surface;
calculating a tangent vector of the grid surface according to vertex coordinates of the model vertices on the grid surface and the corresponding texture coordinates;
determining a secondary tangent vector according to the tangent vector and the normal vector;
converting the coordinates of the target viewpoint in world space based on the conversion matrix determined by the normal vector, the tangent vector and the secondary tangent vector, to obtain the coordinates of the target viewpoint in tangent space;
and obtaining the texture coordinates of the target viewpoint according to the tangent space coordinates.
In an embodiment, the step of "obtaining the curved surface model, the depth effect model and the surrounding model corresponding to the curved surface model" may include:
acquiring a depth effect model corresponding to the curved surface model;
determining at least two boundary coordinates according to coordinates of model vertexes of the curved surface model;
and generating the surrounding model according to the at least two boundary coordinates.
From the above, according to the embodiment of the application, the curved surface model, the depth effect model corresponding to the curved surface model and the surrounding model are obtained, the depth effect model and the curved surface model have a depth difference, and the central positions of the curved surface model, the depth effect model and the surrounding model are the same; the surrounding model is divided into a plurality of area blocks, and the distance information from each area block to the surface of the depth effect model is calculated; a target area block is determined among the plurality of area blocks according to the position of the current viewpoint of the target line of sight on the curved surface model; according to the distance information corresponding to the target area block, the current viewpoint is controlled to step in the direction of the target line of sight until the area block corresponding to the stepped current viewpoint meets a preset condition, and the target viewpoint is determined according to the stepped current viewpoint; and the target viewpoint is associated with the sampling point of the target line of sight on the curved surface model, so as to perform texture mapping processing on the curved surface model based on the target viewpoint, so that the visual effect of the curved surface model is the same as that of the depth effect model.
According to the embodiment of the application, the distance information of a region block in the surrounding model indicates how far the current viewpoint can step toward the depth effect model. The current viewpoint can therefore be controlled to step to the region near the surface of the depth effect model according to the stepping distance corresponding to each region block, and the resulting target viewpoint is the actual sampling point of the target line of sight on the curved surface model. By using the texture coordinates corresponding to the target viewpoint as the texture coordinates of the sampling point of the target line of sight on the curved surface model, the curved surface model can present the visual effect of the depth effect model.
For the specific implementation of each operation above, reference may be made to the previous embodiments, and details are not described herein again.
Wherein the storage medium may include: read-only memory (ROM, Read Only Memory), random access memory (RAM, Random Access Memory), magnetic disk, optical disk, and the like.
The curved surface model processing method, apparatus, computer device and computer readable storage medium provided by the embodiments of the present application are described above with reference to specific examples, which are used to illustrate the principles and implementations of the present application; the description of the above embodiments is only intended to help understand the method and the core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific implementations and application scope in light of the ideas of the present application. In summary, the content of this description should not be construed as limiting the present application.

Claims (10)

1. A method for processing a curved surface model, comprising:
acquiring a curved surface model, a depth effect model and a surrounding model corresponding to the curved surface model, wherein the depth effect model and the curved surface model have depth differences, and the center positions of the curved surface model, the depth effect model and the surrounding model are the same;
dividing the surrounding model into a plurality of area blocks, and calculating distance information from each area block to the surface of the depth effect model;
determining a target region block from the plurality of region blocks according to the position of the current viewpoint of the target sight on the curved surface model;
according to the distance information corresponding to the target area block, controlling the current viewpoint to step in the direction of the target line of sight until the area block corresponding to the stepped current viewpoint meets the preset condition, and determining the target viewpoint according to the stepped current viewpoint;
and associating the target viewpoint with the sampling point of the target line of sight on the curved surface model, so as to perform texture mapping processing on the curved surface model based on the target viewpoint, so that the visual effect of the curved surface model is the same as that of the depth effect model.
2. The method according to claim 1, wherein the step of controlling the current viewpoint to step in the direction of the target line of sight according to the distance information corresponding to the target region block until the region block corresponding to the stepped current viewpoint meets a preset condition, and the step of determining the target viewpoint according to the stepped current viewpoint includes:
determining the stepping distance of the current viewpoint according to the distance information of the target area block;
controlling the current viewpoint to step in the direction of the target sight based on the step distance to obtain candidate viewpoints;
determining a corresponding first region block according to the position of the candidate viewpoint;
and if the distance information of the first area block is smaller than zero, determining a target viewpoint according to the candidate viewpoint.
3. The method of claim 2, wherein after determining the corresponding first region block according to the position of the candidate viewpoint, the method further comprises:
if the distance information of the first area block is greater than zero, updating the stepping distance according to the distance information of the first area block;
updating the candidate viewpoint as the current viewpoint, and returning to the step of controlling the current viewpoint to step in the direction of the target line of sight based on the stepping distance to obtain a candidate viewpoint, until the current viewpoint and the candidate viewpoint are located in the same region block;
and determining the target viewpoint according to the candidate viewpoint.
4. A method according to claim 3, wherein said determining the target viewpoint from the candidate viewpoints comprises:
and continuing to control the candidate viewpoint to step in the direction of the target line of sight according to the stepping distance corresponding to the region block in which the candidate viewpoint is located, until the distance between the candidate viewpoint and the surface of the curved surface model is smaller than a preset distance threshold, so as to obtain the target viewpoint.
5. The method of claim 1, wherein after the associating the target viewpoint with the sampling point of the target line of sight on the curved surface model, the method further comprises:
mapping the target viewpoint into a texture space to obtain texture coordinates corresponding to the target viewpoint;
and performing texture mapping on the curved surface model based on the texture coordinates and the sampling points of the target sight to the curved surface model to obtain a processed curved surface model with the visual effect identical to the depth effect model.
6. The method of claim 5, wherein mapping the target viewpoint into texture space results in texture coordinates corresponding to the target viewpoint, comprising:
Determining a grid surface corresponding to the target viewpoint on a depth effect model and a normal vector of the grid surface;
calculating tangent vectors of the grid surface according to vertex coordinates of the model vertices on the grid surface and corresponding texture coordinates;
determining a secondary tangent vector according to the tangent vector and the normal vector;
converting the coordinates of the target viewpoint in world space based on the conversion matrix determined by the normal vector, the tangent vector and the secondary tangent vector, to obtain the coordinates of the target viewpoint in tangent space;
and obtaining the texture coordinates of the target viewpoint according to the tangent space coordinates.
7. The method according to any one of claims 1-6, wherein the obtaining a surface model, a depth effect model corresponding to the surface model, and a surrounding model includes:
acquiring a curved surface model and a depth effect model corresponding to the curved surface model;
determining at least two boundary coordinates according to coordinates of model vertexes of the curved surface model;
and generating the surrounding model according to the at least two boundary coordinates.
8. A curved surface model processing apparatus, comprising:
the device comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for acquiring a curved surface model, a depth effect model corresponding to the curved surface model and a surrounding model, the depth effect model and the curved surface model have depth differences, and the central positions of the curved surface model, the depth effect model and the surrounding model are the same;
The dividing unit is used for dividing the surrounding model into a plurality of area blocks and calculating the distance information from each area block to the surface of the depth effect model;
a determining unit configured to determine a target region block among the plurality of region blocks according to a position of a current viewpoint of a target line of sight on the curved surface model;
a step unit, configured to control the current viewpoint to step in the direction of the target line of sight according to the distance information corresponding to the target region block until the region block corresponding to the current viewpoint after the step meets a preset condition, and determine the target viewpoint according to the current viewpoint after the step;
and the association unit is used for associating the target viewpoint with the sampling point of the target line of sight on the curved surface model, so as to perform texture mapping processing on the curved surface model based on the target viewpoint, so that the visual effect of the curved surface model is the same as that of the depth effect model.
9. A computer device comprising a memory and a processor; the memory stores a computer program, and the processor is configured to execute the computer program in the memory to perform the curved surface model processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program loaded by a processor to perform the curved surface model processing method of any of claims 1-7.
CN202311174223.5A 2023-09-11 2023-09-11 Curved surface model processing method, device, computer equipment and storage medium Pending CN117173320A (en)

Priority Applications (1)

Application Number: CN202311174223.5A; Priority Date: 2023-09-11; Filing Date: 2023-09-11; Title: Curved surface model processing method, device, computer equipment and storage medium; Publication: CN117173320A (en)

Applications Claiming Priority (1)

Application Number: CN202311174223.5A; Priority Date: 2023-09-11; Filing Date: 2023-09-11; Title: Curved surface model processing method, device, computer equipment and storage medium; Publication: CN117173320A (en)

Publications (1)

Publication Number: CN117173320A; Publication Date: 2023-12-05

Family ID: 88931565

Family Applications (1)

Application Number: CN202311174223.5A; Priority Date: 2023-09-11; Filing Date: 2023-09-11; Title: Curved surface model processing method, device, computer equipment and storage medium; Status: Pending; Publication: CN117173320A (en)

Country Status (1)

Country: CN (1); Link: CN117173320A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination