CN117292096A - Virtual model generation method and device, storage medium and electronic equipment

Info

Publication number
CN117292096A
Authority
CN
China
Prior art keywords
model, volume, target, virtual, models
Prior art date
Legal status
Pending
Application number
CN202311483592.2A
Other languages
Chinese (zh)
Inventor
罗舒仁
郭正扬
Assignee
Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202311483592.2A
Publication of CN117292096A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/04: Texture mapping
    • G06T2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20: Indexing scheme for editing of 3D models
    • G06T2219/2021: Shape modification


Abstract

The present disclosure provides a virtual model generation method, a virtual model generation apparatus, a computer storage medium, and an electronic device, relating to the field of computer technology. The virtual model generation method comprises the following steps: creating a volume model in a virtual scene and determining the intersecting plane between the volume model and the virtual scene; generating a plurality of target stacking volume models based on the intersecting plane; and fusing the plurality of target stacking volume models to obtain a target virtual model. The method improves the efficiency of optimizing the precision and surface detail of a virtual model, and thus the efficiency of constructing high-precision, detailed virtual models.

Description

Virtual model generation method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method for generating a virtual model, an apparatus for generating a virtual model, a computer storage medium, and an electronic device.
Background
When constructing a virtual scene using computer graphics (CG), various virtual models must be added to the scene according to its requirements, such as building models in urban scenes; animal, plant, and stone models in jungle scenes; and container models in wharf scenes. Improving the detail and precision of virtual model surfaces is an important goal of virtual scene construction.
In the conventional virtual model generation method, a developer usually sculpts an existing virtual model by hand and then performs retopology, baking, and similar processes to obtain a virtual model with richer detail and higher precision. However, this approach consumes a great deal of time and manpower, so the efficiency of optimizing the precision and detail of a virtual model is low.
Disclosure of Invention
The present disclosure provides a virtual model generation method, a virtual model generation apparatus, a computer storage medium, and an electronic device, so as to improve the efficiency of optimizing the precision and detail of a virtual model and, in turn, the efficiency of constructing high-precision, detailed virtual models.
In a first aspect, an embodiment of the present disclosure provides a virtual model generation method, including: creating a volume model in a virtual scene, and determining the intersecting plane between the volume model and the virtual scene; generating a plurality of target stacking volume models based on the intersecting plane; and fusing the plurality of target stacking volume models to obtain a target virtual model.
In a second aspect, an embodiment of the present disclosure provides a virtual model generation apparatus, including: a plane determining module for creating a volume model in a virtual scene and determining the intersecting plane between the volume model and the virtual scene; a model generation module for generating a plurality of target stacking volume models based on the intersecting plane; and a model fusion module for fusing the plurality of target stacking volume models to obtain a target virtual model.
In a third aspect, an embodiment of the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of generating a virtual model as above.
In a fourth aspect, one embodiment of the present disclosure provides an electronic device, including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method of generating a virtual model as above via execution of the executable instructions.
The technical scheme of the present disclosure has the following beneficial effects:
The virtual model generation method creates a volume model in a virtual scene and determines the intersecting plane between the volume model and the virtual scene; generates a plurality of target stacking volume models based on the intersecting plane; and fuses the plurality of target stacking volume models to obtain a target virtual model. The method automatically generates, from the intersecting plane between the volume model and the virtual scene, a plurality of target stacking volume models that produce a stacking effect, so that a high-precision, detailed model is obtained by fusing them; the fused virtual model has high precision and a low face count. This avoids the technical problem in the prior art that manually optimizing the precision and detail of a virtual model is inefficient, and achieves the technical effect of improving that efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely some embodiments of the present disclosure and that other drawings may be derived from these drawings without undue effort.
Fig. 1 schematically shows a system architecture diagram of a virtual model generation system in the present exemplary embodiment;
fig. 2 schematically shows a flowchart of a virtual model generation method in the present exemplary embodiment;
Fig. 3(a)-3(b) schematically illustrate the creation of a volume model in a virtual scene in the present exemplary embodiment;
FIG. 4 schematically illustrates a flow chart of one method of determining intersecting planes in the present exemplary embodiment;
FIG. 5 schematically illustrates a boundary region display schematic of a volumetric model in the present exemplary embodiment;
FIG. 6 schematically illustrates a schematic diagram of one method of generating a target stacked volume model in the present exemplary embodiment;
Fig. 7 schematically shows a model diagram after a fusion process in the present exemplary embodiment;
fig. 8 schematically shows a schematic diagram of a model after another fusion process in the present exemplary embodiment;
FIG. 9 schematically illustrates a displacement map diagram in the present exemplary embodiment;
Fig. 10 schematically illustrates a virtual model obtained by adding a displacement map in the present exemplary embodiment;
Fig. 11(a)-11(b) schematically show the effect of a face reduction operation in the present exemplary embodiment;
fig. 12 schematically shows a schematic configuration diagram of a virtual model generating apparatus in the present exemplary embodiment;
fig. 13 schematically shows a structural diagram of an electronic device in the present exemplary embodiment.
Detailed Description
Exemplary embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the exemplary embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. However, those skilled in the art will recognize that the aspects of the present disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so on. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are merely exemplary and do not necessarily include all steps. For example, some steps may be decomposed and others combined or partially combined, so the actual order of execution may change according to the actual situation.
In order to help those skilled in the art to better understand the technical solutions of the present disclosure, the following description will explain relevant matters related to the technical solutions of the present disclosure.
1) Unreal Engine (UE): a game engine developed by Epic Games. It is a complete game development platform for consoles, personal computers, and other targets, and provides developers with a large body of core technology, content-creation tools, and basic support.
2) Maya: three-dimensional animation software produced by the American company Autodesk, used for professional video advertisements, character animation, film visual effects, and the like.
3) Computer graphics (CG): a general term for all graphics drawn with computer software. With the development of industries that use the computer as the main tool for visual design and production, the field of visual design and production based on computer technology has customarily come to be called CG. It covers both technology and art, and encompasses almost all visual-art creation in the computer age, such as print design, web design, three-dimensional animation, video effects, multimedia technology, computer-aided architectural design, and industrial design.
4) Model generation: a method of generating three-dimensional models with algorithms and scripts. The shape, structure, and details of a model are described through programming, and the model is generated automatically. It can be used to create a wide variety of complex scenes, buildings, characters, and objects, and can generate parts of a model from parameterized rules, mathematical functions, and randomness, achieving highly controllable and variable designs. Its advantages are efficiency, flexibility, and reusability, which accelerate production and improve productivity.
5) Mesh: the mesh of a model. Vertices are the most basic components of a mesh: vertices form edges, edges form triangles, and triangles form faces; materials then provide the surface appearance of the mesh, such as shading and texture mapping.
With the rapid development of game technology, virtual models in game scenes running on Unreal Engine are typically produced by designers using CG tools such as 3ds Max and Maya. When constructing a virtual scene with CG and placing virtual models in it, the number of vertices on the model surface, and hence the number of faces, is usually reduced to cut resource consumption during model rendering; this, however, also reduces the precision and detail of the model surface. Low-face-count virtual models therefore need to be optimized to improve their precision and detail and obtain high-precision models.
In related schemes, high-precision models are typically produced by creating stacking effects at the positions where models touch the scene. Methods for producing a stacking effect at such contact positions include generating a special volume and detecting it, generating small surface models at the detected positions, and fusing and reconstructing multiple small surface models based on distance. The process requires manual sculpting by a developer, followed by retopology, baking, and other steps. It consumes a great deal of time and labor, so the time and labor cost of optimizing a virtual model is high, the efficiency of optimizing model precision and detail is low, and the construction of high-precision, detailed virtual models is slowed.
In view of the foregoing, exemplary embodiments of the present disclosure provide a virtual model generation method that creates a volume model in a virtual scene and determines the intersecting plane between the volume model and the virtual scene; generates a plurality of target stacking volume models based on the intersecting plane; and fuses the plurality of target stacking volume models to obtain a target virtual model. The method automatically generates, from the intersecting plane between the volume model and the virtual scene, a plurality of target stacking volume models that produce a stacking effect, thereby obtaining a high-precision, detailed model.
In order to solve the above-mentioned problems, the present disclosure proposes a method and apparatus for generating a virtual model, which can be applied to the system architecture of the exemplary application environment shown in fig. 1.
As shown in Fig. 1, the system architecture 100 may include a terminal device 101, a network 102, and a server 103. The network 102 is the medium providing the communication link between the terminal device 101 and the server 103, and may include various connection types, such as wired links, wireless communication links, or fiber-optic cables. The terminal device 101 may be, for example but not limited to, a smartphone, a personal digital assistant (PDA), a notebook, a server, a desktop computer, or any other computing device with networking capability.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative; there may be any number of each, as required by the implementation. For example, the server 103 may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain-name services, security services, content delivery networks (CDN), big data, and artificial-intelligence platforms. In the field of game technology, the server 103 may run a game engine such as Unreal Engine (UE).
The method for generating the virtual model provided in the embodiments of the present disclosure may be executed in the server 103, and accordingly, the generating device of the virtual model is generally disposed in the server 103. The method for generating the virtual model provided by the embodiment of the present disclosure may also be performed in the terminal device 101, and correspondingly, the generating device of the virtual model may also be provided in the terminal device 101. The method for generating a virtual model provided by the embodiment of the present disclosure may also be partially executed in the server 103 and partially executed in the terminal device 101, and accordingly, a part of modules of the generating apparatus of the virtual model may be provided in the server 103 and a part of modules may be provided in the terminal device 101.
For example, in one exemplary embodiment, the terminal device 101 may create a volume model in the virtual scene and determine the intersecting plane between the volume model and the virtual scene; generate a plurality of target stacking volume models based on the intersecting plane; and fuse the plurality of target stacking volume models to obtain a target virtual model.
However, it is easy to understand by those skilled in the art that the above application scenario is only for example, and the present exemplary embodiment is not limited thereto.
The following takes the terminal device 101 as the execution subject and illustrates applying the virtual model generation method to it. Fig. 2 schematically illustrates a flowchart of the virtual model generation method in the present exemplary embodiment. Referring to Fig. 2, the method provided by the embodiment of the present disclosure includes the following steps S201 to S203:
step S201, creating a volume model in the virtual scene, and determining an intersecting plane of the volume model and the virtual scene.
Step S202, generating a plurality of target stacking volume models based on the intersecting planes.
Step S203, fusing the plurality of target stacking volume models to obtain a target virtual model.
In some embodiments of the present disclosure, a volume model is created in a virtual scene, and the intersecting plane between the volume model and the virtual scene is determined; a plurality of target stacking volume models is generated based on the intersecting plane; and the plurality of target stacking volume models is fused to obtain a target virtual model. The method automatically generates, from the intersecting plane, a plurality of target stacking volume models that produce a stacking effect, thereby obtaining a high-precision, detailed model.
The following describes in detail the implementation of each step in the embodiment shown in fig. 2 with reference to specific embodiments:
in step S201, a volume model is created in the virtual scene and an intersecting plane of the volume model and the virtual scene is determined.
Wherein the volumetric model is a virtual model with a volume, which may be of arbitrary shape. For example, the volume model may be any regular shape model such as a sphere, a cuboid, or the like, or may be any other irregular shape model, which is not limited in any way by the embodiments of the present disclosure.
Illustratively, the volume model is used to detect the region of overlap with the virtual scene; that is, the created volume model must intersect the virtual scene. When determining the intersecting plane between the volume model and the virtual scene, overlaps between volume models need to be ignored or eliminated. For example, in the blueprint, the overlapping areas between user-created volume models can be removed with the Ignore method provided by the blueprint node, keeping only the intersecting planes between the volume models and the virtual scene, which are stored in an array.
When performing the above step of creating a volume model in a virtual scene, in some example embodiments of the present disclosure the user may create the volume model directly in the virtual scene and determine the approximate range in which the high-precision model is generated by adjusting the placement area of the volume model, so that an overlap is created between the volume model and the virtual scene.
In other example embodiments of the present disclosure, the user may also create a volume model in the blueprint in advance. When a high-precision virtual model needs to be generated in the virtual scene, the user can directly select the pre-built volume model in the blueprint and drag it into the virtual scene, then adjust its placement area so that it overlaps the virtual scene; this likewise achieves the effect of creating a volume model in the virtual scene.
It is emphasized that a volume model created in the virtual scene must have a portion that intersects the scene. Multiple volume models may be created in the virtual scene, and they may partially overlap or not overlap at all; the embodiments of the present disclosure place no special limitation on this.
In the case of a partial overlap region between the plurality of volumetric models:
In an optional embodiment of the present disclosure, when a plurality of volume models has been created in the virtual scene and the step of determining the intersecting plane is performed, the model volume of each volume model may be determined if an overlapping region exists between the volume models, and the intersecting plane between each volume model and the virtual scene is then determined based on those model volumes.
In the case where there is an overlap region between the plurality of volume models, and setting aside each model's overlap with the virtual scene, the model volume of each volume model consists of two parts: the volume that does not overlap any other volume model, and the volume that does. For example, when volume model A and volume model B have an overlapping volume region, the model volume of volume model A includes the part of A that does not overlap B and the part of A that does.
For example, the model volume of each volume model determines the spatial position it occupies. When there is an overlap region between the volume models, determining each model volume ignores the overlaps between volume models and considers only the spatial position each volume model itself occupies, and the intersecting plane between each volume model and the virtual scene is determined on that basis.
Fig. 3 schematically illustrates creating volume models in a virtual scene according to an exemplary embodiment. Taking the volume model to be a volume sphere model (a volume model in the shape of a sphere), refer to Fig. 3. Fig. 3(a) shows the process of creating volume models in a blueprint: assume that 5 volume sphere models are created in advance in the blueprint and partially overlap one another; in the blueprint view, the areas of each volume model occluded by the virtual scene or by other volume models can be seen. Fig. 3(b) shows the pre-created volume sphere models of Fig. 3(a) after they have been added to the virtual scene and overlapped with its planes. Comparing Fig. 3(a) and Fig. 3(b), the regions where the volume sphere models overlap one another and the virtual scene are occluded from view.
In this case, the model volumes of the 5 volume models shown in Fig. 3(b) are the complete volumes of the respective volume sphere models; that is, the model volume of each volume model is the sum of its volume overlapping other volume sphere models and its non-overlapping volume. The intersecting plane between each volume sphere model and the virtual scene can then be determined based on these model volumes.
Conversely, consider the case where there is no overlap region between the volume models:
if the volume models do not overlap one another, the model volume of each volume model consists of a single part; apart from the volume overlapping the virtual scene, it contains only the volume region that does not overlap any other volume model.
In this embodiment, when there is an overlapping region between the volume models, determining the intersecting plane between each volume model and the virtual scene from the model volumes allows the stacking effect to be generated more accurately, ensuring the accuracy of the subsequently generated target virtual model.
In an alternative embodiment of the present disclosure, the step of determining the intersecting plane between each volume model and the virtual scene based on the model volumes may proceed as follows. Fig. 4 schematically shows a flowchart for determining the intersecting plane in the present exemplary embodiment; referring to Fig. 4, it includes the following steps S401 to S403:
Step S401, performing Boolean operations on the model meshes corresponding to the plurality of volume models to obtain an initial model mesh for each volume model.
For example, since volume models are static mesh bodies by default, they must first be converted into dynamic mesh bodies before step S401 can be executed. In a static mesh body the model's mesh is static and editing operations between models are not supported, whereas in a dynamic mesh body the mesh is dynamic and model editing is supported. Converting the static mesh bodies into dynamic mesh bodies first therefore lets the user dynamically edit the created volume models.
In some example embodiments of the present disclosure, all of the volume meshes are copied and Boolean operations are performed on them in sequence.
When a plurality of volume models is added to the virtual scene as shown in Fig. 3(a), copying all the volume meshes and performing Boolean operations in sequence removes the areas of the volume models occluded by insertion into the virtual scene. The user cannot see these areas, and rejecting them through Boolean operations yields the effect shown in Fig. 3(b).
Step S402, merging the initial model meshes of the volume models to obtain a target model mesh.
The mesh corresponds to the shape of the virtual model, and the target model mesh corresponds to a complete model containing all the created volume meshes.
Illustratively, after the initial model mesh of each volume model is obtained, a merging process (union operation) can be performed so that the intersecting plane between the volume models and the virtual scene can be determined accurately.
Step S403, determining the intersecting plane between each volume model and the virtual scene based on the model volumes of the volume models in the target model mesh.
Illustratively, the intersecting plane between the merged volume model and the virtual scene is determined.
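A rough offline analogue of steps S401 to S403 can be sketched with the open-source trimesh library; this is an assumption of the illustration (the patent performs these steps on dynamic mesh bodies inside the engine), and trimesh needs a Boolean backend such as Blender or OpenSCAD available:

```python
import trimesh

def intersecting_target_mesh(volume_meshes, scene_mesh):
    # S401: Boolean-subtract the scene from a copy of each volume mesh,
    # removing the regions occluded by insertion into the virtual scene
    # (the effect shown in Fig. 3(b)).
    trimmed = [vol.difference(scene_mesh) for vol in volume_meshes]
    # S402: merge the initial model meshes into a single target model mesh.
    target = trimesh.boolean.union(trimmed)
    # S403: the faces of the target mesh lying on the scene surface trace
    # out the intersecting planes between the volume models and the scene.
    return target
```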
In step S202, a plurality of target stacked volume models are generated based on the intersecting planes.
Wherein the volume of the plurality of target stacked volume models is substantially smaller than the volume of the volume models.
In some example embodiments of the present disclosure, bounding box information of the volume model is determined from its center coordinate value and radius; a boundary region of the volume model is generated from the bounding box information, and a plurality of initial stacked volume models is generated within it; and any initial stacked volume model whose distance to some plane of the virtual scene is smaller than a preset distance is determined to be a target stacked volume model.
The planes of the virtual scene include the intersecting plane.
Bounding box information comes from an algorithm for solving the optimal bounding space of a set of discrete points; the basic idea is to approximately replace a complex geometric object with a slightly larger geometry of simpler properties (called the bounding box). The bounding box is thus the smallest such space enclosing a volume model, and for a volume sphere model it can be determined quickly from the center coordinate value and the radius.
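A minimal sketch of this step for a volume sphere model; the function names, counts, and values are illustrative and not from the patent:

```python
import numpy as np

def sphere_bounding_box(center, radius):
    """Axis-aligned bounding box of a volume sphere model, derived
    directly from its center coordinate value and radius."""
    center = np.asarray(center, dtype=float)
    return center - radius, center + radius   # (min corner, max corner)

def random_points_in_box(box_min, box_max, count, rng=None):
    """Scatter candidate (initial) stacking points uniformly inside
    the boundary region."""
    rng = rng or np.random.default_rng()
    return rng.uniform(box_min, box_max, size=(count, 3))

# Example: boundary region of a sphere at the origin with radius 50,
# filled with 200 candidate stacking points (counts are illustrative).
lo, hi = sphere_bounding_box([0.0, 0.0, 0.0], 50.0)
points = random_points_in_box(lo, hi, 200)
```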
For example, bounding regions of the volumetric model may be generated based on bounding box information of the volumetric model. To facilitate a better understanding of the process of generating bounding regions of a volumetric model by those skilled in the art, an exemplary illustration may be provided in connection with FIG. 5.
Taking the volume models shown in Fig. 3(a) as an example, Fig. 5 schematically shows the boundary regions of the volume models in the present exemplary embodiment. To help the user view the boundary region, a bounding box identifying it may be drawn for each volume model, displayed as shown in Fig. 5.
Illustratively, after the boundary region of each volume model is generated, a plurality of initial stacked volume models may be generated randomly within it. That is, the volume, density, and depth of overlap with the planes of the virtual scene of these initial stacked volume models are random.
Initial stacked volume models whose distance to every plane of the virtual scene exceeds the preset distance are removed, and those whose distance to some plane of the virtual scene is smaller than the preset distance are kept as target stacked volume models.
Fig. 6 schematically shows the generation of the target stacked volume models in the present exemplary embodiment. As shown in Fig. 6, the stacked volume models in the boundary region whose distance to some plane of the virtual scene is smaller than the preset distance are retained.
To simplify the bounding-box computation and the determination of the target stacked volume models described above, in some example embodiments of the present disclosure a plurality of initial stacking points is generated within the boundary region. When determining which initial stacked volume models lie within the preset distance of a scene plane, an initial stacking point whose normal distance to some plane of the virtual scene is smaller than the preset distance is determined to be a target stacking point; then, in response to a first adjustment operation on the volume parameter of the target stacking point, the target stacking point is updated into a target stacked volume model.
By way of example, random initial stacking points are generated in the boundary region. When computing the distance to the planes of the virtual scene, the radius of a stacking point is negligible compared with a volume model, so the target stacking points can be selected directly and accurately from the initial stacking points; the radius of each target stacking point is then adjusted to obtain the target stacked volume models.
It should be noted that the surface of a virtual model or virtual scene consists of a large number of triangles. For each initial stacking point, the nearest triangle face of the virtual scene can be computed; the point of that triangle closest to the stacking point is determined along the triangle's normal direction, and the distance between the stacking point and this closest point is compared with the preset distance to decide whether the stacking point is a target stacking point. A sketch of this test follows.
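A simplified sketch of the distance test, assuming the scene triangles are given as vertex triples. It measures only the distance along the triangle's plane normal; a complete nearest-point test would also clamp the projection to the triangle's edges:

```python
import numpy as np

def plane_distance(point, tri):
    """Unsigned distance from a stacking point to a triangle's plane,
    measured along the triangle normal (degenerate triangles not handled)."""
    a, b, c = np.asarray(tri, dtype=float)
    n = np.cross(b - a, c - a)
    n /= np.linalg.norm(n)
    return abs(np.dot(np.asarray(point, dtype=float) - a, n))

def select_target_points(points, triangles, preset_distance):
    """Keep the initial stacking points whose distance to the nearest
    scene triangle is below the preset distance (as in Fig. 6)."""
    kept = [p for p in points
            if min(plane_distance(p, tri) for tri in triangles) < preset_distance]
    return np.array(kept)
```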
In the present embodiment, performing the distance calculation with stacking points instead of stacked volume models improves the accuracy of determining the target stacked volume models.
In an alternative embodiment of the present disclosure, to ensure that a target stacked volume model is not fully covered by a plane of the virtual scene (it could not be displayed in the user interface if fully covered), an offset may be added to each target stacked volume model along the normal direction between its center point and the closest plane in the virtual scene.
In an alternative embodiment of the present disclosure, after the generation of the target stacking volume model, the target parameters of the target stacking volume model are adjusted in response to a second adjustment operation for the target parameters of the target stacking volume model.
The target parameters include at least one of the density and volume of the target stacked volume models and their depth of overlap with the planes of the virtual scene.
After the target stacked volume models are generated, the graphical user interface provides an editing interface; through adjustment operations on the target parameters in this interface, the user can update the generated target stacked volume models to the corresponding parameter values. A sketch of such a parameter update follows this paragraph and the next.
For example, the editing interface exposes at least one of the density, the volume, and the depth of overlap with the planes of the virtual scene of the target stacked volume models. The density parameter adjusts how many target stacked volume models are finally generated; the volume parameter adjusts their radius; and the overlap depth with a scene plane adjusts how much of each stacked volume model is displayed in the interface versus covered by the virtual scene.
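A hypothetical editor-side sketch of these parameters and of growing the target stacking points into volumes; all names and the update logic are illustrative assumptions, not the patent's interface. points and normals are (N, 3) arrays:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class StackingParams:
    density: float          # fraction of target stacking points kept
    radius_range: tuple     # (min, max) radius given to each stacked volume
    overlap_depth: float    # how far each volume sinks below the scene plane

def apply_params(points, normals, params, rng=None):
    """Thin the target stacking points by density, assign each a radius
    (the volume parameter), and sink each center along the scene normal
    by the overlap depth. Returns (center, radius) pairs."""
    rng = rng or np.random.default_rng()
    keep = rng.random(len(points)) < params.density
    pts, nrm = points[keep], normals[keep]
    radii = rng.uniform(*params.radius_range, size=len(pts))
    centers = pts - params.overlap_depth * nrm   # push into the scene plane
    return list(zip(centers, radii))
```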
In step S203, a plurality of target stacked volume models are subjected to fusion processing to obtain a target virtual model.
In an alternative embodiment of the present disclosure, the fusion processing may be morphological processing (for example, dilation) based on the distance parameters between the target stacked volume models. That is, voxel information of the plurality of target stacked volume models is acquired, and morphological processing is performed on it to obtain the target virtual model.
Voxel information (voxels) consists of volume pixel units in three-dimensional space. A voxel is analogous to a pixel in two-dimensional space, but represents a volume element rather than a point on a plane. Each voxel has coordinates and attributes such as color, density, or texture. The main parameter of voxel information is the voxel resolution: the smaller the voxel size (the higher the resolution), the more model detail is captured, the more faithfully the model's shape is restored, and the higher the precision.
Mesh resolution controls the precision of the generated model mesh: a higher resolution yields a higher face count but provides more detail. The distance parameter controls the fusion distance: the larger it is, the more the models are treated as a single whole.
It should be noted that higher voxel resolution is not required during the fusion process, and the resolution may be reduced in order to save resources.
For example, the target stacked volume models may be converted into voxel information according to these parameters, so that the distance between the target stacked volume models is determined from their voxel information for the morphological processing; target stacked volume models with overlapping regions, for instance, are fused together.
The parameters of the morphological processing can also be exposed to the user through the user interface, so that the user can adjust the effect of the fusion processing to their own needs. A sketch of this voxel-based fusion follows.
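A minimal sketch of the voxelization and morphological fusion using numpy and scipy; the grid bounds, resolution, and the use of dilation iterations as the distance parameter are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def voxelize_spheres(volumes, resolution, bounds_min, bounds_max):
    """Rasterize (center, radius) target stacked volume models into a
    boolean voxel grid at the given voxel resolution."""
    grid = np.zeros((resolution,) * 3, dtype=bool)
    axes = [np.linspace(bounds_min[i], bounds_max[i], resolution)
            for i in range(3)]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    for center, radius in volumes:
        d2 = ((xs - center[0]) ** 2 + (ys - center[1]) ** 2
              + (zs - center[2]) ** 2)
        grid |= d2 <= radius ** 2
    return grid

def fuse(grid, fuse_iterations=2):
    """Morphological dilation bridges nearby volumes so overlapping or
    near-touching spheres merge into one connected solid; the iteration
    count plays the role of the distance parameter described above."""
    return ndimage.binary_dilation(grid, iterations=fuse_iterations)
```

Dilation alone also thickens the model; scipy.ndimage.binary_closing (dilation followed by erosion) bridges the same gaps while keeping the overall size closer to the input, which may be preferable when the fusion distance is large.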
Fig. 7 schematically shows a model schematic after a fusion process in the present exemplary embodiment. Fig. 8 schematically shows a model schematic after another fusion process in the present exemplary embodiment.
As can be seen from Fig. 7 and Fig. 8, during morphological processing, the larger the configured distance parameter between target stacked volume models, the more of them can be fused together.
Further, after the target virtual model is generated based on fig. 2, details and accuracy of the target virtual model may be further optimized.
In some example embodiments of the present disclosure, in response to a target displacement map selected from the plurality of displacement maps in a displacement-map list provided on the graphical user interface, the target displacement map is fused with the target virtual model to obtain an initial displacement model; then, in response to a third adjustment operation on a first parameter in the tangential direction of each vertex coordinate of the initial displacement model and on a second parameter in the normal direction, the vertex coordinate values of the initial displacement model are updated to obtain the target displacement model.
The graphical user interface provides the user with a displacement-map list containing a plurality of displacement maps to choose from. A displacement map expresses surface-detail effects of a virtual model, such as moss, lawn, or snow on a stone.
For example, after the user selects a target displacement map for a target virtual model, the map is fused with the model to obtain the initial displacement model. Each model vertex of the initial displacement model (that is, each vertex on the model surface) has a determined coordinate value, and each vertex coordinate value can be adjusted automatically in the tangential direction and/or the normal direction: the user may adjust the vertex coordinates only tangentially, only along the normal, or in both directions.
Specifically, the tangential direction and normal direction of each model vertex on the model surface of the target virtual model are determined; in response to the third adjustment operation on the first parameter in the tangential direction and the second parameter in the normal direction, the vertex coordinates of the initial displacement model surface can be updated.
For example, the user may adjust the vertex coordinate values of the initial displacement model in a customized way (as shown in Fig. 9), specifically through the third adjustment operation on the first parameter in the tangential direction and the second parameter in the normal direction, to obtain the target displacement model and hence a new virtual model based on the new displacement map (as shown in Fig. 10), whose surface detail and precision are higher than those of the target virtual model obtained by the steps of Fig. 2.
It should be noted that the third adjustment operation on the first (tangential) and second (normal) parameters adjusts the amplitude of each vertex on the surface of the initial displacement model, so that the surface of the resulting target displacement model presents an uneven effect, suiting more application scenarios. A sketch of this vertex update follows.
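A minimal sketch of the per-vertex update, assuming per-vertex normals and tangents are available and that a height value has already been sampled from the displacement map for each vertex (array names are illustrative):

```python
import numpy as np

def displace_vertices(vertices, normals, tangents, height,
                      k_tangent, k_normal):
    """Offset each vertex of the initial displacement model along its
    tangent and normal, scaled by the sampled displacement height.
    k_tangent and k_normal correspond to the first and second parameters
    of the third adjustment operation."""
    h = np.asarray(height, dtype=float)[:, None]   # (N, 1) per-vertex height
    return vertices + k_tangent * h * tangents + k_normal * h * normals
```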
For example, although the target virtual model obtained by any of the above embodiments achieves high surface precision, its surface map may not be the one the user needs (for example, the user needs to generate a moss model); the user can replace the surface map of the target virtual model and also adjust it to suit their needs.
Because the high face count introduced by the displacement map has a certain impact on system performance, a face reduction operation can be performed on the virtual model to reduce the performance consumption of the system.
Illustratively, one part of the generated virtual model is inserted into the virtual scene and therefore covered by it; this part can be removed through the Boolean operations of the above embodiments. The other part is the overlapping region inside the model itself.
In an alternative embodiment of the present disclosure, the overlapping regions in the target displacement model are detected, and the face reduction operation is performed on them to obtain the reduced virtual model, as sketched below.
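One way to sketch this face reduction offline, again assuming the trimesh library and a watertight scene mesh (an illustration, not the patent's engine code):

```python
import trimesh

def cull_hidden_faces(model: trimesh.Trimesh, scene: trimesh.Trimesh):
    """Drop triangles whose centroid lies inside the scene volume, since
    such faces are never visible; the same test against the model itself
    can remove faces buried in its internal overlap regions."""
    centroids = model.triangles_center      # one centroid per face
    hidden = scene.contains(centroids)      # True where a face is buried
    model.update_faces(~hidden)             # keep only the visible faces
    return model
```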
Fig. 11(a)-11(b) schematically show the effect of the face reduction operation in the present exemplary embodiment. Fig. 11(a) shows the virtual model obtained without face reduction, and Fig. 11(b) the virtual model obtained with it; using the face-reduced virtual model lowers the high face count introduced by the displacement map, reducing resource consumption and improving the optimized performance of the virtual model.
After the target virtual model is generated by any of the above embodiments, the user can flexibly adjust any of the parameters provided on the graphical user interface to tune the target virtual model, and the finally generated virtual model can be exported as a static mesh body for use.
In order to implement the above method for generating a virtual model, an embodiment of the present disclosure provides a device for generating a virtual model. Fig. 12 schematically shows a schematic architectural diagram of a virtual model generating apparatus.
The virtual model generating device 1200 includes a plane determining module 1201, a model generating module 1202, and a model fusing module 1203.
A plane determining module 1201, configured to create a volume model in the virtual scene, and determine an intersecting plane of the volume model and the virtual scene; a model generation module 1202 for generating a plurality of target stacked volume models based on the intersecting planes; the model fusion module 1203 is configured to fuse the multiple target stacked volume models to obtain a target virtual model.
In an alternative embodiment of the present disclosure, the plane determining module 1201 is specifically configured to determine a model volume of each volume model if there is an overlapping region between the plurality of volume models; based on the model volumes of the respective volume models, an intersection plane between the respective volume models and the virtual scene is determined.
In an optional embodiment of the present disclosure, the plane determining module 1201 is specifically configured to perform Boolean operations on the model meshes corresponding to the plurality of volume models to obtain an initial model mesh for each volume model; merge the initial model meshes of the volume models to obtain a target model mesh; and determine the intersecting plane between each volume model and the virtual scene based on the model volumes of the volume models in the target model mesh.
In an alternative embodiment of the present disclosure, the model generation module 1202 is configured to determine bounding box information of the volume model from its center coordinate value and radius; generate a boundary region of the volume model from the bounding box information and generate a plurality of initial stacked volume models within it; and determine any initial stacked volume model whose distance to some plane of the virtual scene is smaller than the preset distance to be a target stacked volume model, wherein the planes of the virtual scene include the intersecting plane.
In an alternative embodiment of the present disclosure, the model generation module 1202 is configured to generate a plurality of initial stacking points within the boundary region; determine an initial stacking point whose normal distance to some plane of the virtual scene is smaller than the preset distance to be a target stacking point; and, in response to a first adjustment operation on the volume parameter of the target stacking point, update the target stacking point into a target stacked volume model.
In an optional embodiment of the present disclosure, the virtual model generation apparatus 1200 further includes a parameter adjustment module 1204, configured to adjust the target parameters of the target stacked volume models in response to a second adjustment operation on those parameters; the target parameters include at least one of the density and volume of the target stacked volume models and their depth of overlap with the planes of the virtual scene.
In an alternative embodiment of the present disclosure, the model fusion module 1203 is configured to acquire voxel information of the plurality of target stacked volume models and perform morphological processing on it to obtain the target virtual model.
In an alternative embodiment of the present disclosure, the model generation module 1202 is configured to, in response to a target displacement map selected from the plurality of displacement maps in a displacement-map list provided on the graphical user interface, fuse the target displacement map with the target virtual model to obtain an initial displacement model; and, in response to a third adjustment operation on a first parameter in the tangential direction of each vertex coordinate of the initial displacement model and on a second parameter in the normal direction, update the vertex coordinate values of the initial displacement model to obtain the target displacement model.
In an alternative embodiment of the present disclosure, the model generation module 1202 is configured to detect the overlapping regions in the target displacement model and perform the face reduction operation on them to obtain the reduced virtual model.
The virtual model generation apparatus 1200 provided in the embodiments of the present disclosure can execute the technical solution of the virtual model generation method in any of the above embodiments; its implementation principle and beneficial effects are similar to those of the method and are not repeated here.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible embodiments, the aspects of the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the invention as described in the "exemplary method" section of this specification, when the program product is run on the terminal device.
A program product for implementing the above-mentioned method according to an embodiment of the present invention may employ a portable compact disc read Only Memory (CD-ROM) and include a program code, and may be run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a random access Memory (Random Access Memory, RAM), a Read-Only Memory (ROM), an erasable programmable Read-Only Memory (EPROM or flash Memory), an optical fiber, a portable compact disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, radio Frequency (RF), etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the "C" language. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, it may be connected to the user's computing device through any kind of network, including a local area network (LAN) or wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that the various aspects of the invention may be implemented as a system, method, or program product. Accordingly, aspects of the invention may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein generally as a "circuit," "module," or "system."
An electronic device 1300 according to this embodiment of the invention is described below with reference to fig. 13. The electronic device 1300 shown in fig. 13 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 13, the electronic device 1300 is embodied in the form of a general purpose computing device. The components of the electronic device 1300 may include, but are not limited to: the at least one processing unit 1310, the at least one memory unit 1320, a bus 1330 connecting the different system components (including the memory unit 1320 and the processing unit 1310), and a display unit 1340.
Wherein the storage unit stores program code that is executable by the processing unit 1310 such that the processing unit 1310 performs steps according to various exemplary embodiments of the present invention described in the above section of the "exemplary method" of the present specification. For example, the processing unit 1310 may perform steps S201 to S203 as shown in fig. 2.
The storage unit 1320 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 13201 and/or cache memory 13202, and may further include Read Only Memory (ROM) 13203.
The storage unit 1320 may also include a program/utility 13204 having a set (at least one) of program modules 13205, such program modules 13205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 1330 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 1300 may also communicate with one or more external devices 1400 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1300, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 1300 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 1350. Moreover, the electronic device 1300 may communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet, through a network adapter 1360. As shown, the network adapter 1360 communicates with the other modules of the electronic device 1300 over the bus 1330. It should be appreciated that, although not shown, other hardware and/or software modules may be used in connection with the electronic device 1300, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, redundant arrays of independent disks (RAID) systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a portable hard disk, etc.) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) to perform the method according to the embodiments of the present disclosure.
Furthermore, the above-described drawings are only schematic illustrations of the processes included in the method according to the exemplary embodiments of the present invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above drawings do not indicate or limit the temporal order of these processes. It is also readily understood that these processes may be performed, for example, synchronously or asynchronously in a plurality of modules.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A method of generating a virtual model, comprising:
creating a volume model in a virtual scene, and determining an intersecting plane of the volume model and the virtual scene;
generating a plurality of target stacked volume models based on the intersecting planes;
and fusing the plurality of target stacked volume models to obtain a target virtual model.
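By way of illustration only, the three claimed steps could be sketched in Python as follows; the sphere-based volume representation, the ground-plane scene, and all function names are assumptions of this sketch rather than part of the claims:

    import numpy as np

    def create_volume_model(scene):
        # Step 1a: place a volume model in the scene; a sphere is assumed
        # here purely to keep the sketch concrete.
        return {"center": np.array([0.0, 0.0, 0.5]), "radius": 1.0}

    def intersecting_plane(scene, volume):
        # Step 1b: the plane where the volume model meets the scene
        # geometry (a horizontal ground plane is assumed).
        return {"point": np.zeros(3), "normal": np.array([0.0, 0.0, 1.0])}

    def stacked_volume_models(volume, plane):
        # Step 2: scatter smaller volumes near the intersecting plane.
        offsets = np.array([[0.3, 0.0, 0.0], [-0.3, 0.1, 0.0]])
        return [{"center": plane["point"] + o, "radius": 0.4} for o in offsets]

    def fuse(models):
        # Step 3: fuse the stacked volumes into one target virtual model;
        # claim 7 details a voxel-based realisation of this step.
        return {"parts": models}

    scene = {}
    volume = create_volume_model(scene)
    plane = intersecting_plane(scene, volume)
    print(fuse(stacked_volume_models(volume, plane)))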
2. The method of generating a virtual model of claim 1, wherein a plurality of volume models are created in the virtual scene;
wherein the determining an intersecting plane of the volume model and the virtual scene comprises:
if an overlapping area exists among the plurality of volume models, determining a model volume of each volume model;
determining the intersecting plane between each of the volume models and the virtual scene based on the model volume of each of the volume models.
3. The method of generating a virtual model according to claim 2, wherein the determining the intersecting plane between each of the volume models and the virtual scene based on the model volume of each of the volume models comprises:
performing a Boolean operation on the model meshes corresponding to the plurality of volume models to obtain an initial model mesh of each volume model;
merging the initial model meshes of the volume models to obtain a target model mesh;
determining the intersecting plane between each of the volume models and the virtual scene based on the model volume of each of the volume models in the target model mesh.
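A hedged sketch of the mesh Boolean and merge steps, using the third-party trimesh library purely as an assumed stand-in (the claim names no library, and trimesh needs a Boolean backend such as manifold3d installed):

    import trimesh

    # Two overlapping volume models, represented as box meshes.
    a = trimesh.creation.box(extents=(2, 2, 2))
    b = trimesh.creation.box(extents=(2, 2, 2))
    b.apply_translation((1.0, 0.0, 0.0))

    # The Boolean union resolves the overlapping area into one watertight
    # mesh, standing in for the "initial model meshes -> target model
    # mesh" merge of this claim.
    target_mesh = trimesh.boolean.union([a, b])

    # The model volume of each volume model (claims 2 and 3) is then read
    # directly from the meshes.
    print("volumes:", a.volume, b.volume, "merged:", target_mesh.volume)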
4. The method of generating a virtual model of claim 1, wherein the generating a plurality of target stacked volume models based on the intersecting planes comprises:
determining bounding box information of the volume model based on the central coordinate value and the radius of the volume model;
generating a boundary region of the volume model according to the bounding box information, and generating a plurality of initial stacked volume models in the boundary region;
determining, as the target stacked volume model, an initial stacked volume model of the plurality of initial stacked volume models whose distance to any plane corresponding to the virtual scene is smaller than a preset distance; wherein the planes corresponding to the virtual scene include the intersecting planes.
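The bounding-box construction and preset-distance filter of this claim could, under the assumption of spherical volume models and a point-normal plane representation, be sketched as:

    import numpy as np

    def bounding_box(center, radius):
        # Axis-aligned bounding box from the central coordinate value and
        # the radius of the volume model (the "bounding box information").
        c = np.asarray(center, dtype=float)
        return c - radius, c + radius

    def near_any_plane(point, planes, preset_distance):
        # True if the point lies within the preset distance of any plane
        # corresponding to the virtual scene.
        return any(abs((point - q) @ n) < preset_distance for q, n in planes)

    lo, hi = bounding_box(center=(0.0, 0.0, 1.0), radius=2.0)
    planes = [(np.zeros(3), np.array([0.0, 0.0, 1.0]))]  # assumed ground plane
    rng = np.random.default_rng(1)
    candidates = rng.uniform(lo, hi, size=(32, 3))       # initial models
    targets = [p for p in candidates if near_any_plane(p, planes, 0.25)]
    print(f"{len(targets)} of {len(candidates)} candidates kept")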
5. The method of generating a virtual model of claim 4, wherein generating a plurality of initial stacked volume models within the boundary region comprises:
generating a plurality of initial stacking points within the boundary region;
wherein the determining, as the target stacked volume model, an initial stacked volume model of the plurality of initial stacked volume models whose distance to any plane corresponding to the virtual scene is smaller than the preset distance comprises:
determining, as a target stacking point, an initial stacking point of the plurality of initial stacking points whose normal distance to any plane corresponding to the virtual scene is smaller than the preset distance;
updating the target stacking point to the target stacked volume model in response to a first adjustment operation on a volume parameter of the target stacking point.
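A minimal sketch of promoting a target stacking point to a target stacked volume model; treating the adjusted volume parameter as a sphere volume is an assumption of this illustration:

    import numpy as np

    def point_to_volume_model(stacking_point, volume_parameter):
        # Derive a sphere radius from the supplied volume so the target
        # stacking point becomes a target stacked volume model.
        radius = (3.0 * volume_parameter / (4.0 * np.pi)) ** (1.0 / 3.0)
        return {"center": np.asarray(stacking_point, dtype=float),
                "radius": radius}

    print(point_to_volume_model((0.2, 0.1, 0.0), volume_parameter=1.0))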
6. The method of generating a virtual model of claim 4, further comprising:
adjusting a target parameter of the target stacked volume model in response to a second adjustment operation on the target parameter of the target stacked volume model;
wherein the target parameter comprises at least one of the density of the target stacked volume model, the volume of the target stacked volume model, and the overlap depth of the target stacked volume model with respect to any plane corresponding to the virtual scene.
7. The method of generating a virtual model according to claim 1, wherein the fusing the plurality of target stacked volume models to obtain a target virtual model comprises:
acquiring voxel information of the plurality of target stacked volume models;
and performing morphological processing according to the voxel information of the plurality of target stacked volume models to obtain the target virtual model.
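One possible voxel-level reading of this claim, with spherical stacked volumes rasterised into a shared boolean grid and scipy's binary closing assumed as the "morphological processing":

    import numpy as np
    from scipy import ndimage

    def rasterise_sphere(grid, origin, step, center, radius):
        # Mark every voxel whose centre falls inside the sphere.
        idx = np.indices(grid.shape).transpose(1, 2, 3, 0)
        points = origin + idx * step
        grid |= np.linalg.norm(points - center, axis=-1) <= radius

    grid = np.zeros((64, 64, 64), dtype=bool)
    origin, step = np.array([-2.0, -2.0, -2.0]), 4.0 / 64
    for c in [(-0.5, 0.0, 0.0), (0.5, 0.0, 0.0), (0.0, 0.6, 0.0)]:
        rasterise_sphere(grid, origin, step, np.array(c), 0.7)

    # Morphological closing fills the seams between the separate voxel
    # blobs, fusing them into a single target virtual model.
    fused = ndimage.binary_closing(grid, structure=np.ones((5, 5, 5)))
    print("occupied voxels before/after:", int(grid.sum()), int(fused.sum()))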
8. The method of generating a virtual model according to claim 1, wherein after the fusing the plurality of target stacked volume models to obtain a target virtual model, the method further comprises:
in response to a target displacement map being selected from a plurality of displacement maps in a displacement map list provided on a graphical user interface, fusing the target displacement map onto the target virtual model to obtain an initial displacement model; and in response to a third adjustment operation on a first parameter in the tangential direction of each vertex coordinate of the initial displacement model and/or a second parameter in the normal direction of each vertex coordinate of the initial displacement model, updating each vertex coordinate value of the initial displacement model to obtain a target mapped model.
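Read as displacement mapping, the vertex update of this claim might be sketched as follows; the per-vertex height sampling and the parameter names k_tangent and k_normal are assumptions standing in for the claimed first and second parameters:

    import numpy as np

    def displace(vertices, normals, tangents, heights, k_tangent, k_normal):
        # heights: per-vertex sample from the selected displacement map.
        h = heights[:, None]
        return (vertices
                + k_tangent * h * tangents   # first parameter, tangential
                + k_normal * h * normals)    # second parameter, normal

    vertices = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
    tangents = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    heights = np.array([0.2, 0.5])
    print(displace(vertices, normals, tangents, heights, 0.1, 1.0))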
9. The method of generating a virtual model of claim 8, further comprising:
detecting an overlapping region in the target mapped model;
and performing a subtraction operation on the overlapping region to obtain a virtual model after the subtraction operation.
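An illustrative voxel-level subtraction, assuming the overlapping region should be counted only once in the final model; the grids and box shapes are placeholders:

    import numpy as np

    part_a = np.zeros((32, 32, 32), dtype=bool)
    part_b = np.zeros((32, 32, 32), dtype=bool)
    part_a[4:20, 4:20, 4:20] = True
    part_b[12:28, 12:28, 12:28] = True

    overlap = part_a & part_b            # detected overlapping region
    part_b_trimmed = part_b & ~overlap   # subtraction: keep the region once
    result = part_a | part_b_trimmed     # virtual model after the subtraction
    print("overlap voxels subtracted:", int(overlap.sum()))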
10. A virtual model generation apparatus, the apparatus comprising:
a plane determination module, configured to create a volume model in a virtual scene and determine an intersecting plane of the volume model and the virtual scene;
a model generation module, configured to generate a plurality of target stacked volume models based on the intersecting planes; and
a model fusion module, configured to fuse the plurality of target stacked volume models to obtain a target virtual model.
11. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method of generating a virtual model according to any one of claims 1 to 9.
12. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of generating a virtual model of any one of claims 1 to 9 via execution of the executable instructions.
CN202311483592.2A 2023-11-08 2023-11-08 Virtual model generation method and device, storage medium and electronic equipment Pending CN117292096A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311483592.2A CN117292096A (en) 2023-11-08 2023-11-08 Virtual model generation method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311483592.2A CN117292096A (en) 2023-11-08 2023-11-08 Virtual model generation method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN117292096A 2023-12-26

Family

ID=89253682

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311483592.2A Pending CN117292096A (en) 2023-11-08 2023-11-08 Virtual model generation method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN117292096A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination