CN114307158A - Three-dimensional virtual scene data generation method and device, storage medium and terminal - Google Patents


Info

Publication number
CN114307158A
CN114307158A
Authority
CN
China
Prior art keywords
model
scene
parameters
data
target scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111669499.1A
Other languages
Chinese (zh)
Inventor
王金鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Perfect World Beijing Software Technology Development Co Ltd
Original Assignee
Perfect World Beijing Software Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Perfect World Beijing Software Technology Development Co Ltd filed Critical Perfect World Beijing Software Technology Development Co Ltd
Priority to CN202111669499.1A priority Critical patent/CN114307158A/en
Publication of CN114307158A publication Critical patent/CN114307158A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method and device for generating three-dimensional virtual scene data, a storage medium, and a terminal. It relates to the technical field of image processing and mainly aims to solve the low efficiency of conventional three-dimensional assisted composition. The method comprises the following steps: acquiring geometric model data and model layout parameters of a basic component model of a virtual scene to be built; determining construction parameters of the basic component model according to the geometric model data and the model layout parameters, and generating a target scene model matched with the construction parameters; obtaining model adjustment parameters of the target scene model, and adjusting the construction parameters of the target scene model through the model adjustment parameters to obtain an adjusted target scene model; and obtaining a map material matched with the target scene model, rendering the target scene model based on the map material, and generating three-dimensional virtual scene data. The method is mainly used for generating three-dimensional virtual scene data.

Description

Three-dimensional virtual scene data generation method and device, storage medium and terminal
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for generating three-dimensional virtual scene data, a storage medium, and a terminal.
Background
In processing a three-dimensional virtual scene, a three-dimensional virtual scene embodying concept content is usually prepared in advance so that it can be applied to scenes such as games and videos for testing or scene layout. Conventionally, three-dimensional virtual scene data are created manually, and the creation methods used by different creators differ. Manual creation, however, involves complicated steps that reduce the generation efficiency of the three-dimensional virtual scene, consumes a large amount of human resources, and greatly lengthens the generation time; moreover, the generated three-dimensional virtual scene cannot be directly presented as an image with a material effect, which further affects the generation efficiency of the three-dimensional virtual scene data.
Disclosure of Invention
In view of this, the present invention provides a method and an apparatus for generating three-dimensional virtual scene data, a storage medium, and a terminal, and mainly aims to solve the problem of low generation efficiency of the existing three-dimensional virtual scene data.
According to an aspect of the present invention, there is provided a method for generating three-dimensional virtual scene data, including:
acquiring geometric model data and model layout parameters of a basic component model of a virtual scene to be built, wherein the basic component model is a non-static model, and the model layout parameters are used for representing the distribution position and the logical relationship of the basic component model;
determining the construction parameters of the basic component model according to the geometric model data and the model layout parameters, and generating a target scene model matched with the construction parameters;
obtaining model adjustment parameters of the target scene model, and adjusting the construction parameters of the target scene model through the model adjustment parameters to obtain an adjusted target scene model;
and obtaining a map material matched with the target scene model, rendering the target scene model based on the map material, and generating three-dimensional virtual scene data.
Further, the generating the target scene model matched with the building parameters comprises:
calling a space partition component, and building the basic component model in a three-dimensional virtual scene based on the space partition component, wherein the space partition component is configured with initial building parameters of different basic component models;
and configuring the initial construction parameters into the construction parameters through the space partition assembly to obtain a target scene model in the three-dimensional virtual scene.
Further, the obtaining of the model adjustment parameter of the target scene model, and adjusting the construction parameter of the target scene model by the model adjustment parameter to obtain the adjusted target scene model includes:
acquiring scene adjustment operation on the target scene model, wherein the scene adjustment operation comprises operation contents for adjusting geometric model data and model layout parameters of the target scene model;
analyzing the adjustment content generated by the scene adjustment operation on the target scene model, determining model adjustment parameters, and adjusting the construction parameters of the target scene model according to the model adjustment parameters based on the space editing subassembly of the space partition assembly to obtain an adjusted target scene model.
Further, before obtaining the map material matched with the target scene model, the method further includes:
loading resource maps for rendering different target scene models;
creating a material ball based on the resource map, and configuring at least one material parameter for the material ball to obtain a map material;
and configuring matched parameter nodes for the material parameters so as to adjust the material parameters based on the parameter nodes.
Further, the obtaining of the map material matched with the target scene model, and rendering the target scene model based on the map material, and the generating of the three-dimensional virtual scene data includes:
determining at least one map material matched with the model surface of the target scene model according to a composition material correspondence, wherein the composition material correspondence is used for representing the correspondence between different model surfaces and different map materials;
and calling a material ball of the chartlet material, and rendering the model surface of the target scene model according to the material parameters corresponding to the material ball to obtain three-dimensional virtual scene data.
Further, after obtaining a map material matched with the target scene model, rendering the target scene model based on the map material, and generating three-dimensional virtual scene data, the method further includes:
obtaining at least one scene atmosphere model, wherein the scene atmosphere model is used for increasing atmosphere characteristics of the three-dimensional virtual scene data;
rendering the scene atmosphere model into the three-dimensional virtual scene data according to a scene characteristic auxiliary relationship to obtain scene three-dimensional virtual scene data, wherein the scene characteristic auxiliary relationship is used for representing characteristic relationships between different scene atmosphere models and different target scene models in the three-dimensional virtual scene data.
Further, the method further comprises:
under the condition that geometric model data and model layout parameters of a basic component model of a virtual scene to be built are obtained to generate three-dimensional virtual scene data, starting a rollback mechanism, wherein the rollback mechanism is used for tracking and recording process information for generating the three-dimensional virtual scene data;
and acquiring process information matched with a target rollback node from rollback information obtained by starting the rollback mechanism so as to adjust data in the process information.
According to another aspect of the present invention, there is provided an apparatus for generating three-dimensional virtual scene data, including:
an acquisition module, configured to acquire geometric model data and model layout parameters of a basic component model of a virtual scene to be built, wherein the basic component model is a non-static model, and the model layout parameters are used for representing the distribution position and the logical relationship of the basic component model;
the generating module is used for determining the construction parameters of the basic component model according to the geometric model data and the model layout parameters and generating a target scene model matched with the construction parameters;
the adjusting module is used for obtaining model adjusting parameters of the target scene model and adjusting the construction parameters of the target scene model through the model adjusting parameters to obtain an adjusted target scene model;
and the rendering module is used for acquiring a map material matched with the target scene model, rendering the target scene model based on the map material and generating three-dimensional virtual scene data.
Further, the generating module includes:
a calling unit, configured to call a space partition component and build the basic component model in a three-dimensional virtual scene based on the space partition component, wherein initial construction parameters of different basic component models are configured in the space partition component;
and the configuration unit is used for configuring the initial construction parameters into the construction parameters through the space partition assembly to obtain a target scene model in the three-dimensional virtual scene.
Further, the obtaining module comprises:
an obtaining unit, configured to obtain a scene adjustment operation on the target scene model, where the scene adjustment operation includes an operation content for adjusting geometric model data and model layout parameters of the target scene model;
and the determining unit is used for analyzing the adjusting content generated by the scene adjusting operation on the target scene model, determining a model adjusting parameter, and adjusting the construction parameter of the target scene model according to the model adjusting parameter based on the space editing subassembly of the space partition component to obtain the adjusted target scene model.
Further, the apparatus further comprises: a loading module and a configuration module,
the loading module is also used for loading resource maps for rendering different target scene models;
the configuration module is used for creating a material ball based on the resource map and configuring at least one material parameter for the material ball to obtain a map material;
the configuration module is further configured to configure the matched parameter node for the material parameter, so as to adjust the material parameter based on the parameter node.
Further, the rendering module includes:
the determining unit is used for determining at least one map material matched with the model surface of the target scene model according to the composition material correspondence, and the composition material correspondence is used for representing the correspondence between different model surfaces and different map materials;
and the rendering unit is used for calling the material ball of the chartlet material and rendering the model surface of the target scene model according to the material parameters corresponding to the material ball to obtain three-dimensional virtual scene data.
Further,
the acquisition module is further used for acquiring at least one scene atmosphere model, and the scene atmosphere model is used for increasing atmosphere characteristics of the three-dimensional virtual scene data;
the rendering module is further configured to render the scene atmosphere model into the three-dimensional virtual scene data according to a scene feature auxiliary relationship to obtain scene three-dimensional virtual scene data, where the scene feature auxiliary relationship is used to represent feature relationships between different scene atmosphere models and different target scene models in the three-dimensional virtual scene data.
Further, the apparatus further comprises: a starting module,
the starting module is used for starting a rollback mechanism under the condition that the geometric model data and the model layout parameters of the basic component model of the virtual scene to be built are obtained to generate three-dimensional virtual scene data, and the rollback mechanism is used for tracking and recording process information for generating the three-dimensional virtual scene data;
the obtaining module is further configured to obtain process information matched with a target rollback node from rollback information obtained by starting the rollback mechanism, so as to adjust data in the process information.
According to still another aspect of the present invention, there is provided a storage medium having at least one executable instruction stored therein, where the executable instruction causes a processor to perform operations corresponding to the method for generating three-dimensional virtual scene data as described above.
According to still another aspect of the present invention, there is provided a terminal including: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the three-dimensional virtual scene data generation method.
By the technical scheme, the technical scheme provided by the embodiment of the invention at least has the following advantages:
the invention provides a method and a device for generating three-dimensional virtual scene data, a storage medium and a terminal, compared with the prior art, the embodiment of the invention obtains geometric model data and model layout parameters of a basic component model of a virtual scene to be built, wherein the basic component model is a non-static model, and the model layout parameters are used for representing the distribution position and the logical relationship of the basic component model; determining the construction parameters of the basic component model according to the geometric model data and the model layout parameters, and generating a target scene model matched with the construction parameters; obtaining model adjustment parameters of the target scene model, and adjusting the construction parameters of the target scene model through the model adjustment parameters to obtain an adjusted target scene model; the method comprises the steps of obtaining a map material matched with the target scene model, rendering the target scene model based on the map material, generating three-dimensional virtual scene data, increasing the operation convenience of generating the three-dimensional virtual scene data, achieving the purpose of flexibly and automatically adjusting the scene model in the three-dimensional virtual scene data, and increasing the scene effect of the three-dimensional virtual scene data in a mode of rendering the map material, so that the generation efficiency of the three-dimensional virtual scene data is improved.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a flowchart illustrating a method for generating three-dimensional virtual scene data according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a basic component model provided by an embodiment of the present invention;
FIG. 3 is a diagram illustrating a target scene model according to an embodiment of the present invention;
fig. 4 is a schematic diagram illustrating a three-dimensional street view virtual scene provided by an embodiment of the present invention;
fig. 5 is a flowchart illustrating another method for generating three-dimensional virtual scene data according to an embodiment of the present invention;
fig. 6 is a flowchart illustrating a method for generating three-dimensional virtual scene data according to an embodiment of the present invention;
fig. 7 is a flowchart illustrating a method for generating three-dimensional virtual scene data according to another embodiment of the present invention;
FIG. 8 is a schematic diagram of a material ball according to an embodiment of the present invention;
fig. 9 is a flowchart illustrating a method for generating three-dimensional virtual scene data according to an embodiment of the present invention;
FIG. 10 is a schematic diagram illustrating three-dimensional virtual scene data of a street view atmosphere according to an embodiment of the present invention;
fig. 11 is a schematic diagram illustrating an auxiliary image in FBX file format according to an embodiment of the present invention;
fig. 12 is a block diagram illustrating a device for generating three-dimensional virtual scene data according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
When generating three-dimensional virtual scene data, a creator of a scene generally creates it manually, and the creation methods used by different creators differ. Manual creation, however, involves complicated steps that reduce the generation efficiency of the three-dimensional virtual scene, consumes a large amount of human resources, greatly lengthens the generation time, and the generated three-dimensional virtual scene cannot be directly presented as an image with a material effect, which affects the generation efficiency of the three-dimensional virtual scene data. An embodiment of the present invention provides a method for generating three-dimensional virtual scene data; as shown in fig. 1, the method includes:
101. and acquiring geometric model data and model layout parameters of a basic component model of the virtual scene to be built.
In the embodiment of the invention, the virtual scene to be built is three-dimensional virtual scene data. The current execution subject acquires the geometric model data and the model layout parameters of the basic component model; these may be input by an editor in advance or in real time. The basic component model is a basic building-block unit for building the three-dimensional virtual scene. It is a non-static model, so its three-dimensional modeling coordinates can be adjusted to obtain UVs of different shapes, i.e., texture UV coordinates for three-dimensional models of different shapes. Meanwhile, the basic component models are spliced with each other through configured splicing interfaces, for example a box component model spliced with a cone component model; the basic component models include, but are not limited to, models of basic geometric shapes such as boxes, cones, cylinders, pyramids, spheres, curved connectors, straight connectors, and spiral connectors. Specifically, the basic component models represent the most basic model bodies expected to build the virtual scene, and the geometric model data and the model layout parameters serve as the basic parameters for building them.
The geometric model data represent the geometric shape of the basic component model in three-dimensional space. The model layout parameters represent the distribution position and the logical relationship of the basic component model: the distribution position is the composition position at which the basic component model is expected to be built in the three-dimensional virtual scene, and the logical relationship is the Boolean relation between basic component models, such as logical AND, OR, NOT, and XOR, so that the current execution subject automatically builds the scene model in the three-dimensional virtual scene based on the geometric model data and the model layout parameters.
It should be noted that the geometric model data and the model layout parameters of the basic component model of the virtual scene to be built can be input in real time by a scene editor or directly stored in the current execution end, so as to be acquired when the basic component model needs to be built, and the embodiment of the present invention is not particularly limited.
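The inputs described in step 101 can be sketched as a simple data structure. The following Python sketch is illustrative only; the class and field names are assumptions for exposition, not part of the patent's implementation:

```python
from dataclasses import dataclass

@dataclass
class GeometricModelData:
    # geometric shape of the basic component model in 3-D space
    shape: str          # e.g. "box", "cone", "cylinder", "sphere"
    dimensions: tuple   # shape-specific extents in scene units

@dataclass
class ModelLayoutParams:
    # distribution position and Boolean logical relationship to neighbours
    position: tuple         # (x, y, z) composition position in the scene
    logic_op: str = "OR"    # AND / OR / NOT / XOR when splicing components

@dataclass
class BasicComponentModel:
    name: str
    geometry: GeometricModelData
    layout: ModelLayoutParams

# Example: a box component placed at the origin, unioned (OR) with neighbours.
box = BasicComponentModel(
    name="box-1",
    geometry=GeometricModelData(shape="box", dimensions=(2.0, 2.0, 2.0)),
    layout=ModelLayoutParams(position=(0.0, 0.0, 0.0), logic_op="OR"),
)
```

Such a structure lets the geometric model data and the model layout parameters be supplied either ahead of time or interactively, as the paragraph above allows.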
102. And determining the construction parameters of the basic component model according to the geometric model data and the model layout parameters, and generating a target scene model matched with the construction parameters.
In the embodiment of the invention, after the geometric model data and the model layout parameters are obtained, the construction parameters of the basic component model are determined by calculation; the construction parameters represent the specific spatial position, model parameters, and the like of the basic component model constructed in the three-dimensional virtual scene. There may be multiple basic component models, each of which can be spliced through a splicing interface. The geometric shape of each model is determined from the geometric model data, and the spatial position of each basic component model in three-dimensional space is determined from the distribution position in the model layout parameters. For splicing among multiple basic component models, the splicing positions are calculated according to the logical relationship in the model layout parameters of each basic component model, so as to obtain the spatial positions, model parameters, and the like of the spliced basic component models as the construction parameters, and the target scene model is then built in the three-dimensional virtual scene. For example, as shown in fig. 2, the basic component models include a cube, a bridge, a cuboid, and the like; the spatial positions and model parameters of the cube, bridge, and cuboid in the virtual street view are calculated based on their geometric model data and model layout parameters to generate a target street-view model. The target street-view model then includes a plurality of building models in the virtual scene, each a simple geometric body, for further refinement.
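The derivation of construction parameters and the splicing of components can be sketched minimally as follows. This Python sketch is an assumption-laden simplification: it models each component as an axis-aligned bounding box and splicing under a logical OR as a bounding-box union, whereas a real engine would use full CSG Boolean operations; all function names are illustrative.

```python
def construction_params(shape, dims, position):
    """Derive construction parameters (world-space AABB plus pose) for one component."""
    half = [d / 2.0 for d in dims]
    lo = tuple(p - h for p, h in zip(position, half))
    hi = tuple(p + h for p, h in zip(position, half))
    return {"shape": shape, "aabb": (lo, hi), "position": position, "dims": tuple(dims)}

def splice(a, b):
    """Splice two components under a logical OR: union of their bounding volumes."""
    lo = tuple(min(x, y) for x, y in zip(a["aabb"][0], b["aabb"][0]))
    hi = tuple(max(x, y) for x, y in zip(a["aabb"][1], b["aabb"][1]))
    return {"aabb": (lo, hi)}

# A cube at the origin spliced with a taller box offset along x.
cube = construction_params("box", (2, 2, 2), (0, 0, 0))
tower = construction_params("box", (2, 2, 6), (4, 0, 0))
scene = splice(cube, tower)
# scene["aabb"] spans both components: ((-1.0, -1.0, -3.0), (5.0, 1.0, 3.0))
```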
103. And obtaining model adjusting parameters of the target scene model, and adjusting the construction parameters of the target scene model through the model adjusting parameters to obtain an adjusted target scene model.
In the embodiment of the invention, as the basic component model is a three-dimensional model with different geometric shapes, in order to realize convenient adjustment of the composition of the target scene and further improve the refinement degree of the composition, the model adjustment parameters of the target scene model are firstly obtained. The target scene model may include one basic component model built in the three-dimensional virtual scene, or may include a plurality of basic component models built in the three-dimensional virtual scene, and at this time, the model adjustment parameters for obtaining one or more target scene models may be input by a user, or may also be model adjustment parameters for establishing a matching relationship with the target scene model in advance, so as to achieve the purpose of automatic adjustment. At this time, since the target scene model is generated based on the set-up parameters, and the corresponding model adjustment parameters for adjusting the target scene model are the data content for adjusting the set-up parameters, the set-up parameters are adjusted by the model adjustment parameters, so as to realize a refined composition of the target scene model, as shown in fig. 3.
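The adjustment of construction parameters by model adjustment parameters described above can be sketched as follows, under the assumption that construction parameters form a flat dictionary and adjustment parameters are overrides or positional deltas. The names are invented for illustration:

```python
def adjust_construction_params(params, adjustments):
    """Return a new construction-parameter dict with model adjustments applied."""
    out = dict(params)  # leave the original target scene model untouched
    for key, value in adjustments.items():
        if key in out and isinstance(out[key], tuple):
            # positional adjustments are applied as component-wise deltas
            out[key] = tuple(a + b for a, b in zip(out[key], value))
        else:
            # scalar parameters are simply overridden
            out[key] = value
    return out

target = {"position": (0.0, 0.0, 0.0), "height": 4.0}
adjusted = adjust_construction_params(target, {"position": (1.0, 0.0, 2.0), "height": 6.0})
# adjusted == {"position": (1.0, 0.0, 2.0), "height": 6.0}
```

Because the target scene model is generated from its construction parameters, rewriting those parameters and regenerating the model is what realizes the refined composition.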
104. And obtaining a map material matched with the target scene model, rendering the target scene model based on the map material, and generating three-dimensional virtual scene data.
In the embodiment of the invention, in order to show the composition effect of the virtual scene, the material effect is increased, the chartlet material matched with the target scene model is obtained, and the chartlet material is used for rendering the target scene model, so that the material effect is added to the target scene model in the virtual scene, and different scene effects are realized. The chartlet material may be generated by baking in advance, or may be loaded from a cloud or a local material database, and the embodiment of the present invention is not limited specifically. In addition, when the target scene model is rendered based on the map material, the rendering is performed directly based on the map material matched with the target scene model, and three-dimensional virtual scene data including all rendered target scene models is obtained, as shown in fig. 4.
Since the three-dimensional virtual scene data are three-dimensional and can serve as a data basis on which a scene editor creates virtual scenes such as three-dimensional animations or games, the data can be rendered by a game engine, an animation creation system, or the like on the current execution side and directly imported into the game engine or animation creation system for use.
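Step 104's pairing of model surfaces with map materials can be sketched as a lookup followed by rendering. In this Python sketch, "rendering" is simulated by tagging each surface with its material's parameters; the surface names, material names, and parameter values are all invented for illustration:

```python
# Assumed composition-material correspondence: model surface -> map material.
SURFACE_TO_MATERIAL = {
    "wall": "brick_map",
    "roof": "tile_map",
    "road": "asphalt_map",
}

# Assumed material parameters carried by each map material's material ball.
MATERIAL_PARAMS = {
    "brick_map":   {"roughness": 0.8, "base_color": (0.6, 0.3, 0.2)},
    "tile_map":    {"roughness": 0.5, "base_color": (0.4, 0.1, 0.1)},
    "asphalt_map": {"roughness": 0.9, "base_color": (0.2, 0.2, 0.2)},
}

def render_scene(surfaces):
    """Attach the matched map material's parameters to each model surface."""
    rendered = {}
    for surface in surfaces:
        material = SURFACE_TO_MATERIAL[surface]
        rendered[surface] = {"material": material, **MATERIAL_PARAMS[material]}
    return rendered

scene_data = render_scene(["wall", "roof"])
```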
In another embodiment of the present invention, for further definition and explanation, as shown in fig. 5, the step 102 of generating a target scene model matching the set-up parameters includes:
201. calling a space partition component, and building the basic component model in a three-dimensional virtual scene based on the space partition component;
202. and configuring the initial construction parameters into the construction parameters through the space partition assembly to obtain a target scene model in the three-dimensional virtual scene.
In the embodiment of the present invention, the space partition component may be a component embedded in a game engine that adds basic component models to a virtual scene; the current execution subject calls the space partition component through the game engine to construct basic component models of different geometric shapes. The space partition component provides a number of basic component models of different shapes as basic building-block units for building in the three-dimensional virtual scene. Meanwhile, initial construction parameters of the different basic component models are configured in the space partition component: when a basic component model is built based on the space partition component, it is first built according to the initial construction parameters, and the initial construction parameters are then reconfigured to the construction parameters calculated from the geometric model data and the model layout parameters, thereby obtaining the target scene model.
It should be noted that the generation of the game three-dimensional virtual scene in the embodiment of the present invention is applied to a game engine scene building function, and the game engine may be UE, Unity, or the like, which is not particularly limited. And then, according to the building parameters obtained by calculating the geometric model data and the model layout parameters, calling the space partition component in the game engine to build the basic component model to obtain the target scene model. At this time, in the game engine, the target scene model built according to the building parameters is a three-dimensional model with a spatial position, a spatial layout and a splicing relation, and the target scene model is directly built and generated according to the building parameters.
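Steps 201 and 202 can be sketched engine-agnostically: a space partition component holds per-shape initial construction parameters, and building a component starts from those defaults and then overwrites them with the computed construction parameters. The class, its defaults, and the method name below are all assumptions for illustration, not an actual UE or Unity API:

```python
class SpacePartitionComponent:
    # initial construction parameters configured for different basic component models
    INITIAL_PARAMS = {
        "box":  {"dims": (1.0, 1.0, 1.0), "position": (0.0, 0.0, 0.0)},
        "cone": {"radius": 0.5, "height": 1.0, "position": (0.0, 0.0, 0.0)},
    }

    def build(self, shape, construction_params):
        """Build from the initial parameters, then configure them to the computed ones."""
        params = dict(self.INITIAL_PARAMS[shape])
        params.update(construction_params)
        return {"shape": shape, **params}

component = SpacePartitionComponent()
model = component.build("box", {"dims": (2.0, 4.0, 2.0), "position": (3.0, 0.0, 0.0)})
```

The two-phase build mirrors the text: defaults give an immediate placeholder in the scene, and the computed construction parameters then yield the target scene model.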
In another embodiment of the present invention, for further limitation and description, as shown in fig. 6, step 103 obtains a model adjustment parameter of the target scene model, and adjusts a building parameter of the target scene model according to the model adjustment parameter, so as to obtain an adjusted target scene model, where the obtaining includes:
301. acquiring scene adjustment operation on the target scene model;
302. analyzing the adjustment content generated by the scene adjustment operation on the target scene model to determine model adjustment parameters, and adjusting the construction parameters of the target scene model according to the model adjustment parameters based on the space editing subcomponent of the space partition component, so as to obtain the adjusted target scene model.
In order to allow flexible adjustment of the model composition and thereby meet the composition requirements of different scene editors, the parameters of the target scene model are adjusted based on the model adjustment parameters, realizing detailed adjustment of the target scene. Specifically, during composition of the virtual scene, a scene adjustment operation on the target scene model is obtained. The scene adjustment operation includes operation content for adjusting the geometric model data and model layout parameters of the target scene model, so that the model adjustment parameters are determined from the scene adjustment operation and the target scene model can be fine-tuned accordingly. The scene adjustment operation covers adjustments to the model points, model lines and model surfaces of the target scene model, from which the operation content for adjusting the geometric model data and model layout parameters is determined, including but not limited to moving model points, extending model lines, expanding model surfaces, splicing multiple target scene models, moving spatial positions, and the like. Once the point, line or surface corresponding to an adjustment operation is determined, the affected target scene model can be determined correspondingly; the embodiment of the present invention is not particularly limited in this respect.
In addition, since a model surface of the target scene model is formed by the connecting lines between its model points, and those connecting lines constitute the model lines, any adjustment to a model point, model line or model surface can trigger linked adjustment of the other model points, model lines and model surfaces that reference it.
It should be noted that the scene adjustment operation includes operation content for adjusting the geometric model data and model layout parameters of the target scene model, such as adjusting the positions, lengths and sizes of the model points of the target scene model. The adjustment operation determines the change to the geometric model data and model layout parameters, and that change is taken as the model adjustment parameters; the construction parameters of the target scene model are then adjusted according to the model adjustment parameters based on the space editing subcomponent of the space partition component, so as to obtain the adjusted target scene model. The embodiment of the present invention is not particularly limited in this respect. Since the space partition component is embedded in the game engine, after the target scene model is built, the editing subcomponent of the space partition component is called, so that the construction parameters of the target scene model are refined according to the model adjustment parameters; the resulting target scene model is a three-dimensional model with rich scene details.
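The linked point/line/surface adjustment described above follows naturally when vertices are stored once and edges and faces index into them; moving a point then implicitly updates every line and surface that references it. A minimal sketch under that assumption (the class `EditableModel` is hypothetical, not the patent's space editing subcomponent):

```python
class EditableModel:
    """Vertices are stored once; edges and faces reference them by index,
    so a point adjustment linkage-adjusts dependent lines and surfaces."""
    def __init__(self, points, edges, faces):
        self.points = [list(p) for p in points]   # model points (x, y, z)
        self.edges = edges                        # model lines: point-index pairs
        self.faces = faces                        # model surfaces: point-index tuples

    def move_point(self, idx, delta):
        """A scene adjustment operation: translate one model point."""
        for axis in range(3):
            self.points[idx][axis] += delta[axis]

    def edge_length(self, e):
        a, b = (self.points[i] for i in self.edges[e])
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

quad = EditableModel(
    points=[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    edges=[(0, 1), (1, 2), (2, 3), (3, 0)],
    faces=[(0, 1, 2, 3)],
)
quad.move_point(1, (1, 0, 0))   # moving the point extends edge (0, 1)
```

Because the face `(0, 1, 2, 3)` references point 1 by index, the surface deforms together with the edge — no separate update of lines or faces is needed.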
In another embodiment of the present invention, for further definition and explanation, as shown in fig. 7, before the step 104 of obtaining a map material matching the target scene model, the method further includes:
401. loading resource maps for rendering different target scene models;
402. creating a material ball based on the resource map, and configuring at least one material parameter for the material ball to obtain a map material;
403. and configuring matched parameter nodes for the material parameters.
In order to construct three-dimensional virtual scene data with rendered map materials and improve the composition effect of the virtual scene, the map materials are baked in advance. The resource maps are pre-stored in a material library, so that when a map material is constructed, the resource map is loaded from that library; the material library may be Quixel or another accelerator material library, and a resource map is a material texture map used for rendering different composition models, from which the map material is generated. Specifically, after a resource map is loaded, a material ball matching the resource map is created, as shown in fig. 8, and material parameters are configured for each material ball so that the material can be adjusted based on those parameters; the material parameters include, but are not limited to, color and transparency, allowing the rendered model to be tuned to the scene.
It should be noted that, in order to adjust the map material based on its material parameters and thereby satisfy the composition effects of different virtual scenes, after the map material is generated, a parameter node is configured for each material parameter of the map material, so that the material parameter can be adjusted through that node. A parameter node is a node bound to a particular material parameter in the map material, and the material is adjusted through the node's parameter; after the map material is generated, its material parameters — for example, the color or transparency of a wall material — can therefore be adjusted via the parameter nodes. The embodiment of the present invention is not particularly limited in this respect.
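The material ball with named parameter nodes can be sketched as a small container object: the texture is fixed at creation, while each material parameter is exposed through a node that can be retargeted later. This is an illustrative model only (the `MaterialBall` class and the texture path are hypothetical), not the engine's material API:

```python
class MaterialBall:
    """A map material built from a resource map, with a parameter node
    bound to each adjustable material parameter (color, opacity, ...)."""
    def __init__(self, texture_path, **params):
        self.texture_path = texture_path
        self.params = dict(params)

    def set_param(self, node, value):
        """Adjust one material parameter through its bound node."""
        if node not in self.params:
            raise KeyError(f"no parameter node '{node}' on this material")
        self.params[node] = value

# Create the material ball from a (hypothetical) resource map, then
# adjust a parameter after creation via its node, as the text describes.
wall = MaterialBall("textures/brick_wall.png",
                    color=(0.8, 0.7, 0.6), opacity=1.0)
wall.set_param("opacity", 0.5)
```

The guard in `set_param` mirrors the binding described in the text: only parameters that were given a node at creation time can be adjusted afterwards.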
In another embodiment of the present invention, for further limitation and explanation, step 104 — obtaining a map material matched with the target scene model, rendering the target scene model based on the map material, and generating three-dimensional virtual scene data — includes: determining at least one map material matched with each model surface of the target scene model according to the composition-material correspondence; and calling the material ball of the map material, and rendering the model surface of the target scene model according to the material parameters corresponding to the material ball, so as to obtain the three-dimensional virtual scene data.
In order to simplify the selection of map materials when rendering the three-dimensional virtual scene data and achieve the purpose of generating it automatically, the map material matched with each model surface of the target scene model is determined according to the composition-material correspondence. The composition-material correspondence represents the correspondence between different target scene model surfaces and different material maps; it is pre-configured on the current execution end and called when rendering is needed to determine the map material matching each model surface. The composition-material correspondence may be written in advance by a user, or downloaded together with the resource maps, and the embodiment of the present invention is not particularly limited in this respect. After the map material is determined, the configured material ball of the map material is called, so that each model surface of the target scene model is rendered according to the material parameters corresponding to the material ball to obtain the three-dimensional virtual scene data; this can be realized directly through the rendering module in the game engine, as shown in fig. 4, and the embodiment of the present invention is not particularly limited in this respect.
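The automatic surface-to-material resolution can be sketched as a lookup through the correspondence table followed by per-surface rendering. The function, the face/material dictionaries and the `"default"` fallback are all illustrative assumptions:

```python
def render_scene(faces, material_map, materials):
    """Resolve each model surface to its map material via the
    composition-material correspondence, then record the render call."""
    rendered = []
    for face in faces:
        mat_name = material_map.get(face["kind"], "default")
        mat = materials[mat_name]          # the material ball's parameters
        rendered.append({"face": face["id"],
                         "material": mat_name,
                         "params": dict(mat)})
    return rendered

faces = [{"id": "f1", "kind": "wall"},
         {"id": "f2", "kind": "floor"}]
material_map = {"wall": "brick", "floor": "stone"}   # the correspondence
materials = {"brick":   {"color": (0.7, 0.4, 0.3)},
             "stone":   {"color": (0.5, 0.5, 0.5)},
             "default": {"color": (1.0, 1.0, 1.0)}}
out = render_scene(faces, material_map, materials)
```

Because the correspondence is a plain mapping from surface kind to material name, no manual per-surface selection is needed — exactly the simplification the paragraph describes.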
In another embodiment of the present invention, for further limitation and explanation, as shown in fig. 9, after obtaining a mapping material matched with the target scene model in step 104, rendering the target scene model based on the mapping material, and generating three-dimensional virtual scene data, the method further includes:
501. acquiring at least one scene atmosphere model;
502. rendering the scene atmosphere model into the three-dimensional virtual scene data according to the scene characteristic auxiliary relationship to obtain the scene three-dimensional virtual scene data.
Since the three-dimensional virtual scene data represents a three-dimensional virtual scene, in order to add further scene effects, after the three-dimensional virtual scene data is obtained through rendering, a scene atmosphere model is obtained and rendered into the three-dimensional virtual scene data according to the scene-feature auxiliary relationship, so as to obtain scene three-dimensional virtual scene data with a sense of atmosphere. A scene atmosphere model is a model for adding atmosphere features to the three-dimensional virtual scene data; the atmosphere features include, but are not limited to, special effects, plants, celestial bodies and other content that conveys the scene atmosphere, for example as shown in fig. 10, thereby embodying the detailed atmosphere of the three-dimensional virtual scene data. In addition, to automate the rendering of atmosphere, the scene atmosphere model is rendered into the three-dimensional virtual scene data according to the scene-feature auxiliary relationship, which represents the feature relationships between different scene atmosphere models and different target scene models in the three-dimensional virtual scene data; for example, a leaf model serving as a scene atmosphere model has an attachment relationship with a wall target scene model, so that during rendering the leaf model is attached to the wall model.
Specifically, the feature relationship may include, but is not limited to, an attachment relationship, a positional relationship and an embedding relationship, so that during rendering the sense of atmosphere is enhanced by adding the scene atmosphere model, yielding three-dimensional virtual scene data with atmosphere and achieving the purpose of generating accurate, refined virtual scene data.
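The scene-feature auxiliary relationship can be sketched as a rule table mapping each atmosphere model to its relation type and target kind; placement then iterates the scene and anchors the atmosphere model to every matching target. All names and rules here are illustrative assumptions:

```python
# Hypothetical scene-feature auxiliary relationship:
# atmosphere model -> (relation, kind of target scene model to anchor on)
ATTACH_RULES = {
    "vine":  ("attach",   "wall"),   # plants cling to wall surfaces
    "moon":  ("position", "sky"),    # celestial bodies placed in the sky layer
    "torch": ("embed",    "wall"),   # effects embedded into walls
}

def place_atmosphere(models, scene_objects):
    """Anchor each scene atmosphere model on every target scene model
    that its auxiliary relationship names."""
    placements = []
    for m in models:
        relation, target_kind = ATTACH_RULES[m]
        for obj in scene_objects:
            if obj["kind"] == target_kind:
                placements.append({"model": m,
                                   "relation": relation,
                                   "anchor": obj["id"]})
    return placements

walls = [{"id": "w1", "kind": "wall"}]
placed = place_atmosphere(["vine"], walls)
```

With the rules externalized in a table, adding a new atmosphere model only requires a new rule entry, not new placement code.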
In another embodiment of the present invention, for further definition and illustration, the method further comprises:
601. starting a rollback mechanism under the condition of acquiring geometric model data and model layout parameters of a basic component model of a virtual scene to be built so as to generate three-dimensional virtual scene data;
602. and acquiring process information matched with a target rollback node from rollback information obtained by starting the rollback mechanism so as to adjust data in the process information.
In the embodiment of the invention, when the three-dimensional virtual scene is built, generation of the target scene model relies on Boolean logic calculation to splice different basic component models or to refine and adjust the target scene model, and a rollback mechanism is configured to improve the efficiency and flexibility of the build. The rollback mechanism is started from the moment the geometric model data and model layout parameters of the basic component models of the virtual scene to be built are acquired until the three-dimensional virtual scene data is generated, so that any rollback node in the generation process can be revisited. The rollback mechanism tracks and records the process information of generating the three-dimensional virtual scene data, i.e. the information in the whole process from acquiring the geometric model data and model layout parameters of the basic component models to generating the three-dimensional scene data; the process information includes, but is not limited to, blank virtual scene data, the operation content for adjusting the geometric model data and model layout parameters, the target scene model under adjustment, and the time points of the various operations, so that the rollback purpose is achieved through recording and tracking this process information.
After the rollback mechanism is started, while the generation of the three-dimensional virtual scene data is being executed, the time sequence and content of each operation are recorded and tracked, and the rollback node corresponding to each operation, with its recorded process information, is determined; when a rollback operation is performed, a target rollback node is selected from the rollback information containing all the process information, and the corresponding process information is obtained and output. The rollback operation selects a target rollback node from the process information recorded in operation order after recording and tracking are complete; the target rollback node may be selected by time point, or the user may quickly step back to a previous operation record point with the mouse wheel, so that the recorded information at that point is called up for display. The rollback information includes one piece of process information for each recording time point, and each recording time node corresponds to a rollback node, from which the target rollback node is selected.
It should be noted that, because the rollback mechanism records and tracks the process information at every time point in the whole process — from acquiring the geometric model data and model layout parameters of the basic component models to generating the three-dimensional virtual scene data — the output of the recorded information may be a virtual scene rendered from the blank virtual scene data at a given time point, the operation content for adjusting the geometric model data and model layout parameters, or the target scene model under adjustment; that is, the virtual scene at the target rollback node is re-rendered, thereby overcoming the destructive nature of Boolean calculation. Of course, if the process information corresponding to the rollback operation only involves adjustment of model point coordinates, the corresponding adjustment content may be output directly; if it involves a Boolean operation in the logical relationship, the virtual scene needs to be rendered again.
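A rollback mechanism of this kind can be sketched as an operation log of deep-copied snapshots: because Boolean (CSG) operations are destructive, restoring a node means returning the snapshot recorded before the destructive step, not undoing it in place. The `RollbackRecorder` class is a hypothetical sketch, not the patent's component:

```python
import copy
import time

class RollbackRecorder:
    """Tracks each editing step so any rollback node can be restored,
    working around the destructive nature of Boolean operations."""
    def __init__(self):
        self.nodes = []   # rollback info: (timestamp, label, scene snapshot)

    def record(self, label, scene):
        """Record one rollback node: a deep copy of the process state."""
        self.nodes.append((time.time(), label, copy.deepcopy(scene)))

    def rollback(self, index):
        """Return the process information at the target rollback node."""
        _, label, snapshot = self.nodes[index]
        return label, copy.deepcopy(snapshot)

rec = RollbackRecorder()
scene = {"walls": 0}
rec.record("empty scene", scene)
scene["walls"] = 4
rec.record("added walls", scene)
scene["walls"] = 2                       # a destructive Boolean cut
label, restored = rec.rollback(1)        # revisit the earlier node
```

The deep copies are what make the log non-destructive: mutating the live scene afterwards does not alter the recorded node, so `restored` still holds the four-wall state.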
In the embodiment of the invention, in order to generate the three-dimensional virtual scene data from image data carrying composition content, reference composition image data may be loaded, and the geometric model data and model layout parameters of the basic component models of the virtual scene to be built are obtained from it, so that the basic image data is constructed. The reference composition image data provides two-dimensional image content of the composition for the basic component models; the composition content includes the positions of the scene objects in different scenes, showing a two-dimensional image of the intended scene effect, and the reference composition image data may be loaded from the cloud or within a composition subsystem. Meanwhile, in order to generate the basic component models with the space partition component, the component is called, and after it analyzes the geometric shapes in the reference composition image data, the geometric model data and model layout parameters of the basic component models of the virtual scene to be built are obtained, so that the construction parameters can be calculated and the basic component models built; the space partition component is a component in the game engine, and the embodiment of the invention is not specifically limited in this respect.
In order to build the basic component models quickly and conveniently and to edit scenes flexibly, basic composition requirement data may also be obtained. It represents the composition-model features required of the basic component models to be generated; those features include, but are not limited to, the shape of the composition model, so that the space partition component generates basic component models matched with the reference composition image data according to the basic composition requirement data.
It should be noted that, in the process of building the basic component models based on the space partition component, basic component models with simple geometric shapes matching the scene in the reference composition image data are built by the space partition component according to that image data. A scene editor may select matching geometric shapes from the space partition component according to the scene in the reference composition image data to build the basic component models, or the space partition component may scan or image-recognize the two-dimensional reference composition image data to determine the spatial positions and layout of the key geometric bodies in the scene, thereby obtaining the geometric model data and model layout parameters of the basic component models of the virtual scene to be built, from which the construction parameters are calculated and the models built. Meanwhile, in order to refine the simple-geometry basic component models while simplifying the user's workflow, their construction parameters may be fine-tuned using the input basic composition requirement data; the embodiment of the invention is not particularly limited in this respect.
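Extracting geometric model data and layout parameters from a two-dimensional reference composition can be sketched in miniature with an ASCII floor plan standing in for the recognized image, where each occupied cell becomes the placement of one unit block. The grid encoding and unit-block dimensions are illustrative assumptions, not the patent's recognition method:

```python
def parse_reference_composition(rows):
    """Derive basic-component geometry and layout from a 2-D reference
    composition: each '#' cell yields one unit wall-block placement."""
    components = []
    for y, row in enumerate(rows):
        for x, cell in enumerate(row):
            if cell == "#":
                components.append(
                    ({"shape": "box", "size": (1, 1, 3)},   # geometric data
                     {"position": (x, y, 0)})               # layout parameter
                )
    return components

plan = ["###",
        "#.#",
        "###"]          # a small room: walls around one empty cell
comps = parse_reference_composition(plan)
```

A real pipeline would obtain the occupancy grid from image recognition of the reference composition rather than from text, but the output — (geometry, layout) pairs ready for construction-parameter calculation — has the same shape.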
In the embodiment of the invention, in an application scenario of generating three-dimensional virtual scene data for a scene ground, the method proceeds as follows: when making the three-dimensional virtual scene data of the scene ground, a production project for editing the scene ground is created to obtain the geometric model data and model layout parameters of the basic component models of the virtual scene to be built. After the current execution subject calculates the construction parameters from the geometric model data and model layout parameters, the target scene model, containing one or more geometric shapes, is generated through the space partition component in the game engine UE4. After the composition is determined, model adjustment parameters of the target scene model are obtained, and model refinement is performed on the construction parameters through the game engine UE4, i.e. they are adjusted according to the model adjustment parameters, to obtain the adjusted target scene model. Then, map resources can be loaded from a perfect accelerator material library or from the Quixel material library as a third-party map website, material balls are made in the game engine UE4, and the resulting map materials are rendered onto the target scene model to obtain the three-dimensional virtual scene data. Further, scene three-dimensional virtual scene data with a stronger sense of atmosphere is obtained by rendering the scene atmosphere model. Since the entire generated three-dimensional virtual scene can be realized in the game engine UE4, it can be converted from a static mesh to the FBX file format and imported into 3D software for post-production, as shown in fig. 11.
Compared with the prior art, the embodiment of the invention provides a method for generating three-dimensional virtual scene data. The method acquires geometric model data and model layout parameters of the basic component models of a virtual scene to be built, where the basic component model is a non-static model and the model layout parameters represent the distribution positions and logical relationships of the basic component models; determines the construction parameters of the basic component models according to the geometric model data and the model layout parameters, and generates a target scene model matched with the construction parameters; obtains model adjustment parameters of the target scene model and adjusts its construction parameters accordingly to obtain an adjusted target scene model; and obtains a map material matched with the target scene model and renders the target scene model based on it to generate three-dimensional virtual scene data. This increases the convenience of generating three-dimensional virtual scene data, achieves flexible and automatic adjustment of the scene model, and enhances the scene effect through rendered map materials, thereby improving the generation efficiency of the three-dimensional virtual scene data.
Further, as an implementation of the method shown in fig. 1, an embodiment of the present invention provides an apparatus for generating three-dimensional virtual scene data, as shown in fig. 12, the apparatus includes:
the acquiring module 61 is configured to acquire geometric model data and model layout parameters of a basic component model of a virtual scene to be built, where the basic component model is a non-static model, and the model layout parameters are used to represent distribution positions and logical relationships of the basic component model;
the generating module 62 is configured to determine the construction parameters of the basic component model according to the geometric model data and the model layout parameters, and generate a target scene model matched with the construction parameters;
the adjusting module 63 is configured to obtain a model adjusting parameter of the target scene model, and adjust a construction parameter of the target scene model according to the model adjusting parameter to obtain an adjusted target scene model;
and a rendering module 64, configured to obtain a map material matched with the target scene model, and render the target scene model based on the map material to generate three-dimensional virtual scene data.
Further, the generating module includes:
the system comprises a calling unit, a searching unit and a processing unit, wherein the calling unit is used for calling a space partition component and building a basic component model in a three-dimensional virtual scene based on the space partition component, and initial building parameters of different basic component models are configured in the space partition component;
and the configuration unit is used for configuring the initial construction parameters into the construction parameters through the space partition assembly to obtain a target scene model in the three-dimensional virtual scene.
Further, the obtaining module comprises:
an obtaining unit, configured to obtain a scene adjustment operation on the target scene model, where the scene adjustment operation includes an operation content for adjusting geometric model data and model layout parameters of the target scene model;
and the determining unit is used for analyzing the adjusting content generated by the scene adjusting operation on the target scene model, determining a model adjusting parameter, and adjusting the construction parameter of the target scene model according to the model adjusting parameter based on the space editing subassembly of the space partition component to obtain the adjusted target scene model.
Further, the apparatus further comprises: the module is loaded on the basis of the load module,
the loading module is also used for loading resource maps for rendering different target scene models;
the configuration module is used for creating a material ball based on the resource map and configuring at least one material parameter for the material ball to obtain a map material;
the configuration module is further configured to configure the matched parameter node for the material parameter, so as to adjust the material parameter based on the parameter node.
Further, the rendering module includes:
the determining unit is used for determining at least one charting material matched with the model surface of the target scene model according to the composition material corresponding relation, and the composition material corresponding relation is used for representing the corresponding relation between different model surfaces and different material charting;
and the rendering unit is used for calling the material ball of the chartlet material and rendering the model surface of the target scene model according to the material parameters corresponding to the material ball to obtain three-dimensional virtual scene data.
Further,
the acquisition module is further used for acquiring at least one scene atmosphere model, and the scene atmosphere model is used for increasing atmosphere characteristics of the three-dimensional virtual scene data;
the rendering module is further configured to render the scene atmosphere model into the three-dimensional virtual scene data according to a scene feature auxiliary relationship to obtain scene three-dimensional virtual scene data, where the scene feature auxiliary relationship is used to represent feature relationships between different scene models and different target scene models in the three-dimensional virtual scene data.
Further, the apparatus further comprises a starting module, wherein:
the starting module is used for starting a rollback mechanism under the condition that the geometric model data and the model layout parameters of the basic component model of the virtual scene to be built are obtained to generate three-dimensional virtual scene data, and the rollback mechanism is used for tracking and recording process information for generating the three-dimensional virtual scene data;
the obtaining module is further configured to obtain process information matched with a target rollback node from rollback information obtained by starting the rollback mechanism, so as to adjust data in the process information.
Compared with the prior art, the embodiment of the invention provides a device for generating three-dimensional virtual scene data. The device obtains geometric model data and model layout parameters of the basic component models of a virtual scene to be built, where the basic component model is a non-static model and the model layout parameters represent the distribution positions and logical relationships of the basic component models; determines the construction parameters of the basic component models according to the geometric model data and the model layout parameters, and generates a target scene model matched with the construction parameters; obtains model adjustment parameters of the target scene model and adjusts its construction parameters accordingly to obtain an adjusted target scene model; and obtains a map material matched with the target scene model and renders the target scene model based on it to generate three-dimensional virtual scene data. This increases the convenience of generating three-dimensional virtual scene data, achieves flexible and automatic adjustment of the scene model, and enhances the scene effect through rendered map materials, thereby improving the generation efficiency of the three-dimensional virtual scene data.
According to an embodiment of the present invention, a storage medium is provided, in which at least one executable instruction is stored; the computer-executable instruction may execute the method for generating three-dimensional virtual scene data in any of the above method embodiments.
Fig. 13 is a schematic structural diagram of a terminal according to an embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the terminal.
As shown in fig. 13, the terminal may include: a processor (processor)702, a Communications Interface 704, a memory 706, and a communication bus 708.
Wherein: the processor 702, communication interface 704, and memory 706 communicate with each other via a communication bus 708.
A communication interface 704 for communicating with network elements of other devices, such as clients or other servers.
The processor 702 is configured to execute the program 710, and may specifically execute relevant steps in the above-described method for generating three-dimensional virtual scene data.
In particular, the program 710 may include program code that includes computer operating instructions.
The processor 702 may be a central processing unit (CPU), or an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement an embodiment of the present invention. The terminal comprises one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 706 stores a program 710. The memory 706 may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The program 710 may specifically be used to cause the processor 702 to perform the following operations:
acquiring geometric model data and model layout parameters of a basic component model of a virtual scene to be built, wherein the basic component model is a non-static model, and the model layout parameters are used for representing the distribution position and the logical relationship of the basic component model;
determining the construction parameters of the basic component model according to the geometric model data and the model layout parameters, and generating a target scene model matched with the construction parameters;
obtaining model adjustment parameters of the target scene model, and adjusting the construction parameters of the target scene model through the model adjustment parameters to obtain an adjusted target scene model;
and obtaining a map material matched with the target scene model, rendering the target scene model based on the map material, and generating three-dimensional virtual scene data.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device, and in some cases the steps shown or described may be performed in an order different from that described herein. They may also be separately fabricated into individual integrated circuit modules, or multiple of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A method for generating three-dimensional virtual scene data is characterized by comprising the following steps:
acquiring geometric model data and model layout parameters of a basic component model of a virtual scene to be built, wherein the basic component model is a non-static model, and the model layout parameters are used for representing the distribution position and the logical relationship of the basic component model;
determining the construction parameters of the basic component model according to the geometric model data and the model layout parameters, and generating a target scene model matched with the construction parameters;
obtaining model adjustment parameters of the target scene model, and adjusting the construction parameters of the target scene model through the model adjustment parameters to obtain an adjusted target scene model;
and obtaining a map material matched with the target scene model, rendering the target scene model based on the map material, and generating three-dimensional virtual scene data.
2. The method of claim 1, wherein generating the target scene model matching the build parameters comprises:
calling a space partition component, and building the basic component model in a three-dimensional virtual scene based on the space partition component, wherein the space partition component is configured with initial building parameters of different basic component models;
and configuring the initial construction parameters into the construction parameters through the space partition component, to obtain a target scene model in the three-dimensional virtual scene.
3. The method according to claim 2, wherein the obtaining of the model adjustment parameter of the target scene model and the adjusting of the construction parameter of the target scene model by the model adjustment parameter to obtain the adjusted target scene model comprises:
acquiring scene adjustment operation on the target scene model, wherein the scene adjustment operation comprises operation contents for adjusting geometric model data and model layout parameters of the target scene model;
analyzing the adjustment content generated by the scene adjustment operation on the target scene model, determining model adjustment parameters, and adjusting the construction parameters of the target scene model according to the model adjustment parameters based on a space editing subcomponent of the space partition component, to obtain an adjusted target scene model.
4. The method of claim 1, wherein prior to obtaining the map material matching the target scene model, the method further comprises:
loading resource maps for rendering different target scene models;
creating a material ball based on the resource map, and configuring at least one material parameter for the material ball to obtain a map material;
and configuring matched parameter nodes for the material parameters so as to adjust the material parameters based on the parameter nodes.
5. The method of claim 4, wherein obtaining a map material matching the target scene model and rendering the target scene model based on the map material, and wherein generating three-dimensional virtual scene data comprises:
determining at least one map material matched with the model surface of the target scene model according to a composition material correspondence, wherein the composition material correspondence is used for representing the correspondence between different model surfaces and different material maps;
and calling a material ball of the map material, and rendering the model surface of the target scene model according to the material parameters corresponding to the material ball, to obtain three-dimensional virtual scene data.
6. The method of claim 1, wherein after obtaining the map material matching the target scene model and rendering the target scene model based on the map material to generate three-dimensional virtual scene data, the method further comprises:
obtaining at least one scene atmosphere model, wherein the scene atmosphere model is used for increasing atmosphere characteristics of the three-dimensional virtual scene data;
rendering the scene atmosphere model into the three-dimensional virtual scene data according to a scene characteristic auxiliary relationship to obtain atmosphere-enhanced three-dimensional virtual scene data, wherein the scene characteristic auxiliary relationship is used for representing characteristic relationships between different scene atmosphere models and different target scene models in the three-dimensional virtual scene data.
7. The method of claim 1, further comprising:
during the process from acquiring the geometric model data and model layout parameters of the basic component model of the virtual scene to be built to generating the three-dimensional virtual scene data, starting a rollback mechanism, wherein the rollback mechanism is used for tracking and recording process information for generating the three-dimensional virtual scene data;
and acquiring process information matched with a target rollback node from rollback information obtained by starting the rollback mechanism so as to adjust data in the process information.
8. An apparatus for generating three-dimensional virtual scene data, comprising:
the virtual scene building method comprises an acquisition module, a calculation module and a display module, wherein the acquisition module is used for acquiring geometric model data and model layout parameters of a basic component model of a virtual scene to be built, the basic component model is a non-static model, and the model layout parameters are used for representing the distribution position and the logical relationship of the basic component model;
the generating module is used for determining the construction parameters of the basic component model according to the geometric model data and the model layout parameters and generating a target scene model matched with the construction parameters;
the adjusting module is used for obtaining model adjusting parameters of the target scene model and adjusting the construction parameters of the target scene model through the model adjusting parameters to obtain an adjusted target scene model;
and the rendering module is used for acquiring a map material matched with the target scene model, rendering the target scene model based on the map material and generating three-dimensional virtual scene data.
9. A storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the method for generating three-dimensional virtual scene data according to any one of claims 1 to 7.
10. A terminal, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation corresponding to the three-dimensional virtual scene data generation method according to any one of claims 1-7.
CN202111669499.1A 2021-12-30 2021-12-30 Three-dimensional virtual scene data generation method and device, storage medium and terminal Pending CN114307158A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111669499.1A CN114307158A (en) 2021-12-30 2021-12-30 Three-dimensional virtual scene data generation method and device, storage medium and terminal


Publications (1)

Publication Number Publication Date
CN114307158A true CN114307158A (en) 2022-04-12

Family

ID=81020634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111669499.1A Pending CN114307158A (en) 2021-12-30 2021-12-30 Three-dimensional virtual scene data generation method and device, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN114307158A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115017588A (en) * 2022-06-10 2022-09-06 中国建筑西南设计研究院有限公司 Method, device, equipment and storage medium for generating sports building model



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination