CN118001740A - Virtual model processing method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN118001740A
CN118001740A (application CN202410241523.9A)
Authority
CN
China
Prior art keywords
scene, target, model, preset, file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410241523.9A
Other languages
Chinese (zh)
Inventor
陈子卉
张政勋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202410241523.9A priority Critical patent/CN118001740A/en
Publication of CN118001740A publication Critical patent/CN118001740A/en
Pending legal-status Critical Current


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the application disclose a virtual model processing method and apparatus, a computer device, and a storage medium. The method includes: displaying a scene editing page; acquiring base model data in response to a trigger operation on the scene editing page; in response to an operation selecting, from a plurality of preset scene resources, a target preset scene resource of a target scene type, binding the base model data with the preset metadata file of the target preset scene resource to obtain a bound target model file; and generating a virtual model of the target scene type based on the bound target model file. According to the embodiments of the application, a metadata attribute system is introduced into the scene PCG tool system, and Meta attributes can be adjusted directly within the PCG tool system, so that the type of the corresponding virtual scene or virtual model can be adjusted. This saves time and labor, allows virtual models of the target scene type to be generated in batches, and effectively improves the efficiency of producing and editing virtual scene models.

Description

Virtual model processing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and apparatus for processing a virtual model, a computer device, and a storage medium.
Background
With the continued development of computer communication technology and the widespread adoption of terminals such as smartphones, tablet computers, and notebook computers, terminals have become increasingly diverse, personalized, and indispensable in daily life and work. To meet users' demand for entertainment, games that run on these terminals have emerged; for example, multiplayer online tactical competitive games, massively multiplayer online games, and other games built on client or server architectures are popular with users for their smoothness, responsive controls, and real-time combat. As online games flourish, players' expectations for realistic game scenes keep rising. To give players a better experience, many terminal games are built from real scenes and the objects within them, so game designers aim to make in-game resources such as virtual scenes and virtual elements as close to the real environment as possible.
In practical game design, to make the game world more realistic, game developers typically build game scenes with a game engine. Currently, virtual game scenes are commonly produced with Houdini, a 3D animation and visual effects package whose built-in modeling tools support a wide range of 3D modeling operations. Houdini also includes a powerful renderer and texture editor that can simulate physical lighting phenomena such as illumination, shadow, texture, and transparency, rendering virtual scenes and virtual models that closely match real scenes. However, editing a virtual scene or virtual model produced with Houdini involves cumbersome steps: several data-resolving passes are required before the scene or model can be edited at all. This is time-consuming and labor-intensive, labor and time costs are high, and the efficiency of producing and editing virtual scene models is low.
Disclosure of Invention
The embodiments of the application provide a virtual model processing method and apparatus, a computer device, and a storage medium. A metadata attribute system is introduced into the scene PCG tool system of a Procedural Content Generation (PCG) framework, so that Meta attributes can be adjusted directly within the PCG tool system to change the type of the corresponding virtual scene or virtual model. This saves time and labor, allows virtual models of a target scene type to be generated in batches, and effectively improves the efficiency of producing and editing virtual scene models.
The embodiment of the application provides a virtual model processing method, comprising:
displaying a scene editing page, wherein the scene editing page displays preset scene resources of a plurality of scene types, and the preset scene resource of each scene type is configured with a corresponding preset metadata file;
acquiring base model data in response to a trigger operation on the scene editing page;
in response to an operation selecting, from the plurality of preset scene resources, a target preset scene resource of a target scene type, binding the base model data with the preset metadata file of the target preset scene resource to obtain a bound target model file; and
performing a model generation operation based on the bound target model file to generate a virtual model of the target scene type.
Correspondingly, the embodiment of the application also provides a virtual model processing apparatus, comprising:
a display unit, configured to display a scene editing page, wherein the scene editing page displays preset scene resources of a plurality of scene types, and the preset scene resource of each scene type is configured with a corresponding preset metadata file;
an acquisition unit, configured to acquire base model data in response to a trigger operation on the scene editing page;
a binding unit, configured to bind, in response to an operation selecting a target preset scene resource of a target scene type from the plurality of preset scene resources, the base model data with the preset metadata file of the target preset scene resource to obtain a bound target model file; and
a generation unit, configured to perform a model generation operation based on the bound target model file to generate a virtual model of the target scene type.
Correspondingly, the embodiment of the application also provides a computer device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements any of the virtual model processing methods described above.
Correspondingly, the embodiment of the application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements any of the virtual model processing methods described above.
The embodiments of the application provide a virtual model processing method and apparatus, a computer device, and a storage medium. A scene editing page is displayed, in which preset scene resources of a plurality of scene types are shown, the preset scene resource of each scene type being configured with a corresponding preset metadata file; then base model data is acquired in response to a trigger operation on the scene editing page; then, in response to an operation selecting, from the plurality of preset scene resources, a target preset scene resource of a target scene type, the base model data is bound with the preset metadata file of the target preset scene resource to obtain a bound target model file; and finally a model generation operation is performed on the bound target model file to generate a virtual model of the target scene type. According to the embodiments of the application, a metadata attribute system is introduced into the scene PCG tool system, and Meta attributes can be adjusted directly within the PCG tool system, so that the type of the corresponding virtual scene or virtual model can be adjusted. This saves time and labor, allows virtual models of the target scene type to be generated in batches, and effectively improves the efficiency of producing and editing virtual scene models.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic system diagram of a processing device for a virtual model according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of a virtual model processing method according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a processing device for a virtual model according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
In the prior art, the PCG tool configured through the Houdini Engine is inflexible. Specifically, the PCG-related parameters are not dynamic enough and can only be hard-coded inside the HDA (Houdini Digital Asset). To express more complex scene information, for example adding a river curve to the scene source data with extra data on the curve's control points and control line segments, the river HDA must be opened in Houdini, parameters added to it, and the HDA asset then updated in Unreal before the user can see the newly added parameters, which makes the workflow long. Likewise, the Houdini Engine workflow makes it inconvenient for the user to split the original scene parameter data. For example, a game developer may find it convenient to implement the resolution of a curve from a real scene within a single HDA, but that single HDA then has to contain the curve's control-point parameters, control-line-segment parameters, and global parameters. Splitting these three kinds of data requires a technical artist (TA) to build three HDAs, corresponding to the control points, the control line segments, and the global data computation respectively, which easily leads to data redundancy. Furthermore, Houdini Engine parameters can only act as global data and cannot be fine-tuned locally. For example, a curve system may need slightly different attributes on each control point; this is impossible in the Houdini Engine parameter system, which can only set globally uniform attributes for the whole curve. Meanwhile, the data-expression capability of the PCG tool configured through the Houdini Engine is limited to the types supported by Houdini's own parameter system and cannot satisfy the broader needs of game developers.
Also in the prior art, all data is stored and bound in a single scene file rather than in multiple plain-text JSON files, which hinders per-file locking during multi-user collaboration and secondary processing in Houdini.
The embodiments of the application provide a virtual model processing method and apparatus, a computer device, and a storage medium. Specifically, the virtual model processing method of the embodiments of the present application may be executed by a computer device, which may be a terminal. The terminal may be a device such as a smartphone, tablet computer, notebook computer, touch screen, game console, personal computer (PC), or personal digital assistant (PDA). The terminal may further run a client, such as a video application client, a music application client, a game application client, a browser client carrying a game program, or an instant-messaging client.
In the virtual model processing method provided by the embodiments of the application, the content of the scene source data is data-driven, which avoids the situation where PCG users can only see scene parameters after they have been hard-coded into an HDA. The representation of the scene source data is decoupled from the HDA algorithm, so users can conveniently fine-tune local scene data and customize the types of the source data. In addition, the original scene data is stored as JSON, making data writes visible and inspectable.
Referring to fig. 1, fig. 1 is a schematic view of a virtual model processing system according to an embodiment of the present application, which includes a computer device; the system may include at least one terminal, at least one server, and a network. A terminal held by a user can connect over the network to the servers of different games. A terminal is any device with computing hardware capable of supporting and executing a software product corresponding to game-making software. In addition, the terminal has one or more multi-touch-sensitive screens for sensing and obtaining user input through touch or slide operations performed at multiple points of the touch-sensitive display. When the system includes multiple terminals, multiple servers, and multiple networks, different terminals may be connected to each other through different networks and different servers. The network may be wireless or wired, such as a wireless local area network (WLAN), a local area network (LAN), a cellular network, or a 2G, 3G, 4G, or 5G network. In addition, different terminals may connect to other terminals or to a server using their own Bluetooth or hotspot networks.
The computer device can display a scene editing page in which preset scene resources of a plurality of scene types are shown, the preset scene resource of each scene type being configured with a corresponding preset metadata file; then acquire base model data in response to a trigger operation on the scene editing page; then, in response to an operation selecting, from the plurality of preset scene resources, a target preset scene resource of a target scene type, bind the base model data with the preset metadata file of the target preset scene resource to obtain a bound target model file; and finally perform a model generation operation based on the bound target model file to generate a virtual model of the target scene type. According to the embodiments of the application, a metadata attribute system is introduced into the scene PCG tool system, and Meta attributes can be adjusted directly within the PCG tool system, so that the type of the corresponding virtual scene or virtual model can be adjusted, saving time and labor, allowing virtual models of the target scene type to be generated in batches, and effectively improving the efficiency of producing and editing virtual scene models.
It should be noted that the scenario of the virtual model processing system shown in fig. 1 is only an example. The virtual model processing system and scenario described in the embodiment of the present application are intended to describe the technical solution of the embodiment more clearly and do not limit the technical solution provided by the embodiment. As a person of ordinary skill in the art will appreciate, as virtual model processing systems evolve and new service scenarios emerge, the technical solution provided by the embodiment of the present application is equally applicable to similar technical problems.
The virtual model processing method provided by the embodiment of the application may use game-making application software such as Unreal Engine, a game development tool. Specifically, Unreal Engine is a 3D graphics rendering engine commonly used in game development, film production, architectural visualization, training simulation, the medical field, and the development of virtual reality and real-time interactive applications in many other fields, helping artists and designers create high-quality digital content. Procedural Content Generation (PCG) is a technique used in game development to generate game content, such as levels, maps, and tasks, programmatically from input scene source data via algorithms. Compared with hand-crafting game content, PCG saves time and resources, especially when building large open-world games.
The embodiments of the application provide a virtual model processing method and apparatus, a computer device, and a storage medium, wherein the virtual model processing method may be used with a terminal, such as a smartphone, tablet computer, notebook computer, or personal computer. The virtual model processing method, apparatus, computer device, and storage medium are described in detail below. The order of description of the following embodiments does not limit the preferred order of the embodiments.
Referring to fig. 2, fig. 2 is a schematic flow chart of a virtual model processing method according to an embodiment of the present application, and the specific flow may be as follows:
101, displaying a scene editing page, wherein the scene editing page displays preset scene resources of a plurality of scene types, and the preset scene resource of each scene type is configured with a corresponding preset metadata file.
In the embodiment of the application, the preset scene resource of each scene type is configured with a corresponding preset metadata file, i.e. Meta data, which is scene metadata. The content of the scene source data is data-driven, and the representation of the scene source data is decoupled from the HDA algorithm. Therefore, the game developer can conveniently fine-tune local scene data and customize the type of the source data, and the original scene data is stored in JSON format, making data writes visible and inspectable.
Meta data is metadata used in the Unreal engine to control the generation of editor UI details. For example, the Meta data of an int-type attribute contains Min and Max values, which define the draggable range between the generated minimum and maximum int values. A Meta-generated attribute is an attribute generated from Meta data and is used to save the value that the game developer sets in the Meta-generated editor.
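The Min/Max behavior described above can be sketched as follows. This is a minimal, hypothetical Python sketch; the class and field names are illustrative and are not the engine's actual API:

```python
from dataclasses import dataclass

# Hypothetical sketch of an int-typed Meta attribute: the Min and Max
# fields describe the draggable range the editor UI should generate.
@dataclass
class IntMeta:
    name: str
    min_value: int
    max_value: int
    default: int = 0

    def clamp(self, value: int) -> int:
        # The editor would clamp any user-set value into [Min, Max].
        return max(self.min_value, min(self.max_value, value))

# A developer-defined attribute whose generated slider ranges 1..200.
river_width = IntMeta(name="river_width", min_value=1, max_value=200, default=20)
print(river_width.clamp(500))  # values outside the range are clamped to 200
```

A real Meta system would also generate the slider widget itself; here only the value-range semantics are shown.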
102, acquiring base model data in response to a trigger operation on the scene editing page.
In the embodiment of the application, the computer device can acquire the base model data in response to a trigger operation on the scene editing page. For example, a game developer can customize control points and line segments for generating a curve as the base model data, or customize virtual model data for generating a virtual model as the base model data. Optionally, the game developer can also import a prepared curve file or virtual model file into the system, so that the corresponding base curve data or virtual model data is obtained from that file.
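The base curve data mentioned above (control points plus the line segments joining them) can be represented, for illustration, roughly as follows; the helper name and structure are assumptions, not the patent's actual data format:

```python
# Hypothetical representation of base curve data acquired from the
# scene editing page: control points and consecutive line segments.
def build_base_curve(points):
    """Return base model data: control points and the segments joining them."""
    segments = [(points[i], points[i + 1]) for i in range(len(points) - 1)]
    return {"control_points": points, "line_segments": segments}

curve = build_base_curve([(0, 0), (10, 5), (25, 5), (40, 0)])
print(len(curve["line_segments"]))  # 4 control points yield 3 segments
```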
103, in response to an operation selecting, from the plurality of preset scene resources, a target preset scene resource of a target scene type, binding the base model data with the preset metadata file of the target preset scene resource to obtain a bound target model file.
In a specific embodiment, the computer device can respond to an operation selecting, from the plurality of preset scene resources, a target preset scene resource of a target scene type. If the target scene type is, for example, a river type, the base model data is bound with the preset metadata file of the river-type target preset scene resource to obtain the bound target model file.
104, performing a model generation operation based on the bound target model file to generate a virtual model of the target scene type.
In the embodiment of the application, the computer device can perform a model generation operation based on the bound target model file to generate the virtual model of the target scene type. For example, the computer device can generate a river-type, terrain-type, town-type, road-type, rock-type, or vegetation-type virtual model.
Optionally, taking a curve file obtained according to the application as an example, multi-user collaborative editing can be performed on the curve file. The curve file can include a plurality of curve segments, each corresponding to a curve subfile. Each user can select the curve subfile corresponding to a target curve segment and edit that segment, so that multiple users can simultaneously edit different segments of the same curve, improving multi-user collaboration efficiency.
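The per-segment subfile scheme described above can be sketched as follows; the file naming and JSON layout are hypothetical, and only the Python standard library is used:

```python
import json
from pathlib import Path

def export_curve_segments(curve_name, segments, out_dir):
    """Write each curve segment to its own JSON subfile, so collaborators
    can lock and edit individual segments independently."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    paths = []
    for i, seg in enumerate(segments):
        path = out / f"{curve_name}_segment_{i}.json"
        path.write_text(json.dumps({"index": i, "points": seg}))
        paths.append(path)
    return paths

# Two segments of a hypothetical river curve, one subfile each.
paths = export_curve_segments(
    "river_a",
    [[[0, 0], [10, 5]], [[10, 5], [25, 5]]],
    "curve_export",
)
print([p.name for p in paths])
```

Because each segment lives in its own file, a version-control or locking system only needs to lock the one subfile a user is editing.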
In summary, the embodiment of the present application provides a virtual model processing method: a scene editing page is displayed, in which preset scene resources of a plurality of scene types are shown, the preset scene resource of each scene type being configured with a corresponding preset metadata file; then base model data is acquired in response to a trigger operation on the scene editing page; then, in response to an operation selecting, from the plurality of preset scene resources, a target preset scene resource of a target scene type, the base model data is bound with the preset metadata file of the target preset scene resource to obtain a bound target model file; and finally a model generation operation is performed on the bound target model file to generate a virtual model of the target scene type. According to the embodiment of the application, a metadata attribute system is introduced into the scene PCG tool system, and Meta attributes can be adjusted directly within the PCG tool system, so that the type of the corresponding virtual scene or virtual model can be adjusted. This saves time and labor, allows virtual models of the target scene type to be generated in batches, and effectively improves the efficiency of producing and editing virtual scene models.
In light of the foregoing, the virtual model processing method of the present application is further illustrated below with specific examples.
In an embodiment, the step of acquiring base model data in response to a trigger operation on the scene editing page may include:
acquiring base curve data in response to the trigger operation on the scene editing page, wherein the base curve data includes control-point data and line-segment data.
Specifically, the step of binding the base model data with the preset metadata file of the target preset scene resource, in response to the operation selecting a target preset scene resource of a target scene type from the plurality of preset scene resources, to obtain a bound target model file may include:
in response to the operation selecting, from the plurality of preset scene resources, a target preset scene resource of a target scene type, binding the base curve data with the preset metadata file of the target preset scene resource to obtain a bound curve file.
Further, the step of performing a model generation operation based on the bound target model file to generate a virtual model of the target scene type may include:
performing a model generation operation based on the bound curve file to generate a virtual curve model of the target scene type.
In another embodiment, the step of acquiring base model data in response to a trigger operation on the scene editing page may include:
acquiring base building model data in response to the trigger operation on the scene editing page.
Specifically, the step of binding the base model data with the preset metadata file of the target preset scene resource, in response to the operation selecting a target preset scene resource of a target scene type from the plurality of preset scene resources, to obtain a bound target model file may include:
in response to the operation selecting, from the plurality of preset scene resources, a target preset scene resource of a target scene type, binding the base building model data with the preset metadata file of the target preset scene resource to obtain a bound building model file.
Further, the step of performing a model generation operation based on the bound target model file to generate a virtual model of the target scene type may include:
performing a model generation operation based on the bound building model file to generate a virtual building model of the target scene type.
In an embodiment, the scene types include a terrain type, a river type, a town type, a road type, a rock type, and a vegetation type.
In a specific embodiment, the step of binding the base model data with the preset metadata file of the target preset scene resource, in response to the operation selecting a target preset scene resource of a target scene type from the plurality of preset scene resources, to obtain a bound target model file may include:
acquiring the preset metadata file of the target preset scene resource in response to the operation selecting, from the plurality of preset scene resources, the target preset scene resource of the target scene type;
determining corresponding preset metadata attributes from the preset metadata file; and
binding the base model data with the preset metadata file of the target preset scene resource, so that the preset metadata attributes are bound with the base model data, to obtain the bound target model file.
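The three sub-steps above (obtain the metadata file, determine its attributes, bind them to the base data) can be sketched as follows. All structures and field names here are illustrative assumptions, not the patent's actual file format:

```python
# Hypothetical binding step: the preset metadata file supplies attribute
# defaults, and the bound target model file pairs every base-data element
# with a copy of those Meta attributes.
def bind_model_data(base_model_data, preset_metadata):
    meta_attributes = preset_metadata.get("attributes", {})
    return {
        "scene_type": preset_metadata.get("scene_type"),
        "elements": [
            # A per-element copy of the attributes allows the local
            # fine-tuning the description emphasizes.
            {"data": element, "meta": dict(meta_attributes)}
            for element in base_model_data
        ],
    }

river_meta = {"scene_type": "river", "attributes": {"width": 20, "flow_speed": 1.5}}
target_file = bind_model_data([{"point": (0, 0)}, {"point": (10, 5)}], river_meta)
print(target_file["scene_type"])  # the bound file carries the river type
```

Because each element carries its own attribute copy, adjusting one control point's width does not affect the others, unlike a single global parameter.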
In summary, the embodiment of the present application provides a virtual model processing method in which a scene editing page displays preset scene resources of a plurality of scene types, the preset scene resource of each scene type being configured with a corresponding preset metadata file; base model data is acquired in response to a trigger operation on the scene editing page; in response to an operation selecting, from the plurality of preset scene resources, a target preset scene resource of a target scene type, the base model data is bound with the preset metadata file of the target preset scene resource to obtain a bound target model file; and finally a model generation operation is performed on the bound target model file to generate a virtual model of the target scene type. By introducing a metadata attribute system into the scene PCG tool system, Meta attributes can be adjusted directly within the PCG tool system, so that the type of the corresponding virtual scene or virtual model can be adjusted, saving time and labor, allowing virtual models of the target scene type to be generated in batches, and effectively improving the efficiency of producing and editing virtual scene models.
To further explain the virtual model processing method provided by the embodiment of the present application, its application in a specific implementation scenario is described below.
In the embodiment of the application, a Meta attribute system is defined and then introduced into the data of each PCG system, so that virtual models or virtual scene resources of different scene types can be generated quickly. For example, a landform system includes data such as basesys_info and basesys_layers, all of which are Meta attributes generated by the Meta system. Game developers can modify the Meta information of this data, add required fields, configure their own landform data format, and export it. At that point, the JSON file exported by the Meta system already contains the necessary attribute fields, and the game developer only needs to process it in the HDA corresponding to the system. Specifically, the new pipeline architecture comprises Houdini and Unreal Editor, through which a map can be generated: systems in Houdini covering terrain, rivers, towns, roads, rocks, or vegetation interact with Unreal Editor to generate the target game scene map. The pipeline architecture can generate terrain automatically and efficiently, each module can be generated fully automatically, art-adjustment time is saved, only a small number of data interactions is needed, and game scenes, terrain, and maps are produced quickly and efficiently.
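As a rough illustration of the Meta-driven export described above, the following sketch writes a JSON file containing basesys_info and basesys_layers fields. The schema shown is an assumption for illustration, not the actual format consumed by the HDA:

```python
import json

# Hedged sketch of the terrain export: the field names "basesys_info"
# and "basesys_layers" follow the description; their contents here are
# invented examples.
def export_terrain_meta(info_fields, layers, path):
    payload = {"basesys_info": info_fields, "basesys_layers": layers}
    with open(path, "w") as f:
        json.dump(payload, f, indent=2)
    return payload

payload = export_terrain_meta(
    {"resolution": 1024, "height_scale": 300.0},
    [{"name": "grass", "weight": 0.6}, {"name": "rock", "weight": 0.4}],
    "terrain_meta.json",
)
print(sorted(payload))  # the exported file carries both required fields
```

An HDA on the Houdini side would then read this JSON, with the attribute fields already present, rather than having them hard-coded in the asset.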
For another example, in the case of curve data editing, a curve may be bound to a Meta file of a specified type; the curve can then be regarded as a curve of that specific type. Specifically, a curve bound to a river-type Meta file is regarded as belonging to the river system, a curve bound to a track-type Meta file to the track system, and a curve bound to a road-type Meta file to the road system. The game maker selects the type of curve needed in the virtual scene and edits control points and line segments on the terrain; because a Meta file has been selected, the control points and line segments carry the Meta attributes of the corresponding type. The game maker can perform fine-grained parameter adjustment on each control point and each curve, and the control points and line segments of one curve are serialized into a Json file, so the game maker can precisely control the output PCG data, avoiding the limitation in the PCG logic of Houdini Engine where only the global data of a curve can be adjusted. Curve data is mapped to Json files one curve per file, so if curves are edited by multiple people, each game maker only needs to take the lock on the Json file of the curve to be edited. Furthermore, Meta attributes support user-defined extension: a custom Meta attribute can be created simply by inheriting the Meta attribute class and implementing the logic and serialization of the corresponding UI component.
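The per-curve serialization described above can be sketched as follows. The Curve class, its field names, and the Json layout are hypothetical, chosen only to illustrate binding a curve to a type and mapping one curve to one Json record so that it can be locked and edited independently.

```python
import json

class Curve:
    """A curve bound to a typed Meta file; fields are illustrative assumptions."""

    def __init__(self, meta_type, control_points):
        self.meta_type = meta_type            # e.g. "river", "track", "road"
        self.control_points = control_points  # list of (x, y, z) tuples
        # Each line segment joins two consecutive control points.
        self.segments = list(zip(control_points, control_points[1:]))

    def to_json(self):
        """Serialize this single curve (one curve per Json record), so that
        multi-user editing can lock each curve's file independently."""
        return json.dumps({
            "type": self.meta_type,
            "control_points": self.control_points,
            "segment_count": len(self.segments),
        })

# A curve bound to the river-type Meta is treated as part of the river system.
river = Curve("river", [(0, 0, 0), (5, 0, 1), (9, 2, 1)])
```

Because each curve serializes to its own record, fine-grained per-point and per-curve adjustments survive export instead of being collapsed into global curve data.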
According to the embodiment of the application, Meta data information can be defined as needed for the attributes in the PCG flow; the editor receives the Meta data information, converts it into the corresponding data type, serializes it into an attribute field, and reflects it on the attribute interface of the scene editor. Specifically, the data flow involves two modules: one is defined by the project group and the other is the editor. The project-group definition may include the definition and path of the Meta resource and the Meta information of the designated module; the editor accepts the Meta information according to the Meta rules, accesses the Meta resource data type according to the Meta, serializes the information into editor attribute fields, and supports editing, extending, and saving the resource data. Through this data-flow architecture, the project defines Meta data information according to the attribute requirements of the PCG flow, and the editor-side conversion embodies it on the attribute interface of the scene editor.
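A minimal sketch of the editor-side conversion step follows, assuming a simple mapping from Meta type names to concrete data types; the declaration format and field names are illustrative assumptions.

```python
# Editor-side conversion: a project-defined Meta declaration becomes a typed
# attribute field. Type names mirror the basic Meta types mentioned nearby.
META_TYPE_MAP = {"Int": int, "Float": float, "Bool": bool, "String": str}

def build_attribute_field(meta_decl):
    """Convert one project-defined Meta declaration into an editor field.

    meta_decl layout ({"name", "type", "default"}) is an assumption.
    """
    py_type = META_TYPE_MAP[meta_decl["type"]]
    return {
        "name": meta_decl["name"],
        # Coerce the declared default into the declared type; fall back to
        # the type's zero value when no default is given.
        "value": py_type(meta_decl.get("default", py_type())),
        "type": meta_decl["type"],
    }

field = build_attribute_field({"name": "riverWidth", "type": "Float", "default": 3})
```

In this sketch the project side only writes declarations; the editor owns the conversion and serialization, which is what decouples the two modules.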
Specifically, Meta types are classified into basic types and container types: basic types include Int, Float, Bool, String, Color, UAsset, File, and the like, while container types are classified into Dict, Array, and Struct.
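The basic/container split can be illustrated with a small recursive validity check; the rule that a container wraps a list of element specs is an assumption made for this sketch.

```python
# Basic and container Meta type names from the classification above.
BASIC = {"Int", "Float", "Bool", "String", "Color", "UAsset", "File"}
CONTAINER = {"Dict", "Array", "Struct"}

def is_valid_meta_type(spec):
    """A spec is either a basic type name, or a container whose elements
    are themselves valid specs (the recursive wrapping is an assumption)."""
    if isinstance(spec, str):
        return spec in BASIC
    if isinstance(spec, dict) and spec.get("container") in CONTAINER:
        return all(is_valid_meta_type(e) for e in spec.get("elements", []))
    return False
```

Containers may nest, so a Struct of Floats and a Dict of Strings validates the same way as a plain basic type.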
In one embodiment, the computer device may display an editor page of the Meta editor on the graphical user interface; this page is divided into three columns, namely a Meta variable list, Meta variable attributes, and an attribute interface preview. The Meta variable list is used for displaying Meta attribute values and records the current Meta data set. Three operations can be performed on Meta variables. The first is add: a Meta variable can be added, a row is appended to the Meta variable list on the editor page, and it defaults to the variable-value-editing state. The second is delete: the currently selected Meta variable can be deleted, and nothing is deleted if no variable is selected. The third is search: a keyword search by name can be performed on the editor page for the values of Meta variables, and the Meta variable list shows the set matching the search keyword. A newly added Meta variable defaults to the Float type (or whatever default type is configured for the Meta), and a Meta variable can be selected so that its attribute values, including its default value, are modified in the Meta variable attribute column. It should be noted that the attributes of each type of Meta file differ; common Meta attributes include edit type (editType), default value (default), display name (text), display order (sort), editable (editable), maximum value (max)/minimum value (min)/step value (step), and child object attributes (childAttribute), and the attributes of a Meta file may be one or more of the above. The edit type may be a terrain edit type, such as terrain, river, town, road, rock, or vegetation.
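A hypothetical Meta variable record using the common attributes listed above: the convention that a new variable starts as the Float type follows the text, while the exact record layout and default values are assumptions.

```python
def make_meta_variable(name, **overrides):
    """Create a new Meta variable record; newly added variables default to
    the Float type, and callers override attributes per variable."""
    var = {
        "name": name,
        "editType": "Float",   # new variables default to Float
        "default": 0.0,        # default value shown in the attribute column
        "text": name,          # display name
        "sort": 0,             # display order
        "editable": True,
        "min": 0.0,
        "max": 1.0,
        "step": 0.1,
    }
    var.update(overrides)
    return var

# Selecting the variable and editing its attributes in the attribute column
# is modelled here as keyword overrides.
width = make_meta_variable("riverWidth", max=50.0, default=3.0)
```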
Further, a Meta preview interface may be displayed on the editor page. When the game maker operates on the first two columns, for example adding or deleting values or modifying the default value of a Meta variable, the preview interface is updated in real time according to the modified result once the current value is changed. Specifically, a preview view of the adjusted virtual model may be displayed.
In the embodiment of the application, Houdini parameter previewing can be performed; specifically, the HDA parameters can be saved or saved-as to Json. The save operation overwrites the Json file declared in the HDA parm. The save-as operation refers to the existing Json and saves a new file; the path defaults to that of the Json file declared in the original HDA parm, but other file paths may be used. The scene raw data is stored in Json, enabling data write-in, write-out, and visualization.
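The save versus save-as behaviour can be sketched as follows. The function name and parameter layout are assumptions: "save" overwrites the Json path declared in the HDA parm, while "save as" defaults to that path but allows another.

```python
import json
import os
import tempfile

def save_params(params, declared_path, save_as_path=None):
    """Write HDA parameters to Json. With no save_as_path this is "save"
    (overwrite the declared path); otherwise it is "save as"."""
    path = save_as_path or declared_path
    with open(path, "w") as f:
        json.dump(params, f)
    return path

tmp = tempfile.mkdtemp()
declared = os.path.join(tmp, "hda_params.json")   # path declared in the HDA parm
save_params({"height": 2.0}, declared)            # save: overwrites declared path
copy = save_params({"height": 2.0}, declared,
                   save_as_path=os.path.join(tmp, "v2.json"))  # save as
```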
In one embodiment, this may be described in terms of a specific data-flow process; for example, a specially structured repository may be used to hold the scene raw data. Items are listed under the Meta folder in units of subsystems, and the Meta data of each subsystem is listed one by one under that subsystem's folder; game makers can edit the Json corresponding to these Meta files using the Meta editor. The game maker then enters the corresponding system in the editor, and the system loads the Meta files under it to generate the corresponding editing interfaces (these interfaces are dynamically regenerated when the user edits the Meta at runtime). After the game maker finishes editing and selects export, the Json data corresponding to the exported Meta attributes enters each subsystem. For example, the curve track system has several Meta types defined under it, such as town tracks, physical lines, and field tracks. The data-driven implementation principle is that the program scans the track system and reflects the different curve types under the track tool; the game maker selects a curve type, creates it in the scene, and chooses export, whereupon the corresponding folder is generated in the Json folder of the scene raw data. The game maker can also modify the fields contained in the Meta data of the various curves and export again at runtime, realizing dynamic updates. The embodiment of the application thus decouples the game maker's editing from the PCG algorithm: the game maker can freely modify the Meta data to derive attribute data in any form, without being bound to the HDA the maker has built, realizing the separation of the two functions.
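The folder scan that drives this reflection can be sketched as follows. The on-disk layout (one subfolder per subsystem, one Json file per Meta type) follows the description above, while the concrete names mirror the track-system examples from the text.

```python
import os
import tempfile

def scan_meta_repository(meta_root):
    """Return {subsystem: [meta type names]} by scanning the Meta folder:
    one subfolder per subsystem, one .json Meta file per curve type."""
    systems = {}
    for subsystem in sorted(os.listdir(meta_root)):
        sub_path = os.path.join(meta_root, subsystem)
        if os.path.isdir(sub_path):
            systems[subsystem] = sorted(
                os.path.splitext(name)[0]
                for name in os.listdir(sub_path) if name.endswith(".json"))
    return systems

# Build an example layout: Meta/track/{town_track,field_track,physical_line}.json
root = tempfile.mkdtemp()
track_dir = os.path.join(root, "track")
os.makedirs(track_dir)
for meta_name in ("town_track", "field_track", "physical_line"):
    open(os.path.join(track_dir, meta_name + ".json"), "w").close()

curve_types = scan_meta_repository(root)
```

Because the tool reflects whatever Meta files are present, adding a new curve type is a data change rather than a code change.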
In addition, the game maker does not need to rely on the program hard-coding types such as town, track, or physical line; the game maker can customize and dynamically construct the required data types. In an embodiment of the present application, the subsystems may further include a river system, a town system, a road system, a rock system, etc.
In summary, the embodiment of the present application provides a method for processing a virtual model: a scene editing page is displayed, where preset scene resources of a plurality of scene types are displayed in the scene editing page, and a corresponding preset metadata file is configured for the preset scene resources of each scene type; then, basic model data is acquired in response to a triggering operation on the scene editing page; then, in response to a selection operation on a target preset scene resource of a target scene type among the plurality of preset scene resources, the basic model data is bound with the preset metadata file of the target preset scene resource to obtain a bound target model file; and finally, a model generation operation is performed based on the bound target model file to generate a virtual model of the target scene type. According to the embodiment of the application, the metadata attribute system is introduced into the scene PCG tool system, and Meta attributes can be adjusted directly in the PCG tool system, so that the types of the corresponding virtual scenes or virtual models can be adjusted; this saves time cost and human resources, allows virtual models of the target scene type to be generated in batches, and effectively improves the efficiency of making and editing virtual scene models.
In order to better implement the above method, the embodiment of the present application may also provide a processing apparatus for a virtual model, where the processing apparatus for a virtual model may be specifically integrated in a computer device, for example, may be a computer device such as a terminal.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a virtual model processing apparatus according to an embodiment of the present application, where the apparatus includes:
A display unit 201, configured to display a scene editing page, where preset scene resources of multiple scene types are displayed in the scene editing page, where a preset scene resource of a scene type is correspondingly configured with a corresponding preset metadata file;
An obtaining unit 202, configured to obtain basic model data in response to a triggering operation for the scene editing page;
a binding unit 203, configured to respond to a selection operation for a target preset scene resource of the target scene types in the plurality of preset scene resources, and perform a binding operation on the base model data and a preset metadata file of the target preset scene resource, so as to obtain a bound target model file;
and the generating unit 204 is configured to perform a model generating operation based on the bound target model file, and generate a virtual model of the target scene type.
In some embodiments, the processing means of the virtual model comprises:
And the first acquisition subunit is used for responding to the triggering operation for the scene editing page and acquiring basic curve data, wherein the basic curve data comprises control point data and line segment data.
In some embodiments, the processing means of the virtual model comprises:
The first binding subunit is configured to respond to a selection operation for a target preset scene resource of the target scene types in the plurality of preset scene resources, and bind the base curve data with a preset metadata file of the target preset scene resource to obtain a bound curve file.
In some embodiments, the processing means of the virtual model comprises:
And the first generation subunit is used for performing model generation operation based on the bound curve file to generate a virtual curve model of the target scene type.
In some embodiments, the processing means of the virtual model comprises:
And the second acquisition subunit is used for responding to the triggering operation of the scene editing page and acquiring the basic building model data.
In some embodiments, the processing means of the virtual model comprises:
And the second binding subunit is used for responding to the selection operation of the target preset scene resource aiming at the target scene type in the plurality of preset scene resources, and binding the basic building model data with the preset metadata file of the target preset scene resource to obtain a bound building model file.
In some embodiments, the processing means of the virtual model comprises:
and the second generation subunit is used for performing model generation operation based on the bound building model file to generate a virtual building model of the target scene type.
In some embodiments, the processing apparatus of the virtual model includes a setting unit configured to set:
The scene types include terrain types, river types, town types, road types, rock types, and vegetation types.
In some embodiments, the processing means of the virtual model comprises:
A third obtaining subunit, configured to obtain a preset metadata file of a target preset scene resource in response to a selection operation of the target preset scene resource for a target scene type in the plurality of preset scene resources;
A determining subunit, configured to determine a corresponding preset metadata attribute from the preset metadata file;
and the third binding subunit is used for binding the basic model data with the preset metadata file of the target preset scene resource to obtain a bound target model file so as to bind the preset metadata attribute with the basic model data.
The embodiment of the application discloses a processing apparatus for a virtual model: the display unit 201 displays a scene editing page, where preset scene resources of a plurality of scene types are displayed in the scene editing page, and a corresponding preset metadata file is configured for the preset scene resources of each scene type; the acquisition unit 202 acquires basic model data in response to a triggering operation for the scene editing page; the binding unit 203 performs a binding operation on the basic model data and the preset metadata file of a target preset scene resource in response to a selection operation on the target preset scene resource of the target scene type among the plurality of preset scene resources, to obtain a bound target model file; and the generating unit 204 performs a model generation operation based on the bound target model file to generate a virtual model of the target scene type. According to the embodiment of the application, the metadata attribute system can be introduced into the scene PCG tool system, and Meta attributes can be adjusted directly in the PCG tool system, so that the types of the corresponding virtual scenes or virtual models can be adjusted; this saves time cost and human resources, allows virtual models of the target scene type to be generated in batches, and effectively improves the efficiency of making and editing virtual scene models.
Correspondingly, the embodiment of the present application also provides a computer device, which may be a terminal or a server, where the terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game console, a Personal Computer (PC), or a Personal Digital Assistant (PDA). As shown in fig. 4, fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 300 includes a processor 301 having one or more processing cores, a memory 302 having one or more computer-readable storage media, and a computer program stored on the memory 302 and executable on the processor. The processor 301 is electrically connected to the memory 302. It will be appreciated by those skilled in the art that the computer device structure shown in the figure does not limit the computer device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
Processor 301 is a control center of computer device 300 and utilizes various interfaces and lines to connect various portions of the overall computer device 300, and to perform various functions of computer device 300 and process data by running or loading software programs and/or modules stored in memory 302 and invoking data stored in memory 302, thereby performing overall monitoring of computer device 300.
In the embodiment of the present application, the processor 301 in the computer device 300 loads the instructions corresponding to the processes of one or more application programs into the memory 302 according to the following steps, and the processor 301 executes the application programs stored in the memory 302, so as to implement various functions:
Displaying a scene editing page, wherein the scene editing page displays a plurality of scene type preset scene resources, and the preset scene resources of one scene type are correspondingly configured with corresponding preset metadata files;
responding to triggering operation aiming at the scene editing page, and acquiring basic model data;
Responding to a selection operation of a target preset scene resource of a target scene type in the plurality of preset scene resources, and binding the basic model data with a preset metadata file of the target preset scene resource to obtain a bound target model file;
And performing model generation operation based on the bound target model file to generate a virtual model of the target scene type.
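The four steps above can be condensed into a small end-to-end sketch; every structure here (the resource dictionary, the bound-file dictionary) is an illustrative assumption rather than the actual implementation.

```python
def process_virtual_model(base_model_data, preset_resources, target_type):
    """Sketch of the flow: base model data is bound to the selected preset
    scene resource's Meta file, then a model of that scene type is generated."""
    # Select the target preset scene resource and bind its Meta file.
    resource = preset_resources[target_type]
    bound_file = {"model": base_model_data, "meta": resource["meta_file"]}
    # Generate a virtual model of the target scene type from the bound file.
    return {"scene_type": target_type, "source": bound_file}

# Hypothetical inputs: curve-style base model data and one river-type resource.
model = process_virtual_model(
    {"points": [(0, 0), (1, 1)]},
    {"river": {"meta_file": "river.meta.json"}},
    "river")
```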
In an embodiment, the obtaining basic model data in response to a triggering operation for the scene editing page includes:
and responding to the triggering operation aiming at the scene editing page, and acquiring basic curve data, wherein the basic curve data comprises control point data and line segment data.
In an embodiment, in response to a selection operation for a target preset scene resource of the target scene type among the plurality of preset scene resources, performing a binding operation on the basic model data and the preset metadata file of the target preset scene resource to obtain a bound target model file includes:
And in response to a selection operation of a target preset scene resource of the target scene types in the plurality of preset scene resources, binding the basic curve data with a preset metadata file of the target preset scene resource to obtain a bound curve file.
In an embodiment, performing a model generation operation based on the bound target model file to generate a virtual model of the target scene type includes:
And performing model generation operation based on the bound curve file to generate a virtual curve model of the target scene type.
In an embodiment, the obtaining basic model data in response to a triggering operation for the scene editing page includes:
and responding to the triggering operation of the scene editing page, and acquiring basic building model data.
In an embodiment, in response to a selection operation for a target preset scene resource of the target scene type among the plurality of preset scene resources, performing a binding operation on the basic model data and the preset metadata file of the target preset scene resource to obtain a bound target model file includes:
And in response to a selection operation of a target preset scene resource of the target scene types in the plurality of preset scene resources, binding the basic building model data with a preset metadata file of the target preset scene resource to obtain a bound building model file.
In an embodiment, performing a model generation operation based on the bound target model file to generate a virtual model of the target scene type includes:
and performing model generation operation based on the bound building model file to generate a virtual building model of the target scene type.
In an embodiment, the scene type includes a terrain type, a river type, a town type, a road type, a rock type, and a vegetation type.
In an embodiment, in response to a selection operation for a target preset scene resource of the target scene type among the plurality of preset scene resources, performing a binding operation on the basic model data and the preset metadata file of the target preset scene resource to obtain a bound target model file includes:
responding to a selection operation of a target preset scene resource of a target scene type in the plurality of preset scene resources, and acquiring a preset metadata file of the target preset scene resource;
Determining corresponding preset metadata attributes from the preset metadata files;
And binding the basic model data with a preset metadata file of the target preset scene resource to obtain a bound target model file so as to bind the preset metadata attribute with the basic model data.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Optionally, as shown in fig. 4, the computer device 300 further includes: a touch display 303, a radio frequency circuit 304, an audio circuit 305, an input unit 306, and a power supply 307. The processor 301 is electrically connected to the touch display 303, the radio frequency circuit 304, the audio circuit 305, the input unit 306, and the power supply 307, respectively. Those skilled in the art will appreciate that the computer device structure shown in FIG. 4 is not limiting of the computer device and may include more or fewer components than shown, or may be combined with certain components, or a different arrangement of components.
The touch display 303 may be used to display a graphical user interface and receive operation instructions generated by a user acting on the graphical user interface. The touch display 303 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user as well as various graphical user interfaces of the computer device, which may be composed of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. The touch panel may be used to collect touch operations on or near it (such as operations performed by the user on or near the touch panel using a finger, stylus, or any other suitable object or accessory) and generate corresponding operation instructions, which trigger the corresponding programs. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, and sends the coordinates to the processor 301, and can also receive and execute commands sent from the processor 301. The touch panel may overlay the display panel; upon detecting a touch operation on or near it, the touch panel passes the operation to the processor 301 to determine the type of touch event, and the processor 301 then provides a corresponding visual output on the display panel according to the type of touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 303 to realize the input and output functions.
In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions. That is, the touch display 303 may also implement an input function as part of the input unit 306.
In an embodiment of the present application, the processor 301 executes an application program to generate a graphical interface on the touch display 303. The touch display 303 is used for presenting a graphical interface and receiving an operation instruction generated by a user acting on the graphical interface.
The radio frequency circuitry 304 may be used to transceive radio frequency signals to establish wireless communications with a network device or other computer device via wireless communications.
The audio circuit 305 may be used to provide an audio interface between a user and a computer device through a speaker, microphone. The audio circuit 305 may transmit the received electrical signal after audio data conversion to a speaker, and convert the electrical signal into a sound signal for output by the speaker; on the other hand, the microphone converts the collected sound signals into electrical signals, which are received by the audio circuit 305 and converted into audio data, which are processed by the audio data output processor 301 for transmission to, for example, another computer device via the radio frequency circuit 304, or which are output to the memory 302 for further processing. The audio circuit 305 may also include an ear bud jack to provide communication of the peripheral ear bud with the computer device.
The input unit 306 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 307 is used to power the various components of the computer device 300. Alternatively, the power supply 307 may be logically connected to the processor 301 through a power management system, so as to perform functions of managing charging, discharging, and power consumption management through the power management system. The power supply 307 may also include one or more of any components, such as a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown in fig. 4, the computer device 300 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described herein.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
As can be seen from the above, the computer device provided in this embodiment displays a scene editing page, where preset scene resources of a plurality of scene types are displayed in the scene editing page, and a corresponding preset metadata file is configured for the preset scene resources of each scene type; then, basic model data is acquired in response to a triggering operation on the scene editing page; then, in response to a selection operation on a target preset scene resource of a target scene type among the plurality of preset scene resources, the basic model data is bound with the preset metadata file of the target preset scene resource to obtain a bound target model file; and finally, a model generation operation is performed based on the bound target model file to generate a virtual model of the target scene type. According to the embodiment of the application, the metadata attribute system is introduced into the scene PCG tool system, and Meta attributes can be adjusted directly in the PCG tool system, so that the types of the corresponding virtual scenes or virtual models can be adjusted; this saves time cost and human resources, allows virtual models of the target scene type to be generated in batches, and effectively improves the efficiency of making and editing virtual scene models.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer readable storage medium storing a plurality of computer programs capable of being loaded by a processor to execute steps in any one of the virtual model processing methods provided by the embodiment of the present application. For example, the computer program may perform the steps of:
Displaying a scene editing page, wherein the scene editing page displays a plurality of scene type preset scene resources, and the preset scene resources of one scene type are correspondingly configured with corresponding preset metadata files;
responding to triggering operation aiming at the scene editing page, and acquiring basic model data;
Responding to a selection operation of a target preset scene resource of a target scene type in the plurality of preset scene resources, and binding the basic model data with a preset metadata file of the target preset scene resource to obtain a bound target model file;
And performing model generation operation based on the bound target model file to generate a virtual model of the target scene type.
In an embodiment, the obtaining basic model data in response to a triggering operation for the scene editing page includes:
and responding to the triggering operation aiming at the scene editing page, and acquiring basic curve data, wherein the basic curve data comprises control point data and line segment data.
In an embodiment, in response to a selection operation for a target preset scene resource of the target scene type among the plurality of preset scene resources, performing a binding operation on the basic model data and the preset metadata file of the target preset scene resource to obtain a bound target model file includes:
And in response to a selection operation of a target preset scene resource of the target scene types in the plurality of preset scene resources, binding the basic curve data with a preset metadata file of the target preset scene resource to obtain a bound curve file.
In an embodiment, performing a model generation operation based on the bound target model file to generate a virtual model of the target scene type includes:
And performing model generation operation based on the bound curve file to generate a virtual curve model of the target scene type.
In an embodiment, the obtaining basic model data in response to a triggering operation for the scene editing page includes:
and responding to the triggering operation of the scene editing page, and acquiring basic building model data.
In an embodiment, in response to a selection operation for a target preset scene resource of the target scene type among the plurality of preset scene resources, performing a binding operation on the basic model data and the preset metadata file of the target preset scene resource to obtain a bound target model file includes:
And in response to a selection operation of a target preset scene resource of the target scene types in the plurality of preset scene resources, binding the basic building model data with a preset metadata file of the target preset scene resource to obtain a bound building model file.
In an embodiment, performing a model generation operation based on the bound target model file to generate a virtual model of the target scene type includes:
and performing model generation operation based on the bound building model file to generate a virtual building model of the target scene type.
In an embodiment, the scene type includes a terrain type, a river type, a town type, a road type, a rock type, and a vegetation type.
In an embodiment, the binding, in response to a selection operation of a target preset scene resource of a target scene type among the plurality of preset scene resources, the basic model data with a preset metadata file of the target preset scene resource to obtain a bound target model file includes:
acquiring, in response to a selection operation of a target preset scene resource of the target scene type among the plurality of preset scene resources, a preset metadata file of the target preset scene resource;
determining corresponding preset metadata attributes from the preset metadata file; and
binding the basic model data with the preset metadata file of the target preset scene resource to obtain a bound target model file, so that the preset metadata attributes are bound with the basic model data.
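The three steps above (obtain the preset metadata file, determine its attributes, attach them to the basic model data) could look as follows, assuming the metadata file is stored as JSON with a hypothetical `"attributes"` key — neither the file format nor the function names come from the patent.

```python
import json

def load_preset_metadata(path: str) -> dict:
    """Step 1: obtain the preset metadata file of the target scene resource."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

def determine_attributes(metadata_file: dict) -> dict:
    """Step 2: determine the preset metadata attributes from the file
    (here: everything under a hypothetical 'attributes' key)."""
    return metadata_file.get("attributes", {})

def bind_model_data(model_data: dict, metadata_file: dict) -> dict:
    """Step 3: bind the basic model data with the metadata file, so that
    the preset metadata attributes are attached to the model data."""
    bound = dict(model_data)
    bound["meta_attributes"] = determine_attributes(metadata_file)
    return bound

# Example with an in-memory metadata file for a hypothetical 'road' resource
road_meta_file = {"attributes": {"scene_type": "road", "lanes": 2}}
bound = bind_model_data({"control_points": 5}, road_meta_file)
```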
For the specific implementation of each of the above operations, reference may be made to the previous embodiments, and details are not repeated here.
The storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
Because the computer program stored in the storage medium can execute the steps in any virtual model processing method provided by the embodiments of the present application, the following is achieved: a scene editing page is displayed, where the scene editing page displays preset scene resources of a plurality of scene types, and the preset scene resources of each scene type are configured with a corresponding preset metadata file; then, in response to a triggering operation for the scene editing page, basic model data is acquired; next, in response to a selection operation of a target preset scene resource of a target scene type among the plurality of preset scene resources, the basic model data is bound with a preset metadata file of the target preset scene resource to obtain a bound target model file; finally, a model generation operation is performed based on the bound target model file to generate a virtual model of the target scene type. According to the embodiments of the present application, a metadata attribute system is introduced into the scene PCG tool system, and the Meta attributes can be adjusted directly in the PCG tool system, so that the types of the corresponding virtual scenes or virtual models can be adjusted. This saves time and human resources, allows virtual models of the target scene type to be generated in batches, and effectively improves the efficiency of creating and editing virtual scene models.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The virtual model processing method, apparatus, computer device, and storage medium provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the technical solutions and core ideas of the present application. Those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the present application.

Claims (12)

1. A method for processing a virtual model, comprising:
displaying a scene editing page, wherein the scene editing page displays preset scene resources of a plurality of scene types, and the preset scene resources of each scene type are configured with a corresponding preset metadata file;
acquiring basic model data in response to a triggering operation for the scene editing page;
binding, in response to a selection operation of a target preset scene resource of a target scene type among the plurality of preset scene resources, the basic model data with a preset metadata file of the target preset scene resource to obtain a bound target model file; and
performing a model generation operation based on the bound target model file to generate a virtual model of the target scene type.
2. The method for processing a virtual model according to claim 1, wherein the acquiring basic model data in response to a triggering operation for the scene editing page includes:
acquiring basic curve data in response to a triggering operation for the scene editing page, wherein the basic curve data includes control point data and line segment data.
3. The method for processing a virtual model according to claim 2, wherein the binding, in response to a selection operation of a target preset scene resource of the target scene type among the plurality of preset scene resources, the basic model data with a preset metadata file of the target preset scene resource to obtain a bound target model file includes:
binding, in response to a selection operation of a target preset scene resource of the target scene type among the plurality of preset scene resources, the basic curve data with a preset metadata file of the target preset scene resource to obtain a bound curve file.
4. The method for processing a virtual model according to claim 3, wherein the performing a model generation operation based on the bound target model file to generate a virtual model of the target scene type includes:
performing a model generation operation based on the bound curve file to generate a virtual curve model of the target scene type.
5. The method for processing a virtual model according to claim 1, wherein the acquiring basic model data in response to a triggering operation for the scene editing page includes:
acquiring basic building model data in response to a triggering operation for the scene editing page.
6. The method for processing a virtual model according to claim 5, wherein the binding, in response to a selection operation of a target preset scene resource of the target scene type among the plurality of preset scene resources, the basic model data with a preset metadata file of the target preset scene resource to obtain a bound target model file includes:
binding, in response to a selection operation of a target preset scene resource of the target scene type among the plurality of preset scene resources, the basic building model data with a preset metadata file of the target preset scene resource to obtain a bound building model file.
7. The method for processing a virtual model according to claim 6, wherein the performing a model generation operation based on the bound target model file to generate a virtual model of the target scene type includes:
performing a model generation operation based on the bound building model file to generate a virtual building model of the target scene type.
8. The method for processing a virtual model according to claim 1, wherein the scene types include a terrain type, a river type, a town type, a road type, a rock type, and a vegetation type.
9. The method for processing a virtual model according to claim 1, wherein the binding, in response to a selection operation of a target preset scene resource of the target scene type among the plurality of preset scene resources, the basic model data with a preset metadata file of the target preset scene resource to obtain a bound target model file includes:
acquiring, in response to a selection operation of a target preset scene resource of the target scene type among the plurality of preset scene resources, a preset metadata file of the target preset scene resource;
determining corresponding preset metadata attributes from the preset metadata file; and
binding the basic model data with the preset metadata file of the target preset scene resource to obtain a bound target model file, so that the preset metadata attributes are bound with the basic model data.
10. A processing apparatus for a virtual model, comprising:
a display unit, configured to display a scene editing page, wherein the scene editing page displays preset scene resources of a plurality of scene types, and the preset scene resources of each scene type are configured with a corresponding preset metadata file;
an acquisition unit, configured to acquire basic model data in response to a triggering operation for the scene editing page;
a binding unit, configured to bind, in response to a selection operation of a target preset scene resource of a target scene type among the plurality of preset scene resources, the basic model data with a preset metadata file of the target preset scene resource to obtain a bound target model file; and
a generation unit, configured to perform a model generation operation based on the bound target model file to generate a virtual model of the target scene type.
11. A computer device, comprising a processor, a memory, and a computer program stored on the memory and runnable on the processor, wherein the computer program, when executed by the processor, implements the method for processing a virtual model according to any one of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method for processing a virtual model according to any one of claims 1 to 9.
CN202410241523.9A 2024-03-04 2024-03-04 Virtual model processing method and device, computer equipment and storage medium Pending CN118001740A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410241523.9A CN118001740A (en) 2024-03-04 2024-03-04 Virtual model processing method and device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN118001740A true CN118001740A (en) 2024-05-10

Family

ID=90944322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410241523.9A Pending CN118001740A (en) 2024-03-04 2024-03-04 Virtual model processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN118001740A (en)

Similar Documents

Publication Publication Date Title
JP7189152B2 (en) 3D environment authoring and generation
US10534605B2 (en) Application system having a gaming engine that enables execution of a declarative language
CN110533755B (en) Scene rendering method and related device
CN112037311A (en) Animation generation method, animation playing method and related device
CN112233211B (en) Animation production method, device, storage medium and computer equipment
CN111464430B (en) Dynamic expression display method, dynamic expression creation method and device
US11887229B2 (en) Method and system for populating a digital environment using a semantic map
CN105074640A (en) Engaging presentation through freeform sketching
CN111936966A (en) Design system for creating graphical content
CN112138380A (en) Method and device for editing data in game
US20150269781A1 (en) Rapid Virtual Reality Enablement of Structured Data Assets
CN113138996A (en) Statement generation method and device
CN118001740A (en) Virtual model processing method and device, computer equipment and storage medium
CN112348955B (en) Object rendering method
CN115222904A (en) Terrain processing method and device, electronic equipment and readable storage medium
CN114797109A (en) Object editing method and device, electronic equipment and storage medium
CN115049804B (en) Editing method, device, equipment and medium for virtual scene
CN116385609A (en) Method and device for processing special effect animation, computer equipment and storage medium
Ling et al. Enhance the usability of cartographic visualization system by user-centered interface design
Yan et al. “Dawn of South Lake”——Design and Implementation of Immersive Interactive System Based on Virtual Reality Technology
CN117893704A (en) Curve instance processing method, device, computer equipment and storage medium
CN116109737A (en) Animation generation method, animation generation device, computer equipment and computer readable storage medium
Van de Broek et al. Perspective Chapter: Evolution of User Interface and User Experience in Mobile Augmented and Virtual Reality Applications
CN114159798A (en) Scene model generation method and device, electronic equipment and storage medium
CN115391278A (en) Resource processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination