CN115170707B - 3D image implementation system and method based on application program framework

Info

Publication number: CN115170707B
Authority: CN (China)
Prior art keywords: data, attribute, scene, format, file
Legal status: Active
Application number: CN202210814006.7A
Other languages: Chinese (zh)
Other versions: CN115170707A
Inventor: name withheld at the inventor's request
Current assignee: Shanghai Bilibili Technology Co., Ltd.
Original assignee: Shanghai Bilibili Technology Co., Ltd.
Application filed by Shanghai Bilibili Technology Co., Ltd.
Priority to CN202210814006.7A
Publication of CN115170707A
Application granted
Publication of CN115170707B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 — Animation
    • G06T 13/20 — 3D [Three Dimensional] animation
    • G06T 13/40 — 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T 15/00 — 3D [Three Dimensional] image rendering
    • G06T 15/005 — General purpose rendering architectures
    • G06T 19/00 — Manipulating 3D models or images for computer graphics
    • G06T 19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The application discloses a 3D image implementation method, comprising the following steps: loading a target format file; acquiring data for constructing a 3D image from the target format file; and providing various capabilities matched to the data, or an external interface, to supply basic capability support for the Runtime. The target format file is associated with a target format compatible with the GLTF format; the target format is obtained by defining extension field information of the GLTF format; and function code and/or effect data corresponding to newly added attributes are exported through the various capabilities or the external interface. Embodiments of the application also provide an application-framework-based 3D image implementation system, apparatus, device, and computer-readable storage medium. The method and system can construct a 3D image, are compatible with files in the GLTF format, and provide a number of functional extensions through the service layer, so that every attribute of the target format file, including the newly added attributes, can be supported.

Description

3D image implementation system and method based on application program framework
Technical Field
The embodiments of the application relate to the field of image processing, and in particular to an application-framework-based 3D image implementation system, as well as a 3D image implementation method, apparatus, computer device, and computer-readable storage medium.
Background
With the development of computer technology, three-dimensional images have become increasingly popular with users. Three-dimensional model formats have therefore been proposed and widely applied to scenes such as live streaming and games, realizing a variety of three-dimensional visual designs. However, existing three-dimensional model formats cannot satisfy every application scenario, and their support for extension is poor.
Disclosure of Invention
An object of the embodiments of the present application is to provide a 3D image implementation system based on an application framework, and a 3D image implementation method, apparatus, computer device and computer readable storage medium for solving the above problems.
An aspect of an embodiment of the present application provides a 3D image implementation system based on an application framework, including:
the Runtime layer is used for loading a target format file;
a data providing layer for acquiring data for constructing a 3D image according to the object format file;
the service layer is used for providing various capabilities matched with the data or an external interface and providing basic capability support for Runtime;
the target format file is associated with a target format compatible with a GLTF format, the target format is obtained by defining extension field information of the GLTF format, and the service layer is used for exporting function codes/effect data corresponding to the newly added attribute.
Optionally, the Runtime layer is further configured to:
managing a plurality of scenes; wherein the managing comprises delaying loading of other scenes than the main scene.
Optionally, the service layer is configured to:
deriving character data, scene data, sky box data, UI data, and/or scripts;
wherein the role data comprises: the function code corresponding to the new attribute;
wherein the scene data includes: and the effect data of each element in the scene corresponding to the newly added attribute.
Optionally, the service layer is configured to:
for 3D live or game services, face capture and motion capture are adapted for characters, or characters are treated as manipulated objects for interacting with the scene.
Optionally, the service layer is configured to:
a file production service is provided for an object format file into which the 3D image data has been imported.
Optionally, the system further comprises an input operation layer, configured to:
defining an interaction attribute of the object, wherein the interaction attribute is used for indicating whether the object can interact or not;
defining events for external input and binding thereof, wherein the external input comprises keyboard operation and/or mouse operation;
defining events with which object actions in the 3D scene are bound;
the method comprises the steps of defining an input behavior of a preset monitoring interface and a custom event bound with the input behavior of the preset monitoring interface.
Optionally, the input operation layer further defines: whether the binding relationship between the object and the event is a global property or a local property;
when a target event generated for the object is monitored, a response corresponding to the target event is executed.
Optionally, the system further comprises a tool set layer for:
and the tools are used for adding the component data and the material data supported by the target format.
One aspect of the present embodiment further provides a method for implementing a 3D image, including:
loading a target format file;
acquiring data for constructing a 3D image according to the target format file;
providing various capabilities matched to the data, or an external interface, to supply basic capability support for the Runtime;
wherein the target format file is associated with a target format compatible with the GLTF format, the target format is obtained by defining extension field information of the GLTF format, and function code and/or effect data corresponding to the newly added attributes are exported through the various capabilities or the external interface.
An aspect of an embodiment of the present application further provides a 3D image implementation apparatus, which includes:
The loading module is used for loading the target format file;
the acquisition module is used for acquiring data for constructing a 3D image according to the target format file;
the service module is used for providing various capabilities matched with the data or an external interface and providing basic capability support for Runtime;
the service module is used for exporting function codes/effect data corresponding to the newly added attribute.
An aspect of the embodiments of the present application further provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor is configured to implement the steps of the 3D image implementation method as described above when executing the computer program.
An aspect of embodiments of the present application further provides a computer-readable storage medium, in which a computer program is stored, the computer program being executable by at least one processor to cause the at least one processor to perform the steps of the 3D image implementation method as described above.
The system, the method, the device, the equipment and the computer readable storage medium provided by the embodiment of the application have the following advantages:
the 3D image is constructed based on the target format file, the file in the GLTF format can be compatible, and a plurality of functional extensions are provided through the service layer, so that each attribute including the newly added attribute of the target format file can be supported.
Drawings
Fig. 1 schematically illustrates an application environment diagram of a 3D image implementation method according to an embodiment of the present application;
Fig. 2 schematically illustrates a software framework diagram;
Fig. 3 schematically illustrates an application framework diagram;
Fig. 4 schematically shows a flowchart of a 3D image implementation method according to a second embodiment of the present application;
Fig. 5 schematically shows a block diagram of a 3D image implementation apparatus according to a third embodiment of the present application;
Fig. 6 schematically shows a hardware architecture diagram of a computer device suitable for implementing a 3D image implementation method according to a fourth embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application more clearly understood, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative and do not limit the application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present application.
It should be noted that the descriptions involving "first", "second", and the like in the embodiments of the present application are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments may be combined with one another, but only where a person skilled in the art can realize the combination; when technical solutions are contradictory or cannot be realized, the combination should be considered not to exist and falls outside the protection scope of the present application.
In the description of the present application, it should be understood that the numerical references before the steps do not identify the order of performing the steps, but merely serve to facilitate the description of the present application and to distinguish each step, and therefore should not be construed as limiting the present application.
In order to facilitate those skilled in the art to understand the technical solutions provided in the embodiments of the present application, the following description is provided for the related technologies:
several 3D file formats are currently known: FBX, DAZ, USD, assetBundle, pak, MMD, VRM, etc.
FBX, DAZ, USD, and similar formats: these cannot be loaded at runtime; a game engine must generate intermediate data from them in advance for runtime rendering, and the files cannot be sent directly to a user terminal as a distribution carrier. They are therefore better suited as production tools than as consumption carriers, and remain limited to professional productivity uses.
AssetBundle, Pak, and similar formats: these are strongly bound to the engine version, so an engine upgrade forces all resources to be repackaged, which makes them unsuitable for products built around player creation. They are also strongly tied to the operating system, so resource packages for different platforms are not interchangeable and must be generated separately. They cannot be distributed or traded as independent resources and cannot be endowed with the value of a virtual asset; they cannot be exported at runtime, cannot be modified for re-creation, and their resources cannot be reused.
MMD (MikuMikuDance) format: designed for 3D animated movie scenes, it is supported only as a project exporting video in its dedicated tools and carries commercial licensing restrictions; no ecosystem supports its application in games or VTuber (virtual YouTuber / virtual UP-master) use.
The VRM format: used for virtual live streaming and social VR games, but it contains only character data, cannot be extended to larger usage scenes, and renders poorly. It also has regional limitations: for example, mouth-shape adaptation covers only Japanese, and shaders support only MToon (a cartoon shader with global illumination), Unlit (an unlit material shader), and PBR (Physically Based Rendering). Its extension flexibility is poor: for example, animations and scene loading are not supported, and third parties cannot extend its functionality, which hinders the development of VTubers.
As mentioned above, each of the 3D file formats above has certain limitations. To let players create 3D scenes with a high degree of freedom and share and trade them, without the use being affected by technical factors such as the operating system, tool type, and tool version, the application proposes a new file format. The format is independent of operating system, tool type, and version; it is easy to use, create, and modify; and it is convenient to load and export at runtime.
The new file format (the target format) retains the original specification of the GLTF format and develops its functions in the Extensions and Extras fields. It is compatible with existing GLTF files and guarantees that the JSON Schema of standard GLTF is not broken, so files can still be opened and modified by other tools. The preview behavior of conventional GLTF tools on such files is preserved, so non-dedicated tools retain a degree of preview and editing capability; the minimum data structure of the file is guaranteed, with default data used for unsupported fields. Heavily reused data is not stored in the Extras field: data with strong universality and reusability is stored in Extensions. To optimize file loading speed and reduce memory occupation, two different loading mechanisms are provided to suit different usage scenarios.
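To make this concrete, the sketch below (TypeScript) shows the shape such a target-format document might take; the extension name EXT_vendor_audio and all field names are illustrative assumptions, not identifiers taken from the patent.

```typescript
// Minimal sketch of a target-format document: standard glTF JSON with the
// added data carried in "extensions" and "extras". The extension name
// "EXT_vendor_audio" and its fields are assumptions for illustration only.
interface GltfDocument {
  asset: { version: string };
  extensionsUsed?: string[];
  scenes?: { nodes: number[]; extras?: Record<string, unknown> }[];
  nodes?: { name?: string; mesh?: number }[];
  extensions?: Record<string, unknown>;
}

const targetFormatFile: GltfDocument = {
  asset: { version: "2.0" },            // untouched glTF core: other tools still open it
  extensionsUsed: ["EXT_vendor_audio"], // declares the added capability
  scenes: [{ nodes: [0], extras: { sceneTag: "main" } }], // one-off data in Extras
  nodes: [{ name: "Avatar", mesh: 0 }],
  extensions: {
    // highly reusable data lives in Extensions
    EXT_vendor_audio: { audios: [{ uri: "bgm.mp3", mimeType: "audio/mpeg" }] },
  },
};

// A standard glTF loader that ignores unknown extensions can still parse this.
console.log(JSON.stringify(targetFormatFile, null, 2));
```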
The present application is directed to providing an application framework that supports the new file format described above to present and edit 3D images of the new file format.
The following are the term explanations of the present application:
a 3D (three-dimensional) image, which is one of image files for storing information on a three-dimensional model. The 3D image includes a three-dimensional model, a 3D animation, and a 3D project file. The 3D image may include model information consisting of polygons and vertices in three-dimensional space interpreted by three-dimensional software, possibly including color, texture, geometry, light source, and shading information. The 3D image file format may be used in VR, 3D printing, games, movie special effects, construction, medicine, and other related scenes.
GLTF (Graphics Language Transmission Format): a three-dimensional computer graphics format and standard supporting the storage of three-dimensional models, appearances, scenes, and animations. It is a simplified, interoperable format for 3D assets that minimizes file size and application processing difficulty. A GLTF asset is external data supported by a JSON file. Specifically, a GLTF asset contains a JSON-format file (.gltf) for the complete scene description: descriptor information for the node hierarchy, materials, cameras, meshes, animations, and other constructs; a binary file (.bin) containing geometry and animation data and other buffer-based data; and textures (.jpg, .png). The 3D objects in the scene are defined using meshes connected to nodes. Materials define the appearance of objects. Animations describe how 3D objects change over time. Skins define how the geometry of an object deforms based on skeletal pose. Cameras describe the view configuration of the renderer.
Resource: may include pictures, shaders, textures, models, animations, etc.
Material is a data set for a renderer to read, representing the interaction of an object with light, and includes a map, an illumination algorithm, and the like.
Texture (Texture), a regular, repeatable bitmap, is the basic unit of data input.
Map (Map), which includes texture and many other information, such as texture coordinate sets, map input output controls, etc. Mapping includes a variety of forms, such as lighting mapping, environmental mapping, reflective mapping, and the like. The illumination map is used for simulating the illumination effect of the surface of the object. The environment map includes six textures, and corresponding texture coordinate sets.
Texture mapping (Texture mapping), which maps a Texture to the surface of a three-dimensional object by a set of coordinates, such as UV coordinates.
AssetBundle: a file storage format supported by Unity is also a resource storage and update mode recommended by Unity officials, and can compress, package and dynamically load resources (Asset) and realize hot update.
FBX: the format is used by the FilmBoX software, and is called Motionbuilder after the format is used. FBX can be used between software such as Max, maya, softimage to do model, material, motion and mutual conductance of camera information.
DAZ: is a file format of a 3D scene created by the modeling program DAZ Studio.
USD (Universal Scene Description): a file format developed by Pixar around the entire animated-film pipeline.
VRM (Virtual Reality Modeling): is a virtual 3D human shape model format.
Avatar: is a human form of the 3D character model.
Metaverse: the meta universe, or called the afterspace, the universe in shape, the hyper-space, and the virtual space, is a network of 3D virtual worlds focused on social links. The meta universe may relate to a persisted and decentralized online three-dimensional virtual environment.
The game engine: the core components of a programmed, editable computer game system or interactive real-time graphics application. These systems provide game designers with the various tools required to build games, so that designers can create games easily and quickly without starting from scratch. Most engines support multiple operating platforms, such as Linux, Mac OS X, and Microsoft Windows. A game engine comprises the following systems: a rendering engine (the "renderer", including 2D and 3D graphics engines), a physics engine, collision detection, sound effects, a scripting engine, computer animation, artificial intelligence, a network engine, and scene management.
Events, including system events and user events. System events are fired by the system. User events are triggered by the user; for example, the user clicks a button to display specific text in a text box. Events drive controls to perform their functions.
The technical solutions provided by the embodiments of the present application are described below by way of exemplary application environments.
Referring to fig. 1, an application environment diagram of a 3D image implementation method according to an embodiment of the present application is shown. The computer device 2 may be configured to run and process 3D files. Computer device 2 may comprise any type of computing device, such as: smart phones, tablet devices, laptop computers, virtual machines, and the like.
The computer device 2 may run an operating system such as Windows, Android™, or iOS.
As shown in fig. 2, a block diagram of an exemplary software architecture for computer device 2 is provided below.
In the software architecture, there are several layers, each layer being responsible for different tasks. The layers communicate with each other through a software interface. In some embodiments, the software architecture may be divided into four layers, from top to bottom, an application layer, an application framework layer, a system library, and a kernel layer. The four layers are described below.
The application layer may include a wide variety of applications, such as video applications, 3D game applications, and the like.
And the application framework layer is used for providing an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions. For example, extensions and support of functionality are provided for various applications for building and exposing 3D graphics.
The system library may include a plurality of functional modules, such as three-dimensional graphics processing software (e.g., the Unity engine) and a 2D graphics engine (e.g., SGL). The three-dimensional graphics processing library is used for three-dimensional graphics drawing, image rendering, compositing, layer processing, and the like.
The kernel layer is a layer between the hardware layer and the software layer. The kernel layer comprises a display driver, a camera driver, an audio driver, a display card driver and the like.
In the following, with the computer device 2 as the hardware carrier, several 3D implementation schemes adapted to the new file format are provided.
Example one
Fig. 3 schematically shows an architecture diagram of an application framework-based 3D image implementation system according to an embodiment of the present application.
The application framework may include:
input actions layer.
And (II) a Data provider layer.
And (III) a Service layer.
And (IV) a Runtime layer.
And (V) tool set (Toolset) layer.
Wherein:
and the input operation layer is used for providing interaction, such as accepting input feedback and generating interaction with objects in the scene.
And the Runtime layer is used for loading the object format file when the program runs.
The data providing layer is used for acquiring data for constructing a 3D image from the target format file; the data may also include audio, video, and other additional data.
And the service layer is used for providing various capabilities matched with the data or an external interface and providing basic capability support for the Runtime.
And the tool set layer provides tools for third-party developers or creators.
A newly added attribute is defined in an attribute extension field of the target format file; the target format file is associated with a target format compatible with the GLTF format; the target format is obtained by defining extension field information of the GLTF format; and the service layer is used for exporting function code and/or effect data corresponding to the newly added attribute.
Each layer is described below.
Input actions layer.
An input operation layer to:
an interaction control layer is provided. Specifically, interaction attributes of the object are defined, and the interaction attributes are used for indicating whether the object can interact or not.
External input (keyboard, mouse, etc.) binding is provided. Specifically, events for external input and binding thereof are defined, the external input including keyboard operation and/or mouse operation.
An intra-scene event binding is provided. In particular, events are defined with which object actions in the 3D scene are bound.
A listening interface for custom events is provided. Specifically, an input behavior of a preset listening interface and a custom event bound to that input behavior are defined. A custom event corresponds to a user-defined event type; for example, if a character gets stuck in a bug and falls out of the map, a custom event is thrown.
In an alternative embodiment, the input operation layer further defines: the binding relationship between the object and the event is a global property or a local property; wherein the local attribute is valid in a specified scene or range.
And when a target event generated for the object is monitored, executing a response corresponding to the target event.
The event can be triggered by external input or internal interaction.
In one exemplary application, the input operation layer may define a listening interface for listening to external inputs in real time. The external input can be realized by a mouse, a touch screen, a touch pad or other sensing elements.
The input operation layer may establish an association between an object and an external input.
For example, the input operation layer defines an association relationship between an object and a keyboard operation.
The keyboard operation can be single or multiple times of knocking and long-time pressing of a certain single key, and can also be combined operation of multiple keys.
For another example, the input operation layer defines an association relationship between an event and a mouse operation.
The mouse operation may be a single or multiple click, drag, dwell in a region, etc.
For example, when a mouse receives a click, the corresponding hardware generates an interrupt. The interrupt may be notified to the input operations layer in the application framework via the kernel layer in the form of a message/primitive event. The input operation layer recognizes the message/primitive event, and determines whether the mouse operation is applied to the target object. When an event which is triggered by mouse clicking is defined on the target object, the event is triggered.
As described above, based on the external input, an event associated with the corresponding object and, in turn, a corresponding response, may be generated.
An example of a response to a partial event of an object in a scene is provided below.
For example, in a moving operation of an object, if the object supports movement, a movement event is generated and the object moves.
For example, if the external input is a switch operation for a switch button, a switch event is generated, and a light effect in a room where the switch button is located is generated based on the switch event.
For example, an object has the attribute of being able to be picked up and then thrown; if the external input is an operation on the object, a corresponding event is generated, and the object is picked up and thrown based on that event.
In another exemplary application, the input operation layer may define an event generated by interaction of objects in a 3D scene.
For example, a character arrives within a scene and objects (lights) turn on automatically.
For example, there is an area around an object (a sofa) in the scene, and when the character moves into the specified range, a click can make the character sit down.
For example, an object has collider properties. The trigger based on the collider generates a collision event if the object is collided by another object in the scene, and further generates a corresponding collider effect based on the collision event.
In another exemplary application, the input action layer may define a custom event on which to add a script.
For example, a character gets stuck in a bug and falls out of the map. Such custom events increase the extensibility of the input operation layer.
Through the input operation layer, objects in the scene can interact with characters, and the export of custom UIs and scripts is supported.
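As an illustration of these bindings, the sketch below models the three kinds of wiring the layer describes — external-input binding, in-scene event binding, and custom events — with a small event bus; every name here is invented for the example and is not taken from the framework's API.

```typescript
// Toy event bus standing in for the input operation layer's bindings.
type Handler = (payload?: unknown) => void;

class EventBus {
  private handlers = new Map<string, Handler[]>();
  on(event: string, h: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(h);
    this.handlers.set(event, list);
  }
  emit(event: string, payload?: unknown): void {
    for (const h of this.handlers.get(event) ?? []) h(payload);
  }
}

interface SceneObject {
  name: string;
  interactable: boolean; // the "interaction attribute": can this object interact?
}

const bus = new EventBus();
const lamp: SceneObject = { name: "lamp", interactable: true };

// External-input binding: a mouse click on the lamp toggles it.
bus.on("mouse:click:lamp", () => {
  if (lamp.interactable) bus.emit("lamp:toggled");
});

// In-scene binding: an object action (character enters the room) triggers the lamp.
bus.on("character:enteredRoom", () => bus.emit("lamp:toggled"));

// Custom event: thrown when a character clips through the map due to a bug.
bus.on("custom:fellOutOfMap", (who) => console.log(`respawn ${who}`));

bus.emit("mouse:click:lamp");
bus.emit("custom:fellOutOfMap", "player1");
```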
And (II) a Data provider layer.
For obtaining data for constructing a 3D image from the target format file.
For example, the capability of constructing 3D characters and scenes is provided, and data such as multimedia, grids, materials, maps, animations and the like are included.
GLTF Compatible File: 3D model data compatible with the GLTF standard.
Multi-Media: multimedia data, including audio and video.
Other: functional data (sky box data, GUI, script).
URL Reference: an external link reference, which may be a local URL or a network URL.
Since the target format is compatible with the GLTF format, 3D model data compatible with the GLTF standard can be provided.
And (III) a Service layer.
The system is used for providing various capabilities matched with data or an external interface and providing basic capability support for Runtime.
The service layer serves the whole 3D-file usage process: file loading, adaptation, export, and so on. For example, based on the target format file and the data obtained by the data providing layer, it provides developer-level file import and export functions, and it also provides a standardized adaptation mechanism that adapts each function to the data layer, so that the application layer can offer practical applications.
In this embodiment, the service layer includes a plurality of SDKs, which are replaceable. Illustratively, different SDKs may be replaced according to different 3D engines (e.g., unity) to increase the scalability of the system and increase the range of adaptation to various 3D engines.
The service layer provides the import and export capability of data.
In an alternative embodiment, the service layer is configured to:
import/export role (Avatar) data, scene (Scene) data, sky box (Skybox) data, UI data, and/or Script (Script);
wherein the role data comprises: the function code corresponding to the newly added attribute;
wherein the scene data includes: and the effect data of each element in the scene corresponding to the newly added attribute.
With the newly added attributes of the target format file, a physics system, a reloading (outfit-change) system, humanoid skeletons, and the like can be supported, along with more data types, such as cube maps, custom materials, sky box data, and post-processing functions; this achieves a richer picture expression than the GLTF format and restores the rendering effect more faithfully.
This embodiment provides a target format file compatible with the GLTF format. The target format file provides a number of new attributes and implements support for the various attributes and systems above, so that all of the above data can be exported into the target format file. It provides more realistic physical effects and supports a reloading system based on model and material switching: meshes, materials, and maps may each include several variants, which can be switched at runtime.
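A rough sketch of such an export surface is given below; the bundle shape and function names are assumptions made for illustration, since the text specifies the capabilities but not a concrete API.

```typescript
// Hypothetical shape of the data the service layer can import/export.
interface ExportBundle {
  avatar?: { newAttributeCode: string };               // function code for added attributes
  scene?: { elementEffects: Record<string, unknown> }; // per-element effect data
  skybox?: { materialId: number };
  ui?: unknown;
  scripts?: string[];
}

function exportToTargetFormat(bundle: ExportBundle): string {
  // A real SDK would emit .gltf/.glb; here we just serialize into an
  // invented extension envelope to show where the data would land.
  return JSON.stringify({ extensions: { EXT_vendor_bundle: bundle } });
}

console.log(exportToTargetFormat({ skybox: { materialId: 0 }, scripts: ["spin.lua"] }));
```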
In an alternative embodiment, the service layer is configured to:
serving 3D live streaming or games: adapting face capture and motion capture to a character, or using the character as a manipulated object that interacts with the scene.
In this embodiment, functions such as humanoid skeletons and physical dynamics can be supported through the newly added attributes of the target format file, standardizing the character model, so that a loaded character can be directly adapted to face capture and motion capture, or serve as the player-controlled object (Avatar) in various VR games.
In an alternative embodiment, the service layer is configured to:
providing a file production service for generating a target format file into which the 3D image data has been imported.
Because the target format is developed on the basis of the GLTF format, it inherits GLTF's good cross-platform support and ecosystem. It is unaffected by operating system, tool type, and version; it is easy to use, create, and modify; and it is convenient to load and export at runtime. Files can therefore be shared through the 3D parsing function code.
In addition, the service layer also provides loading and export under the Runtime, and provides import under the Editor editing mode as resources used by a project. It can thus meet the production needs of professional artists and developers simultaneously, and developers can build related applications on the Runtime part to meet the needs of ordinary users.
(IV) Runtime layer.
For loading a target format file, i.e., providing the ability to load the file for use at runtime.
The Runtime layer is further used for: Avatar driving (for example, face-captured expressions and body movements driving a character), scene construction, and the like.
In an alternative embodiment, a file in the target format includes a plurality of scenes, and the Runtime layer manages these scenes.
The plurality of scenes may include a main scene and other scenes than the main scene.
The other scenes are not necessarily used. In view of this, the Runtime layer is also used to:
manage the plurality of scenes, where the management comprises delaying the loading of scenes other than the main scene. That is: the main scene is loaded first, and another scene is loaded only when it needs to be used, thereby reducing resource occupancy.
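The following sketch shows one way such deferred loading could be organized; the SceneManager type and its methods are assumptions for illustration, not the framework's actual API.

```typescript
// Deferred scene loading: the main scene loads eagerly, others on first use.
class SceneManager {
  private loaded = new Map<string, Promise<void>>();

  constructor(private loadScene: (name: string) => Promise<void>) {}

  async start(mainScene: string): Promise<void> {
    await this.ensure(mainScene); // only the main scene is loaded up front
  }

  // Called when gameplay first needs a secondary scene.
  ensure(name: string): Promise<void> {
    let p = this.loaded.get(name);
    if (!p) {
      p = this.loadScene(name); // deferred: happens on demand, lowering memory use
      this.loaded.set(name, p);
    }
    return p;
  }
}

const mgr = new SceneManager(async (n) => console.log(`loading ${n}`));
mgr.start("main").then(() => mgr.ensure("dungeon-2"));
```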
(V) Toolset layer.
It comprises a plurality of tools for adding component data and material data supported by the target format.
For example, it can provide various code-generation tools to the service layer; the tools offer export and import functions, convenient code generation, and the like.
For example:
tools for file (scene) import and export.
Tools for timeline presentation, e.g., providing timeline-based playback in which sounds, animations, light switches, object displays, and the like are arranged on a timeline. For example, plot animation can be added to a file by placing audio, objects, animation, cameras, lights, and so on under the same time axis and setting their dynamic parameters.
Tools for animation transformation, such as transforming Unity's Humanoid animation into skeletal skinning animation, to achieve compatibility with GLTF-standard animation.
Component CodeGen, a code generation tool for data serialization and deserialization on a 3D engine Component.
Material CodeGen, a code generation tool for Material parameter serialization and deserialization.
Physics Simulation, a physics system based on dynamic skeletal adjustment that can simulate the physical swaying of hair and clothing.
Avatar Builder, which assists the user in exporting a character model that meets the standard.
Data Compression: meshes support Draco compression, maps support KTX2 compression, and files support scattered GLTF files packed using Zip compression.
In this embodiment, a third-party developer can use the code generation tools to quickly add new component data, import and export material parameters, and quickly produce the code required to export and import non-standardized data, all without the file ever breaking the GLTF standard, so existing GLTF-format files can still be loaded and edited by standard GLTF tools.
The target format file in the present embodiment is described below.
Newly added attributes are defined in the attribute extension fields of the target format file, the target format file is associated with a target format compatible with the GLTF format, and the target format is obtained by defining the extension field information of the GLTF format.
In an exemplary application, the newly added attributes include: attributes defined in the extension field that are pointed to by nodes; attributes defined in the extension field that no node points to; and/or attributes defined in the nodes themselves.
In the GLTF format, a plurality of elements constituting a 3D image are defined, such as: scene (Scene), node (Node), mesh (Mesh), camera (Camera), material (Material), texture (Texture), skin (Skin).
A scene describes the scene structure and defines a scene graph by referencing one or more nodes.
Nodes are mounted in the scene. A node may reference child nodes, a mesh, a camera, and a skin describing the mesh transformation, among others.
A mesh describes the mesh data of a 3D object appearing in the scene.
A camera is configured with a viewing frustum for rendering the scene.
Each of these elements has one or more attributes. Attributes are used to define properties, characteristics, descriptions, etc. of the corresponding elements.
Taking a node as an example, its attribute table may include: camera, child nodes, skin, matrix, mesh, quaternion rotation, scaling ratio, position information, mesh weight array, name, attribute extension field, and attribute additional field.
The target format inherits all functions and effects supported by the GLTF format and, without destroying the GLTF format structure, defines its newly added attributes using the attribute extension field and the attribute additional field. In addition, fields of the target format file support default data. The format is unaffected by operating system, tool type, and version; is easy to use, create, and modify; and is convenient to load and export at runtime. It should be noted that, to optimize the loading speed of the target format file and reduce memory occupation, two different loading mechanisms are provided to suit different usage scenarios, namely: attribute information that does not need to be heavily reused is stored in the attribute additional field, while attribute information with strong universality and reusability is stored in the attribute extension field.
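The sketch below illustrates the two mechanisms under the stated split — reusable data in the extension field (decoded eagerly, once) and one-off data in the additional field (read lazily per node); the loader class is an invented example, not the format's actual loader.

```typescript
// Two loading mechanisms, assuming Extensions hold shared, reusable data and
// Extras hold per-node, one-off data.
interface RawNode { name?: string; extras?: Record<string, unknown> }
interface RawGltf {
  extensions?: Record<string, unknown>;
  nodes?: RawNode[];
}

class TargetFormatLoader {
  private sharedCache = new Map<string, unknown>();

  loadEager(doc: RawGltf): void {
    // Mechanism 1: reusable extension data is decoded once and cached.
    for (const [name, data] of Object.entries(doc.extensions ?? {})) {
      this.sharedCache.set(name, data);
    }
  }

  nodeExtras(doc: RawGltf, index: number): Record<string, unknown> {
    // Mechanism 2: per-node extras are only touched when that node is used,
    // which keeps startup fast and memory use low.
    return doc.nodes?.[index]?.extras ?? {};
  }
}
```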
In an exemplary application, the newly added attributes may include: an audio file attribute, an audio behavior attribute, an expression transformation attribute, a collision volume attribute, a humanoid skeleton attribute, a reloading (outfit-change) attribute, a lighting map attribute, a metadata attribute, a skeleton dynamics attribute, a post-processing attribute, a dynamic script attribute, a scene rendering attribute, a sky box attribute, a cube map attribute, a plot timeline attribute, a sprite attribute, a streaming media attribute, a resource variable attribute, a derived attribute, and the like. Of course, other attributes supported by the engine or the web may also be included, supporting more functionality.
The application framework in the embodiment of the application is used for adapting the target format file, so that the following advantages are provided:
(1) The file in the GLTF format can be compatible.
(2) Extensions of a plurality of functionalities are provided so that respective attributes of the object format file including the added attribute can be supported.
Each new attribute of the object format file is introduced below.
The attribute extension field of the target format file defines the attribute of the audio file;
wherein the audio file attribute is used for providing file information of an audio clip for the reproduction of the audio clip.
The audio file attributes may be pointed to by the node and thus used by the node.
As shown in table 1, the audio file attribute defined in the attribute extension field, based on which the service layer provides the corresponding capability, includes the following information:
Table 1 — audio file attribute fields (the table is an image in the original document and is not reproduced here)
The export of the target format file can choose between two suffix formats: .gltf and .glb. When exporting as a separate .gltf file, the uri field is used; when exporting as a .glb file, the information is stored via the bufferView field. It should be noted that more suffixes may be defined later for different export types, for example defining different file suffixes for a pure character model versus a scene, as a functional distinction.
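The difference between the two export paths can be sketched as follows, using glTF's conventional uri/bufferView fields; the audio record shape is an assumption for illustration.

```typescript
// .gltf export references external audio via "uri"; .glb export embeds the
// bytes and records a "bufferView" index instead.
interface AudioRecord { uri?: string; bufferView?: number; mimeType: string }

function makeAudioRecord(glb: boolean, file: string, viewIndex: number): AudioRecord {
  return glb
    ? { bufferView: viewIndex, mimeType: "audio/mpeg" } // embedded in the .glb binary chunk
    : { uri: file, mimeType: "audio/mpeg" };            // external file next to the .gltf
}

console.log(makeAudioRecord(false, "bgm.mp3", 0)); // { uri: "bgm.mp3", ... }
console.log(makeAudioRecord(true, "bgm.mp3", 3));  // { bufferView: 3, ... }
```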
The attribute extension field of the target format file defines the audio behavior attribute;
wherein the audio behavior attribute comprises one or more playback parameters for controlling playback of the audio clip.
The node can further refer to the audio behavior attribute on the basis of referring to the audio file attribute.
As shown in table 2, the audio behavior attribute defined in the attribute extension field (based on which the service layer provides the corresponding capability) includes the following information:
Table 2 — audio behavior attribute fields (the table is an image in the original document and is not reproduced here)
The attribute extension field of the target format file defines the expression transformation attribute;
wherein the expression transformation attribute comprises material information and standard expression file information used for setting mesh blend shapes.
The expression transformation attribute may be pointed to by a node and thus used by the node.
As shown in Table 3, the expression transformation attribute defined in the attribute extension field (based on which the service layer provides the corresponding capability) includes the following information:
Table 3 — expression transformation attribute fields (the table is an image in the original document and is not reproduced here)
Here, blendShapeValues defines a mapping table recording the weights of several mesh transformations for expression transformations. materialVector4Values defines a list recording sets of material parameters that are four-component vectors (e.g., mesh tangents for shaders). materialColorValues defines another list recording sets of material parameters representing colors. materialFloatValues defines another list comprising sets of float-type material parameters.
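A sketch of how such a mapping table could drive an expression is shown below; the binding structure and callback are invented for the example.

```typescript
// Applying an expression via a blend-shape weight table: each entry maps a
// mesh blend-shape index to the weight it should take for the expression.
interface BlendShapeBinding { meshIndex: number; shapeIndex: number; weight: number }

function applyExpression(
  bindings: BlendShapeBinding[],
  setWeight: (mesh: number, shape: number, w: number) => void,
  strength = 1.0, // 0..1, lets expressions fade in and out smoothly
): void {
  for (const b of bindings) setWeight(b.meshIndex, b.shapeIndex, b.weight * strength);
}

const smile: BlendShapeBinding[] = [
  { meshIndex: 0, shapeIndex: 2, weight: 100 }, // mouth corners up
  { meshIndex: 0, shapeIndex: 5, weight: 40 },  // cheeks raised
];
applyExpression(smile, (m, s, w) => console.log(`mesh ${m} shape ${s} -> ${w}`));
```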
The attribute extension field of the target format file defines the collision volume attribute;
wherein the collision volume attributes comprise one or more parameters for a collision volume for supporting collision interactions.
The collision volume attributes may be pointed to by the nodes and thus used by the nodes.
As shown in Table 4, the attribute of the collision volume defined in the attribute extension field (based on which the service layer provides the corresponding capabilities) includes the following information:
Table 4 — collision volume attribute fields (the table is an image in the original document and is not reproduced here)
The attribute extension field of the target format file defines the humanoid skeleton attribute;
wherein the humanoid skeleton attribute comprises the parameters of a plurality of humanoid bones, together with the relationships and motion constraints among them.
The humanoid skeleton attribute may be pointed to, and thus used, by nodes that correspond to actual humanoid skeletal points.
The humanoid skeleton attribute defines the Avatar used by the humanoid model.
Any model imported as a humanoid animation type may generate an Avatar resource in which the information for driving the character is stored.
The Avatar system tells the game engine how to recognize that a particular animated model has a humanoid layout, and which parts of the model correspond to the legs, arms, head, and body; after this step, the animation data can be "reused". It should be noted that, owing to the similarity of skeletal structure between different humanoid characters, an animation can be mapped from one humanoid character to another, enabling retargeting and inverse kinematics.
As shown in table 5, the humanoid skeleton attribute defined in the attribute extension field (based on which the service layer provides the corresponding capabilities) includes the following information:
Table 5 — humanoid skeleton attribute fields (the table is an image in the original document and is not reproduced here)
Here, humanBones records multiple joints, as well as the connection and spatial transformation relationships between individual joints (e.g., neck, head).
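The sketch below illustrates why such a table enables animation reuse: once each model maps logical bones to its own node indices, a clip keyed on logical bones can be replayed on any conforming skeleton. The structures are assumptions for illustration.

```typescript
// Retargeting via a humanBones-style table: logical bone name -> node index.
type LogicalBone = "hips" | "spine" | "neck" | "head";

interface HumanBoneMap { [bone: string]: number }

function retarget(
  clipRotations: Record<LogicalBone, [number, number, number, number]>, // quaternions
  target: HumanBoneMap,
  applyRotation: (node: number, q: [number, number, number, number]) => void,
): void {
  for (const bone of Object.keys(clipRotations) as LogicalBone[]) {
    const node = target[bone];
    // Only drive bones the target skeleton actually declares.
    if (node !== undefined) applyRotation(node, clipRotations[bone]);
  }
}
```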
The node can further refer to the bone change attribute on the basis of referring to the humanoid bone attribute.
The bone change attribute (based on which the service layer provides the corresponding capabilities) also includes the contents shown in Table 6.
Table 6 — bone change attribute fields (the table is an image in the original document and is not reproduced here)
The attribute extension field of the target format file defines the reloading (outfit-change) attribute;
wherein the reloading attribute comprises a list of different reloading schemes and a material parameter list for each reloading scheme.
The reloading attribute may be pointed to by a node and thus used by the node.
On the premise of an Avatar, nodes can reference/point to the reloading attribute, thereby supporting character outfit changes.
The reloading system is implemented by altering mesh visibility or the materials on a mesh.
As shown in Tables 7-9, the reloading attribute defined in the attribute extension field (based on which the service layer provides the corresponding capabilities) includes the following information:
type (B) Description of the preferred embodiment Whether or not it is necessary to
dressUpConfigs GLTFDress Set of reloading schemes Is that
TABLE 7
Table 8 — information for each reloading scheme (the table is an image in the original document and is not reproduced here)
Table 9 — changes contained in a single reloading (the table is an image in the original document and is not reproduced here)
Here, Table 7 is the set of reloading schemes, Table 8 gives the information for each reloading scheme, and Table 9 lists the changes contained in a single reloading.
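Matching the description above — a reloading scheme is a set of mesh-visibility flips and material swaps — a minimal sketch might look like this (all names are illustrative assumptions):

```typescript
// One reloading (outfit-change) scheme applied as visibility + material edits.
interface DressChange { meshIndex: number; visible?: boolean; materialIndex?: number }
interface DressScheme { name: string; changes: DressChange[] }

function applyScheme(
  scheme: DressScheme,
  setVisible: (mesh: number, v: boolean) => void,
  setMaterial: (mesh: number, mat: number) => void,
): void {
  for (const c of scheme.changes) {
    if (c.visible !== undefined) setVisible(c.meshIndex, c.visible);
    if (c.materialIndex !== undefined) setMaterial(c.meshIndex, c.materialIndex);
  }
}

const winterCoat: DressScheme = {
  name: "winter",
  changes: [
    { meshIndex: 3, visible: false },   // hide the summer shirt mesh
    { meshIndex: 4, visible: true },    // show the coat mesh
    { meshIndex: 0, materialIndex: 7 }, // swap the body material
  ],
};
applyScheme(winterCoat, (m, v) => console.log(m, v), (m, mat) => console.log(m, mat));
```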
The attribute extension field of the target format file defines the attribute of the illumination map;
wherein the illumination map attribute is to instruct an engine to pre-compute a change in surface brightness in the scene. The illumination map attribute is defined in the attribute extension field and need not point to other objects.
As shown in Table 10, the lighting map attribute defined in the attribute extension field, based on which the service layer provides the corresponding capabilities, includes the following information:
Table 10 — illumination map attribute fields (the table is an image in the original document and is not reproduced here)
Wherein each map stores different information of the lighting of the user scene.
For example, lightmapTextureInfo [ ] includes: the color of the incident light (necessary), the principal direction of the incident light (necessary), the shade of each lamp (necessary), etc.
Metadata attributes are defined in the attribute extension field of the target format file;
wherein the metadata attributes include resource description information, resource management information, legal information, and/or content reference information. The metadata attributes are defined in the attribute extension field and need not be pointed to in other objects.
Resource description information: used for discovery and identification; its elements may include title, abstract, author, and keywords, arranged in order to form chapters. It describes the type, version, relationships, and other characteristics of the digital material.
Resource management information: information for managing resources, such as resource type, permissions.
Legal information: providing information about the creator, copyright owner and public license.
Content reference information: information about the content.
As shown in table 11, the metadata attribute defined in the attribute extension field, based on which the service layer provides the corresponding capability, includes the following information:
Table 11 — metadata attribute fields (the table is an image in the original document and is not reproduced here)
The attribute expansion field of the target format file defines a skeleton dynamics attribute;
wherein the bone dynamics attributes are used to support simulating dynamic motion of an object bound to the bone.
In an exemplary application, a skirt, hair, pendant, etc. can be simulated to follow the movement of the skeleton, body, etc.
The attribute extension field of the target format file defines post-processing attributes;
wherein the post-processing attributes comprise attributes of the volume component and attributes of the supported post-processing effects.
The post-processing attribute may be pointed to by the node and thus used by the node.
The volume component includes attributes that control how it affects the camera and how it interacts with other volumes. It is a full-screen effect for 3D rendering that improves the rendering result and takes little time to set up.
The following describes the properties of a volume assembly:
as shown in table 12, the attributes of a volume component (based on which the service layer provides the corresponding capabilities) include the following information:
Table 12 — volume component attribute fields (the table is an image in the original document and is not reproduced here)
By means of the profile ID it is possible to specify which effect is used.
Whether it takes effect globally or locally, the volume needs to be pointed to by a node so as to serve the node that specifies the post-processing attribute.
The supported post-processing effects may include: ambient occlusion, bloom, channel mixer, chromatic aberration, color adjustments, color curves, depth of field, film grain, lens distortion, lift/gamma/gain, motion blur, Panini projection, shadows-midtones-highlights, split toning, tone mapping, vignetting, and white balance.
Each post-processing effect may define a corresponding attribute in an attribute extension field.
Vignetting, for example, means that the edges of the image are darkened and/or desaturated compared to the center. Vignetting comprises the attributes in table 13.
Table 13 — vignetting attributes (the table is an image in the original document and is not reproduced here)
The attribute extension field of the target format file defines the dynamic script attribute (the service layer provides the corresponding capability based on this attribute);
wherein the dynamic script attribute comprises a character string for the engine to execute so as to support the interpretation and the running of the external script. The dynamic script attributes are defined in the attribute extension field and do not need to be pointed to in other objects.
In an exemplary application, the above character strings may point to external scripts, such as Lua scripts and the like.
Rendering events and events from the input device are received, and the script engine executes the script upon receiving the corresponding events.
The events may include: the object renders its first frame, an object component is enabled, an object component is disabled, an object component is destroyed, a per-frame update, and a periodic time-based call issued after all objects have updated.
Still further, the events may also include manually triggered events, such as events triggered by: keyboards, mice, joysticks, controllers, touch screens, motion-sensing functions (such as accelerometers or gyroscopes), and VR (Virtual Reality) and AR (Augmented Reality) controllers.
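A sketch of the resulting dispatch loop is shown below; the ScriptEngine interface is an assumption, since the attribute itself only carries a string for the engine to execute.

```typescript
// Host-side dispatch: lifecycle and input events are forwarded to the
// embedded script engine, which runs the external script's handlers.
type LifecycleEvent =
  | "firstFrameRendered" | "componentEnabled" | "componentDisabled"
  | "componentDestroyed" | "update" | "lateTick"
  | "input:key" | "input:mouse" | "input:vrController";

interface ScriptEngine {
  load(source: string): void;                 // e.g. a Lua chunk from the attribute
  dispatch(ev: LifecycleEvent, dt?: number): void;
}

function runFrame(engine: ScriptEngine, dt: number): void {
  engine.dispatch("update", dt);   // per-frame update for every scripted object
  engine.dispatch("lateTick", dt); // periodic call after all objects have updated
}
```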
The target format file defines a global scene rendering attribute in the attribute extension field; wherein the scene rendering properties comprise one or more rendering effect parameters for affecting the scene. The scene rendering properties are defined in the property extension field and do not need to be pointed to in other objects. As shown in table 14, the scene rendering attribute defined in the attribute extension field, based on which the service layer provides the corresponding capability, includes the following information:
Table 14 — scene rendering attribute fields (the table is an image in the original document and is not reproduced here)
The target format file defines a sky box attribute in the attribute extension field, wherein the sky box attribute is used to instruct the engine to create an unbounded background display using the referenced material. The sky box attribute is defined in the attribute extension field and need not point to other objects. As shown in Table 15, the sky box attribute defined in the attribute extension field (based on which the service layer provides the corresponding capability) includes the following information:
type (B) Description of the invention Whether it is necessary or not
material id Texture using sky box shader Is that
Watch 15
Taking a video game level as an example: when a sky box is used, the level is enclosed in a cuboid. The sky, distant mountains, distant buildings, and other unreachable objects are projected onto the surfaces of the cube, creating the illusion of a distant 3D environment. A sky dome is the equivalent technique using a sphere or hemisphere instead of a cube.
The target format file defines a cube map attribute in the attribute extension field; the cube map attribute comprises the layout, the texture mapping, and the texture of each face of the cube map. The cube map attribute is not pointed to by a node; rather, it is used within a material as a special map type. As shown in Table 16, the cube map attribute defined in the attribute extension field (based on which the service layer provides the corresponding capabilities) may include the following information:
Table 16 — cube map attribute fields (the table is an image in the original document and is not reproduced here)
A cube map is a collection of six square textures representing reflections in the environment. The six squares form the faces of an imaginary cube surrounding an object; each face represents a view along one world axis (up, down, left, right, front, back). The image type (imageType) includes layouts in which the six squares are stitched into a single texture in one row or one column (aspect ratio 6:1 or 1:6), among others.
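As an illustration, a loader could distinguish these layouts from the texture's aspect ratio, as sketched below; the layout names and the cross-layout ratios are assumptions, since the imageType enumeration is only partially legible in the source.

```typescript
// Guess a cube-map layout from texture dimensions.
type CubeLayout = "row6x1" | "column1x6" | "cross" | "unknown";

function guessLayout(width: number, height: number): CubeLayout {
  if (width === 6 * height) return "row6x1";    // six faces side by side (6:1)
  if (height === 6 * width) return "column1x6"; // six faces stacked (1:6)
  if (width * 3 === height * 4 || width * 4 === height * 3) return "cross"; // 4x3 or 3x4 cross
  return "unknown";
}

console.log(guessLayout(6144, 1024)); // "row6x1"
console.log(guessLayout(4096, 3072)); // "cross"
```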
The attribute extension field of the target format file defines the plot timeline attribute (the service layer provides the corresponding capability based on this attribute);
wherein the plot timeline attribute is used to arrange tracks of objects and to create cut scenes and game sequences.
The plot timeline attribute may be pointed to by a node and thus used by the node.
The plot timeline attribute may include the following information:
the name of the track resource;
an animation track group describing an animation track;
an audio track group describing audio tracks;
a track set of expression transformations (typically used for facial animation of facial expressions), describing expression transformations;
the material parameter curve track group (parameters of float floating-point type), describing material changes as a curve varies over time;
the material parameter curve track group (Color-type parameters), describing color changes as the curve's output value varies over time;
a material parameter track group (parameters of int integer type) describing a material;
a material parameter track group (Color type parameter) describing Color;
a material parameter track group (a parameter of a Vector4 type) describing Vector4;
a Texture parameter track group (parameter of Texture2D map type) describing Texture2D (Texture);
whether the object is activated, Boolean type, describing whether the object is activated;
whether the component is activated, Boolean type, describing whether the component is activated;
length of the entire track, floating point type, describes the track length.
All tracks comprise the following parameters: resource name, start time, end time, and resource ID. The resource ID specifies the index position of the data source, which may be animation, map, audio, or other data.
The track parameters may include: track name (string type, optional), start time (floating-point type, required), and end time (floating-point type, required).
The sub-track data included in the track group of each category may be represented by a generic type, for example to describe the set of all sub-tracks under a category.
Different types of track data classes, such as the two track groups representing animation and audio, can be obtained by inheriting from the generic type with a specified type argument.
The material curve parameter classes may likewise inherit from generic types and may carry, for example: a field specifying which of the several materials on the renderer to use, a field specifying whether to run in reverse again after playback finishes, and the curve data itself.
A curve of expression transformation is used to smoothly convert the character's captured facial expressions.
The floating-point parameter curve of a material continuously updates a float-type parameter of the material based on time and includes: the name of the material parameter to be set.
The color parameter curve of a material continuously updates a Color-type parameter of the material based on time; it inherits from the classes above and may include: the color values at the start and at the end. Interpolation is performed based on time, and the color is updated every frame.
When the animation component on a designated node is obtained, only the node ID is exported; the other variables are created during loading.
When a node uses the parameters in the storyline timeline attribute, the playback behavior of the storyline timeline can be specified. The playback parameters controlling this behavior may include: ID (describing the track name, required), whether to play automatically upon loading (Boolean type, optional), and whether to loop playback (Boolean type, optional).
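The track structure described above can be sketched as follows; every type and field name is an illustrative assumption, and the color sampling function shows the per-frame, time-based interpolation mentioned for the color parameter curve:

```typescript
// Hypothetical sketch of the storyline timeline structures described above.
interface TrackBase {
  name?: string;       // track name (optional)
  startTime: number;   // seconds (required)
  endTime: number;     // seconds (required)
  resourceId?: number; // index of the data source: animation, map, audio, ...
}

// Generic track group: the set of all sub-tracks under one category.
interface TrackGroup<T extends TrackBase> { tracks: T[]; }

interface ColorCurveTrack extends TrackBase {
  startColor: [number, number, number, number]; // RGBA at startTime
  endColor: [number, number, number, number];   // RGBA at endTime
}

// Per-frame color update: linear interpolation based on time.
function sampleColor(t: ColorCurveTrack, time: number): number[] {
  const k = Math.min(1, Math.max(0, (time - t.startTime) / (t.endTime - t.startTime)));
  return t.startColor.map((s, i) => s + (t.endColor[i] - s) * k);
}

interface Timeline {
  name: string;                           // name of the track resource
  animationTracks: TrackGroup<TrackBase>; // animation track group
  audioTracks: TrackGroup<TrackBase>;     // audio track group
  colorCurveTracks: TrackGroup<ColorCurveTrack>;
  length: number;                         // length of the entire track, seconds
}
```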
The attribute extension field of the target format file defines a sprite attribute;
wherein the sprite attribute includes a layout, a texture reference, a texture location, a bounding box, a physical shape, and/or a spatial location.
The sprite attribute may be pointed to by a node and thus used by that node.
As shown in Table 17, the sprite attribute defined in the attribute extension field (based on which the service layer provides the corresponding capability) may include the following information:
TABLE 17 (reproduced as an image in the original publication; its contents are not recoverable here)
A sprite (Sprite) is a two-dimensional graphics object. In a three-dimensional scene, a sprite is typically a standard texture. Textures can be combined and managed through the sprite attribute described above, improving efficiency and convenience during development.
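A hedged sketch of the sprite attribute, with field names invented to match the description above (texture reference, texture location, bounding layout, physical shape, spatial location), since Table 17 is only available as an image:

```typescript
// Hypothetical sprite attribute sketch; all field names are assumptions.
interface SpriteAttribute {
  texture: number;                            // index into the GLTF "textures" array
  rect: [number, number, number, number];     // texture location: x, y, width, height
  pivot: [number, number];                    // normalized pivot inside the rect
  border?: [number, number, number, number];  // 9-slice layout borders (optional)
  physicsShape?: [number, number][];          // outline used as the physical shape (optional)
}

// A node points to the sprite, giving it a spatial location in the scene:
const spriteNode = {
  name: "HealthBarIcon",
  translation: [0, 1.5, 0],
  extensions: { BILI_sprite: { sprite: 0 } }, // extension key name assumed
};
```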
The target format file defines a streaming media attribute in the node;
wherein the streaming media attribute includes the URL (uniform resource locator) name, the URL address, and the streaming media format.
As shown in Table 18, the streaming media attribute (based on which the service layer provides the corresponding capability) may include the following information:
Name | Type | Description | Required
name | string | URL name | No
url | string | URL address | Yes
mimeType | string | Video format | No
alternate | List<string> | Backup addresses | No

TABLE 18
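Following Table 18, a node carrying the streaming media attribute might look like this sketch; the extension key name and its placement under `extensions` are assumptions:

```typescript
// Hypothetical node fragment carrying the streaming media attribute of Table 18.
const videoNode = {
  name: "WallScreen",
  extensions: {
    BILI_streaming_media: {
      name: "trailer",                         // URL name (optional)
      url: "https://example.com/stream.m3u8",  // URL address (required)
      mimeType: "application/x-mpegURL",       // video format (optional)
      alternate: [                             // backup addresses (optional)
        "https://backup.example.com/stream.m3u8",
      ],
    },
  },
};
```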
The target format file defines a resource variable attribute in the node;
wherein the resource variable attribute comprises a variable type and a set of indices pointing to the referenced fields, so as to support the use of the resources.
As shown in Table 19, the resource variable attribute (based on which the service layer provides the corresponding capability) may include the following information:
Name | Type | Description | Required
type | enum | Variable type | No
collections | List<id> | Set of indices pointing to the referenced fields | Yes

TABLE 19
The resource variable attribute supports resources that are not currently in use but may be used in the future. Such resources may be, for example, textures, cube maps, materials, audio clips, animation clips, and light maps.
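Under the same caveat (names invented, fields per Table 19), the resource variable attribute could be modeled as:

```typescript
// Hypothetical resource variable attribute; it keeps currently unused
// resources referenced so they survive export and are available later.
enum ResourceVariableType {
  Texture = "TEXTURE",
  AudioClip = "AUDIO_CLIP",
  AnimationClip = "ANIMATION_CLIP",
}

interface ResourceVariableAttribute {
  type?: ResourceVariableType; // variable type (optional, per Table 19)
  collections: number[];       // indices pointing to the referenced fields (required)
}

const reserved: ResourceVariableAttribute = {
  type: ResourceVariableType.Texture,
  collections: [3, 7], // keep textures 3 and 7 even though nothing uses them yet
};
```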
The target format file defines some non-general parameters in the attribute attachment field, which is mounted under a node or an object.
A non-general parameter, as opposed to a general parameter, is a parameter that has no global character and is updated frequently.
In the target format, attribute extension fields (extensions) and attribute attachment fields (extras) are included in addition to the regular fields. The regular fields in the target format are the same as those of the GLTF format, so the target format is compatible with the GLTF format.
The attribute attachment field is used to add information that is not general. The attribute extension field is global, whereas the attribute attachment field is local: it is typically mounted under a node or an object and provides a customized functional supplement. Attributes of components supported by only a few engines, or attributes of frequently updated components (components whose attribute names change or that gain new fields after an update), may be recorded in extras. A code generator is also provided to generate code quickly, so that users of the SDK (software development kit) can customize functional additions.
The attribute extension field, by contrast, is used to record information of strong generality. That is, the attributes recorded in the attribute extension field are more general and more reusable than those recorded in the attribute attachment field.
For example, the following attribute information may be recorded into extras:
(1) Attributes (names) of human bones.
(2) The remaining necessary camera information, to better support restoring the actual scene.
(3) Custom material information, so that it can still be used by other tools.
(4) UI information.
The information currently supported for export covers components of the animation, sound, camera, light, material, physics, rendering, and other types; variables exposed by custom scripts can also be exported using the code generation tool. A sketch of how extensions and extras sit in a file is given below.
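To make the global/local split concrete, here is a hedged sketch of a file fragment; every extension and extras key is invented for illustration:

```typescript
// Hypothetical fragment: global data in "extensions", local data in "extras".
const fileFragment = {
  asset: { version: "2.0" },
  extensions: {
    // global, highly reusable information (e.g. scene rendering, sky box)
    BILI_scene_rendering: { exposure: 1.0 },
  },
  nodes: [
    {
      name: "Avatar",
      extras: {
        // local, frequently changing information mounted under this node
        humanBone: "LeftHand",            // human bone name
        customScript: { walkSpeed: 1.6 }, // variable exposed by a custom script
      },
    },
  ],
};
```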
As an alternative embodiment, the target format file may implement custom import/export.
The target format file comprises nodes, and export attributes are mounted under the nodes to extend the export function and the like.
The target format file also defines import and export modes;
wherein the export mode is used to define the export of the provided material parameters and/or the export of the provided component parameters.
For example: exporting material parameter information by specifying the type (e.g., the shader type) and defining the export items of the material parameter information.
Another example: exporting additional field information under a node by specifying the component type (e.g., animation) and the export items of the common parameter information. A sketch of such export definitions follows this paragraph.
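A minimal sketch of what such declarative export definitions could look like; all names, including the shader and parameter identifiers, are assumptions:

```typescript
// Hypothetical declarative export definitions for the two modes above.
interface MaterialExportRule {
  shaderType: string;    // which shader the rule applies to
  exportItems: string[]; // material parameter names to export
}

interface ComponentExportRule {
  componentType: string; // e.g. "Animation"
  exportItems: string[]; // common parameters exported as extras under the node
}

const exportRules = {
  materials: [
    { shaderType: "Toon", exportItems: ["_MainTex", "_OutlineWidth"] },
  ] as MaterialExportRule[],
  components: [
    { componentType: "Animation", exportItems: ["clip", "speed"] },
  ] as ComponentExportRule[],
};
```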
As can be seen from the above, compared with the GLTF format, the target format defines a large number of new attributes to support the implementation of many functions or effects, as follows:
(1) The GLTF format is compatible; that is, information records of Scene, Node, Mesh, Material, Texture, and the like are supported.
(2) Extensions of the standard GLTF format are supported, such as the official material extensions KHR_materials_pbrSpecularGlossiness, KHR_materials_unlit, KHR_materials_clearcoat, and the like.
(3) Official function extensions of the standard GLTF format, such as light import and export, are supported.
(4) Camera import/export adds additional engine-specific data while still retaining support for the GLTF-format camera.
(5) Colliders are supported, such as: sphere, box, cylinder, capsule, and the like.
(6) Import and export extensions for custom material types are supported.
(7) Export of bone skinning data is supported.
(8) Mesh deformation for expression transformation is supported, which can be used for transforming captured Avatar facial expressions.
(9) Animation is supported, including the transformation of an object's spatial position (position, rotation, size) and its expression transformation.
(10) Recording of human skeleton data is supported, for general humanoid animation and motion capture.
(11) Reloading is supported.
(12) Audio is supported.
(13) URL data export is added.
(14) Streaming video playback is supported, and URLs can reference various external resources (including network files, streaming media, and local files).
(15) Metadata management and the like are supported, for deciding the uses under which the model is permitted, such as whether use in slightly inappropriate activities is allowed.
(16) Expression blending output is supported.
(17) The storyline timeline is supported; blending of multiple animation types, including animation, sound and expression control, object visibility, material parameters, and the like, can be realized based on the timeline.
(18) The sky box is supported.
(19) Post-processing is supported.
(20) Skeletal dynamics (a physical system for hair and clothing) is supported.
(21) Paint spraying and decal creation are supported.
(22) Mesh-based text display is supported.
(23) Draco, an open-source mesh compression standard, is supported.
(24) Cube maps are supported.
(25) Sprites are supported, for 2D rendering or UI.
(26) Light mapping is supported.
(27) An event system is supported.
To make the advantages of the present application clearer, a comparison between the VRM format and the target format is provided below.
VRM (virtual reality modeling) is also a 3D file format developed on the basis of GLTF. A VRM file allows all supporting applications to run the same avatar data (3D model).
As a new format developed on the basis of the GLTF format, the target format has the following advantages over the VRM format:
It is compatible with the GLTF format, can be used in various game engines and WebGL, and can be opened and edited by professional design software (such as Maya, Blender, C4D, and the like).
It supports scene export, animation, multimedia, sky boxes, mesh compression, custom material parameters, script parameters, and the like, and its functionality can be continuously extended.
It offers cross-system, cross-tool, and cross-version compatibility: one file is compatible with all devices as long as the Runtime is available, unaffected by the engine version and the target running device, which makes it very suitable as an exchange medium listed in a store to build an ecosystem.
Materials can be chosen freely, establishing one's own standard specification; a code generation tool is included to cope with rapidly changing requirements.
Components or custom logic can be flexibly tailored for services, and the data can also be exported to files; for example, a VR girlfriend application can be put into a file and loaded by the application framework rather than generated as a standalone application, which benefits long-term service development and ecosystem construction.
Details are given in table 20 below.
TABLE 20 (reproduced as an image in the original publication; its contents are not recoverable here)
Embodiment Two
The present embodiment is implemented based on the application framework of the first embodiment, and specific details and advantages thereof may be found in the first embodiment.
Fig. 4 schematically shows a flowchart of a 3D image implementation method according to a second embodiment of the present application.
As shown in fig. 4, the 3D image implementation method may include steps S400 to S404, in which:
Step S400: loading the target format file.
Step S402: acquiring data for constructing the 3D image according to the target format file.
Step S404: providing various capabilities matched with the data or an external interface, and providing basic capability support for Runtime.
The target format file is associated with a target format compatible with a GLTF format, the target format is obtained by defining extension field information of the GLTF format, and function codes/effect data corresponding to the newly added attribute are derived through various capabilities or external interfaces.
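A hedged sketch of steps S400 to S404 as a loader skeleton; the function and type names are invented, and the real Runtime would of course do far more than this:

```typescript
// Hypothetical skeleton of steps S400–S404; all names are illustrative only.
interface TargetFormatFile {
  scenes: unknown[];
  extensions?: Record<string, unknown>; // newly added attributes live here
}

async function loadTargetFormatFile(url: string): Promise<TargetFormatFile> {
  const response = await fetch(url);              // S400: load the target format file
  return (await response.json()) as TargetFormatFile;
}

function extract3DImageData(file: TargetFormatFile) {
  // S402: gather the data needed to construct the 3D image,
  // including the newly added attributes in the extension fields
  return { scenes: file.scenes, extensions: file.extensions ?? {} };
}

function provideCapabilities(data: ReturnType<typeof extract3DImageData>) {
  // S404: expose capabilities matched with the data for the Runtime layer,
  // e.g. exporting function code / effect data for the newly added attributes
  return { export: () => data.extensions };
}
```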
In an optional embodiment, the method further comprises:
managing a plurality of scenes; wherein the managing comprises delaying the loading of scenes other than the main scene.
In an alternative embodiment, the step S404 includes the following steps:
exporting character data, scene data, sky box data, UI data, and/or scripts;
wherein the character data comprises: the function code corresponding to the newly added attribute;
wherein the scene data comprises: the effect data of each element in the scene corresponding to the newly added attribute.
In an alternative embodiment, the step S404 further includes the following steps:
for 3D live or game services, the character is adapted for face capture and motion capture, or as a manipulated object for interacting with the scene.
In an alternative embodiment, the step S404 further includes the following steps:
and providing a file production service, and generating an object format file into which the 3D image data is imported.
In an optional embodiment, the method further comprises:
defining an interaction attribute of the object, wherein the interaction attribute is used for indicating whether the object can interact or not;
defining events for external input and binding thereof, wherein the external input comprises keyboard operation and/or mouse operation;
defining an event with which an object action in a 3D scene is bound;
the method comprises the steps of defining an input behavior of a preset monitoring interface and a custom event bound with the input behavior of the preset monitoring interface.
In an optional embodiment, the method further comprises:
defining a binding relationship between an object and an event, wherein the binding relationship is a global attribute or a local attribute;
and when a target event generated for the object is monitored, executing a response corresponding to the target event.
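A sketch of the object–event binding and response dispatch described here, with invented names and a deliberately minimal scope model:

```typescript
// Hypothetical event binding: global or local scope, with response dispatch.
type Scope = "global" | "local";
type Handler = () => void;

class EventBinder {
  private bindings = new Map<string, { scope: Scope; handler: Handler }>();

  // Define a binding relationship between an object and an event.
  bind(objectId: string, event: string, scope: Scope, handler: Handler): void {
    this.bindings.set(`${objectId}:${event}`, { scope, handler });
  }

  // When a target event generated for the object is monitored,
  // execute the response corresponding to that event.
  dispatch(objectId: string, event: string): void {
    this.bindings.get(`${objectId}:${event}`)?.handler();
  }
}

const binder = new EventBinder();
binder.bind("door", "click", "local", () => console.log("open the door"));
binder.dispatch("door", "click"); // -> "open the door"
```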
In an optional embodiment, the method further comprises:
and the tools are used for adding the component data and the material data supported by the target format.
Embodiment Three
Fig. 5 schematically shows a block diagram of a 3D image implementation apparatus according to a third embodiment of the present application. The 3D image realization apparatus may be divided into one or more program modules, which are stored in a storage medium and executed by one or more processors to accomplish the embodiments of the present application. The program modules referred to in the embodiments of the present application refer to a series of computer program instruction segments that can perform specific functions, and the following description will specifically describe the functions of the program modules in the embodiments.
As shown in fig. 5, the 3D image implementation apparatus 500 may include a loading module 510, an obtaining module 520, and a service module 530, wherein:
a loading module 510, configured to load a target format file;
an obtaining module 520, configured to obtain data for constructing a 3D image according to the target format file;
a service module 530, configured to provide various capabilities matched with the data or an external interface, and provide basic capability support for Runtime;
the attribute extension field of the target format file defines a new attribute, the target format file is associated with a target format compatible with the GLTF format, the target format is obtained by defining extension field information of the GLTF format, and the service module 530 is configured to derive function codes/effect data corresponding to the new attribute.
In an optional embodiment, the apparatus further comprises a management module configured to:
managing a plurality of scenes; wherein the managing comprises delaying loading of other scenes than the main scene.
In an alternative embodiment, the service module 530 is further configured to:
export character data, scene data, sky box data, UI data, and/or scripts;
wherein the character data comprises: the function code corresponding to the newly added attribute;
wherein the scene data comprises: the effect data of each element in the scene corresponding to the newly added attribute.
In an alternative embodiment, the apparatus further comprises an adaptation module (not shown) configured to:
for 3D live streaming or game services, adapt the character for face capture and motion capture, or treat the character as a manipulated object for interacting with the scene.
In an alternative embodiment, the apparatus further comprises a generating module (not shown) configured to:
provide a file production service and generate a target format file into which the 3D image data has been imported.
In an optional embodiment, the apparatus further comprises a definition module configured to:
defining an interaction attribute of the object, wherein the interaction attribute is used for indicating whether the object can interact or not;
defining events for external input and binding thereof, wherein the external input comprises keyboard operation and/or mouse operation;
defining an event with which an object action in a 3D scene is bound;
the method comprises the steps of defining an input behavior of a preset monitoring interface and a custom event bound with the input behavior of the preset monitoring interface.
In an optional embodiment, the definition module is further configured to:
defining a binding relation between an object and an event, wherein the binding relation is a global property or a local property;
and when a target event generated for the object is monitored, executing a response corresponding to the target event.
In an optional embodiment, the apparatus further comprises a providing module configured to:
and the tools are used for adding the component data and the material data supported by the target format.
Embodiment Four
Fig. 6 schematically shows a hardware architecture diagram of a computer device 2 suitable for implementing the 3D image implementation method according to a fourth embodiment of the present application. In this embodiment, the computer device 2 is a device capable of automatically performing numerical calculation and/or information processing in accordance with preset or stored instructions. For example, it may be a smartphone, tablet, laptop, virtual machine, etc. As shown in fig. 6, the computer device 2 at least includes, but is not limited to: the memory 10010, the processor 10020, and the network interface 10030, which may be communicatively linked to one another through a system bus. Wherein:
the memory 10010 includes at least one type of computer-readable storage medium comprising flash memory, hard disks, multimedia cards, card-type memory (e.g., SD or DX memory, etc.), random Access Memory (RAM), static Random Access Memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, etc. In some embodiments, the memory 10010 may be an internal storage module of the computer device 2, such as a hard disk or a memory of the computer device 2. In other embodiments, the memory 10010 can also be an external storage device of the computer device 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the computer device 2. Of course, the memory 10010 may also include both internal and external memory modules of the computer device 2. In this embodiment, the memory 10010 is generally used for storing an operating system installed in the computer device 2 and various application software, such as program codes of a 3D image implementation method. In addition, the memory 10010 may also be used to temporarily store various types of data that have been output or are to be output.
Processor 10020 may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or other data Processing chip in some embodiments. The processor 10020 is generally configured to control overall operations of the computer device 2, such as performing control and processing related to data interaction or communication with the computer device 2. In this embodiment, the processor 10020 is configured to execute program codes stored in the memory 10010 or process data.
The network interface 10030 may comprise a wireless network interface or a wired network interface, and is generally used to establish a communication link between the computer device 2 and other computer devices. For example, the network interface 10030 is used to connect the computer device 2 to an external terminal through a network, and to establish a data transmission channel and a communication link between the computer device 2 and the external terminal. The network may be a wireless or wired network such as an Intranet, the Internet, the Global System for Mobile Communications (GSM), Wideband Code Division Multiple Access (WCDMA), a 4G network, a 5G network, Bluetooth, or Wi-Fi.
It should be noted that fig. 6 only illustrates a computer device having components 10010-10030, but it should be understood that not all of the illustrated components are required to be implemented, and that more or fewer components may be implemented instead.
In this embodiment, the 3D image implementation method stored in the memory 10010 can be further divided into one or more program modules and executed by one or more processors (in this embodiment, the processor 10020) to implement the embodiment of the present application.
Embodiment Five
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the 3D image implementation method in the embodiments.
In this embodiment, the computer-readable storage medium includes a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the computer readable storage medium may be an internal storage unit of the computer device, such as a hard disk or a memory of the computer device. In other embodiments, the computer readable storage medium may be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the computer device. Of course, the computer-readable storage medium may also include both internal and external storage devices of the computer device. In this embodiment, the computer-readable storage medium is generally used to store an operating system and various types of application software installed in the computer device, for example, program codes of the 3D image implementation method in the embodiment, and the like. Further, the computer-readable storage medium may also be used to temporarily store various types of data that have been output or are to be output.
It will be apparent to those skilled in the art that the modules or steps of the embodiments of the present application described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device; in some cases, the steps shown or described may be performed in an order different from that described herein. They may also be separately fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, the embodiments of the present application are not limited to any specific combination of hardware and software.
It should be noted that the above mentioned embodiments are only preferred embodiments of the present application, and not intended to limit the scope of the present application, and all the equivalent structures or equivalent flow transformations made by the contents of the specification and the drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (9)

1. A 3D image implementation system based on an application framework, comprising:
the Runtime layer is used for loading a target format file;
the data providing layer is used for acquiring data for constructing a 3D image according to the target format file;
the service layer is used for providing various capabilities matched with the data or an external interface and providing basic capability support for Runtime;
the target format file is associated with a target format compatible with a GLTF format, the target format is obtained by defining extension field information of the GLTF format, and the service layer is used for exporting function codes/effect data corresponding to the newly added attribute;
the Runtime layer is also used for:
constructing the face-capture expressions, body movements, and scenes that drive the character;
managing a plurality of scenes; wherein the target format file comprises a plurality of scenes;
the service layer is further configured to:
import/export character data, scene data, sky box data, UI data, and/or scripts; wherein the character data comprises: the function code corresponding to the newly added attribute; and the scene data comprises: the effect data of each element in the scene corresponding to the newly added attribute;
for 3D live broadcast or game services, adapting face capture and motion capture for a character, or using the character as a controlled object for interacting with the scene;
providing a file production service for generating a target format file into which the 3D image data has been imported.
2. The system of claim 1, wherein the Runtime layer is further configured to:
managing a plurality of scenes; wherein the managing comprises delaying loading of other scenes than the main scene.
3. The system of any one of claims 1 to 2, further comprising an input operation layer for:
defining an interaction attribute of the object, wherein the interaction attribute is used for indicating whether the object can interact or not;
defining events for external input and binding thereof, wherein the external input comprises keyboard operation and/or mouse operation;
defining events with which object actions in the 3D scene are bound;
the method comprises the steps of defining an input behavior of a preset monitoring interface and a custom event bound with the input behavior of the preset monitoring interface.
4. The system of claim 3, wherein the input operation layer is further configured to define:
a binding relationship between an object and an event, the binding relationship being a global attribute or a local attribute;
and when a target event generated for the object is monitored, executing a response corresponding to the target event.
6. The system of any one of claims 1 to 2, further comprising a tool set layer for:
providing tools for adding the component data and the material data supported by the target format.
6. A3D image implementation method, comprising:
loading a target format file;
acquiring data for constructing a 3D image according to the target format file;
providing various capabilities matched with data or an external interface, and providing basic capability support for Runtime;
the target format file is associated with a target format compatible with a GLTF format, the target format is obtained by defining extension field information of the GLTF format, and function codes/effect data corresponding to the newly added attribute are exported through various provided capabilities or external interfaces;
constructing the face-capture expressions, body movements, and scenes that drive the character;
managing a plurality of scenes; wherein the target format file comprises a plurality of scenes;
import/export character data, scene data, sky box data, UI data, and/or scripts; wherein the character data comprises: the function code corresponding to the newly added attribute; and the scene data comprises: the effect data of each element in the scene corresponding to the newly added attribute;
for 3D live broadcast or game services, adapting face capture and motion capture for a character, or using the character as a controlled object for interacting with the scene;
providing a file production service for generating a target format file into which the 3D image data has been imported.
7. A 3D image realization apparatus, characterized in that the apparatus comprises:
the loading module is used for loading the target format file;
the acquisition module is used for acquiring data for constructing a 3D image according to the target format file;
the service module is used for providing various capabilities matched with the data or an external interface and providing basic capability support for Runtime;
the target format file is associated with a target format compatible with a GLTF format, the target format is obtained by defining extension field information of the GLTF format, and the service module is used for exporting function codes/effect data corresponding to the newly added attribute;
the device is also used for driving the construction of the face capturing expression, the limb action and the scene of the character;
the device further comprises:
the management module is used for managing a plurality of scenes; wherein the target format file comprises a plurality of scenes;
an export module for importing/exporting character data, scene data, sky box data, UI data, and/or scripts; wherein the character data comprises: the function code corresponding to the newly added attribute; and the scene data comprises: the effect data of each element in the scene corresponding to the newly added attribute;
an adaptation module for adapting face capture and motion capture for a character for 3D live broadcast or game services, or using the character as a controlled object for interacting with the scene;
and a generating module for providing a file production service and generating a target format file into which the 3D image data has been imported.
8. A computer arrangement comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor is adapted to carry out the steps of the 3D image realization method as claimed in claim 6 when executing the computer program.
9. A computer-readable storage medium, having stored thereon a computer program, the computer program being executable by at least one processor to cause the at least one processor to perform the steps of the 3D image implementation method of claim 6.
CN202210814006.7A 2022-07-11 2022-07-11 3D image implementation system and method based on application program framework Active CN115170707B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210814006.7A CN115170707B (en) 2022-07-11 2022-07-11 3D image implementation system and method based on application program framework

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210814006.7A CN115170707B (en) 2022-07-11 2022-07-11 3D image implementation system and method based on application program framework

Publications (2)

Publication Number Publication Date
CN115170707A CN115170707A (en) 2022-10-11
CN115170707B true CN115170707B (en) 2023-04-11

Family

ID=83493744

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210814006.7A Active CN115170707B (en) 2022-07-11 2022-07-11 3D image implementation system and method based on application program framework

Country Status (1)

Country Link
CN (1) CN115170707B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002009037A2 (en) * 2000-07-24 2002-01-31 Reflex Systems Inc. Modeling human beings by symbol manipulation
WO2010128830A2 (en) * 2009-05-08 2010-11-11 Samsung Electronics Co., Ltd. System, method, and recording medium for controlling an object in virtual world
CN110751696A (en) * 2019-12-25 2020-02-04 广联达科技股份有限公司 Method, device, equipment and medium for converting BIM (building information modeling) model data into glTF (glTF) data
CN112704872A (en) * 2021-01-08 2021-04-27 完美世界(北京)软件科技发展有限公司 Scene data synchronization method, device, system and storage medium
CN113838181A (en) * 2020-06-23 2021-12-24 英特尔公司 System and method for dynamic scene update
WO2022116759A1 (en) * 2020-12-03 2022-06-09 腾讯科技(深圳)有限公司 Image rendering method and apparatus, and computer device and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019213450A1 (en) * 2018-05-02 2019-11-07 Quidient, Llc A codec for processing scenes of almost unlimited detail
KR102663906B1 (en) * 2019-01-14 2024-05-09 삼성전자주식회사 Electronic device for generating avatar and method thereof
US11263358B2 (en) * 2019-07-26 2022-03-01 Geopogo Rapid design and visualization of three-dimensional designs with multi-user input
US11405699B2 (en) * 2019-10-01 2022-08-02 Qualcomm Incorporated Using GLTF2 extensions to support video and audio data
US11695932B2 (en) * 2020-09-23 2023-07-04 Nokia Technologies Oy Temporal alignment of MPEG and GLTF media
WO2022069616A1 (en) * 2020-10-02 2022-04-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Data stream, devices and methods for volumetric video data
US11748955B2 (en) * 2020-10-06 2023-09-05 Nokia Technologies Oy Network-based spatial computing for extended reality (XR) applications
CN112347212A (en) * 2020-11-06 2021-02-09 中铁第一勘察设计院集团有限公司 Railway cloud GIS platform for BIM application and building method thereof
US20230418381A1 (en) * 2020-11-12 2023-12-28 Interdigital Ce Patent Holdings, Sas Representation format for haptic object
US11800184B2 (en) * 2021-01-06 2023-10-24 Tencent America LLC Method and apparatus for media scene description
CN114359459A (en) * 2021-12-31 2022-04-15 深圳市大富网络技术有限公司 File format conversion method and device and computer storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Andrew Malcolm et al. LBD Server: Visualising Building Graphs in Web-Based Environments Using Semantic Graphs and GlTF-Models. Formal Methods in Architecture, 2021, pp. 287–293. *

Also Published As

Publication number Publication date
CN115170707A (en) 2022-10-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant