CN114359461A - File format conversion method, conversion device and computer storage medium - Google Patents


Info

Publication number
CN114359461A
Authority
CN
China
Prior art keywords
data
format
target
model
preset
Prior art date
Legal status
Pending
Application number
CN202111683153.7A
Other languages
Chinese (zh)
Inventor
Li Xizhi (李西峙)
Zhang Lei (张磊)
Current Assignee
Shenzhen Jihui Technology Co ltd
Original Assignee
Shenzhen Tatfook Network Tech Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Tatfook Network Tech Co Ltd
Priority to CN202111683153.7A
Publication of CN114359461A
Current legal status: Pending

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application discloses a file format conversion method, a file format conversion device and a computer storage medium. The file format conversion method includes the following steps: analyzing a preset format model with preset software to obtain model data corresponding to the preset format model; and extracting vertex data, texture data and preset data from the model data and writing them into a target format model according to the writing rule corresponding to the target format file, so that the preset software outputs the target format file. In this way, the preset format model can be converted into the target format file within the preset software, allowing other software to use the target format file and improving the universality and general applicability of the preset format model.

Description

File format conversion method, conversion device and computer storage medium
Technical Field
The present application relates to the field of format conversion technologies, and in particular, to a file format conversion method, a file format conversion device, and a computer storage medium.
Background
To improve technical security, an enterprise or a user may independently develop application software and use it as preset software. The preset format model corresponding to such application software can only be used by the preset software itself; other software cannot use the preset format model. The preset format model therefore has low universality and is difficult to use widely, so a file format conversion method is needed to enable the preset software to convert the preset format model into a file in a general standard format.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide a file format conversion method, a conversion device and a computer storage medium, which can convert a preset format model into a target format file within preset software, so that other software can use the target format file, thereby improving the universality and general applicability of the preset format model.
In a first aspect, the present application provides a file format conversion method, including: analyzing the preset format model by using preset software to obtain model data corresponding to the preset format model; and extracting vertex data, texture data and preset data in the model data, and writing the vertex data, the texture data and the preset data into the target format model according to a writing rule corresponding to the target format file so that preset software outputs the target format file.
The step of extracting the vertex data in the model data comprises the following steps:
acquiring the number of vertices and the vertex information of each vertex, and confirming target parameters in the vertex information; and calculating the maximum value and the minimum value corresponding to each target parameter; wherein the target parameters include at least one or more of the coordinates, normals, texture coordinates, bone index values and bone weight values of the vertices.
Wherein the vertex information further includes a vertex index value corresponding to each vertex, and after the step of calculating the maximum value and the minimum value corresponding to each target parameter, the method further includes: reading a first threshold of the vertices, where the first threshold is greater than or equal to 3; and finding the corresponding target vertices based on the vertex index values, and connecting groups of target vertices whose number is not less than the first threshold into polygons, so that at least one polygon forms a mesh.
The preset data is animation data, and the step of extracting the preset data in the model data comprises the following steps: acquiring an animation sequence and a skeleton corresponding to the animation data; and calculating key frame data corresponding to the bone based on the animation sequence and the bone.
The key frame data comprises a time interval between adjacent key frames and corresponding translation change values, rotation change values and scaling change values between the adjacent key frames.
Wherein the step of extracting texture data in the model data comprises: calculating a target index value based on the model data to search texture data according to the target index value; a first format picture is generated based on the texture data.
The step of writing the vertex data, the texture data and the preset data into a target format model according to a writing rule corresponding to a target format file comprises the following steps: writing the mesh into a corresponding location in the target format model; writing the target parameters of the vertex information, the maximum values and the minimum values corresponding to the target parameters, and the key frame data into corresponding positions in the target format model; and writing the first format picture into a corresponding position in the target format model.
Wherein, before the step of writing the mesh into the corresponding position in the target format model, the method further includes: acquiring the position and hierarchy level of the bone based on the bone index value of the bone, generating skin data according to the position and hierarchy level of the bone, and writing the skin data into a corresponding position in the target format model.
In a second aspect, the present application provides a file format conversion apparatus, which includes a memory and a processor coupled to the memory; wherein the memory is configured to store program data, and the processor is configured to execute the program data to implement the file format conversion method of the first aspect of the present application.
In a third aspect, the present application provides a computer storage medium storing program data for implementing the file format conversion method of the first aspect of the present application when executed by a processor.
The beneficial effect of the present application is as follows: the preset software analyzes the preset format model to obtain the model data corresponding to the preset format model dedicated to the preset software, extracts the vertex data, texture data and preset data in the model data, and writes them into the target format model according to the writing rule corresponding to the target format file, so that the preset format model is converted into the target format model. In this way, the preset software converts the preset format model into a target format file in a general standard format, which improves the universality and general applicability of the preset format model.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort. In the drawings:
FIG. 1 is a schematic flowchart of an embodiment of a file format conversion method provided in the present application;
FIG. 2 is a flowchart illustrating an embodiment of extracting the vertex data in the model data in step S102 in FIG. 1;
FIG. 3 is a flowchart illustrating an embodiment of extracting the animation data in the model data in step S102 in FIG. 1;
FIG. 4 is a flowchart illustrating an embodiment of extracting the texture data in the model data in step S102 in FIG. 1;
FIG. 5 is a flowchart illustrating an embodiment of writing data into the target format model according to the writing rule corresponding to the target format file in step S102 in FIG. 1;
FIG. 6 is a schematic structural diagram of an embodiment of a file format conversion apparatus provided in the present application;
FIG. 7 is a schematic structural diagram of an embodiment of a computer storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to solve the problem that the universality of a preset format model corresponding to professional preset software is low in the prior art, the application provides a file format conversion method. Referring to fig. 1 in detail, fig. 1 is a schematic flow chart of an embodiment of a file format conversion method provided in the present application, which is applied to a file format conversion device.
The file format conversion device may be a server, a terminal device, or a system in which a server and a terminal device cooperate with each other. Accordingly, the parts included in the file format conversion apparatus, such as units, subunits, modules and submodules, may all be disposed in the server, may all be disposed in the terminal device, or may be distributed between the server and the terminal device.
Further, the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules, for example, software or software modules for providing distributed servers, or as a single software or software module, and is not limited herein.
For convenience of understanding, the description of the file format conversion method of the present application uniformly uses the self-developed preset software paracraft as the execution subject, but the present application does not specifically limit the type of the preset software. The preset software paracraft is developed in NPL (Neural Parallel Language) and is dedicated software for generating animation models.
As shown in fig. 1, the file format conversion method of the present embodiment specifically includes the following steps:
step S101: and analyzing the preset format model by using preset software to obtain model data corresponding to the preset format model.
Specifically, the preset format is a format specific to the preset software. A user can open a file in the preset format with the preset software so that the preset software loads it, and the user may also generate the preset format model with the preset software. When the user chooses to convert the preset format model into a target format file, the preset software analyzes the preset format model to parse out the model data it contains, and after the analysis the preset software lists all data with different attributes to form a collection.
In some embodiments, the preset format model corresponding to the paracraft software is a ParaX model. After the paracraft software analyzes the ParaX model, model data of the CParaXModel class is obtained; the CParaXModel class contains code data of various attributes and types generated after the ParaX model is analyzed, and CParaXModel is the collective name of this code data.
Step S102: and extracting vertex data, texture data and preset data in the model data, and writing the vertex data, the texture data and the preset data into the target format model according to a writing rule corresponding to the target format file so that preset software outputs the target format file.
Specifically, the target format is a general standard format. After the preset software completes the analysis of the preset format model, it further extracts the vertex data, texture data and preset data contained in the model data. Writing these data into the target format model according to the writing rule corresponding to the target format file is equivalent to the preset software combining the vertex data, texture data and preset data according to the combination rule corresponding to the target format file, so that the preset software outputs the target format file.
In some embodiments, the target format is the glTF format, a general standard format that provides a uniform standard for the data format of 3D content. The paracraft software analyzes the ParaX model to obtain model data of the CParaXModel class, further extracts the vertex data, texture data and preset data in the model data of the CParaXModel class, and writes them into the corresponding positions in the glTF model, so that the paracraft software outputs a file in the glTF format. The preset data may be animation data or bone data.
In other embodiments, the target format is the FBX format, a common standard format that likewise provides a uniform standard for the data format of 3D content. The paracraft software extracts the vertex data, texture data and preset data in the model data and writes them into the corresponding positions in the FBX model, so that the paracraft software outputs a file in the FBX format. The preset data may be animation data or bone data.
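For illustration only, the overall flow of steps S101 and S102 can be sketched in Python as below. The function names and the simplified dictionary representation of the model data are assumptions made for this sketch; paracraft's actual implementation (the CParaXModel class and related code) is not shown here.

```python
import json

def parse_preset_model(path):
    # Step S101: in paracraft this would parse a ParaX file into CParaXModel-style
    # model data; here a tiny hard-coded stand-in is returned so the sketch runs.
    return {
        "vertices": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
        "indices": [0, 1, 2],
        "textures": [],
        "animations": [],
    }

def convert_to_target_format(model_data):
    # Step S102: combine vertex, texture and preset data according to the writing
    # rules of the target format (glTF here); the detailed rules appear in S501-S503.
    return {"asset": {"version": "2.0"}, "meshes": [{"primitives": [{}]}]}

if __name__ == "__main__":
    model = parse_preset_model("model.x")      # step S101
    target = convert_to_target_format(model)   # step S102
    print(json.dumps(target, indent=2))
```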
Further, referring to fig. 2, the step of extracting vertex data in the model data in step S102 specifically includes:
step S201: and acquiring the number of the vertexes and vertex information of each vertex, and confirming target parameters in the vertex information.
Specifically, the preset software obtains the number of vertices in the model data from the number attribute in the model data, then traverses the specific code data in the model data to obtain the vertex information corresponding to each vertex. After obtaining the vertex information, it confirms the required target parameters and extracts them from the vertex information, which provides the information and calculations needed for subsequently writing the vertex information into the target format model. The target parameters include at least one or more of the vertex coordinates, normals, texture coordinates, bone index values and bone weight values.
In an application mode, all target parameters in the vertex information are extracted, so that the integrity of various types of parameters of the vertex can be kept during format conversion.
In another application mode, the target parameters required by the target format file are confirmed, and at least these required target parameters are extracted from all the target parameters in the vertex information; this is sufficient to complete the conversion to the target format file and improves conversion efficiency.
Step S202: and calculating the maximum value and the minimum value corresponding to each target parameter.
Specifically, the maximum value and the minimum value corresponding to each target parameter are calculated, so that the interval corresponding to each target parameter is determined, and the conversion accuracy of the target format file is improved.
In some embodiments, the paracraft software analyzes the ParaX model to obtain model data of the CParaXModel class, and obtains the number of vertices from the ParaXModelObjNum attribute of the CParaXModel class, in which the numbers of the different types of data are recorded. The preset software then traverses the ModelVertex array, in which the vertex information corresponding to each vertex is recorded, to obtain the information of each vertex, extracts the coordinates, normal, texture coordinates, bone index value and bone weight value of each vertex from the vertex information, and then calculates the maximum value and minimum value corresponding to each target parameter.
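As a concrete illustration of this calculation, the per-component maximum and minimum of a target parameter (vertex coordinates in this sketch) can be computed as follows; the list-of-tuples layout of the vertices is an assumption made for the example, not paracraft's internal ModelVertex layout.

```python
# Compute the per-component min/max of one target parameter (vertex positions).
# glTF POSITION accessors require exactly these min/max arrays (see step S502).
def min_max(values):
    """values: list of equal-length tuples, e.g. (x, y, z) per vertex."""
    mins = [min(v[i] for v in values) for i in range(len(values[0]))]
    maxs = [max(v[i] for v in values) for i in range(len(values[0]))]
    return mins, maxs

positions = [(0.0, 0.0, 0.0), (1.0, 2.0, -1.0), (0.5, -3.0, 2.0)]
print(min_max(positions))  # ([0.0, -3.0, -1.0], [1.0, 2.0, 2.0])
```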
Further, the vertex information also includes a vertex index value corresponding to each vertex, and after the step of calculating the maximum value and the minimum value corresponding to each target parameter, the method further includes: reading a first threshold of the vertices, where the first threshold is greater than or equal to 3; and finding the corresponding target vertices based on the vertex index values, and connecting groups of target vertices whose number is not less than the first threshold into polygons, so that at least one polygon forms a mesh.
Specifically, the preset software reads the first threshold of the vertices, then traverses the vertex index values, finds the position of each target vertex according to its vertex index value, connects groups of target vertices whose number is not less than the first threshold into polygons, and then combines at least one polygon to form a mesh. The first threshold is greater than or equal to 3 so that the condition for connecting multiple target vertices into a polygon is satisfied.
In one application mode, after the preset software finds 3 target vertices, the 3 target vertices are combined into a triangle, and adjacent triangles are then combined to form a mesh.
In another application mode, after the preset software finds 4 target vertices, the 4 target vertices are combined into a quadrilateral, and adjacent quadrilaterals are then combined to form a mesh.
In some embodiments, the paracraft software traverses the ModelRenderPass array in the CParaXModel class, which stores the vertex index values corresponding to the vertices, finds at least 3 target vertices using the vertex index values, combines the found target vertices into polygons, and combines the polygons into a mesh. The vertices are thus combined to form a mesh, the mesh can be skinned to a single bone, and each vertex in the mesh includes one or more joint indices and corresponding weighting factors. In addition, the data source of each vertex in the mesh is the same, obtained from the CParaXModel class; the vertex index value is calculated from the ModelRenderPass array, the vertex corresponding to the parent node of the current vertex index value can be found from the current vertex index value, and so on for the remaining vertices.
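A minimal sketch of the grouping described above, assuming a flat index list and a first threshold of 3 (triangles); the flat-list layout is an assumption for the example and is not the actual ModelRenderPass layout.

```python
# Group a flat vertex-index list into polygons of `first_threshold` vertices each
# (triangles when first_threshold == 3); the polygons together form the mesh.
def build_polygons(indices, first_threshold=3):
    assert first_threshold >= 3, "a polygon needs at least 3 vertices"
    assert len(indices) % first_threshold == 0
    return [tuple(indices[i:i + first_threshold])
            for i in range(0, len(indices), first_threshold)]

mesh = build_polygons([0, 1, 2, 2, 1, 3])   # two triangles sharing an edge
print(mesh)  # [(0, 1, 2), (2, 1, 3)]
```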
Further, referring to fig. 3, the preset data is animation data, and the step of extracting the preset data from the model data in step S102 specifically includes:
step S301: and acquiring an animation sequence and a skeleton corresponding to the animation data.
Specifically, the preset software obtains the animation sequence and the skeleton corresponding to the animation data. The skeleton is the main body structure of the animation data, and the animation sequence records the different time points at which the skeleton changes; the skeleton change includes at least one of a change in bone position, a bone rotation, and a change in bone scale.
In some embodiments, the paracraft software obtains the animation sequence and skeleton corresponding to the animation data from the ParaXModelObjNum attribute.
Step S302: and calculating key frame data corresponding to the skeleton based on the animation sequence and the skeleton.
Specifically, the preset software calculates the key frame data corresponding to the skeleton from the animation sequence and the skeleton. Because the animation sequence records the different time points at which the skeleton changes, key frame data is calculated for each moment at which the skeleton changes; the key frame data records the complete picture at the moment of the skeleton change, so several key frames are obtained from the skeleton and the animation sequence. Extracting the picture at each moment when the skeleton changes in the preset format model thus yields the pictures of the corresponding time points, so that the data corresponding to these important time points can be completely preserved when the preset format model is converted into the target format model.
In some embodiments, the paracraft software obtains the animation sequence and the skeleton, extracts key frame data corresponding to the skeleton in the animation sequence based on the animation sequence and the skeleton, and stores the key frame data.
In particular, the key frame data includes the time interval between adjacent key frames, and the corresponding translation, rotation and scaling changes between adjacent key frames. Every time the skeleton changes during the original bone animation, the corresponding scaling matrix, rotation matrix and translation matrix change. Therefore, when the key frame data is calculated, the time interval between the current key frame and the previous key frame (the adjacent key frames), as well as the translation change value, rotation change value and scaling change value, are extracted and stored, which completely records the change of the skeleton.
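The calculation described here can be sketched as follows, under the assumption that each key frame is a simple record of time, translation, rotation and scale; this is an illustration and not paracraft's actual animation structures.

```python
# For each pair of adjacent key frames, record the time interval and the
# translation / rotation / scaling change between them (componentwise deltas).
def keyframe_deltas(keyframes):
    """keyframes: list of dicts with 'time', 'translation', 'rotation', 'scale'."""
    deltas = []
    for prev, cur in zip(keyframes, keyframes[1:]):
        deltas.append({
            "interval": cur["time"] - prev["time"],
            "translation_change": [c - p for p, c in zip(prev["translation"], cur["translation"])],
            "rotation_change":    [c - p for p, c in zip(prev["rotation"], cur["rotation"])],
            "scale_change":       [c - p for p, c in zip(prev["scale"], cur["scale"])],
        })
    return deltas

frames = [
    {"time": 0.0, "translation": [0, 0, 0], "rotation": [0, 0, 0, 1], "scale": [1, 1, 1]},
    {"time": 0.5, "translation": [0, 1, 0], "rotation": [0, 0, 0, 1], "scale": [1, 1, 1]},
]
print(keyframe_deltas(frames))
```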
Optionally, the preset data is bone data, and the step of extracting the preset data from the model data in step S102 specifically includes: and acquiring bones corresponding to the bone data, and calculating key frame data corresponding to the bones based on the bones so as to completely record the positions of the bones corresponding to the bone data.
Further, referring to fig. 4, the step of extracting the texture data from the model data in step S102 specifically includes:
step S401: a target index value is calculated based on the model data to look up texture data according to the target index value.
Specifically, the preset software calculates a target index value according to the model data, and then traverses the target index value to search corresponding texture data according to the target index value.
In some embodiments, the paracraft software obtains the target index value from the ModelRenderPass array, and then looks up the texture data corresponding to the target index value.
Step S402: a first format picture is generated based on the texture data.
Specifically, after the preset software obtains the texture data, it converts the texture data into a first format picture, where the first format is any one of the bmp, jpg, png, tif and gif formats. The texture data is thus converted into a picture in a general format so that it can be written into the target format model.
In some embodiments, the paracraft software obtains corresponding texture data from the TextureEntity array and converts the texture data to generate the png format picture, and then subsequently writes the png format picture into the target format model.
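As a sketch of step S402, raw RGBA texture bytes can be converted into a png picture, for example with the Pillow library; the 2x2 checkerboard texture and the use of Pillow are assumptions made for the illustration, since the document does not state which image library paracraft uses.

```python
# Convert raw RGBA texture bytes into a png picture (the first format picture).
# Pillow is used here purely for illustration: pip install pillow
from PIL import Image

width, height = 2, 2
rgba = bytes([255, 0, 0, 255,   0, 255, 0, 255,       # red,  green
              0, 0, 255, 255,   255, 255, 255, 255])  # blue, white

Image.frombytes("RGBA", (width, height), rgba).save("texture0.png")
```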
Further, referring to fig. 5, the step of writing the vertex data, the texture data, and the preset data into the target format model according to the writing rule corresponding to the target format file in step S102 includes:
step S501: the mesh is written to a corresponding location in the target format model.
Specifically, the preset software writes the mesh obtained in step S202 into a position corresponding to the target format model.
Further, before writing the mesh into the position corresponding to the target format model, the preset software writes the attribute information of the target format model into the target format model.
In some embodiments, the target format is the glTF format, a general standard format, and the paracraft software writes the metadata value, scene and nodes into the glTF model according to the rules corresponding to the glTF format. glTF uses JSON as the representation of model data, and the metadata value includes an asset attribute. Every glTF file must have an asset attribute, and the asset object must contain the glTF version; other metadata may be stored in optional attributes such as generator or copyright. The glTF model contains zero or more scenes, where a scene is the set of visual objects to be rendered. Scenes are defined in the scenes array, nodes are defined in the nodes array, and all nodes listed in a scene must be root nodes. When converting the ParaX model into the glTF model, only one scene and one node are set. In the glTF model, a mesh is defined as an array of primitives, and the paracraft software writes the mesh data into the corresponding primitives array, where each primitive corresponds to the data required for one GPU draw call. A primitive can be a set of triangles, and dividing a mesh into several primitives helps reduce the number of indices referenced in each draw call and the amount of computation.
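A minimal hand-written example of the glTF structure described above (asset, a single scene and node, and a mesh whose primitive references an accessor); the accessor index 0 is a placeholder for the accessors created in step S502, and the generator string is an illustrative assumption.

```python
import json

# Skeleton of the glTF JSON written in step S501: one scene, one node, one mesh.
gltf = {
    "asset": {"version": "2.0", "generator": "paracraft exporter (illustrative)"},
    "scene": 0,
    "scenes": [{"nodes": [0]}],          # every node listed here must be a root node
    "nodes": [{"mesh": 0}],
    "meshes": [{
        "primitives": [{                 # each primitive maps to one GPU draw call
            "attributes": {"POSITION": 0},
            "mode": 4                    # 4 = TRIANGLES
        }]
    }],
}
print(json.dumps(gltf, indent=2))
```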
Step S502: and writing the target parameters of the vertex information, the maximum values and the minimum values corresponding to the target parameters and the key frame data into corresponding positions in the target format model.
Specifically, the preset software writes the target parameters of the vertex information obtained in step S201 and the corresponding maximum and minimum values thereof into corresponding positions in the target format model, and writes the key frame data corresponding to the animation data or the bone data obtained in step S302 into corresponding positions in the target format model.
In some embodiments, the paracraft software writes the vertex data and animation data into the buffers, accessors, samplers and animations of the glTF model, where all large blocks of data are stored in buffers and retrieved through accessors. Buffers store binary data; the binary data requires no decompression and therefore no further parsing, and any number of buffers may be present, which gives flexibility for a variety of applications. Furthermore, all accessors are stored in the accessors array, samplers are stored in the samplers array, and all animations are stored in the animations array.
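Continuing the example, the sketch below packs three vertex positions into a binary buffer embedded as a data URI and creates the matching bufferView and accessor, including the min/max values from step S202. The single tightly packed buffer is an assumption made for the illustration; the actual buffer layout chosen by paracraft is not described in the document.

```python
import base64
import struct

positions = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

# Pack the positions as little-endian 32-bit floats into one binary buffer.
blob = b"".join(struct.pack("<3f", *p) for p in positions)

buffers = [{
    "byteLength": len(blob),
    "uri": "data:application/octet-stream;base64," + base64.b64encode(blob).decode(),
}]
bufferViews = [{"buffer": 0, "byteOffset": 0, "byteLength": len(blob)}]
accessors = [{
    "bufferView": 0,
    "componentType": 5126,          # 5126 = FLOAT
    "count": len(positions),
    "type": "VEC3",
    "min": [min(p[i] for p in positions) for i in range(3)],
    "max": [max(p[i] for p in positions) for i in range(3)],
}]
```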
Step S503: and writing the first format picture into a corresponding position in the target format model.
Specifically, the preset software writes the first format picture obtained in step S402 into a corresponding position in the target format model.
In some embodiments, the paracraft software writes the texture data and the png format pictures into the textures and images of the glTF model respectively, where all textures are stored in the textures array and the images referenced by the textures are stored in the images array.
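The texture-related part of the glTF JSON described here can be sketched as follows; the file name texture0.png refers back to the illustrative picture from step S402, and the sampler filter values are assumptions for the example.

```python
# Reference the png picture produced in step S402 from the glTF JSON:
# images holds the picture, textures points a texture at that image via a sampler.
images = [{"uri": "texture0.png"}]
samplers = [{"magFilter": 9729, "minFilter": 9729}]   # 9729 = LINEAR
textures = [{"sampler": 0, "source": 0}]
```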
Further, before the step of writing the mesh into the corresponding position in the target format model, the method further includes: acquiring the position and hierarchy level of the bone based on the bone index value of the bone, generating skin data according to the position and hierarchy level of the bone, and writing the skin data into a corresponding position in the target format model.
Specifically, the bone index value provides a path for finding the bones, making it convenient to locate the positions and hierarchy levels of the bones. The preset software finds the positions and hierarchy levels of the bones in the preset format model according to the bone index values, generates skin data based on the positions and hierarchy levels of the bones, where the skin data includes all the bones and the matrices describing how the bones change between spatial hierarchy levels, and writes the skin data into the corresponding position in the target format model.
In some embodiments, the paracraft software finds the position and hierarchy level of each bone according to the bone's hierarchy index value, generates the skin data accordingly, and writes it into the skins of the glTF model; all skins are stored in the skins array of the glTF model. The paracraft software then writes the vertex data, animation data and texture data into the glTF model according to the rules corresponding to the glTF format, finally converting the ParaX model into a glTF format file.
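A minimal example of the skins entry mentioned above, under the assumption of a two-bone hierarchy; in glTF, joints lists the node indices of the bones and inverseBindMatrices points at an accessor holding one 4x4 matrix per joint, which is how the per-bone matrices of the skin data are stored. The node and accessor indices below are placeholders.

```python
# Sketch of a glTF skins entry for a two-bone hierarchy (nodes 1 and 2).
nodes = [
    {"mesh": 0, "skin": 0},          # the skinned mesh
    {"children": [2]},               # root bone
    {},                              # child bone
]
skins = [{
    "skeleton": 1,                   # root of the bone hierarchy
    "joints": [1, 2],
    "inverseBindMatrices": 3,        # index of the MAT4 accessor (placeholder)
}]
```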
According to the file format conversion method described above, the preset software analyzes the preset format model to obtain the model data corresponding to the preset format model dedicated to the preset software, extracts the vertex data, texture data and preset data in the model data, and writes them into the target format model according to the writing rule corresponding to the target format file, so that the preset format model is converted into the target format model. In this way, the preset software converts the preset format model into a target format file in a general standard format, improving the universality and general applicability of the preset format model.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of a file format conversion apparatus 60 provided in the present application, which includes a memory 601 and a processor 602 coupled to the memory 601. The memory 601 is used for storing program data (not shown), and the processor 602 is used for executing the program data to implement the file format conversion method in the above embodiments, and for the description of the related contents, reference is made to the detailed description of the above method embodiments, which is not repeated herein for brevity.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of a computer storage medium provided in the present application, the computer storage medium 70 stores program data 700, and the program data 700 is used to implement the file format conversion method in the above embodiments when executed by a processor, and for a description of relevant contents, reference is made to the detailed description of the above method embodiments, which is not repeated herein.
It should be noted that, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A file format conversion method, characterized in that the conversion method comprises:
analyzing a preset format model by preset software to obtain model data corresponding to the preset format model;
and extracting vertex data, texture data and preset data in the model data, and writing the vertex data, the texture data and the preset data into a target format model according to a writing rule corresponding to a target format file so that the preset software outputs the target format file.
2. The conversion method according to claim 1, wherein the step of extracting vertex data in the model data comprises:
acquiring the number of vertices and the vertex information of each vertex, and confirming target parameters in the vertex information;
calculating the maximum value and the minimum value corresponding to each target parameter;
the target parameters at least comprise one or more of the coordinates, normal lines, texture coordinates, bone index values and bone weight values of the vertexes.
3. The conversion method according to claim 2, wherein the vertex information further includes a vertex index value corresponding to each vertex, and after the step of calculating the maximum value and the minimum value corresponding to each target parameter, the method further includes:
reading a first threshold value of the vertex; wherein the first threshold is greater than or equal to 3;
finding the corresponding target vertices based on the vertex index values, and connecting groups of target vertices whose number is not less than the first threshold into polygons, so that at least one polygon forms a mesh.
4. The conversion method according to claim 3, wherein the preset data is animation data, and the step of extracting the preset data from the model data includes:
acquiring an animation sequence and a skeleton corresponding to the animation data;
and calculating key frame data corresponding to the bone based on the animation sequence and the bone.
5. The conversion method according to claim 4,
the key frame data includes a time interval between adjacent key frames, and corresponding translational change values, rotational change values, and scaling change values between adjacent key frames.
6. The conversion method according to claim 4, wherein the step of extracting texture data in the model data comprises:
calculating a target index value based on the model data to search texture data according to the target index value;
a first format picture is generated based on the texture data.
7. The conversion method according to claim 6, wherein the step of writing the vertex data, the texture data and the preset data into the target format model according to the writing rule corresponding to the target format file comprises:
writing the mesh into a corresponding location in the target format model;
writing the target parameters of the vertex information, the maximum values and the minimum values corresponding to the target parameters, and the key frame data into corresponding positions in the target format model;
and writing the first format picture into a corresponding position in the target format model.
8. The conversion method according to claim 7, wherein before the step of writing the mesh into the corresponding location in the target format model, the method further comprises:
acquiring the position and hierarchy level of the bone based on the bone index value of the bone, generating skin data according to the position and hierarchy level of the bone, and writing the skin data into a corresponding position in the target format model.
9. A file format conversion device, comprising a memory and a processor coupled to the memory;
wherein the memory is used for storing program data, and the processor is used for executing the program data to realize the file format conversion method of any one of claims 1-8.
10. A computer storage medium, characterized in that the computer storage medium stores program data for implementing a file format conversion method according to any one of claims 1 to 8 when executed by a processor.
CN202111683153.7A, filed 2021-12-31 (priority date 2021-12-31): File format conversion method, conversion device and computer storage medium. Status: Pending. Publication: CN114359461A (en).

Priority Applications (1)

Application number CN202111683153.7A; priority date 2021-12-31; filing date 2021-12-31; title: File format conversion method, conversion device and computer storage medium

Publications (1)

Publication number CN114359461A (en); publication date 2022-04-15

Family

ID=81105034

Family Applications (1)

Application number CN202111683153.7A (Pending, CN114359461A (en)); priority date 2021-12-31; filing date 2021-12-31; title: File format conversion method, conversion device and computer storage medium

Country Status (1)

Country Link
CN (1) CN114359461A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220824

Address after: 518104 3rd Floor, Building A2, No. 2072, Jincheng Road, Haoxiang Community, Shajing Street, Baoan District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen Jihui Technology Co.,Ltd.

Address before: 518104 a, 4th floor, building A4, third industrial zone, Shajing Industrial Company, Ho Xiang Road, Shajing street, Bao'an District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN TATFOOK NETWORK TECHNOLOGY Co.,Ltd.
