CN114022616A - Model processing method and device, electronic device and storage medium


Info

Publication number
CN114022616A
Authority
CN
China
Prior art keywords
model
map
frame
original
processing method
Prior art date
Legal status
Granted
Application number
CN202111358022.1A
Other languages
Chinese (zh)
Other versions
CN114022616B (en)
Inventor
Inventor not disclosed
Current Assignee
Beijing Chengshi Wanglin Information Technology Co Ltd
Original Assignee
Beijing Chengshi Wanglin Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Chengshi Wanglin Information Technology Co Ltd
Priority to CN202111358022.1A
Publication of CN114022616A
Application granted
Publication of CN114022616B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A model processing method and device, an electronic device, and a storage medium are provided. The model processing method includes: establishing a frame model based on a model frame of a first model, wherein the model precision of the first model is a first precision, and a coordinate system of the first model is a model coordinate system; generating a model map of the first model based on the first model, mapping the model map from the model coordinate system to a map coordinate system, and creating a map model, wherein the model map has a binding relationship with the model frame; and generating a second model based on the frame model and the map model, wherein the model precision of the second model is a second precision, and the second precision is different from the first precision. The model processing method can optimize the conversion process between models of different precisions.

Description

Model processing method and device, electronic device and storage medium
Technical Field
Embodiments of the present disclosure relate to a model processing method, a model processing apparatus, an electronic device, and a storage medium.
Background
Three-dimensional modeling generally refers to building a model with three-dimensional data, i.e., a three-dimensional model, in a virtual three-dimensional space using three-dimensional modeling software. Three-dimensional models may be used as polygonal representations of objects, which can typically be displayed using a computer or other video device. The displayed object may be a real physical object in the real world, or may be a fictional object.
Disclosure of Invention
At least one embodiment of the present disclosure provides a model processing method, including: establishing a frame model based on a model frame of a first model, wherein the model precision of the first model is a first precision, and a coordinate system of the first model is a model coordinate system; generating a model map of the first model based on the first model, mapping the model map from the model coordinate system to a map coordinate system, and creating a map model, wherein the model map has a binding relationship with the model frame; and generating a second model based on the frame model and the map model, wherein the model precision of the second model is a second precision, and the second precision is different from the first precision.
For example, in a model processing method provided in an embodiment of the present disclosure, the first precision is greater than the second precision.
For example, in a model processing method provided in an embodiment of the present disclosure, a storage format of the second model is different from a storage format of the first model.
For example, in the model processing method provided in an embodiment of the present disclosure, a storage format of the frame model is different from the storage format of the first model.
For example, the model processing method provided by an embodiment of the present disclosure further includes: obtaining an original model; and adjusting model parameters of the original model to create the first model.
For example, in a model processing method provided in an embodiment of the present disclosure, adjusting the model parameters of the original model to create the first model includes: determining whether the original model frame of the original model matches the original model map of the original model; and, in response to the original model frame matching the original model map, adjusting the model parameters of the original model to create the first model.
For example, in the model processing method provided in an embodiment of the present disclosure, adjusting the model parameters of the original model to create the first model further includes: in response to the original model frame not matching the original model map, adjusting the binding relationship between the original model frame and the original model map.
For example, in the model processing method provided in an embodiment of the present disclosure, adjusting the model parameters of the original model to create the first model further includes: in response to the original model frame not matching the original model map, deleting original model maps that are redundant relative to the original model frame or supplementing original model maps that are missing relative to the original model frame.
For example, in a model processing method provided in an embodiment of the present disclosure, creating the frame model based on a model frame of the first model includes: obtaining a first preprocessing model according to the number of faces of the first model; and creating the frame model based on a model frame of the first preprocessing model.
For example, in the model processing method provided in an embodiment of the present disclosure, obtaining the first preprocessing model according to the number of faces of the first model includes: determining whether the number of faces of the first model is greater than a reference face count; in response to the number of faces of the first model being greater than the reference face count, performing face reduction on the first model to obtain the first preprocessing model, wherein the number of faces of the first preprocessing model is less than or equal to the reference face count; and in response to the number of faces of the first model being less than or equal to the reference face count, using the first model as the first preprocessing model.
For example, in a model processing method provided in an embodiment of the present disclosure, creating the frame model based on a model frame of the first preprocessing model includes: deleting material information in the first preprocessing model to obtain the frame model, wherein the material information comprises mapping information of the first preprocessing model.
For example, in a model processing method provided in an embodiment of the present disclosure, creating the frame model based on a model frame of the first model further includes: correcting the frame model based on the model structure of the frame model.
For example, in a model processing method provided in an embodiment of the present disclosure, generating a model map of the first model based on the first model includes: determining whether the generated model map of the first model is accurate; and in response to the generated model map of the first model being inaccurate, regenerating the model map of the first model.
For example, in a model processing method provided in an embodiment of the present disclosure, mapping the model map from the model coordinate system to the map coordinate system and creating the map model includes: mapping the model map from the model coordinate system to the map coordinate system to obtain a first map corresponding to the model map, wherein the map model comprises the first map.
For example, in a model processing method provided in an embodiment of the present disclosure, generating the second model based on the frame model and the map model includes: mapping the first map from the map coordinate system to a plane coordinate system to obtain a second map corresponding to the first map; determining the binding relationship between the second map and the model frame according to the binding relationship between the model map and the model frame; and binding the second map with the frame model according to the binding relationship between the second map and the model frame to generate the second model.
For example, in a model processing method provided in an embodiment of the present disclosure, generating the second model based on the frame model and the map model further includes: assigning material attributes to the second map based on the material information of the first model.
For example, in the model processing method provided by an embodiment of the present disclosure, each step in the model processing method is suitable for being executed in the same software environment.
At least one embodiment of the present disclosure also provides a model processing apparatus including a frame model creating unit, a map model creating unit, and a model generating unit; the frame model creating unit is configured to create a frame model based on a model frame of a first model, wherein the model precision of the first model is a first precision, and the coordinate system of the first model is a model coordinate system; the map model creating unit is configured to generate a model map of the first model based on the first model, map the model map from the model coordinate system to a map coordinate system, and create a map model, wherein the model map has a binding relationship with the model frame; and the model generating unit is configured to generate a second model based on the frame model and the map model, wherein the model precision of the second model is a second precision, and the second precision is different from the first precision.
At least one embodiment of the present disclosure also provides an electronic device comprising a processor and a memory; the memory stores computer-executable instructions configured to be executed by the processor to implement the model processing method of any embodiment of the present disclosure.
At least one embodiment of the present disclosure also provides a storage medium storing computer-executable instructions that, when executed by a processor, implement the model processing method according to any one of the embodiments of the present disclosure.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings of the embodiments will be briefly introduced below, and it is apparent that the drawings in the following description relate only to some embodiments of the present disclosure and are not limiting to the present disclosure.
FIG. 1 is a schematic flowchart of a model processing method provided by some embodiments of the present disclosure;
FIG. 2A is a schematic diagram of a first model provided by some embodiments of the present disclosure;
FIG. 2B is a schematic diagram of a frame model provided by some embodiments of the present disclosure;
FIG. 2C is a schematic diagram of a map model provided by some embodiments of the present disclosure;
FIG. 3 is a schematic flowchart of another model processing method provided by some embodiments of the present disclosure;
FIG. 4 is a schematic flowchart of step S22 in a model processing method provided by some embodiments of the present disclosure;
FIG. 5 is a schematic flowchart of step S11 in a model processing method provided by some embodiments of the present disclosure;
FIG. 6 is a schematic flowchart of step S401 in a model processing method provided by some embodiments of the present disclosure;
FIG. 7 is a schematic flowchart of step S12 in a model processing method provided by some embodiments of the present disclosure;
FIG. 8 is a schematic flowchart of step S13 in a model processing method provided by some embodiments of the present disclosure;
FIG. 9 is a schematic flowchart of a specific example of a model processing method provided by some embodiments of the present disclosure;
FIG. 10 is a schematic block diagram of a model processing apparatus provided by some embodiments of the present disclosure;
FIG. 11 is a schematic block diagram of another model processing apparatus provided by some embodiments of the present disclosure;
FIG. 12 is a schematic block diagram of an electronic device provided by some embodiments of the present disclosure;
FIG. 13 is a schematic block diagram of another electronic device provided by some embodiments of the present disclosure; and
FIG. 14 is a schematic diagram of a storage medium provided by some embodiments of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings of the embodiments of the present disclosure. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of protection of the disclosure.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The use of "first," "second," and similar terms in this disclosure is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. Also, the use of the terms "a," "an," or "the" and similar referents do not denote a limitation of quantity, but rather denote the presence of at least one. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
Three-dimensional modeling techniques are now widely used in a variety of different industries and fields. For example, in the medical industry, three-dimensional modeling techniques may be employed to produce accurate three-dimensional models of, for example, organs or medical instruments; in the building industry, the constructed three-dimensional model can be used for displaying buildings, home decoration or landscapes and the like; in the video game industry, three-dimensional models can be used as production resources in, for example, computers and video games to enhance the visual and interactive experiences of users.
For example, in the building industry, when three-dimensional models are used to show a user information such as the structural layout, home furnishing, and decorative design of a house, modeling techniques are generally used to construct multiple models of the house with different precisions to meet different practical application requirements. For example, when modeling a house, a high-precision model of the house is usually established so that designers can further draw, modify, and adjust the model; meanwhile, due to limitations such as device performance, the high-precision model may not be directly displayable on some clients or application terminals, so a low-precision model of the house is usually also established, allowing the built house model to display normally on devices of different types and performance levels.
In general, when building house models with different precisions, operations such as modification or adjustment may be performed on an existing house model to convert it into a new house model with a different precision. However, when converting between house models of different precisions, for example when generating a corresponding low-precision house model from an existing high-precision house model, the storage formats, supported operating environments, and the like of the two models may differ, and the binding relationship between the model frame and the material information (or texture information, i.e., the texture maps of the model) is therefore often lost or confused during the conversion. As a result, a user may need to spend a large amount of additional manual time and effort re-binding and re-matching the model frame with the material information, which increases operation cost and severely reduces model conversion efficiency.
Moreover, because the storage formats supported by different operating software or operating environments may differ, converting between a high-precision house model and a low-precision house model with different storage formats may require additional format conversion operations in different operating software or operating environments. This not only further increases operation cost, but also makes the binding relationship between the model frame and the material information easy to lose or confuse during the format conversion.
At least one embodiment of the present disclosure provides a model processing method, including: establishing a frame model based on a model frame of a first model, wherein the model precision of the first model is a first precision, and a coordinate system of the first model is a model coordinate system; generating a model map of the first model based on the first model, mapping the model map from a model coordinate system to a map coordinate system, and creating a map model, wherein the model map and a model frame have a binding relationship; and generating a second model based on the frame model and the map model, wherein the model precision of the second model is a second precision, and the second precision is different from the first precision.
The model processing method provided by the above embodiment of the present disclosure makes it possible, for example, to generate a corresponding second model of a different precision from the first model within the same software environment, thereby realizing conversion between the first model and second models of different precisions. This simplifies the conversion process between models of different precisions and reduces or avoids the loss or confusion of the binding or matching relationship between the model frame and the model map that may occur when the second model is generated from the first model, which in turn reduces the required operation time and cost, improves model conversion efficiency, and helps optimize the model conversion process.
In some embodiments of the present disclosure, each step in the model processing method may be executed in the same software environment, which is helpful to further simplify the conversion process between models with different accuracies and improve the model conversion efficiency.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. It should be noted that the same reference numerals in different figures will be used to refer to the same elements that have been described.
Fig. 1 is a schematic flow chart of a model processing method according to some embodiments of the present disclosure.
As shown in fig. 1, the model processing method provided by the embodiment of the present disclosure includes steps S11 to S13.
Step S11: and creating a frame model based on a model frame of the first model, wherein the model precision of the first model is a first precision, and the coordinate system of the first model is a model coordinate system.
Step S12: and generating a model map of the first model based on the first model, mapping the model map from the model coordinate system to the map coordinate system, and creating a map model, wherein the model map has a binding relationship with the model frame.
Step S13: and generating a second model based on the frame model and the map model, wherein the model precision of the second model is a second precision, and the second precision is different from the first precision.
For example, the operation processes of generating the corresponding second models based on the first models in the above steps S11 to S13 may be implemented in the same software environment, or may also be implemented in different software environments according to actual requirements. The model processing method provided by the embodiment of the disclosure can be applied to operations or steps of different software environments without performing additional conversion on the storage format of the model, thereby simplifying the overall operation process of model processing, and simultaneously reducing or avoiding loss or confusion of the binding relationship or matching relationship between the model frame and the model map, thereby reducing the required operation time and operation cost, improving the model conversion efficiency, and optimizing the model conversion process.
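As a purely illustrative sketch of how steps S11 to S13 fit together, the following Python fragment organizes the three steps as one pipeline; Python, the Model structure, and all field and function names here are assumptions of this sketch, not part of the disclosed method.

```python
# Illustrative sketch of steps S11-S13. The Model structure, field names, and
# precision labels are assumptions of this sketch, not the disclosed API.
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Model:
    frame: Any                # model frame (structure only)
    model_map: Optional[Any]  # model map (material/texture) bound to the frame
    binding: Optional[dict]   # binding relationship: map part -> frame part
    precision: str            # model precision label, e.g. "first" / "second"

def create_frame_model(first: Model) -> Model:
    """S11: keep only the model frame (a 'white model' without any map)."""
    return Model(frame=first.frame, model_map=None, binding=None,
                 precision=first.precision)

def create_map_model(first: Model) -> Model:
    """S12: carry the model map and its frame binding into map space."""
    # A real implementation would also re-map coordinates here (step S503).
    return Model(frame=None, model_map=first.model_map,
                 binding=first.binding, precision=first.precision)

def generate_second_model(frame_model: Model, map_model: Model) -> Model:
    """S13: bind the map model onto the frame model at the second precision."""
    return Model(frame=frame_model.frame, model_map=map_model.model_map,
                 binding=map_model.binding, precision="second")
```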
For example, the software environment may be a UE4 (Unreal Engine 4) software operating environment, and the storage formats adopted by the first model and the second model may be the uasset format, the GLB format, or the FBX format, etc., supported in the UE4 software operating environment.
For example, in some embodiments, all of the steps of the model processing method described above may be adapted to be performed in the same software environment. Therefore, all operation processes of generating the corresponding second model based on the first model can be realized in the UE4 software operation environment, and the storage formats of the first model and the second model do not need to be converted through different software operation environments, so that additional operation steps are omitted, the operation flow is simplified, the operation cost is further reduced, and the operation efficiency is improved.
For example, in some embodiments, the specific implementation of each step in the model processing method may also be suited to execution in different software environments according to different actual requirements. For example, the above step S13 may be implemented jointly based on specific operations performed in the UE4 software operating environment and in another software operating environment for, e.g., image processing, such as a DCC (Digital Content Creation) software operating environment. This can improve the flexibility and practicability of the model processing method provided by the embodiments of the present disclosure and meet different user requirements.
For example, for step S11, the model frame does not include the model map of the first model, i.e., it does not include the texture map of the first model. In other words, the model frame represents the frame structure of the first model and does not include mapping information, material information, texture information, or the like of the first model. Accordingly, the frame model created based on the model frame is a "white model" corresponding to the first model, obtained by deleting mapping information such as material and texture from the first model. That is, compared with the first model, none of the frame structures in the frame model (i.e., the model frame of the frame model) is bound or matched with a model map or texture map; equivalently, the mapping information corresponding to the frame structures in the frame model is set to blank or to initial parameters. This facilitates appropriately modifying or adjusting the frame structures of the frame model in subsequent steps, reduces the required amount of operations and operation cost, and improves operation efficiency.
For example, in step S12, the model map is used to represent map information such as material and texture in the first model. Therefore, by mapping the model map based on the first model from the model coordinate system to the map coordinate system and creating a new map model, the model map can be adjusted or modified in the map coordinate system according to different actual requirements in the subsequent step, and the loss or confusion of the binding relationship or matching relationship between the model map and the model frame can be avoided through the mapping operation, so that the generation of a new second model according to the corresponding map information and the frame information in the subsequent step S13 is facilitated.
For example, for step S13 above, the accuracy of the model may refer to the accuracy or precision of the modeled object exhibited by the model. For example, the higher the accuracy of the model, the more realistic the model is to display the modeled object, i.e., closer to and consistent with the actual structure and appearance of the modeled object. For example, the accuracy of the model may be represented by the number of faces or the texture pattern included in the model, for example, the higher the accuracy of the model is, the more the number of faces included in the model is, the finer the texture pattern is, and accordingly, the higher the accuracy and precision of the modeled object is, for example, the more detailed features of the modeled object are displayed; accordingly, the memory space required and the performance requirements for the operating environment and the operating equipment may be relatively high.
For example, in step S13, the frame structure in the frame model or the map information in the map model may be modified or adaptively adjusted, and when the map information in the map model is matched and bound with the frame structure in the frame model, a second model with a different precision from the first model may be obtained, thereby realizing conversion between models with different precisions based on the same modeling object.
For example, the first model and the second model may be two-dimensional models or three-dimensional models constructed based on various types of physical objects or virtual objects, for example, the physical objects or virtual objects may be buildings (e.g., houses, etc.), furniture, electric appliances, home decorations, vehicles, characters, animals, daily necessities, musical instruments, art, and the like, and the embodiments of the present disclosure do not specifically limit the types, structures, forms, and the like of the modeling objects.
For example, in some embodiments of the present disclosure, the first accuracy of the first model is greater than the second accuracy of the second model, i.e., the value of the first accuracy is greater than the value of the second accuracy, and the model accuracy of the first model is higher than the model accuracy of the second model. In other words, the first model can exhibit more detailed features about the modeled object than the second model, with higher precision and accuracy, e.g., a greater number of facets or more texture patterns included in the first model. For example, the memory space required for the first model and the performance requirements for the operating environment and operating equipment are also relatively higher than for the second model.
For example, the first accuracy of the first model may be a high accuracy and the second accuracy of the second model may be a low accuracy, i.e., the first model may be a high accuracy model and the second model may be a low accuracy model. Therefore, the second model with lower precision is generated based on the first model with higher precision, the high-precision model can be stored, displayed and modified according to actual requirements, and meanwhile, the low-precision model which can be suitable for more different operating environments, different application scenes or different performance devices is provided, so that the application range of different models constructed for the same modeling object is expanded, and the universality and the practicability of the constructed models are improved.
It should be noted that, in some other embodiments of the present disclosure, the first accuracy of the first model may be smaller than the second accuracy of the second model, that is, the value of the first accuracy is smaller than the value of the second accuracy, and the model accuracy of the first model is lower than the model accuracy of the second model, for example, the first model may be a low-accuracy model, and the second model may be a high-accuracy model. In other words, the model processing method provided by the above embodiment of the present disclosure may be applied to both the operation process of generating the corresponding low-precision model based on the high-precision model and the operation process of generating the corresponding high-precision model based on the low-precision model.
For example, the storage format of the second model is different from the storage format of the first model. For example, taking the first model as a high-precision model and the second model as a low-precision model as an example, compared with the first model, the generated second model may adopt a different storage format with higher universality or applicability, or may also adopt a different storage format with less occupied storage space, so that the first model and the second model are respectively suitable for different operating environments or software to better serve different application scenarios.
For example, the high-precision first model may be stored in a storage format such as uasset, i.e., the high-precision model may be stored as a uasset file, and the low-precision second model may be stored in a storage format such as GLB or FBX, i.e., the low-precision model may be stored as a GLB file or an FBX file. Therefore, the high-precision model and the low-precision model can be applied to different application scenarios to meet different practical application requirements. For example, a high-precision model in the uasset storage format can serve designers in creating, drawing, and modifying the model, so that the high-precision model better reflects the detailed characteristics of the modeling object; a low-precision model in a storage format such as GLB or FBX can better meet the operating requirements of different clients or application terminals and suit different operating environments, making the model generated from the modeling object applicable to devices or terminals of different performance levels and enhancing its universality and applicability.
For example, the storage format of the frame model generated based on the first model is different from that of the first model, for example, compared with the first model, the generated frame model may adopt a different storage format with higher universality or applicability, or may also adopt a different storage format with less occupied storage space, so as to facilitate corresponding operations such as modifying, adjusting and the like on the frame model.
In the following, some embodiments of the present disclosure describe the model processing method in detail, taking as an example a modeling object that is the home design of a bed in a room; it should be noted, however, that the embodiments of the present disclosure include but are not limited to this.
Fig. 2A is a schematic diagram of a first model provided by some embodiments of the present disclosure, for example, a modeling object of the first model is a household design related to a bed, for example, including features of a bed structure, a decoration design, a display and ornament (for example, bedding articles), and the like, that is, the first model is modeled based on the bed in an actual household scene, for example.
Fig. 2B is a schematic diagram of a framework model provided by some embodiments of the present disclosure, for example, the framework model is used to display a model framework corresponding to the first model constructed based on the bed shown in fig. 2A, that is, the framework model shown in fig. 2B is created based on the model framework of the first model shown in fig. 2A. The frame model shown in fig. 2B does not contain the model map in the first model shown in fig. 2A, for example, map information of the frame structure of the frame model shown in fig. 2B may be set to an initial value, for example, not containing color information, texture information, and the like regarding the layout design of the bed.
Fig. 2C is a schematic diagram of a mapping model provided by some embodiments of the present disclosure, for example, the mapping model corresponds to the model mapping of the bed-based constructed first model shown in fig. 2A, that is, the mapping model shown in fig. 2C is created based on the model mapping of the first model shown in fig. 2A. For example, mapping information in the first model shown in fig. 2A, such as color information, pattern information, texture information, and the like regarding the layout design of the bed, is shown in fig. 2C.
For example, the first model and the second model in the above steps S11 to S13 may be home design models for showing features of the bed, such as structure, ornamental design, display furnishing, and the like, to the user. The embodiment of the disclosure takes a first model as a high-precision model (for example, the first model shown in fig. 2A) related to a home design, and a second model as a low-precision model (for example, a second model generated based on the first model shown in fig. 2A) related to a home design as an example, and the model processing method provided by the embodiment of the disclosure is specifically described with reference to fig. 2A to 2C, but it should be noted that the embodiment of the disclosure includes but is not limited thereto.
Fig. 3 is a schematic flow chart of another model processing method provided in some embodiments of the present disclosure. It should be noted that, except for steps S21 and S22, other steps S23 to S25 in the model processing method shown in fig. 3 are substantially the same as or similar to steps S11 to S13 in the model processing method shown in fig. 1, and repeated description is omitted.
For example, as shown in fig. 3, in some embodiments of the present disclosure, the model processing method includes the following steps S21 and S22 in addition to steps S23 to S25.
Step S21: and acquiring an original model.
Step S22: and adjusting model parameters of the original model to create a first model.
For example, the original model may be selected directly from models that have already been created, or may be generated by drawing based on the modeling object.
For example, in step S22, the model parameters of the original model may be adjusted or modified. The adjustment or modification may include correcting or optimizing the binding or matching relationship between the model frame and the model map of the original model, e.g., fixing binding or matching relationships that contain errors, omissions, or confusion, adjusting the binding relationship between the existing model frame and model map, deleting model maps that are redundant with respect to the model frame, or adding model maps that are missing with respect to the model frame.
For example, with respect to step S22, by adjusting the model parameters of the original model, on the basis of implementing correction and optimization of the original model, the model accuracy of the original model may also be modified or adjusted appropriately according to actual requirements, so as to facilitate subsequent corresponding model processing operations based on the obtained first model.
Fig. 4 is a schematic flowchart of step S22 in a model processing method according to some embodiments of the present disclosure. For example, as shown in fig. 4, in some embodiments of the present disclosure, the above step S22 may include the following steps S301 and S302.
Step S301: it is determined whether there is a match between the original model frame of the original model and the original model map of the original model.
Step S302: in response to the original model frame matching the original model map, model parameters of the original model are adjusted, creating a first model.
For example, with respect to the above steps S301 and S302, after determining that the original model frame of the original model matches the original model map, according to different practical application requirements, structural features of the original model frame or model parameters of the original model map, such as color, brightness, texture, etc., may be modified to obtain the optimized first model. For example, in step S302, the model parameters of the original model may be adjusted according to actual needs, so as to adaptively adjust, for example, structural features, pattern texture features, model accuracy, and the like of the original model, thereby facilitating a subsequent corresponding model processing or model conversion operation based on the obtained first model.
For another example, in some embodiments of the present disclosure, after the step S301 is executed and the original model frame of the original model is determined to be matched with the original model map, the model parameters of the original model may not be modified, and the original model may be directly used as the first model, so as to simplify the operation flow of model processing and improve the processing efficiency.
For example, in some embodiments of the present disclosure, the step S22 may further include the following step S303.
Step S303: and in response to the original model frame not matching the original model map, adjusting the binding relationship between the original model frame and the original model map.
For example, in the case that there is a mismatch between the original model frame and the original model map, for example, a binding relationship between the original model frame and the original model map is incorrect or disordered, the binding relationship that is incorrect or disordered needs to be modified or re-matched, so that the original model frame and the original model map after the binding relationship is re-established can be correctly matched with each other.
For example, in some embodiments of the present disclosure, the step S22 may further include the following step S304.
Step S304: in response to the original model frame not matching the original model map, the original model map that is redundant with respect to the original model frame is deleted or the original model map that is missing with respect to the original model frame is supplemented.
For example, when the original model frame does not match the original model map, part of the original model frame may lack an original model map with an established binding relationship; alternatively, even when every part of the original model frame has a correctly matched and bound original model map, the original model may still contain redundant model maps, i.e., redundant map information with no binding relationship to the original model frame. Step S304 can therefore supplement the original model maps missing relative to the original model frame, improving the accuracy and precision of the obtained first model, and delete the redundant map information, reducing the redundant computation that may otherwise arise, lowering operation cost, and improving operation efficiency.
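The checks of steps S301 to S304 can be sketched as follows, under the illustrative assumption that frame parts and model maps are keyed by shared identifiers; the keying scheme, like the Python form, is not prescribed by the disclosure.

```python
# Illustrative sketch of steps S301-S304, assuming frame parts and model maps
# share string identifiers (the keying scheme is an assumption of the sketch).
def reconcile_frame_and_maps(frame_ids: set, map_ids: set):
    """Return (redundant map ids to delete, missing map ids to supplement)."""
    redundant = map_ids - frame_ids  # maps with no frame part to bind to (S304)
    missing = frame_ids - map_ids    # frame parts lacking a bound map (S304)
    return redundant, missing

redundant, missing = reconcile_frame_and_maps(
    frame_ids={"bed_frame", "pillow"},
    map_ids={"bed_frame", "lamp"},
)
# redundant == {"lamp"} (delete); missing == {"pillow"} (supplement)
```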
Fig. 5 is a schematic flowchart of step S11 in a model processing method according to some embodiments of the present disclosure. For example, as shown in fig. 5, in some embodiments of the present disclosure, the above step S11 may include the following steps S401 and S402.
Step S401: and obtaining a first preprocessing model according to the number of the surfaces of the first model.
Step S402: a framework model is created based on a model framework of the first pre-processing model.
For example, the number of faces of the model may be used to represent the accuracy of the model, e.g., the greater the number of faces included in the model, the greater the accuracy and precision of the model representation for the modeled object, and the more detailed features about the modeled object, for example, may be represented.
For example, in some embodiments of the present disclosure, the step S402 includes: deleting the material information in the first preprocessing model to obtain the frame model. For example, the material information includes mapping information of the first preprocessing model, such as pattern information and texture information about the first preprocessing model.
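A minimal sketch of this deletion step follows, assuming a simple mesh structure that is the sketch's own invention rather than anything specified by the disclosure.

```python
# Illustrative sketch of step S402: delete the material information (including
# map information) to obtain the frame ("white") model. The Mesh structure is
# an assumption of this sketch.
from dataclasses import dataclass, field

@dataclass
class Mesh:
    vertices: list
    faces: list
    material: dict = field(default_factory=dict)  # material/mapping information

def to_frame_model(pre_processed: Mesh) -> Mesh:
    # Keep the geometry; blank out all material and mapping information.
    return Mesh(vertices=pre_processed.vertices,
                faces=pre_processed.faces,
                material={})
```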
For example, as shown in fig. 5, in some embodiments of the present disclosure, the step S11 may further include the following step S403.
Step S403: and correcting the frame model based on the model structure of the frame model.
For example, the number of faces of the obtained frame model may have been modified or adjusted when the first preprocessing model was generated, for example by a face-reduction or face-addition operation. Executing step S403 can therefore avoid errors or confusion that may arise from such operations, for example unreasonable or confused structural features of the model after face reduction or face addition, which helps further optimize the generated frame model and improves the accuracy and precision of the second model subsequently generated based on the frame model.
Fig. 6 is a schematic flowchart of step S401 in a model processing method according to some embodiments of the present disclosure. For example, as shown in fig. 6, in some embodiments of the present disclosure, the above step S401 may include the following steps S4011 to S4013.
Step S4011: it is determined whether the number of planes of the first model is greater than the number of reference planes.
Step S4012: and in response to the fact that the number of the surfaces of the first model is larger than the number of the reference surfaces, performing surface subtraction on the first model to obtain a first preprocessing model, wherein the number of the surfaces of the first preprocessing model is smaller than or equal to the number of the reference surfaces.
Step S4013: and in response to the number of planes of the first model being less than or equal to the number of reference planes, treating the first model as a first preprocessing model.
For example, taking the example of generating a low-accuracy second model based on a high-accuracy first model, the number of faces included in the second model may be relatively small in order to reduce the model accuracy of the generated second model. Therefore, when the framework model is created, whether the surface reduction processing needs to be performed on the first model can be determined according to the number of the surfaces contained in the first model, so that the obtained framework model can contain relatively less number of the surfaces. In other words, the number of faces included in the first pre-processing model, the frame model, and the second model may be substantially the same or substantially consistent with each other.
For example, the number of reference planes may be determined according to actual application requirements, for example, may be determined according to model accuracy of the second model that needs to be provided, standards and requirements of an application scenario or an operating environment, and the like, thereby facilitating that the subsequently generated second model may be applicable to different application devices or application scenarios.
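The face-count decision of steps S4011 to S4013 can be sketched as follows; the decimation ratio mirrors the face-reduction ratio described in the specific example of fig. 9 below (reference face count divided by the current face count), while the function form itself is an assumption.

```python
# Illustrative sketch of steps S4011-S4013. The ratio follows the specific
# example of fig. 9 (reference face count / current face count); the function
# form itself is an assumption of this sketch.
def face_reduction_ratio(face_count: int, reference_faces: int) -> float:
    """Ratio to apply to the first model; 1.0 means no face reduction."""
    if face_count > reference_faces:          # S4011/S4012: over the budget
        return reference_faces / face_count   # reduce to within the budget
    return 1.0                                # S4013: use the model as-is

assert face_reduction_ratio(120_000, 30_000) == 0.25  # face reduction needed
assert face_reduction_ratio(20_000, 30_000) == 1.0    # model used unchanged
```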
Fig. 7 is a schematic flowchart of step S12 in a model processing method according to some embodiments of the present disclosure. For example, as shown in fig. 7, in some embodiments of the present disclosure, the above step S12 may include the following steps S501 and S502.
Step S501: it is determined whether the generated model map of the first model is accurate.
Step S502: in response to the generated model map of the first model being inaccurate, the model map of the first model is regenerated.
For example, with respect to the above steps S501 and S502, before mapping the model map obtained based on the first model from the model coordinate system to the map coordinate system, the generated model map may be compared with, for example, actual structure, form, pattern, and texture information of the modeling object, so as to ensure that the obtained model map is consistent with the modeling object, thereby facilitating to improve the accuracy and precision of the second model generated based on the map model and the frame model.
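A sketch of this check-and-regenerate loop is given below; the accuracy predicate and the retry bound are illustrative assumptions, not part of the disclosure.

```python
# Illustrative sketch of steps S501-S502. The accuracy predicate and the retry
# bound are assumptions of this sketch.
def generate_checked_map(model, generate_map, is_accurate, max_attempts=3):
    """Generate a model map, regenerating it until it matches the object."""
    for _ in range(max_attempts):
        candidate = generate_map(model)  # generate a candidate model map
        if is_accurate(candidate):       # S501: compare against the object
            return candidate
    raise RuntimeError("model map still inaccurate after regeneration")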
For example, as shown in fig. 7, in some embodiments of the present disclosure, the step S12 may further include the following step S503.
Step S503: and mapping the model map from the model coordinate system to the map coordinate system to obtain a first map corresponding to the model map, wherein the map model comprises the first map.
For example, the first map and the model map correspond to each other, so that the mapped first map can also be matched with the model frame of the first model, thereby obtaining the binding relationship between the first map and the model frame.
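As a toy illustration of step S503, the following sketch carries per-vertex data from the model coordinate system into a map coordinate system; a simple planar projection stands in for real UV unwrapping here and is purely an assumption of the sketch.

```python
# Toy illustration of step S503: carry per-vertex map data from the model
# coordinate system into a map coordinate system. A simple planar projection
# stands in for real UV unwrapping and is an assumption of this sketch.
def to_map_coordinates(vertices_3d):
    """Project (x, y, z) model coordinates onto (u, v) map coordinates."""
    return [(x, y) for (x, y, _z) in vertices_3d]

first_map = to_map_coordinates([(0.0, 0.0, 1.2), (1.0, 0.5, 0.8)])
# first_map == [(0.0, 0.0), (1.0, 0.5)]; the map model comprises this first map
```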
Fig. 8 is a schematic flowchart of step S13 in a model processing method according to some embodiments of the present disclosure. For example, as shown in fig. 8, in some embodiments of the present disclosure, the above step S13 may include the following steps S601 to S603.
Step S601: and mapping the first map from the map coordinate system to the plane coordinate system to obtain a second map corresponding to the first map.
Step S602: and determining the binding relationship between the second map and the model frame according to the binding relationship between the model map and the model frame.
Step S603: and binding the second map and the frame model according to the binding relationship between the second map and the model frame to generate a second model.
For example, to facilitate accurate extraction of the map information in the model, the first map and model map may be used to represent three-dimensional map information in the model. For example, the second map may be used to implement conversion of the three-dimensional map information into two-dimensional map information, thereby facilitating subsequent binding or matching between the second map and the model frame, e.g., performing a synthesis process based on the second map and the model frame to generate a corresponding second model.
For example, the second map and the first map correspond to each other, that is, there is also a correspondence between the second map and the model map. Therefore, the second map obtained after mapping can be matched with the model frame of the first model, and the binding relationship between the second map and the model frame is established. And then, after the second map and the frame model are subjected to processing such as binding or synthesis, the corresponding second model can be obtained.
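Steps S601 to S603 can be sketched as follows, assuming for illustration that a binding is a plain dictionary from map regions to frame parts; neither the dictionary form nor the function names come from the disclosure.

```python
# Illustrative sketch of steps S601-S603, assuming a binding is a plain
# dictionary from map regions to frame parts (an assumption of this sketch).
def bind_second_map(frame_model, first_map, model_map_binding):
    # S601: map the first map into a plane coordinate system (toy flattening).
    second_map = [(u, v) for (u, v, *_depth) in first_map]
    # S602: the second map corresponds to the first map, and hence to the model
    # map, so the model map's binding to the model frame carries over.
    second_binding = dict(model_map_binding)
    # S603: bind the second map to the frame model to form the second model.
    return {"frame": frame_model, "map": second_map, "binding": second_binding}
```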
For example, as shown in fig. 8, in some embodiments of the present disclosure, the step S13 may further include the following step S604.
Step S604: and giving a second map material attribute based on the material information of the first model.
For example, the material information may include information on the texture, pattern, structural material, etc. of the modeling object, so that the second map can be matched with the actual pattern, texture, etc. of the modeling object, thereby improving the accuracy and precision of the second model generated based on the second map.
For example, in some embodiments, the above operation process of obtaining a corresponding second map based on a first map in the map model and the operation process of creating the map model may be implemented in the same software environment; alternatively, in some embodiments, the software environment for obtaining the corresponding second map based on the first map in the map model may also be different from the software environment for creating the map model, and this is not particularly limited by the embodiments of the present disclosure.
Fig. 9 is a schematic flow chart of a specific example of a model processing method according to some embodiments of the present disclosure.
For example, taking the home design model for a bed shown in FIGS. 2A to 2C as an example, the model processing method provided by the embodiments of the present disclosure may be implemented in a UE4 software operating environment.
As shown in fig. 9, taking the generation of a corresponding low-precision home model based on a high-precision home model as an example, a high-precision model that has already been created and can run in the UE4 software operating environment, i.e., the original model described above, is obtained first. Then, it is determined whether the material information of the high-precision model is consistent with its frame structure. If they are consistent, for example if the material information and the frame structure correspond to or match each other, the subsequent steps continue; if they are inconsistent, the material information and the frame structure of the high-precision model are corrected or adjusted so that they remain consistent, and the subsequent steps then continue.
After the material information and the frame structure of the high-precision model are matched with each other, the model parameters (e.g., UV parameters) of the high-precision model may be set according to actual requirements to generate a new model A (e.g., the home model shown in fig. 2A), i.e., the first model described above. After model A is obtained, for example, the map information or material information in model A may be extracted to obtain a model map (e.g., LightmapUV) of model A, and the frame information or structure information in model A may be extracted to obtain a model frame of model A.
For example, before the model frame of model A is extracted, it is necessary to determine in advance, according to the number of faces of model A, whether face reduction needs to be performed on model A. Specifically, it is determined whether the number of faces of model A is greater than the preset low-precision reference face count, i.e., the reference face count described above. If the number of faces of model A is greater than the preset reference face count, face reduction is performed on model A. For example, a corresponding face-reduction ratio is calculated, and model A is reduced according to that ratio, where the face-reduction ratio may be the ratio of the preset reference face count to the number of faces of model A; this yields the face-reduced model, i.e., the first preprocessing model described above. If the number of faces of model A is less than or equal to the preset reference face count, no face reduction is needed; for example, the corresponding face-reduction ratio is 1, meaning no face-reduction operation is required, and model A is used directly as the first preprocessing model in the subsequent steps.
For example, after the first preprocessing model is obtained, the mapping information such as the material structure and texture pattern in the first preprocessing model is deleted to obtain the "white model" corresponding to the first preprocessing model, i.e., model B (e.g., the model shown in fig. 2B), which is the frame model described above. For example, model B may be stored and exported in GLB format or FBX format, etc. Then, it is determined whether the obtained model B needs to be calibrated or corrected, for example whether the size, center point, etc. of model B stored in a format such as GLB or FBX conform to the modeling rules, e.g., whether the low-precision model rules are satisfied. If model B conforms to the modeling rules, it can continue to be used in the subsequent steps; if not, model B must be corrected again, e.g., its size and center point, so that it conforms to the modeling rules.
For example, for the extracted model map of the model A, whether the model map is accurate can be determined according to the modeled home object. If the model map of the model A is determined to be accurate, the model map is mapped into the map coordinate system to generate a model C (e.g., the model shown in fig. 2C), i.e., the map model described above. For example, the model map may be unrolled vertex by vertex in the map coordinate system to construct the model C. If the model map of the model A is inaccurate, for example, if its pattern or texture does not match the modeled object, or if image information is missing or lost, a corresponding model map (denoted, for example, as UV0) is extracted from the model A again, and the newly extracted model map is checked again for accuracy.
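Under one reading of this vertex-by-vertex unrolling, the map model is a flat mesh whose vertex positions are the UV coordinates themselves. The sketch below illustrates that reading only; it is not the patent's definitive construction:

```python
def build_map_model(uv, faces):
    """Lay the model map out flat in the map coordinate system: each UV pair becomes
    a vertex at (u, v, 0), and the original face connectivity is kept so the binding
    between the map and the model frame survives the mapping."""
    return {"vertices": [(u, v, 0.0) for (u, v) in uv], "faces": faces}
```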
After the model C is generated, the map information in the model C may be compared and analyzed against the actual home scene to generate the corresponding planar Lightmap, i.e., the second map described above. Furthermore, after the material information or texture information corresponding to the home scene is assigned to the Lightmap, the Lightmap and the model frame may be matched or bound by a synthesis unit or a synthesis plug-in (e.g., Blender), thereby generating the low-precision model for the home object; the low-precision model may be stored, for example, in the GLB format or the FBX format.
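If Blender is used as the synthesis plug-in, the binding and export step might look roughly like the following `bpy` sketch. This is a hedged illustration, not the disclosed implementation: the object name "FrameModel" and the image path are hypothetical, and the Lightmap is assumed to have already been baked into `lightmap.png`.

```python
import bpy

# Assumption: the frame model is already loaded under this (hypothetical) name.
frame = bpy.data.objects["FrameModel"]

# Create a material whose base color texture is the baked Lightmap.
mat = bpy.data.materials.new(name="LightmapMaterial")
mat.use_nodes = True
tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("//lightmap.png")  # hypothetical path
bsdf = mat.node_tree.nodes["Principled BSDF"]
mat.node_tree.links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])

# Bind the material to the frame and export the low-precision model as GLB.
frame.data.materials.append(mat)
bpy.ops.export_scene.gltf(filepath="low_precision_model.glb")
```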
It should be noted that, in the embodiments of the present disclosure, the flow of the model processing method provided above may include more or fewer operations, and these operations may be executed sequentially or in parallel. Although the flow described above includes a plurality of operations in a particular order, it should be clearly understood that the order of these operations is not limited. The model processing method described above may be performed once, or may be performed a plurality of times according to predetermined conditions.
At least one embodiment of the present disclosure further provides a model processing apparatus. Fig. 10 is a schematic block diagram of a model processing apparatus according to some embodiments of the present disclosure.
For example, as shown in fig. 10, the model processing apparatus 700 includes a frame model creation unit 701, a map model creation unit 702, and a model generation unit 703.
The frame model creation unit 701 is configured to create a frame model based on a model frame of a first model, the model precision of the first model being a first precision and the coordinate system of the first model being a model coordinate system. For example, the frame model creation unit 701 may execute step S11 in the model processing method shown in fig. 1.

The map model creation unit 702 is configured to generate a model map of the first model based on the first model, and to map the model map from the model coordinate system into the map coordinate system to create a map model, the model map having a binding relationship with the model frame. For example, the map model creation unit 702 may execute step S12 in the model processing method shown in fig. 1.

The model generation unit 703 is configured to generate a second model based on the frame model and the map model, the model precision of the second model being a second precision that is different from the first precision. For example, the model generation unit 703 may execute step S13 in the model processing method shown in fig. 1.
For example, the frame model creation unit 701, the map model creation unit 702, and the model generation unit 703 include code and programs stored in a memory; a processor may execute the code and programs to implement some or all of the functions of these units as described above. Alternatively, the frame model creation unit 701, the map model creation unit 702, and the model generation unit 703 may be dedicated hardware devices implementing some or all of those functions, for example, one circuit board or a combination of circuit boards. In the embodiments of the present disclosure, the one circuit board or combination of circuit boards may include: (1) one or more processors; (2) one or more non-transitory memories connected to the processors; and (3) firmware stored in the memories and executable by the processors.
It should be noted that the frame model creation unit 701 is configured to implement step S11 shown in fig. 1, the map model creation unit 702 is configured to implement step S12 shown in fig. 1, and the model generation unit 703 is configured to implement step S13 shown in fig. 1. Therefore, for the specific descriptions of these units, reference may be made to the descriptions corresponding to steps S11 to S13 shown in fig. 1 in the embodiments of the model processing method above. In addition, the model processing apparatus achieves technical effects similar to those of the model processing method, which are not repeated here.
Fig. 11 is a schematic block diagram of another model processing apparatus provided in some embodiments of the present disclosure. For example, as shown in fig. 11, the model processing apparatus 800 includes a model acquisition unit 704 in addition to a frame model creation unit 701, a map model creation unit 702, and a model generation unit 703.
For example, the model acquisition unit 704 is configured to: obtain an original model, adjust the model parameters of the original model, and create the first model.

For example, the model acquisition unit 704 may be configured to: determine whether an original model frame of the original model matches an original model map of the original model; and, in response to the original model frame matching the original model map, adjust the model parameters of the original model to create the first model.

For example, the model acquisition unit 704 may also be configured to: in response to the original model frame not matching the original model map, adjust the binding relationship between the original model frame and the original model map.

For example, the model acquisition unit 704 may also be configured to: in response to the original model frame not matching the original model map, delete an original model map that is redundant relative to the original model frame, or supplement an original model map that is missing relative to the original model frame.
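A schematic sketch of this reconciliation, with hypothetical `frame_sections` and `maps` structures standing in for the real data (an illustration only, not the unit's actual implementation):

```python
def reconcile_maps_with_frame(frame_sections: set, maps: dict, placeholder=None) -> dict:
    """Delete maps that have no frame section; supplement sections that have no map."""
    for key in list(maps):
        if key not in frame_sections:
            del maps[key]            # redundant map relative to the frame
    for key in frame_sections:
        if key not in maps:
            maps[key] = placeholder  # supplement a missing map (placeholder content)
    return maps
```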
For example, the model acquisition unit 704 includes code and programs stored in a memory; a processor may execute the code and programs to implement some or all of the functions of the model acquisition unit 704 as described above. Alternatively, the model acquisition unit 704 may be a dedicated hardware device implementing some or all of those functions, for example, a circuit board or a combination of circuit boards, which, in the embodiments of the present disclosure, may include: (1) one or more processors; (2) one or more non-transitory memories connected to the processors; and (3) firmware stored in the memories and executable by the processors. It should be noted that the model acquisition unit 704 may be configured to implement, for example, steps S21 and S22 shown in fig. 3; for its specific description, reference may be made to the descriptions corresponding to steps S21 and S22 shown in fig. 3 in the embodiments of the model processing method above, which are not repeated here.
For example, the frame model creation unit 701 is further configured to: obtain a first preprocessing model according to the number of faces of the first model; and create the frame model based on a model frame of the first preprocessing model.

For example, the frame model creation unit 701 is further configured to: determine whether the number of faces of the first model is greater than a reference number of faces; in response to the number of faces of the first model being greater than the reference number of faces, perform a face reduction operation on the first model to obtain the first preprocessing model, the number of faces of the first preprocessing model being less than or equal to the reference number of faces; and, in response to the number of faces of the first model being less than or equal to the reference number of faces, use the first model as the first preprocessing model.

For example, the frame model creation unit 701 is further configured to: delete the material information in the first preprocessing model to obtain the frame model, the material information including the mapping information of the first preprocessing model.

For example, the frame model creation unit 701 is further configured to: correct the frame model based on the model structure of the frame model.
For example, the map model creation unit 702 is further configured to: determine whether the generated model map of the first model is accurate; and, in response to the generated model map of the first model being inaccurate, regenerate the model map of the first model.

For example, the map model creation unit 702 is further configured to: map the model map from the model coordinate system to the map coordinate system to obtain a first map corresponding to the model map, the map model including the first map.
For example, the model generation unit 703 is further configured to: map the first map from the map coordinate system to a plane coordinate system to obtain a second map corresponding to the first map; determine the binding relationship between the second map and the model frame according to the binding relationship between the model map and the model frame; and bind the second map to the frame model according to the binding relationship between the second map and the model frame to generate the second model.

For example, the model generation unit 703 is further configured to: assign a material attribute to the second map based on the material information of the first model.
At least one embodiment of the present disclosure also provides an electronic device including a processor and a memory. The memory stores one or more computer-executable instructions, which are configured to be executed by the processor to implement the model processing method provided by any embodiment of the present disclosure.
Fig. 12 is a schematic block diagram of an electronic device provided in some embodiments of the present disclosure. As shown in fig. 12, the electronic device 300 includes a processor 310 and a memory 320. The memory 320 non-transitorily stores computer-executable instructions (e.g., one or more computer program modules). The processor 310 is configured to execute the computer-executable instructions; when executed by the processor 310, the instructions may perform one or more steps of the model processing method described above, thereby implementing that method. The memory 320 and the processor 310 may be interconnected by a bus system and/or another form of connection mechanism (not shown).
For example, the processor 310 may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or another form of processing unit having data processing capabilities and/or program execution capabilities; for example, the CPU may have an X86 or ARM architecture. The processor 310 may be a general-purpose processor or a special-purpose processor, and may control other components in the electronic device 300 to perform desired functions.
For example, the memory 320 may include any combination of one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. Non-volatile memory may include, for example, Read Only Memory (ROM), a hard disk, an Erasable Programmable Read Only Memory (EPROM), a portable Compact Disc Read Only Memory (CD-ROM), USB memory, flash memory, and the like. One or more computer program modules may be stored on the computer-readable storage medium and executed by the processor 310 to implement various functions of the electronic device 300. Various applications, various data, and various data used and/or generated by the applications may also be stored on the computer-readable storage medium.
It should be noted that, in the embodiments of the present disclosure, for the specific functions and technical effects of the electronic device 300, reference may be made to the above descriptions of the model processing method and the model processing apparatus, which are not repeated here.
Fig. 13 is a schematic block diagram of another electronic device provided by some embodiments of the present disclosure. The electronic device 400 is, for example, suitable for implementing the model processing method provided by the embodiments of the present disclosure. The electronic device 400 may be a terminal device or the like. The electronic device 400 may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), wearable electronic devices, and the like, and fixed terminals such as digital TVs, desktop computers, smart home devices, and the like. It should be noted that the electronic device 400 shown in fig. 13 is only an example, and does not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 13, the electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 410 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 420 or a program loaded from a storage device 480 into a Random Access Memory (RAM) 430. The RAM 430 also stores various programs and data necessary for the operation of the electronic device 400. The processing device 410, the ROM 420, and the RAM 430 are connected to each other through a bus 440. An input/output (I/O) interface 450 is also connected to the bus 440.
Generally, the following devices may be connected to the I/O interface 450: input devices 460 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; output devices 470 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 480 including, for example, a magnetic tape, a hard disk, and the like; and a communication device 490. The communication device 490 may allow the electronic device 400 to communicate wirelessly or by wire with other electronic devices to exchange data. While fig. 13 illustrates an electronic device 400 having various components, it should be understood that not all illustrated components are required to be implemented or provided; the electronic device 400 may alternatively implement or provide more or fewer components.
For example, according to an embodiment of the present disclosure, the above-described model processing method may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program comprising program code for performing the model processing method described above. In such embodiments, the computer program may be downloaded and installed from a network through communication device 490, or installed from storage device 480, or installed from ROM 420. When executed by the processing device 410, the computer program may implement the functions defined in the model processing method provided by the embodiments of the present disclosure.
Fig. 14 is a schematic diagram of a storage medium according to some embodiments of the present disclosure. For example, as shown in fig. 14, the storage medium 500 may be a non-transitory computer-readable storage medium for storing non-transitory computer-executable instructions 501. The model processing method described in embodiments of the present disclosure may be implemented when the non-transitory computer-executable instructions 501 are executed by a processor, e.g., one or more steps in accordance with the model processing method described above may be performed when the non-transitory computer-executable instructions 501 are executed by a processor.
For example, the storage medium 500 may be applied to the electronic device described above, and for example, the storage medium 500 may include a memory in the electronic device.
For example, the storage medium may include a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a flash memory, or any combination of the above, as well as other suitable storage media.
For example, the description of the storage medium 500 may refer to the description of the memory in the embodiment of the electronic device, and repeated descriptions are omitted. The specific functions and technical effects of the storage medium 500 can be referred to the above descriptions of the model processing method and the model processing apparatus, which are not described herein again.
The following points should also be noted with respect to the present disclosure:

(1) The drawings of the embodiments of the present disclosure relate only to the structures involved in these embodiments; for other structures, reference may be made to common designs.

(2) In the absence of conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with each other to obtain new embodiments.

The above is only a specific implementation of the present disclosure, but the scope of protection of the present disclosure is not limited thereto; the scope of protection of the present disclosure shall be subject to the scope of the claims.

Claims (20)

1. A model processing method, comprising:
establishing a frame model based on a model frame of a first model, wherein the model precision of the first model is a first precision, and a coordinate system of the first model is a model coordinate system;
generating a model map of the first model based on the first model, mapping the model map from the model coordinate system to a map coordinate system, and creating a map model, wherein the model map and the model frame have a binding relationship; and
generating a second model based on the frame model and the map model, wherein the model precision of the second model is a second precision, and the second precision is different from the first precision.
2. The model processing method of claim 1, wherein the first precision is greater than the second precision.
3. The model processing method according to claim 1 or 2, wherein a storage format of the second model is different from a storage format of the first model.
4. The model processing method according to claim 1 or 2, wherein a storage format of the framework model is different from a storage format of the first model.
5. The model processing method according to claim 1 or 2, further comprising:
obtaining an original model; and
adjusting the model parameters of the original model to create the first model.
6. The model processing method of claim 5, wherein adjusting model parameters of the original model, creating the first model, comprises:
determining whether there is a match between an original model frame of the original model and an original model map of the original model;
in response to the original model frame matching the original model map, adjusting model parameters of the original model to create the first model.
7. The model processing method of claim 6, wherein adjusting model parameters of the original model, creating the first model, further comprises:
in response to the original model frame not matching the original model map, adjusting a binding relationship between the original model frame and the original model map.
8. The model processing method of claim 6, wherein adjusting model parameters of the original model, creating the first model, further comprises:
in response to the original model frame not matching the original model map, deleting an original model map that is redundant relative to the original model frame, or supplementing an original model map that is missing relative to the original model frame.
9. The model processing method according to claim 1 or 2, wherein creating the frame model based on the model frame of the first model comprises:

obtaining a first preprocessing model according to the number of faces of the first model; and

creating the frame model based on a model frame of the first preprocessing model.
10. The model processing method of claim 9, wherein obtaining the first preprocessing model according to the number of faces of the first model comprises:
determining whether the number of faces of the first model is greater than a reference number of faces;

in response to the number of faces of the first model being greater than the reference number of faces, performing a face reduction operation on the first model to obtain the first preprocessing model, wherein the number of faces of the first preprocessing model is less than or equal to the reference number of faces; and

in response to the number of faces of the first model being less than or equal to the reference number of faces, using the first model as the first preprocessing model.
11. The model processing method of claim 9, wherein creating the frame model based on the model frame of the first preprocessing model comprises:
deleting material information in the first preprocessing model to obtain the frame model, wherein the material information comprises mapping information of the first preprocessing model.
12. The model processing method according to claim 1 or 2, wherein creating the frame model based on the model frame of the first model further comprises:
and correcting the frame model based on the model structure of the frame model.
13. The model processing method according to claim 1 or 2, wherein generating a model map of the first model based on the first model comprises:
determining whether the generated model map of the first model is accurate;
in response to the generated model map of the first model being inaccurate, re-generating a model map of the first model.
14. The model processing method of claim 1 or 2, wherein mapping the model map from the model coordinate system into the map coordinate system to create the map model comprises:
and mapping the model map from the model coordinate system to the map coordinate system to obtain a first map corresponding to the model map, wherein the map model comprises the first map.
15. The model processing method of claim 14, wherein generating the second model based on the frame model and the map model comprises:
mapping the first map from the map coordinate system to a plane coordinate system to obtain a second map corresponding to the first map;
determining the binding relationship between the second map and the model frame according to the binding relationship between the model map and the model frame; and
and binding the second map and the frame model according to the binding relationship between the second map and the model frame to generate the second model.
16. The model processing method of claim 15, wherein generating the second model based on the frame model and the map model further comprises:
and assigning a material attribute to the second map based on the material information of the first model.
17. The model processing method according to claim 1 or 2, wherein the steps of the model processing method are adapted to be performed in the same software environment.
18. A model processing apparatus comprising:
a frame model creation unit configured to create a frame model based on a model frame of a first model, wherein the model precision of the first model is a first precision, and the coordinate system of the first model is a model coordinate system;

a map model creation unit configured to generate a model map of the first model based on the first model and map the model map from the model coordinate system to a map coordinate system to create a map model, wherein the model map and the model frame have a binding relationship; and

a model generation unit configured to generate a second model based on the frame model and the map model, wherein the model precision of the second model is a second precision, and the second precision is different from the first precision.
19. An electronic device, comprising:
a processor;
a memory storing computer-executable instructions;
wherein the computer-executable instructions are stored in the memory and configured to be executed by the processor to implement the model processing method of any one of claims 1-17.
20. A storage medium storing computer-executable instructions that, when executed by a processor, implement the model processing method of any one of claims 1-17.
CN202111358022.1A 2021-11-16 2021-11-16 Model processing method and device, electronic equipment and storage medium Active CN114022616B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111358022.1A CN114022616B (en) 2021-11-16 2021-11-16 Model processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114022616A (en) 2022-02-08
CN114022616B CN114022616B (en) 2023-07-07

Family

ID=80064836

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931288A (en) * 2016-04-12 2016-09-07 广州凡拓数字创意科技股份有限公司 Construction method and system of digital exhibition hall
CN106780680A (en) * 2016-11-28 2017-05-31 幻想现实(北京)科技有限公司 Three-dimensional animation generation method, terminal and system based on augmented reality
CN107578467A (en) * 2017-09-04 2018-01-12 苏州英诺迈医学创新服务有限公司 A kind of medicine equipment three-dimensional modeling method and device
CN110287351A (en) * 2019-06-27 2019-09-27 哈工大(张家口)工业技术研究院 A kind of threedimensional model lightweight displaying optimization method
CN110363845A (en) * 2019-07-22 2019-10-22 南京聚润工程科技有限公司 A kind of BIM threedimensional model point subtracts the method and system that face stores and reconstructs
CN111028361A (en) * 2019-11-18 2020-04-17 杭州群核信息技术有限公司 Three-dimensional model and material merging method, device, terminal, storage medium and rendering method
CN113052981A (en) * 2019-12-27 2021-06-29 北京国双科技有限公司 Model editing and constructing method and related equipment and computer storage medium
CN113498532A (en) * 2020-01-21 2021-10-12 京东方科技集团股份有限公司 Display processing method, display processing device, electronic apparatus, and storage medium
CN112288873A (en) * 2020-11-19 2021-01-29 网易(杭州)网络有限公司 Rendering method and device, computer readable storage medium and electronic equipment
CN112419511A (en) * 2020-12-26 2021-02-26 董丽萍 Three-dimensional model file processing method and device, storage medium and server
CN112734930A (en) * 2020-12-30 2021-04-30 长沙眸瑞网络科技有限公司 Three-dimensional model weight reduction method, system, storage medium, and image processing apparatus
CN113032699A (en) * 2021-03-04 2021-06-25 广东博智林机器人有限公司 Robot model construction method, robot model construction device and robot processor
CN113204897A (en) * 2021-06-02 2021-08-03 北京慧拓无限科技有限公司 Scene modeling method, device, medium and equipment for parallel mine simulation system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820905A (en) * 2022-06-24 2022-07-29 北京百度网讯科技有限公司 Virtual image generation method and device, electronic equipment and readable storage medium
CN114820905B (en) * 2022-06-24 2022-09-20 北京百度网讯科技有限公司 Virtual image generation method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN114022616B (en) 2023-07-07

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant