CN113781618A - Method and device for lightening three-dimensional model, electronic equipment and storage medium - Google Patents
- Publication number
- CN113781618A (application CN202110813682.8A)
- Authority
- CN
- China
- Prior art keywords
- model
- dimensional
- original
- point cloud
- sequence frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
Abstract
The disclosure provides a method and device for lightening a three-dimensional model, an electronic device, and a storage medium, belonging to the technical field of three-dimensional modeling. The method comprises the following steps: importing an original three-dimensional design model into three-dimensional software capable of realistic three-dimensional rendering, and continuously shooting the original three-dimensional design model from all directions with the camera function of the three-dimensional software to obtain a shot animation; outputting the animation in the form of ultra-high-definition picture sequence frames; taking the ultra-high-definition picture sequence frames as picture material, performing three-dimensional model reconstruction, and outputting a high-quality point cloud model; simplifying the point cloud model to reduce the number of triangular faces in it, and performing texture mapping; and compressing the texture-mapped model and outputting a lightweight model.
Description
Technical Field
The present disclosure relates to the field of three-dimensional modeling technologies, and in particular to a method and an apparatus for lightening a three-dimensional model, an electronic device, and a storage medium.
Background
With the deep popularization of Building Information Modeling (BIM) technology in the engineering construction industry, designers can use excellent BIM design software for their respective disciplines, giving a project very rich design results. However, this creates a strong dependence on professional software and demands a high level of operating skill: project members need to install many professional packages and be familiar with how each of them is operated, which hinders coordinated project management. Getting rid of this software dependence by integrating the design results of all disciplines of a project into a comprehensive construction management platform for unified digital management is therefore a development trend in the industry.
The development of the Web Graphics Library (WebGL) technology has made this concept feasible, and many lightweight engines are now on the market. Their technical route is mainly to export, directly or indirectly, the geometric and attribute data of the model to an open-source or enterprise-defined data format and load it on the Web side. This scheme can integrate the design results of every BIM design package. Its common problems, however, are that after the data converted by a lightweight engine enters the lightweight platform, the display effect of the model is poor: the final design expression of the project is not good enough, realistic rendering cannot be achieved, and the true design intent cannot be restored. Moreover, loading the model on the Web side along this route requires real-time rendering, and given the terminal configuration of a typical user, the performance required for real-time rendering is hard to guarantee.
Disclosure of Invention
The embodiments of the disclosure provide a method and device for lightening a three-dimensional model, an electronic device, and a storage medium. The technical scheme is as follows:
In one aspect, a method for lightening a three-dimensional model is provided, the method comprising:
importing an original three-dimensional design model into three-dimensional software capable of realistic three-dimensional rendering, and continuously shooting the original three-dimensional design model from all directions with the camera function of the three-dimensional software to obtain a shot animation;
outputting the animation in the form of ultra-high-definition picture sequence frames;
taking the ultra-high-definition picture sequence frames as picture material, performing three-dimensional model reconstruction, and outputting a high-quality point cloud model;
simplifying the point cloud model, reducing the number of triangular faces in it, and performing texture mapping; and
compressing the texture-mapped model and outputting a lightweight model.
Optionally, the continuously shooting the original three-dimensional design model from all directions with the camera function of the three-dimensional software includes:
acquiring a shooting path of the camera, the shooting path spiraling around the original three-dimensional design model for several turns; and
controlling the camera in the three-dimensional software to shoot the original three-dimensional design model at intervals along the shooting path, where the original three-dimensional design model lies entirely within the frame at each shot and the camera angle changes by less than 5 degrees between two adjacent shots.
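As a concrete sketch of the optional steps above, a helical shooting path with a sub-5-degree angle step can be generated as follows. The helix parameterization, the function name, and the parameter values are illustrative assumptions, not anything specified by the disclosure:

```python
import math

def spiral_camera_path(center, radius, turns=7, deg_step=4.0,
                       z_start=0.0, z_end=30.0):
    """Camera positions on a helix around the model.

    deg_step is the azimuth change between consecutive shots; the
    method above requires it to stay below 5 degrees.
    """
    n_frames = int(turns * 360.0 / deg_step)
    path = []
    for i in range(n_frames):
        theta = math.radians(i * deg_step)              # azimuth of shot i
        z = z_start + (z_end - z_start) * i / max(n_frames - 1, 1)
        path.append((center[0] + radius * math.cos(theta),
                     center[1] + radius * math.sin(theta),
                     z))
    return path

path = spiral_camera_path(center=(0.0, 0.0), radius=50.0)
print(len(path))  # 7 turns at a 4-degree step -> 630 frames
```

Seven turns at a 4-degree step already yields 630 frames, which would also satisfy the more-than-600-pictures condition stated in the next optional step.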
Optionally, the ultra-high-definition picture sequence frames include more than 600 pictures, each with a resolution of at least 8K.
Optionally, the taking the ultra-high-definition picture sequence frames as picture material, performing three-dimensional model reconstruction, and outputting a high-quality point cloud model includes:
selecting a picture from the ultra-high-definition picture sequence frames as a reference;
aligning all pictures in the ultra-high-definition picture sequence frames against the reference;
after the pictures are aligned, performing model calculation based on the ultra-high-definition picture sequence frames; and
performing texture coloring on the calculated model to obtain the point cloud model.
Optionally, the simplifying the point cloud model includes:
acquiring the vertices of the point cloud model; and
replacing near-flat surfaces with flat surfaces based on the vertices of the point cloud model.
Optionally, the compressing the texture mapped model and outputting a lightweight model includes:
cleaning dirty data in the model after the texture mapping;
and outputting the cleaned model as a GLTF-format model.
Optionally, before the continuously shooting the original three-dimensional design model from all directions with the camera function of the three-dimensional software, the method further comprises:
adopting, as the scene background color in the three-dimensional software, a color that contrasts sharply with the original three-dimensional design model; and
replacing highlight or transparent materials in the original three-dimensional design model with diffuse-reflection materials of similar appearance.
In one aspect, a device for lightening a three-dimensional model is provided, the device including:
a shooting module, configured to import an original three-dimensional design model into three-dimensional software capable of realistic three-dimensional rendering, and to continuously shoot the original three-dimensional design model from all directions with the camera function of the three-dimensional software to obtain a shot animation;
a picture output module, configured to output the animation in the form of ultra-high-definition picture sequence frames;
a three-dimensional reconstruction module, configured to perform three-dimensional model reconstruction with the ultra-high-definition picture sequence frames as picture material and to output a high-quality point cloud model;
a simplification module, configured to simplify the point cloud model, reduce the number of triangular faces in it, and perform texture mapping; and
a compression module, configured to compress the texture-mapped model and output a lightweight model.
In one aspect, an electronic device is provided, including a processor and a memory, the memory storing at least one program code, the program code being loaded and executed by the processor to implement the above method for lightening a three-dimensional model.
In one aspect, a computer-readable storage medium is provided, storing at least one program code, the program code being loaded and executed by a processor to implement the above method for lightening a three-dimensional model.
The technical scheme provided by the embodiments of the disclosure brings at least the following beneficial effects:
The method lightens a three-dimensional model with high quality based on a reverse modeling technique. The original three-dimensional design model is continuously shot from all directions with the camera function of three-dimensional software, ultra-high-definition picture sequence frames are output, and three-dimensional modeling is then performed on those frames. Because the picture material used for modeling consists of ultra-high-definition frames, the display effect of the lightened model is guaranteed and the realism and architectural aesthetics of the designed building are restored. After modeling, the number of triangular faces of the point cloud model is reduced, texture maps replace per-point detail, and the result is compressed. The model volume shrinks greatly while model quality is preserved, which lowers the configuration required for rendering: no excessive demands are placed on the client terminal, and ordinarily configured computers or mobile phones can use a construction management platform built on this basis, lowering the threshold of use.
Drawings
To illustrate the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present disclosure; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow diagram of a method for lightening a three-dimensional model according to an exemplary embodiment of the present disclosure;
Fig. 2 is a schematic flow diagram of a method for lightening a three-dimensional model according to an exemplary embodiment of the present disclosure;
Fig. 3 is a block diagram of a device for lightening a three-dimensional model according to an embodiment of the present disclosure;
Fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure more apparent, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a method for lightening a three-dimensional model according to an exemplary embodiment of the present disclosure. Referring to fig. 1, the method includes:
101: and importing an original three-dimensional design model into three-dimensional software with a three-dimensional rendering effect capability, and carrying out omnibearing continuous shooting on the original three-dimensional design model by utilizing a camera function in the three-dimensional software to obtain a shot animation.
The shooting is carried out by utilizing a camera in three-dimensional software, and the shooting process of an unmanned aerial vehicle around a building in the oblique photography technology is simulated to obtain the shooting animation of the original three-dimensional design model. The process can also be called roaming animation design, and the omnibearing display of the original three-dimensional design model is realized.
The original three-dimensional design model in the step is usually a BIM (building information model), and the BIM is lightened through each step, so that the lightened model can be applied to WEBGL, on one hand, the model precision is maintained, the model design effect is restored, and on the other hand, the requirement on the configuration of the user terminal is low.
The method provided by the present disclosure is executed by an electronic device, such as a personal computer, a server, etc. The electronic equipment is pre-loaded with three-dimensional processing and rendering software, so that the processing of the three-dimensional model is realized.
102: and outputting the animation in the form of an ultra-high-definition picture sequence frame.
103: and (3) taking the ultrahigh-definition picture sequence frame as a picture material, performing three-dimensional model reconstruction, and outputting a high-quality point cloud model.
104: and simplifying the point cloud model, reducing the number of triangular surfaces in the point cloud model, and performing texture mapping.
105: and compressing the model subjected to texture mapping, and outputting a lightweight model.
In the embodiment of the disclosure, the method lightens the three-dimensional model with high quality based on a reverse modeling technique: the original three-dimensional design model is continuously shot from all directions with the camera function of the three-dimensional software, ultra-high-definition picture sequence frames are output, and three-dimensional modeling is performed on those frames. Because the picture material is ultra-high-definition, the display effect of the lightened model is guaranteed and the realism and architectural aesthetics of the designed building are restored. After modeling, the number of triangular faces of the point cloud model is reduced, texture maps replace per-point detail, and the result is compressed; the model volume shrinks greatly while quality is preserved, so the configuration needed for rendering drops, no excessive demands are placed on the client terminal, and ordinarily configured computers or mobile phones can use the construction management platform, lowering the threshold of use.
Fig. 2 is a schematic flow chart of a method for lightening a three-dimensional model according to an exemplary embodiment of the present disclosure. As shown in fig. 2, the method may include:
200: and preprocessing the original three-dimensional design model.
The original three-dimensional design model only has a three-dimensional model with simple colors, and preprocessing is carried out in three-dimensional software with three-dimensional rendering effect capability, wherein the preprocessing comprises the modification of lamplight, materials and the like, so that the scene is closer to the real effect.
Illustratively, the preprocessing in step 200 includes: adopting a color with bright color contrast with the original three-dimensional design model as a scene background color in the original three-dimensional design model; and adjusting the material with high light or transparency in the original three-dimensional design model into a diffuse reflection material with a similar effect.
For example, the original three-dimensional design model is imported to the aforementioned three-dimensional software such as 3ds max, lumion, enscape, and the like to perform model pre-processing. In the three-dimensional software, the model scene is closer to the real effect by means of adjusting Global Illumination (GI) of the scene, replacing simple colors of model components with pbr (physical Based rendering) materials, and the like. In order to facilitate the feature recognition of the post-rendering picture of the model, a color with clear contrast with the color of the model is used as a background color of the scene, and a sky box background (Skybox) of the scene is not required to be arranged. Transparent materials such as glass and highlight materials such as stainless steel in the original three-dimensional design model are adjusted to be diffuse reflection materials with similar effects, so that the problem that the transparent and highlight parts are poor in processing effect in later-stage picture recognition is solved, and a better recognition effect is obtained.
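The material substitution described above can be sketched as a small transformation on material parameters. The dictionary keys and the thresholds below are assumptions for illustration, not the API of any of the named packages:

```python
def to_diffuse(material):
    """Return a diffuse stand-in for a highlight or transparent material,
    keeping the base color but removing transparency and specular response
    so photo-based reconstruction sees a stable, matte surface."""
    m = dict(material)
    if m.get("opacity", 1.0) < 1.0 or m.get("metallic", 0.0) > 0.5:
        m["opacity"] = 1.0      # glass -> opaque
        m["metallic"] = 0.0     # stainless steel -> non-metal
        m["roughness"] = 1.0    # fully diffuse, no highlights
    return m

glass = {"base_color": (0.8, 0.9, 1.0), "opacity": 0.2,
         "metallic": 0.0, "roughness": 0.05}
print(to_diffuse(glass))
```

The base color survives the substitution, so the replaced surface keeps an appearance similar to the original, as the step requires.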
201: and acquiring a shooting path of the camera, wherein the shooting path of the camera spirally winds the original three-dimensional design model for a plurality of circles.
202: and controlling cameras in the three-dimensional software to shoot the original three-dimensional design model at intervals along a shooting path, wherein the whole shot original three-dimensional design model is positioned in a shooting picture every time, and the angle difference of the cameras in two adjacent shots is less than 5 degrees.
In order to obtain the ultrahigh-definition picture sequence frame as a material, the original three-dimensional design model after pretreatment is subjected to simulation shooting in a scene by simulating the mode of unmanned aerial vehicle field aerial shooting data acquisition by using the animation production function of rendering software.
The simulation shooting process mainly comprises the adjustment of the camera and the recording of key frames. For example, the position, focus, angle, far clipping distance, near clipping distance and other parameters of the camera are set, and the key frame is recorded, so that the current camera picture has a good display effect on the whole model and the components, the whole model does not exceed the camera picture, and the whole model can be filled with more than 80% of picture size. And gradually adjust the position of the camera to achieve the full-scale 360 coverage of the model. In order to meet the reverse modeling effect of the large-volume three-dimensional model, the camera switching angle of two adjacent frames in the finally output ultrahigh-definition picture sequence frame is ensured to be within 5 degrees.
203: and outputting the animation in the form of an ultra-high-definition picture sequence frame.
The ultrahigh-definition picture sequence frame comprises more than 600 pictures, the output of pictures in too short time affects the identification precision and integrity, and the output of pictures in too long time causes too much time consumption in the process of output of rendered pictures and later-stage identification reconstruction.
Using the animation function of the rendering software, the camera animation shot in the preceding simulation is output as ultra-high-definition picture sequence frames, to be used as picture material for the subsequent reverse modeling.
In one possible implementation, the output resolution is set to 8K or above whenever possible, ensuring the best recognition and model reconstruction quality during three-dimensional reconstruction. In other implementations, the system's maximum resolution may be used for picture output.
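A rough estimate shows the data volume implied by the numbers above (more than 600 frames at 8K). The calculation assumes "8K" means 8K UHD (7680×4320) and uncompressed 24-bit RGB, which overstates the size of real PNG/JPEG output but gives a sense of scale:

```python
width, height = 7680, 4320      # assuming "8K" means 8K UHD
bytes_per_px = 3                # uncompressed 24-bit RGB
frames = 600                    # lower bound on the sequence length

per_frame = width * height * bytes_per_px
total_gb = per_frame * frames / 1024 ** 3

print(round(per_frame / 1024 ** 2), "MB per uncompressed frame")
print(round(total_gb), "GB for the whole sequence")
```

Roughly 95 MB of raw pixels per frame and tens of gigabytes for the sequence, which is why the rendering and later recognition-reconstruction steps become time-consuming when too many pictures are output.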
After the ultra-high-definition picture sequence frames are obtained, mature software based on the Image-Based Modeling and Rendering (IBMR) technique, such as RealityCapture, Autodesk ReCap, or PhotoScan, is used to recognize the photos, reconstruct the three-dimensional model, and output a high-quality point cloud model. This process includes steps 204 to 207.
204: and selecting a picture from the ultra high definition picture sequence frame as a reference.
Taking Reality Capture as an example, in response to the user using the importing function of the software, the electronic device imports the pictures of the ultrahigh-definition picture sequence frame into the Reality Capture; in response to the user using the Color currors function of the software, the electronic device selects an appropriate picture from the sequence frame pictures as a reference.
The reference picture is preferably selected as an axonometric view, which responds better to both the side and top surfaces of the model. This facilitates navigation in complex scenes and better data organization inside the software to improve the quality of the output point cloud model.
205: and aligning all pictures in the ultrahigh-definition picture sequence frame based on the reference.
Taking Reality Capture as an example, in response to the user clicking the Align Images button, the electronic device performs photo alignment. After the pictures are aligned, an inspection tool carried by software is used for performing alignment quality rechecking, so that all pictures are applied to model calculation as much as possible.
206: and after the pictures are aligned, performing model calculation based on the ultrahigh-definition picture sequence frame.
Taking Reality Capture as an example, in response to a user clicking a call Model pull-down button, the electronic device performs Model calculation. The model calculation recommends that calculation with general precision is carried out firstly, high-precision calculation is carried out after the calculated white mold is checked to have no obvious defects, and the calculation is carried out again after the defects are found out quickly and the reference in the step 204 is adjusted in time.
207: and performing texture coloring on the model obtained by calculation to obtain a point cloud model.
Taking Reality Capture as an example, in response to the user clicking the Texture button, the electronic device performs model Texture rendering. The texture-colored model is a model that is close to the real rendering effect.
Through the steps 204 to 207, the ultra-high-definition picture sequence frame is used as a picture material, three-dimensional model reconstruction is carried out based on an image IBMR technology, and finally an uncompressed original point cloud model in the obj format is output.
208: and acquiring the vertex of the point cloud model.
Step 207 may typically set the number of vertices of the output to around 200000.
209: and replacing the approximately flat surface with a flat surface based on the top points of the point cloud model, and performing texture mapping.
The high-quality point cloud model obtained in step 207 generally occupies a large amount of storage, even several times that of the original three-dimensional design model. An optimization step is therefore needed so that the smallest data volume carries the best display effect. Through steps 208 and 209, the high-quality point cloud model is optimized: the model texture map replaces the point-wise information in the point cloud data, so that the smallest data volume carries the optimized display effect.
The number of point cloud vertices is reduced by letting the rendered texture map directly express the information previously carried by individual points. The replaced data still retains a very large amount of model information, while the obj model output after this optimization is orders of magnitude smaller.
Taking RealityCapture as an example, the process of step 209 is as follows:
In response to the user clicking the Simplify button, the electronic device simplifies the model. Connecting near-flat surfaces with the simplification tool can markedly reduce the number of triangular faces; for example, a wall consisting of 1000 triangular faces is replaced by an equivalent wall of 2 triangular faces.
With the number of output vertices in step 207 set to about 200,000, the calculation is simplified to obtain a white model, texture mapping and related operations are performed again, and an obj model generally not exceeding 100 MB can be output.
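The 1000-to-2 wall example above can be reproduced in toy form. The sketch below is not the simplification algorithm used by RealityCapture; it only handles a wall lying in a single z-plane, and merely illustrates why merging coplanar faces collapses the triangle count:

```python
def merge_coplanar_rect(tris):
    """Collapse coplanar triangles tiling a flat wall (z = const)
    into the 2 triangles of their bounding rectangle."""
    zs = {v[2] for t in tris for v in t}
    assert len(zs) == 1, "this toy version handles one z-plane only"
    z = zs.pop()
    xs = [v[0] for t in tris for v in t]
    ys = [v[1] for t in tris for v in t]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    return [((x0, y0, z), (x1, y0, z), (x1, y1, z)),
            ((x0, y0, z), (x1, y1, z), (x0, y1, z))]

# a wall tiled by a 10 x 50 grid of quads = 1000 triangular faces
wall = []
for i in range(10):
    for j in range(50):
        a, b = (i, j, 0.0), (i + 1, j, 0.0)
        c, d = (i + 1, j + 1, 0.0), (i, j + 1, 0.0)
        wall += [(a, b, c), (a, c, d)]

print(len(wall), "faces ->", len(merge_coplanar_rect(wall)), "faces")
```

The geometry of the wall is unchanged; only the tessellation is coarser, which is why the display effect survives the reduction.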
210: and cleaning dirty data in the model after the texture mapping.
And importing the simplified obj model into point cloud editing software such as a blender and the like, and cleaning partial dirty data in the obj model, which is caused by picture identification, wherein the dirty data mainly comprises burrs of boundaries and the like.
211: and outputting the cleaned model as a GLTF-format model.
And obj is exported to a GLTF format with better Web end support effect, so that the data volume can be further compressed, and Web data transmission and later-stage lightweight platform loading are facilitated.
For example, the cleaned obj model outputs a model in a GLTF format by using a blender function, so that loading of a lightweight part of the building and management platform is facilitated. The compression ratio obj format of the GLTF format is better, and finally, the performance is greatly improved in the aspects of data storage and Web data transmission.
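One reason GLTF beats ASCII obj on size can be shown by storing the same vertex data both ways. The vertex count follows the roughly 200,000 mentioned earlier; the layout details of real GLTF files (JSON header, accessors) are omitted, and the coordinates are synthetic:

```python
import struct

# about 200,000 vertices, matching the vertex count suggested earlier
verts = [(i * 0.001, i * 0.002, i * 0.003) for i in range(200_000)]

# obj stores each vertex as an ASCII "v x y z" line
obj_bytes = sum(len(f"v {x:.6f} {y:.6f} {z:.6f}\n".encode())
                for x, y, z in verts)

# a GLTF binary buffer stores the same data as packed 32-bit floats
bin_bytes = len(struct.pack(f"<{3 * len(verts)}f",
                            *[c for v in verts for c in v]))

print(obj_bytes // 1024, "KB as ASCII obj")
print(bin_bytes // 1024, "KB as packed binary")
```

The packed representation is a fixed 12 bytes per vertex, well under half the ASCII size here, before any general-purpose compression is applied on top.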
The texture-mapped model is compressed in steps 210 and 211, and a lightweight model is output.
In one test case, the original three-dimensional design model was 3.36 GB; the point cloud model obtained after simulated shooting and recognition was 1.5 GB before simplification and 60 MB after simplification; and after conversion to GLTF format the model was only 15 MB, with complete detail and a realistic rendered display, meeting the project's requirements.
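Working the reported sizes through gives the reduction factor at each stage. The ratios below are simple arithmetic on the figures quoted above, taking 1 GB as 1024 MB:

```python
GB = 1024.0  # MB per GB

# sizes reported in the test case above
stages = [("original design model",     3.36 * GB),
          ("reconstructed point cloud", 1.50 * GB),
          ("simplified point cloud",    60.0),
          ("final GLTF model",          15.0)]

orig_mb = stages[0][1]
for name, mb in stages:
    print(f"{name:26s} {mb:8.1f} MB  ({orig_mb / mb:5.1f}x)")
```

The final GLTF model comes out roughly 229 times smaller than the original design model.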
Table 1 below compares the original three-dimensional design model, the point cloud model, and the simplified model, showing the advantages of this application in file lightness and display effect.
Table 1. Comparison of test results:
and loading the optimized GLTF model result finally output to a lightweight management platform, and performing three-dimensional display of the model and developing other corresponding platform project management functions. And (4) carrying out related function research and development of the platform by combining the function requirement of the platform and the WEBGL technology.
The model processed by the previous steps restores a real effect under the global illumination condition set during shooting. The effect is carried on the final model in a texture mapping mode, so that complex real-time calculation is not needed during operation, and the project cooperation platform communication work based on the effect can be participated only by generally configuring an office notebook or a mobile phone.
In order to meet the individual requirements of the platform, the data can be further processed, such as performing operations of model splitting and the like. For example, the final model is imported into point cloud editing software, and the work such as splitting of the single model is performed.
In the lightweighting method of the present application, simulated shooting and reverse modeling are performed first, and the reconstructed point cloud model has a larger data volume than the original three-dimensional design model; that is, the model volume is first enlarged. It should be noted that the original three-dimensional design model has both inner and outer surfaces, whereas the reconstructed three-dimensional model contains only the outer surface; the information inside the original three-dimensional design model is discarded, but this does not affect the final display effect of the three-dimensional model. During reverse modeling, because ultra-high-definition pictures are used, the model changes from a vector to a raster representation, which increases its volume.
After reverse modeling is completed, the model is simplified: nearly flat surfaces in the model are merged, reducing the number of vertices and thus greatly reducing the number of triangular faces.
It should be noted that the original three-dimensional design model is a vector model whose components are independent of one another; in that state the simplification step cannot be performed directly, and it can only be performed after an integrated point cloud model has been obtained by reconstruction.
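The near-flat merge in the simplification step can be sketched as a dihedral test between the normals of adjacent triangles (an illustrative sketch only: the function names and the 1-degree tolerance are assumptions, and a production simplifier would additionally re-triangulate and maintain mesh connectivity):

```python
# Sketch: two adjacent triangles are merge candidates when the angle
# between their face normals is below a tolerance, i.e. the shared
# surface is "nearly flat".
import math

def normal(a, b, c):
    # Unit face normal via the cross product of two edge vectors.
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    length = math.sqrt(sum(x * x for x in n)) or 1.0
    return [x / length for x in n]

def nearly_coplanar(tri1, tri2, tol_deg=1.0):
    n1, n2 = normal(*tri1), normal(*tri2)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(dot)) <= tol_deg

# Two triangles in the z=0 plane: mergeable.
flat_pair = ([(0, 0, 0), (1, 0, 0), (0, 1, 0)],
             [(1, 0, 0), (1, 1, 0), (0, 1, 0)])
# Second triangle tilted out of the plane: not mergeable.
bent_pair = ([(0, 0, 0), (1, 0, 0), (0, 1, 0)],
             [(1, 0, 0), (1, 1, 0.5), (0, 1, 0)])
print(nearly_coplanar(*flat_pair))  # True
print(nearly_coplanar(*bent_pair))  # False
```

Applying such a test across the whole point cloud mesh is what allows large flat facades of a building model to collapse into a few triangles.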
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 3 is a block diagram illustrating the structure of a three-dimensional model lightweighting apparatus according to an embodiment of the present disclosure. The apparatus may be implemented as all or part of an electronic device. As shown in fig. 3, the apparatus includes: a shooting module 301, a picture output module 302, a three-dimensional reconstruction module 303, a simplification module 304 and a compression module 305.
The shooting module 301 is configured to import an original three-dimensional design model into three-dimensional software with three-dimensional rendering capability, and to perform omnidirectional continuous shooting of the original three-dimensional design model using the camera function in the three-dimensional software to obtain a shot animation;
the picture output module 302 is configured to output the animation in the form of an ultra high definition picture sequence frame;
the three-dimensional reconstruction module 303 is used for reconstructing a three-dimensional model by using the ultrahigh-definition picture sequence frame as a picture material and outputting a high-quality point cloud model;
the simplification module 304 is used for simplifying the point cloud model, reducing the number of triangular surfaces in the point cloud model and performing texture mapping;
and a compression module 305, configured to perform compression processing on the texture mapped model and output a lightweight model.
Optionally, the shooting module 301 is configured to obtain a shooting path of the camera, where the shooting path winds around the original three-dimensional design model in a spiral for multiple turns;
and to control the camera in the three-dimensional software to shoot the original three-dimensional design model at intervals along the shooting path, where the entire original three-dimensional design model lies within the frame at each shot, and the camera angle difference between two adjacent shots is less than 5 degrees.
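The spiral shooting path can be sketched as follows (an illustration only: the radius, number of turns, and height range are hypothetical parameters, not values prescribed by the method, and in the embodiment the camera is driven inside the three-dimensional software itself). A 4-degree step keeps adjacent shots under the 5-degree limit and, over 8 turns, yields well over 600 frames, consistent with the sequence-frame requirement stated in the description:

```python
# Sketch of a helical camera path around the model: the camera rises
# from z_start to z_end while circling the origin, firing one shot
# every step_deg degrees so adjacent views overlap for photogrammetry.
import math

def spiral_shots(radius=10.0, turns=8, step_deg=4.0, z_start=0.0, z_end=12.0):
    """Yield (x, y, z) camera positions along a helix around the origin."""
    total_deg = 360.0 * turns
    n = int(total_deg / step_deg) + 1
    for i in range(n):
        theta = math.radians(i * step_deg)
        t = i / (n - 1)  # fraction of the climb completed
        yield (radius * math.cos(theta),
               radius * math.sin(theta),
               z_start + t * (z_end - z_start))

shots = list(spiral_shots())
print(len(shots))  # 8 turns / 4 degrees per shot -> 721 shots
```

Each position would additionally be oriented to look at the model's center so that the whole model stays in frame, as the description requires.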
Optionally, the ultrahigh-definition picture sequence frame includes more than 600 pictures, and the resolution of the ultrahigh-definition picture sequence frame is at least 8K.
Optionally, the three-dimensional reconstruction module 303 is configured to select a picture from the ultra high definition picture sequence frame as a reference;
aligning all pictures in the ultrahigh-definition picture sequence frame based on the reference;
after the pictures are aligned, model calculation is carried out based on the ultrahigh-definition picture sequence frame;
and performing texture coloring on the model obtained by calculation to obtain a point cloud model.
Optionally, a simplification module 304, configured to obtain a vertex of the point cloud model;
replacing the nearly flat surface with a flat surface based on the vertices of the point cloud model.
Optionally, a compression module 305, configured to clean up dirty data in the texture-mapped model;
and outputting the cleaned model as a GLTF-format model.
Optionally, the apparatus further includes a preprocessing module 306, configured to, before the omnidirectional continuous shooting of the original three-dimensional design model with the camera function in the three-dimensional software, adopt a color sharply contrasting with the original three-dimensional design model as the scene background color in the three-dimensional software;
and to adjust materials with highlights or transparency in the original three-dimensional design model to diffuse-reflection materials with a similar appearance.
The embodiment of the disclosure also provides an electronic device, which may be the terminal or the server. The electronic device may comprise a processor and a memory, said memory storing at least one program code, said program code being loaded and executed by said processor to implement the method as described above.
Fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure. Referring to fig. 4, the electronic device 400 includes a Central Processing Unit (CPU) 401, a system Memory 404 including a Random Access Memory (RAM) 402 and a Read-Only Memory (ROM) 403, and a system bus 405 connecting the system Memory 404 and the CPU 401. The electronic device 400 also includes a basic Input/Output system (I/O system) 406, which facilitates the transfer of information between devices within the computer, and a mass storage device 407 for storing an operating system 413, application programs 414, and other program modules 415.
The basic input/output system 406 includes a display 408 for displaying information and an input device 409 such as a mouse, keyboard, etc. for user input of information. Wherein a display 408 and an input device 409 are connected to the central processing unit 401 through an input output controller 410 connected to the system bus 405. The basic input/output system 406 may also include an input/output controller 410 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input/output controller 410 may also provide output to a display screen, a printer, or other type of output device.
The mass storage device 407 is connected to the central processing unit 401 through a mass storage controller (not shown) connected to the system bus 405. The mass storage device 407 and its associated computer-readable media provide non-volatile storage for the electronic device 400. That is, the mass storage device 407 may include a computer-readable medium (not shown) such as a hard disk or CD-ROM drive.
Without loss of generality, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, Erasable Programmable Read Only Memory (EPROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash Memory or other solid state Memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD), or other optical, magnetic, tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media is not limited to the foregoing. The system memory 404 and mass storage device 407 described above may be collectively referred to as memory.
According to various embodiments of the present disclosure, the electronic device 400 may also operate through a remote computer connected via a network such as the Internet. That is, the electronic device 400 may be connected to the network 412 through the network interface unit 411 connected to the system bus 405, or may be connected to other types of networks or remote computer systems (not shown) using the network interface unit 411.
The memory further includes one or more programs, and the one or more programs are stored in the memory and configured to be executed by the CPU. The CPU 401 realizes the aforementioned method for reducing the weight of the three-dimensional model by executing the one or more programs.
Those skilled in the art will appreciate that the configuration shown in fig. 4 does not constitute a limitation of the electronic device 400, and may include more or fewer components than those shown, or combine certain components, or employ a different arrangement of components.
The disclosed embodiments also provide a computer readable storage medium storing at least one program code, the program code being loaded and executed by the processor to implement the method as described above. For example, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The disclosed embodiments also provide a computer program product having at least one program code stored therein, which is loaded and executed by the processor to implement the method as described above.
It should be understood that reference to "a plurality" in this disclosure means two or more. "And/or" describes the relationship between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates an "or" relationship between the preceding and following objects.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (10)
1. A method for lightening a three-dimensional model, the method comprising:
importing an original three-dimensional design model into three-dimensional software with three-dimensional rendering capability, and performing omnidirectional continuous shooting of the original three-dimensional design model using a camera function in the three-dimensional software to obtain a shot animation;
outputting the animation in the form of an ultra-high-definition picture sequence frame;
taking the ultrahigh-definition picture sequence frame as a picture material, performing three-dimensional model reconstruction, and outputting a high-quality point cloud model;
simplifying the point cloud model, reducing the number of triangular surfaces in the point cloud model, and performing texture mapping;
and compressing the model subjected to texture mapping, and outputting a lightweight model.
2. The method of claim 1, wherein said taking an omnidirectional succession of shots of said original three-dimensional design model using camera functionality in said three-dimensional software comprises:
acquiring a shooting path of the camera, wherein the shooting path of the camera spirally winds the original three-dimensional design model for a plurality of circles;
and controlling a camera in the three-dimensional software to shoot the original three-dimensional design model at intervals along the shooting path, wherein the entire original three-dimensional design model is within the frame at each shot, and the camera angle difference between two adjacent shots is less than 5 degrees.
3. The method according to claim 1, wherein the ultra high definition picture sequence frame comprises more than 600 pictures, and the resolution of the ultra high definition picture sequence frame is at least 8K.
4. The method of claim 1, wherein the performing three-dimensional model reconstruction and outputting a high-quality point cloud model by using the ultra high definition picture sequence frame as a picture material comprises:
selecting a picture from the ultrahigh-definition picture sequence frame as a reference;
aligning all pictures in the ultrahigh-definition picture sequence frame based on the reference;
after the pictures are aligned, model calculation is carried out based on the ultrahigh-definition picture sequence frame;
and performing texture coloring on the calculated model to obtain the point cloud model.
5. The method of claim 1, wherein the simplifying the point cloud model comprises:
acquiring a vertex of the point cloud model;
replacing the near-flat surface with a flat surface based on vertices of the point cloud model.
6. The method according to claim 1, wherein the compressing the texture-mapped model to output a lightweight model comprises:
cleaning dirty data in the model after the texture mapping;
and outputting the cleaned model as a GLTF-format model.
7. The method according to any one of claims 1 to 6, wherein before said taking an omnidirectional succession of shots of said original three-dimensional design model using camera functionality in said three-dimensional software, said method further comprises:
adopting a color with bright color contrast with the original three-dimensional design model as a scene background color in the three-dimensional software;
and adjusting the material with high light or transparency in the original three-dimensional design model into a diffuse reflection material with a similar effect.
8. An apparatus for lightening a three-dimensional model, the apparatus comprising:
the shooting module is used for importing an original three-dimensional design model into three-dimensional software with three-dimensional rendering capability, and performing omnidirectional continuous shooting of the original three-dimensional design model using a camera function in the three-dimensional software to obtain a shot animation;
the picture output module is used for outputting the animation in the form of an ultra-high-definition picture sequence frame;
the three-dimensional reconstruction module is used for reconstructing a three-dimensional model by taking the ultrahigh-definition picture sequence frame as a picture material and outputting a high-quality point cloud model;
the simplification module is used for simplifying the point cloud model, reducing the number of triangular surfaces in the point cloud model and carrying out texture mapping;
and the compression module is used for compressing the model subjected to texture mapping and outputting a lightweight model.
9. An electronic device, comprising a processor and a memory, the memory storing at least one program code, the program code being loaded and executed by the processor to implement the method according to any of claims 1 to 7.
10. A computer-readable storage medium, characterized in that it stores at least one program code, which is loaded and executed by a processor to implement the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110813682.8A CN113781618B (en) | 2021-07-19 | 2021-07-19 | Three-dimensional model light weight method, device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110813682.8A CN113781618B (en) | 2021-07-19 | 2021-07-19 | Three-dimensional model light weight method, device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113781618A true CN113781618A (en) | 2021-12-10 |
CN113781618B CN113781618B (en) | 2024-07-19 |
Family
ID=78836013
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110813682.8A Active CN113781618B (en) | 2021-07-19 | 2021-07-19 | Three-dimensional model light weight method, device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113781618B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180036879A1 (en) * | 2016-08-02 | 2018-02-08 | Accel Robotics Corporation | Robotic Camera System |
CN110827402A (en) * | 2020-01-13 | 2020-02-21 | 武大吉奥信息技术有限公司 | Method and system for simplifying three-dimensional model of similar building based on rasterization technology |
CN114241159A (en) * | 2021-12-21 | 2022-03-25 | 湖南师范大学 | Three-dimensional reconstruction and PBR mapping manufacturing method based on close-range photogrammetry method |
2021-07-19: application CN202110813682.8A granted as CN113781618B (status: Active)
Non-Patent Citations (2)
Title |
---|
JIANG W: "UAV-based 3D reconstruction for hoist site mapping and layout planning in petrochemical construction", Automation in Construction, 31 December 2020, vol. 113 |
NIU Shuang: "Path planning of multiple mobile robots considering collision avoidance in human-robot collaborative disassembly", China Master's Theses Full-text Database, Information Science and Technology, no. 04, 15 April 2024, pages 140-154 |
Also Published As
Publication number | Publication date |
---|---|
CN113781618B (en) | 2024-07-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114119849B (en) | Three-dimensional scene rendering method, device and storage medium | |
Zhang et al. | Nerfactor: Neural factorization of shape and reflectance under an unknown illumination | |
WO2019242454A1 (en) | Object modeling movement method, apparatus and device | |
US7657060B2 (en) | Stylization of video | |
JP2005038426A (en) | Image-base rendering and editing method and apparatus | |
WO2024193609A1 (en) | Image rendering method and apparatus, electronic device, storage medium and program product | |
WO2024193609A9 (en) | Image rendering method and apparatus, electronic device, storage medium and program product | |
US11443450B2 (en) | Analyzing screen coverage of a target object | |
CN116228943B (en) | Virtual object face reconstruction method, face reconstruction network training method and device | |
CN115797555A (en) | Human body real-time three-dimensional reconstruction method based on depth camera | |
CN115100337A (en) | Whole body portrait video relighting method and device based on convolutional neural network | |
CN116958344A (en) | Animation generation method and device for virtual image, computer equipment and storage medium | |
CN117197323A (en) | Large scene free viewpoint interpolation method and device based on neural network | |
US11217002B2 (en) | Method for efficiently computing and specifying level sets for use in computer simulations, computer graphics and other purposes | |
Maxim et al. | A survey on the current state of the art on deep learning 3D reconstruction | |
CN115713585B (en) | Texture image reconstruction method, apparatus, computer device and storage medium | |
US20240193850A1 (en) | Editing neural radiance fields with neural basis decomposition | |
CN109166176B (en) | Three-dimensional face image generation method and device | |
US20220392121A1 (en) | Method for Improved Handling of Texture Data For Texturing and Other Image Processing Tasks | |
CN113781618B (en) | Three-dimensional model light weight method, device, electronic equipment and storage medium | |
Srinivasan | Scene Representations for View Synthesis with Deep Learning | |
US11170533B1 (en) | Method for compressing image data having depth information | |
WO2022005302A1 (en) | Method for computation of local densities for virtual fibers | |
WO2023285874A1 (en) | Computing illumination of an elongated shape having a noncircular cross section | |
WO2023277702A1 (en) | Spectral uplifting converter using moment-based mapping |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
Effective date of registration: 20230224 Address after: 430071 No.2, Zhongnan 2nd Road, Wuchang District, Wuhan City, Hubei Province Applicant after: CENTRAL-SOUTH ARCHITECTURAL DESIGN INSTITUTE Co.,Ltd. Address before: 430205 4th floor, office building a, No. 777, Guanggu Third Road, Donghu New Technology Development Zone, Wuhan City, Hubei Province Applicant before: ZHONGNAN DESIGN GROUP (WUHAN) ENGINEERING TECHNOLOGY RESEARCH INSTITUTE Co.,Ltd. |
GR01 | Patent grant | ||