Method for stylized migration of a three-dimensional architectural model of the metaverse
Technical Field
The invention relates to the technical field of the metaverse, and in particular to a method for stylized migration of a three-dimensional building model of the metaverse.
Background
The core idea of the metaverse is to provide people with a virtual world in which they can interact in real time, moving around, chatting, and interacting with other users in digital form. Buildings are among the basic elements of the metaverse virtual world; their three-dimensional attributes and design are naturally connected with 3D visualization technology. On this basis, how to design and modify building facades in the virtual world and provide diversified artistic effects is currently a hot topic in both academia and industry.
Stylized migration (style transfer), a creative achievement that has emerged in recent years in the field of deep learning, can integrate the painting style of an artistic style image X into a content image Y to generate a new image YX. The generated YX keeps the original content of the content image Y (for example, if the content image shows an automobile, it remains an automobile after the transfer and does not become a motorcycle) while taking on the specific style of the style image X, such as its texture, tone, and brush strokes. Stylized migration can thus provide diversified artistic effects for building generation and transformation in the metaverse virtual world, enhance user interactivity, and provide users with personalized and diversified experiences.
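A common way to make "style" concrete in deep-learning style transfer is the Gram-matrix representation: the channel correlations of a feature map capture texture and brush-stroke statistics independently of spatial content. The following is a minimal NumPy sketch for illustration only; a real system would compute the feature maps with a pretrained network rather than operate on raw arrays:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map.

    The Gram matrix records which feature channels co-activate, which
    correlates with texture and brush-stroke statistics rather than
    spatial layout -- the basis of the "style" term in style transfer.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)       # one row per channel
    return flat @ flat.T / (c * h * w)      # normalized channel correlations

def style_loss(gen_features, style_features):
    """Mean squared difference between the two Gram matrices."""
    diff = gram_matrix(gen_features) - gram_matrix(style_features)
    return float(np.mean(diff ** 2))
```

Minimizing this loss (together with a content loss on the raw features) drives the generated image toward the style of X while keeping the content of Y.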
Current work applying stylized migration to the field of architectural design shares a common characteristic: it performs stylized migration on whole images, which exposes several shortcomings when the technique is used for building generation and transformation in the metaverse virtual world:
1. Existing style migration techniques are limited to the final output image and cannot perform style migration on a three-dimensional digital model, making it difficult to meet the model-interaction requirements of a virtual world.
2. Existing style migration techniques require a separate model to be trained on the source image and target image for every migration, which greatly harms the efficiency and real-time performance of the migration.
3. Existing style migration techniques generally migrate the whole image rather than semantically meaningful parts, and lack the ability to recombine and reuse the migrated results.
No effective solution to these problems in the related art has yet been proposed.
Disclosure of Invention
The invention aims to provide a method for stylized migration of a three-dimensional architectural model of the metaverse, so as to solve the problems described in the background.
To achieve this aim, the invention provides the following technical scheme:
a method for stylized migration of a three-dimensional architectural model of the metaverse, the method comprising:
a1. preprocessing a building component model file;
a2. stylized migration of the building component maps;
a3. learning the features of different styles with a deep learning model, using those features to transform the style of the input original two-dimensional or three-dimensional building component, and replacing the texture and bump maps of the transformed two-dimensional or three-dimensional building component.
It is further provided that step a1 includes the following steps:
b1: making the UVs of the model continuously unwrappable;
b2: unwrapping the model's texture map and ensuring that its semantics remain continuously recognizable;
b3: cutting the texture map into overlapping pieces and outputting the cut map tiles.
It is further provided that step a2 includes the following steps:
c1: performing style migration on the cut map tiles with a deep learning model;
c2: upscaling the style-migrated map tiles back to the original cut size using deep-learning-based super-resolution;
c3: re-stitching the upscaled map tiles back to the original texture map size.
It is further provided that step a3 includes the following steps:
d1: generating a bump map from the style-migrated texture map using the deep learning model;
d2: mapping the generated bump map and texture map onto the original building component model to obtain the style-migrated three-dimensional building component model.
It is further provided that the deep learning model converts the input two-dimensional image or three-dimensional model into different styles while retaining the content of the original two-dimensional image or three-dimensional model, and re-extracts the original content when it needs to be recovered.
It is further provided that, after step b2, the map is extracted, and the extracted map is then subjected to map cutting or map preprocessing.
It is further provided that the map-cutting method in step b3 is as follows:
e1: cutting the map into tiles of different pixel sizes;
e2: the field of view covered by each tile must be considered when cutting, and at least 50% overlap between adjacent tiles is needed to ensure seamless stitching later.
It is further provided that, after the map is cut, a new file is input for guidance; the deep learning model converts its style and content into a new map style and outputs a new map.
It is further provided that the pixel sizes are 512, 1024, and 2048 respectively.
It is further provided that the identified size information is classified into class 1, class 2, and class 3, and the size of each class is compared with the original cut size: if it is larger than the original size, it is automatically reduced to the original size; if it is smaller than the original size, it is automatically enlarged to the original size.
Compared with the prior art, the invention has the following beneficial effects:
1. The method applies deep-learning-based style migration to three-dimensional models, meeting the need to perform style migration on three-dimensional models in the virtual digital world (the metaverse);
2. By performing deep-learning-based style migration on three-dimensional building components, the method can automatically generate building component kits of different styles from a single set of component templates, reducing the labor required to build scenes of different styles and meeting the growing demand for three-dimensional scene content in the rapidly developing metaverse industry;
3. The invention improves the efficiency and real-time performance of style migration;
4. The invention provides the ability to recombine and reuse the migrated results.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. It is apparent that the drawings described below show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of the steps of the method for stylized migration of a three-dimensional architectural model of the metaverse according to the present invention;
FIG. 2 is a detailed step diagram of the method for stylized migration of a three-dimensional architectural model of the metaverse according to the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of devices consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The invention is further described below with reference to the drawings and the detailed description.
As shown in FIG. 1 and FIG. 2, a method for stylized migration of a three-dimensional architectural model of the metaverse includes:
a1. preprocessing a building component model file;
a2. stylized migration of the building component maps;
a3. learning the features of different styles with a deep learning model, using those features to transform the style of the input original two-dimensional or three-dimensional building component, and replacing the texture and bump maps of the transformed two-dimensional or three-dimensional building component.
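The three top-level steps can be sketched as a pipeline. The following skeleton is purely illustrative: the stage functions are hypothetical placeholders standing in for the preprocessing, style-transfer, and remapping routines detailed below, not the invention's actual implementation:

```python
import numpy as np

# Hypothetical stage functions; real implementations would perform the
# UV unwrapping, tile cutting, style transfer, and map binding described
# in steps a1-a3.
def preprocess_component(texture):
    """a1: unwrap UVs and cut the texture map into overlapping tiles."""
    return [texture]                                   # placeholder: one "tile"

def stylize_tiles(tiles, style_id):
    """a2: run each tile through a style-transfer model, then upscale."""
    return [t * 0.5 + style_id * 0.1 for t in tiles]   # placeholder transform

def remap_component(tiles):
    """a3: stitch tiles, derive a bump map, and rebind both to the model."""
    texture = tiles[0]
    bump = np.gradient(texture)[0]                     # placeholder bump derivation
    return texture, bump

def stylize_component(texture, style_id):
    tiles = preprocess_component(texture)
    styled = stylize_tiles(tiles, style_id)
    return remap_component(styled)
```

The point of the decomposition is that the same preprocessed component template can be fed through `stylize_tiles` with different style identifiers to produce component kits of different styles.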
According to the above, step a1 includes the following steps:
b1: making the UVs of the model continuously unwrappable;
b2: unwrapping the model's texture map and ensuring that its semantics remain continuously recognizable;
b3: cutting the texture map into overlapping pieces and outputting the cut map tiles.
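The overlapped cutting of step b3 might be sketched as follows. This is a minimal NumPy illustration; the function name and the 50% default overlap follow the description in this document but are otherwise assumptions:

```python
import numpy as np

def cut_tiles(texture, tile_size, overlap=0.5):
    """Cut a 2-D texture map into square tiles with the given overlap ratio.

    A 50% overlap (the minimum suggested in the text) means the cut
    stride is half the tile size, so interior pixels are covered by
    several tiles and seams can later be blended away when re-stitching.
    Each tile is returned with its (y, x) cut position for re-assembly.
    """
    stride = max(1, int(tile_size * (1.0 - overlap)))
    h, w = texture.shape[:2]
    tiles = []
    for y in range(0, max(1, h - tile_size + 1), stride):
        for x in range(0, max(1, w - tile_size + 1), stride):
            tiles.append(((y, x), texture[y:y + tile_size, x:x + tile_size]))
    return tiles
```

For an 8x8 texture cut into 4x4 tiles with 50% overlap, the stride is 2 pixels, giving a 3x3 grid of nine overlapping tiles.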
According to the above, step a2 includes the following steps:
c1: performing style migration on the cut map tiles with a deep learning model;
c2: upscaling the style-migrated map tiles back to the original cut size using deep-learning-based super-resolution;
c3: re-stitching the upscaled map tiles back to the original texture map size.
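The re-stitching of step c3 can be illustrated in the same spirit: each tile keeps its original cut position, and overlapping regions are averaged to hide seams between independently stylized tiles. This is a simplified sketch, not the patented routine:

```python
import numpy as np

def stitch_tiles(tiles, out_shape):
    """Re-assemble stylized tiles into a full texture map.

    `tiles` is a list of ((y, x), tile_array) pairs carrying the original
    cut positions.  Overlapping regions are averaged, which suppresses
    visible seams between independently stylized tiles.
    """
    acc = np.zeros(out_shape, dtype=float)
    weight = np.zeros(out_shape, dtype=float)
    for (y, x), tile in tiles:
        th, tw = tile.shape[:2]
        acc[y:y + th, x:x + tw] += tile
        weight[y:y + th, x:x + tw] += 1.0
    return acc / np.maximum(weight, 1.0)   # average where tiles overlap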
According to the above, step a3 includes the following steps:
d1: generating a bump map from the style-migrated texture map using the deep learning model;
d2: mapping the generated bump map and texture map onto the original building component model to obtain the style-migrated three-dimensional building component model.
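Step d1 uses a learned model to generate the bump map; as a simplified stand-in for illustration, a bump (normal) map can be derived heuristically from the stylized texture by treating luminance as height:

```python
import numpy as np

def bump_from_texture(texture_rgb, strength=1.0):
    """Derive a simple tangent-space normal ("bump") map from a stylized
    texture, assuming brighter pixels sit higher.  A heuristic stand-in
    for the learned bump-map generator of step d1, not the model itself.
    """
    # luminance of each pixel, used as a height field
    height = texture_rgb @ np.array([0.299, 0.587, 0.114])
    dy, dx = np.gradient(height)
    nz = np.ones_like(height)
    # surface normals from the height gradients, unit-normalized
    n = np.stack([-dx * strength, -dy * strength, nz], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return (n + 1.0) * 0.5                 # pack [-1, 1] into [0, 1]
```

A perfectly flat texture yields the neutral normal (0.5, 0.5, 1.0) everywhere, while strong stylized brush strokes produce corresponding relief on the component surface.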
According to the above, the deep learning model converts the input two-dimensional image or three-dimensional model into different styles while retaining the content of the original two-dimensional image or three-dimensional model, and re-extracts the original content when it needs to be recovered.
According to the above, the map is extracted after step b2, and the extracted map is then subjected to map cutting or map preprocessing.
According to the above, the map-cutting method in step b3 is as follows:
e1: cutting the map into tiles of different pixel sizes;
e2: the field of view covered by each tile must be considered when cutting: if it is too small, the model's semantics cannot be understood, and if it is too large, detail is lost;
e3: at least 50% overlap between adjacent tiles is required to ensure seamless stitching later.
According to the above, after the map is cut, a new file is input for guidance; the deep learning model converts its style and content into a new map style and outputs a new map.
According to the above, the pixel sizes are 512, 1024, and 2048 respectively.
According to the above, the identified size information is classified into class 1, class 2, and class 3, and the size of each class is compared with the original cut size: if it is larger than the original size, it is automatically reduced to the original size; if it is smaller than the original size, it is automatically enlarged to the original size.
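The size normalization described above, where tiles that come back larger than the original cut size are reduced and smaller ones are enlarged, can be sketched as follows. Nearest-neighbour resampling keeps the sketch dependency-free; the description above would use a learned super-resolution model for enlargement:

```python
import numpy as np

def normalize_to_original(tile, original_size):
    """Resize a square tile back to its original cut size.

    Tiles larger than the original size are reduced, smaller ones are
    enlarged, matching the classification rule described in the text.
    Nearest-neighbour index sampling is used here purely to keep the
    sketch self-contained.
    """
    h, w = tile.shape[:2]
    if (h, w) == (original_size, original_size):
        return tile
    # map output pixel coordinates back onto the input grid
    ys = np.arange(original_size) * h // original_size
    xs = np.arange(original_size) * w // original_size
    return tile[np.ix_(ys, xs)]
```

Normalizing every class of tile to a single fixed size is what lets the downstream stitching step treat all tiles uniformly.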
Classifying the size information improves both the speed and the accuracy of the deep learning model; after repeated testing, the model runs more stably and efficiently.
The method for stylized migration of a three-dimensional architectural model of the metaverse applies deep-learning-based style migration to three-dimensional models, meeting the need to perform style migration on three-dimensional models in the virtual digital world. Building component kits of different styles are automatically generated from a single set of component templates, reducing the labor required to build scenes of different styles, meeting the growing demand for three-dimensional scene content in the rapidly developing metaverse industry, improving the efficiency and real-time performance of style migration, and providing the ability to recombine and reuse the migrated results.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.