CN116958498A - Method, device and equipment for large-scale generation of building model
- Publication number: CN116958498A (application CN202310870407.9A)
- Authority: CN (China)
- Prior art keywords: building, model, data, preset, prefabricated member
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts (G—Physics; G06—Computing; G06T—Image data processing or generation, in general; G06T19/00—Manipulating 3D models or images for computer graphics)
- G06N3/0464 — Convolutional networks [CNN, ConvNet] (G06N—Computing arrangements based on specific computational models; G06N3/00—Computing arrangements based on biological models; G06N3/02—Neural networks; G06N3/04—Architecture, e.g. interconnection topology)
- G06N3/084 — Backpropagation, e.g. using gradient descent (G06N3/08—Learning methods)
- G06T15/00 — 3D [Three Dimensional] image rendering
Abstract
The present disclosure provides a method, apparatus, and device for large-scale generation of building models, including: classifying the acquired original building data according to preset building characteristics to obtain classified building data; generating corresponding prefabricated member module models from the classified building data and setting corresponding names; and loading the corresponding prefabricated member module models for rendering according to preset assembly rules and placement rules, and outputting a rendering result. The scheme of the disclosure can generate high-precision city building models in a low-cost and efficient manner.
Description
Technical Field
The present disclosure relates to the field of computer information processing technologies, and in particular, to a method, an apparatus, and a device for generating building models on a large scale.
Background
In the prior art, large-scale generation of 3D urban building models faces a number of technical pain points and difficulties. First, high-quality models often require a large amount of detail, but in large-scale scenarios processing so much detail imposes a significant computational burden. Second, building styles differ across regions, and it is difficult for a single system to accommodate building models of multiple styles while preserving their respective characteristics. Finally, manually creating and adjusting models is costly in time, whereas fully automated generation systems often struggle to meet the demands of realism and diversity.
Disclosure of Invention
The technical problem to be solved by the present disclosure is to provide a method, apparatus and device for generating building models on a large scale, so as to generate high-precision city building models in a low-cost and efficient manner.
To solve the above technical problems, an embodiment of the present disclosure provides a method for generating a building model on a large scale, including:
classifying the acquired original building data according to preset building characteristics to acquire classified building data;
respectively generating corresponding prefabricated member module models according to the classified building data, and setting corresponding names;
loading the corresponding prefabricated member module model for rendering according to preset assembly rules and placement rules, and outputting a rendering result.
Optionally, classifying the obtained raw building data according to the preset building features includes: classifying the acquired original building data by adopting a deep learning algorithm, wherein the deep learning algorithm comprises the following steps:
training a deep neural network model through a large amount of marked building data to obtain a trained deep neural network model, wherein the deep neural network model learns the characteristic relation between building characteristics and categories thereof through a loss function;
after training is completed, inputting the acquired original building data into the trained deep neural network model, wherein the model performs nonlinear transformations on the data through activation functions to obtain nonlinearly expressed data;
and the model applies the learned characteristic relation to the nonlinearly expressed data through forward propagation, calculates the probability distribution of the output layer, and takes the category with the highest probability as the classification result of the original building data.
Optionally, determining the target vector annotation layer according to the vector graphic parameter, the preset text annotation information and/or the preset graphic annotation information includes:
and carrying out vectorization description on preset text annotation information and/or preset graphic annotation information according to the vector graphic parameters to obtain a target vector annotation layer with a transparent channel.
Optionally, training the deep neural network model through a large amount of marked building data to obtain a trained deep neural network model, where the deep neural network model learns a feature relation between building features and categories thereof through a loss function, and the training comprises:
the marked building data comprises the characteristic data of the area, the height and the shape of the building and the category data of the residence, the commercial building and the industrial building;
Inputting the characteristic data and the category data into a preset deep neural network model for training, optimizing a cross entropy loss function through a back propagation algorithm and a preset optimizer, and obtaining a trained deep neural network model through iterative learning.
Optionally, the preset optimizer includes:
m=β1*m+(1-β1)*g;
v=β2*v+(1-β2)*g^2;
m_hat=m/(1-β1^t);
v_hat=v/(1-β2^t);
w=w-α*m_hat/(sqrt(v_hat)+ε);
where g represents the gradient, β1 and β2 represent momentum factors, α represents the learning rate, m and v represent the first and second moment estimates of the gradient g, t represents the number of steps of the current iteration, ε represents a constant, and w represents the weight that needs to be updated.
Optionally, generating corresponding prefabricated member module models according to the classified building data respectively, and setting corresponding names, including:
generating a corresponding prefabricated member module model according to the classified building data, wherein the prefabricated member module model comprises: the main parts of the main structure, the outer wall material, the window, the door and the roof of the building, wherein each type of building generates a set of corresponding prefabricated member module models;
setting a unique name for each generated prefabricated member module model, wherein the name is encoded based on the category and characteristics of the module;
and storing the generated prefabricated member module model and the corresponding name in a database or a file system, and establishing an index.
Optionally, loading the corresponding prefabricated member module model for rendering according to preset assembly rules and placement rules, and outputting a rendering result, includes:
the assembly rules include the positions of and connection modes between prefabricated member module models;
the placement rules include the orientation and placement positions of the prefabricated member module models.
Optionally, the preset assembly rules and placement rules are generated by:
initializing a set of random assembly rules and placement rules as an initial solution;
evaluating each group of assembly rules and placement rules according to a preset fitness function, and calculating their fitness, wherein the fitness function includes an aesthetic degree index and a practicability index of the assembled building model;
performing selection according to the fitness of each solution, where solutions whose fitness exceeds a preset threshold enter the next generation;
in the next generation of solutions, two solutions are randomly selected, and then part of the rules of the solutions are exchanged to generate new solutions, and/or part of the rules in the next generation of solutions are randomly changed;
and repeating the iterative operation until the preset iterative times are reached.
The embodiment of the disclosure also provides a processing device for generating a building model on a large scale, which comprises:
The classification module is used for classifying the acquired original building data according to preset building characteristics to acquire classified building data;
the processing module is used for respectively generating corresponding prefabricated member module models according to the classified building data and setting corresponding names;
and the rendering module is used for loading the corresponding prefabricated member module model for rendering according to preset assembly rules and placement rules, and outputting a rendering result.
Optionally, the classifying module is configured to classify the obtained original building data according to a preset building feature, including: classifying the acquired original building data by adopting a deep learning algorithm, wherein the deep learning algorithm comprises the following steps:
training a deep neural network model through a large amount of marked building data to obtain a trained deep neural network model, wherein the deep neural network model learns the characteristic relation between building characteristics and categories thereof through a loss function;
after training is completed, inputting the acquired original building data into the trained deep neural network model, wherein the model performs nonlinear transformations on the data through activation functions to obtain nonlinearly expressed data;
and the model applies the learned characteristic relation to the nonlinearly expressed data through forward propagation, calculates the probability distribution of the output layer, and takes the category with the highest probability as the classification result of the original building data.
Optionally, the classifying module is configured to determine a target vector annotation layer according to the vector graphic parameter, the preset text annotation information and/or the preset graphic annotation information, and includes:
and carrying out vectorization description on preset text annotation information and/or preset graphic annotation information according to the vector graphic parameters to obtain a target vector annotation layer with a transparent channel.
Optionally, the classification module is configured to perform training of the deep neural network model through a large number of marked building data, and obtain a trained deep neural network model, where the deep neural network model learns a feature relation between building features and categories thereof through a loss function, and includes:
the marked building data comprises the characteristic data of the area, the height and the shape of the building and the category data of the residence, the commercial building and the industrial building;
inputting the characteristic data and the category data into a preset deep neural network model for training, optimizing a cross entropy loss function through a back propagation algorithm and a preset optimizer, and obtaining a trained deep neural network model through iterative learning.
Optionally, the preset optimizer in the classification module includes:
m=β1*m+(1-β1)*g;
v=β2*v+(1-β2)*g^2;
m_hat=m/(1-β1^t);
v_hat=v/(1-β2^t);
w=w-α*m_hat/(sqrt(v_hat)+ε);
where g represents the gradient, β1 and β2 represent momentum factors, α represents the learning rate, m and v represent the first and second moment estimates of the gradient g, t represents the number of steps of the current iteration, ε represents a constant, and w represents the weight that needs to be updated.
Optionally, the processing module is configured to generate corresponding prefabricated member module models according to the classified building data, and set corresponding names, and includes:
generating a corresponding prefabricated member module model according to the classified building data, wherein the prefabricated member module model comprises: the main parts of the main structure, the outer wall material, the window, the door and the roof of the building, wherein each type of building generates a set of corresponding prefabricated member module models;
setting a unique name for each generated prefabricated member module model, wherein the name is encoded based on the category and characteristics of the module;
and storing the generated prefabricated member module model and the corresponding name in a database or a file system, and establishing an index.
Optionally, the rendering module is configured to load the corresponding prefabricated member module model for rendering according to preset assembly rules and placement rules, and output a rendering result, including:
the assembly rules include the positions of and connection modes between prefabricated member module models;
the placement rules include the orientation and placement positions of the prefabricated member module models.
Optionally, the rendering module is configured to generate the preset assembly rules and placement rules by:
initializing a set of random assembly rules and placement rules as an initial solution;
evaluating each group of assembly rules and placement rules according to a preset fitness function, and calculating their fitness, wherein the fitness function includes an aesthetic degree index and a practicability index of the assembled building model;
performing selection according to the fitness of each solution, where solutions whose fitness exceeds a preset threshold enter the next generation;
in the next generation of solutions, two solutions are randomly selected, and then part of the rules of the solutions are exchanged to generate new solutions, and/or part of the rules in the next generation of solutions are randomly changed;
and repeating the iterative operation until the preset iterative times are reached.
Embodiments of the present disclosure also provide a computing device comprising: a processor, a memory storing a computer program which, when executed by the processor, performs the method as described above.
Embodiments of the present disclosure also provide a computer-readable storage medium storing instructions that, when executed on a computer, cause the computer to perform a method as described above.
The scheme of the present disclosure at least comprises the following beneficial effects:
Through the classification of building data, the scheme enables the characteristics of various types of buildings to be accurately captured and described, enhancing the diversity and authenticity of the models. Second, by generating prefabricated member module models and setting corresponding names for them, the scheme realizes modular management of the models and improves the efficiency and flexibility of model generation. Finally, model rendering is performed according to preset assembly rules and placement rules, so that the generated building models better match actual urban building styles, improving their realism and fidelity.
Drawings
FIG. 1 is a flow chart of a method of large scale generation of building models provided by embodiments of the present disclosure;
FIG. 2 is a flow chart of classifying raw building data provided by an embodiment of the present disclosure;
FIG. 3 is a flow chart of training a deep neural network model provided by an embodiment of the present disclosure;
FIG. 4 is a flow chart of generating a prefabricated member module model provided by an embodiment of the present disclosure;
FIG. 5 is a flow chart of prefabricated member module model rendering provided by an embodiment of the present disclosure;
fig. 6 is a block diagram of a processing apparatus for large-scale generation of building models according to an embodiment of the present disclosure.
Detailed Description
Technical solutions in the embodiments of the present disclosure will be clearly described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are some embodiments of the present disclosure, but not all embodiments. All other embodiments obtained by one of ordinary skill in the art based on the embodiments in this disclosure are within the scope of the present disclosure.
The terms "first", "second" and the like in the description and in the claims are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, where appropriate, such that embodiments of the disclosure may be practiced in sequences other than those illustrated and described herein, and that the objects identified by "first", "second", etc. are generally of the same type and are not limited in number; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates that the associated objects are in an "or" relationship.
Fig. 1 is a method for large-scale generation of building models provided by an embodiment of the present disclosure, referring to fig. 1, the method may include the steps of:
step 11, classifying the obtained original building data according to preset building characteristics to obtain classified building data;
further, the step 11 may include:
and step 111, training a deep neural network model through a large amount of marked building data to obtain a trained deep neural network model, wherein the deep neural network model learns the characteristic relation between building characteristics and categories thereof through a loss function.
In one embodiment of the present disclosure, the specific implementation is as follows:
to better provide accurate building features for generating 3D city building models, a large amount of building data is collected first and labeled, including the type of building, the structural features of the building (e.g., number of floors, outline structure, etc.).
By using these labeled building data, a deep neural network model is designed and trained. The model is constructed through a convolution layer, a full connection layer, an activation function and the like and is used for extracting key features from building data and performing feature mapping. The loss function of the model may be designed as a cross entropy loss function to measure the error between the model predicted building type and the real type.
The training process uses a large amount of marked building data, and the training effect of the model can be improved through deep learning technology such as batch normalization, dropout and the like. Through several rounds of training, the deep neural network model can successfully learn the relations between various building features and the categories thereof, and achieves high accuracy on the verification set.
When a new 3D city building model needs to be generated, the original building data can be input into the trained deep neural network model, the characteristics and types of the building can be obtained rapidly and accurately, and accurate guidance is provided for subsequent model generation.
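As an illustrative, non-limiting sketch of how such a classifier might be defined, the following code assumes that the building features have already been encoded as a fixed-length numeric vector; the feature dimension, hidden layer sizes and the three example categories are assumptions for demonstration and are not fixed by this disclosure.

```python
# Minimal sketch (PyTorch) of a building-type classifier of the kind described above.
# Feature dimension, hidden sizes and the class count are illustrative assumptions.
import torch
from torch import nn

NUM_FEATURES = 8    # e.g. area, height, shape descriptors (assumed encoding)
NUM_CLASSES = 3     # e.g. residential, commercial, industrial

class BuildingClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Fully connected layers with ReLU activations; a convolutional stem
        # could be placed in front if the input were image-like building data.
        self.net = nn.Sequential(
            nn.Linear(NUM_FEATURES, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, NUM_CLASSES),  # logits; softmax is applied at inference time
        )

    def forward(self, x):
        return self.net(x)
```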
In an embodiment of the present disclosure, the step 111 may further include:
step 1111, the marked building data includes feature data of the area, height, shape of the building and category data of the house, commercial building, industrial building.
And step 1112, inputting the characteristic data and the category data into a preset deep neural network model for training, optimizing a cross entropy loss function through a back propagation algorithm and a preset optimizer, and obtaining a trained deep neural network model through iterative learning.
In one embodiment of the present disclosure, a large amount of marked building data is first collected. The data includes characteristic data of the area, height, shape, etc. of the building, and type data of the building, such as residential, commercial, or industrial building. These feature and class data provide the necessary inputs for training the deep neural network model; and inputting the characteristic data and the category data into a preset deep neural network model for training. In the training process, a back propagation algorithm is used for updating the weight and the bias in the model so as to reduce the prediction error of the model; the role of the optimizer is, among other things, to decide how to update these weights and biases, and the loss function is used to measure the gap between the predicted and true results of the model, choosing the cross entropy loss function as the loss function, since it works well for classification problems. Through multiple rounds of iterative learning, the loss function can be continuously optimized, so that the model can learn the relation between the building characteristics and the categories of the building characteristics better, and finally, the trained deep neural network model is obtained.
For example: when a large amount of building data including characteristic data such as the area, the height and the shape of various buildings and type data of the buildings are collected, the data can be input into a preset deep neural network model for training, a back propagation algorithm and the optimizer are used for optimizing a cross entropy loss function, and after multiple rounds of iterative learning, a trained deep neural network model is finally obtained. This model can accurately predict the type of the building from its characteristic data.
In an embodiment of the disclosure, the optimizer may specifically be:
m=β1*m+(1-β1)*g;
v=β2*v+(1-β2)*g^2;
m_hat=m/(1-β1^t);
v_hat=v/(1-β2^t);
w=w-α*m_hat/(sqrt(v_hat)+ε);
wherein g represents the gradient, β1 and β2 represent momentum factors, α represents the learning rate, m and v represent the first-order and second-order moment estimates of the gradient g, t represents the step number of the current iteration, ε represents a small constant, and w represents the weight to be updated.
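The formulas above can be transcribed directly into code; the sketch below performs one such update step in NumPy. The hyperparameter values shown are common defaults assumed for illustration, not values fixed by the text.

```python
# Direct NumPy transcription of the per-step update formulas above.
import numpy as np

def adam_step(w, g, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * g          # first-order moment estimate
    v = beta2 * v + (1 - beta2) * g ** 2     # second-order moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)             # bias-corrected second moment
    w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```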
In an embodiment of the present disclosure, in some cases the learning rate is significantly reduced (decayed) during training. In such cases, the weight decay portion of the optimizer can be modified so that the equivalence between weight decay and L2 regularization is maintained while learning rate decay is used.
Specifically, the update formula for each step is:
`m_t=beta1*m_{t-1}+(1-beta1)*g_t`;
`v_t=beta2*v_{t-1}+(1-beta2)*g_t*g_t`;
`w_t=w_{t-1}-lr*m_t/(sqrt(v_t)+epsilon)-lr*wd*w_{t-1}`;
Where `g_t` is the gradient at time step `t`, `m_t` and `v_t` are the first-moment and second-moment estimates, respectively, `w_t` is the weight at time step `t`, `lr` is the learning rate, `wd` is the weight decay factor, and `epsilon` is a small constant to prevent division-by-zero errors.
In this formula, the weight decay 'wd' acts directly on the weight 'w' instead of on the gradient 'g' as in the original optimizer, which allows the equivalence of the weight decay and L2 regularization to be maintained while using the learning rate decay.
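The following is a minimal sketch of this decoupled weight-decay variant, transcribing the three per-step formulas above; as in those formulas, bias correction is omitted, and the hyperparameter defaults are assumptions for illustration.

```python
# Sketch of the decoupled weight-decay update described above:
# the decay term lr * wd * w acts directly on the weights, not on the gradient.
import numpy as np

def decoupled_decay_step(w, g, m, v, lr=1e-3, wd=1e-2,
                         beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    # Bias correction is omitted here, matching the formulas given above.
    w = w - lr * m / (np.sqrt(v) + eps) - lr * wd * w
    return w, m, v
```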
Step 112, after training is completed, the obtained original building data is input into the trained deep neural network model, and the model performs nonlinear transformations on the data through activation functions to obtain nonlinearly expressed data.
In one embodiment of the disclosure, the specific implementation cases are as follows:
after the deep neural network model is trained, the newly acquired raw building data is input into the model. These raw building data may include features of building height, area, structure, etc. The model will process the incoming raw data through a series of computation layers (e.g., convolution layers, full connection layers, etc.). In this process, the activation function plays a vital role.
In particular, the main task of the activation function is to introduce nonlinearity into the network. Since many data relationships in the real world are nonlinear, without activation functions the expressive power of the neural network would be greatly reduced. An activation function such as ReLU (rectified linear unit) converts negative values in the network to 0 and leaves positive values unchanged, thereby achieving a nonlinear transformation. Thus, through the processing of the neural network, a deeper nonlinear data expression can be obtained from the original building data, providing richer information for subsequent processing.
For example: when it is necessary to generate a new 3D model of the business district based on the original data of the area of the land, the surrounding environment, etc. These data are input into the neural network model that has been trained. The model extracts deeper features from these raw data, such as the development potential of plots, expected volume of people in business areas, etc., through multi-layer convolution and full-join operations, coupled with nonlinear transformation of the ReLU activation function. The deep characteristic information provides strong support for the subsequent generation of the 3D model, so that the generated 3D model meets the actual requirements better.
And 113, applying the characteristic relation to the nonlinear expression data by the model through forward propagation, calculating to obtain probability distribution of an output layer, and taking the category with the highest probability as a classification result of the original building data.
In one embodiment of the present disclosure, the trained deep neural network model will be applied to the raw building data of unknown class. Specifically, the obtained original building data is input into a trained deep neural network model, the model carries out a series of linear and nonlinear transformations (activation functions) on the input data of each layer through forward propagation, and thus nonlinear expression data is obtained; these non-linearly expressed data will then be fed into the output layer of the model; at the output layer, the model calculates probability distribution of each possible category according to the characteristic relation learned in the training stage; finally, the model selects the category with the highest probability as the classification result of the original building data.
For example: the new acquired building data is classified using the trained deep neural network model. Firstly, inputting the newly acquired building data into a model; the model then calculates the probability distribution of these data over the various categories (e.g., residential, commercial, industrial, etc.) by forward propagation; and finally, selecting the category with the highest probability as the classification result of the building data by the model. Therefore, the newly acquired building data can be classified rapidly and accurately, and the generation efficiency is improved.
Step 12, respectively generating corresponding prefabricated member module models according to the classified building data, and setting corresponding names;
further, the step 12 may include:
step 121, generating a corresponding prefabricated member module model according to the classified building data, wherein the prefabricated member module model comprises: the main parts of the main structure, the outer wall material, the window, the door and the roof of the building, wherein each type of building generates a set of corresponding prefabricated member module models;
step 122, setting a unique name for each generated prefabricated member module model, wherein the name is encoded based on the category and characteristics of the module;
and step 123, storing the generated prefabricated member module model and the corresponding name in a database or a file system, and establishing an index.
In one embodiment of the disclosure, firstly, a corresponding prefabricated member module model is generated for each type according to the classified building data; these prefabricated member module models comprise the main parts of the building: the main structure, exterior wall materials, windows, doors and roofs. In particular, 3D models of these parts can be simulated and generated from existing building data by means of computer graphics. Each prefabricated member module model has its own unique characteristics reflecting its corresponding building class, such as residential, commercial or industrial building. A unique name is then set for each generated prefabricated member module model, which is encoded according to the module's class and characteristics, helping to locate and identify each module quickly.
For example, an exterior wall module of the residential class may be named "R_House001_Wall", where "R_" represents the residential class, "House001" represents the first model of the residential class, and "Wall" indicates that it is an exterior wall module.
After all the prefabricated member module models are generated and named, they are stored in a database or file system and an index is built. This index is based on the module names and helps to find a module quickly when it is needed. In addition, the index can be optimized so that frequently used modules are easier to retrieve, improving the efficiency of model generation.
In one embodiment, it is desirable to create a city model that includes various buildings, classify the collected building data by a deep learning model, and generate a set of prefabricated component module models for each type of building. For example, a series of residential facade modules were created, including different design styles and materials, and then these modules were named "R_House001_Wall", "R_House002_Wall", etc., and stored in a database. When the city model needs to be created, the required modules can be quickly found by searching the names of the prefabricated module models, and then the modules are assembled according to preset assembly rules, so that the city model is efficiently created.
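As a minimal illustration of such name-indexed storage, the sketch below uses SQLite; the table schema, column names and file paths are assumptions, and the example row reuses the hypothetical "R_House001_Wall" name from above.

```python
# Sketch: store prefabricated module models under encoded names and index them.
import sqlite3

conn = sqlite3.connect("prefab_modules.db")
conn.execute("""CREATE TABLE IF NOT EXISTS modules (
                    name TEXT PRIMARY KEY,   -- encoded name, e.g. R_House001_Wall
                    category TEXT,           -- R = residential, C = commercial, ...
                    part TEXT,               -- Wall, Window, Door, Roof, ...
                    mesh_path TEXT)""")
conn.execute("CREATE INDEX IF NOT EXISTS idx_category_part ON modules (category, part)")
conn.execute("INSERT OR REPLACE INTO modules VALUES (?, ?, ?, ?)",
             ("R_House001_Wall", "R", "Wall", "meshes/r_house001_wall.fbx"))
conn.commit()

# Fast lookup by encoded name when assembling a city model.
row = conn.execute("SELECT mesh_path FROM modules WHERE name = ?",
                   ("R_House001_Wall",)).fetchone()
```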
Step 13, loading the corresponding prefabricated member module model for rendering according to preset assembly rules and placement rules, and outputting a rendering result.
The assembly rules include the positions of and connection modes between prefabricated member module models; the placement rules include the orientation and placement positions of the prefabricated member module models.
In an embodiment of the disclosure, the corresponding prefabricated member module models are loaded into the rendering engine according to the preset assembly rules and placement rules. The assembly rules may be preset; they determine the manner and order in which the prefabricated member module models are assembled, that is, how the individual modules are connected to construct a complete building model, including information such as the relative positions and angles between the models, while the placement rules govern the orientation and placement of each module. After the models are loaded, rendering is performed using computer graphics methods according to preset lighting conditions, camera parameters and the like, and the rendering result is finally output.
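A minimal data-structure sketch for these two rule sets is given below; the field names and value conventions are illustrative assumptions rather than terms defined by this disclosure.

```python
# Minimal data structures for the two rule sets described above.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class AssemblyRule:              # positions and connection modes between modules
    module_a: str                # encoded module name, e.g. "R_House001_Wall"
    module_b: str
    connection: str              # e.g. "edge_snap" or "socket" (assumed vocabulary)
    offset: Tuple[float, float, float]   # relative position (x, y, z)

@dataclass
class PlacementRule:             # orientation and placement position of a module
    module: str
    position: Tuple[float, float, float] # world-space position (x, y, z)
    rotation_deg: float                  # orientation about the vertical axis
```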
Further, the step 13 may specifically include:
step 131, initializing a set of random assembly rules and placement rules as an initial solution;
specifically, the parameters of each rule may be generated using a uniformly distributed random number generator, resulting in a set of random assembly rules and placement rules.
Step 132, evaluating each group of assembly rules and placement rules according to a preset fitness function, and calculating their fitness, wherein the fitness function includes an aesthetic degree index and a practicability index of the assembled building model;
specifically, the choice of fitness function depends on the specific application scenario and target; common fitness functions include mean square error, cross entropy, and the like. Here, the aesthetic degree index and the practicability index may be set as evaluation indexes and combined into the fitness function. The computation may require knowledge from specialized areas such as computer vision and architecture.
Step 133, performing selection according to the fitness of each solution, where solutions whose fitness exceeds a preset threshold enter the next generation;
specifically, the selection method can use a roulette selection method or a tournament selection method, and the setting of the selection threshold can be adjusted according to actual needs.
Step 134, in the next generation of solutions, randomly selecting two solutions, and then exchanging part of the rules to generate new solutions, and/or randomly changing part of the rules in the next generation of solutions;
in particular, for the crossover operation, common methods are single-point crossover, multi-point crossover, uniform crossover, and the like. For mutation operation, part of rules can be randomly selected to be randomly changed.
And step 135, repeating the iterative operation until the preset iterative times are reached.
Specifically, a fixed number of iterations may be set as the end condition, or a certain condition (e.g., the fitness exceeds a certain threshold) may be set as the end condition.
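A compact sketch of the genetic loop in steps 131 to 135 is given below. The encoding of a solution as a flat list of floating-point parameters, the population size and the numeric thresholds are all assumptions for illustration; the fitness function combining the aesthetic degree and practicability indexes would be supplied by the caller.

```python
# Sketch of the genetic search over assembly and placement rules (steps 131-135).
import random

def evolve(fitness, rule_len=16, pop_size=30, generations=50,
           threshold=0.5, mutation_rate=0.1):
    # Step 131: uniformly random initial solutions (assembly + placement parameters).
    pop = [[random.random() for _ in range(rule_len)] for _ in range(pop_size)]
    for _ in range(generations):                    # step 135: fixed iteration count
        scored = [(fitness(ind), ind) for ind in pop]        # step 132: evaluate fitness
        survivors = [ind for f, ind in scored if f > threshold]  # step 133: threshold selection
        if len(survivors) < 2:
            survivors = [ind for _, ind in sorted(scored, reverse=True)[:2]]
        children = []
        while len(children) < pop_size:             # step 134: crossover and mutation
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, rule_len)
            child = a[:cut] + b[cut:]               # single-point crossover
            if random.random() < mutation_rate:
                child[random.randrange(rule_len)] = random.random()
            children.append(child)
        pop = children
    return max(pop, key=fitness)
```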
In the above steps, the quality of solutions can be better assessed by introducing various fitness functions. For example, in addition to the aesthetic degree index and the practicability index, other indexes such as the complexity and stability of the building model may be considered. In addition, in the selection, crossover and mutation operations, customized strategies can be introduced according to the nature and needs of the problem to improve search efficiency and the quality of the results.
In some cases, it may be desirable to optimize multiple objectives simultaneously, such as aesthetics, practicality, etc. In this case, a multi-objective genetic algorithm may be used to find the optimal solution that satisfies multiple objectives.
In the optimization process, a local search method, such as a hill climbing algorithm or a simulated annealing algorithm, can be introduced to further optimize each generation of solution generated by the genetic algorithm, so that the local search can be better performed on the basis of global search, and the quality of the solution is improved.
In actual operation, a parallelization strategy can be adopted according to requirements to improve the operation efficiency of the genetic algorithm. For example, multiple algorithm instances may be run simultaneously on multiple processors, or fitness calculations for multiple solutions may be processed simultaneously, etc., which is a great advantage for handling large-scale problems.
For parameter settings of the algorithm, such as population size, crossover rate, mutation rate, etc., dynamic adjustment can be performed by some adaptive strategy, instead of using fixed parameter values. For example, the crossover rate and the mutation rate can be dynamically adjusted in the iterative process according to the information such as diversity of the population, the best fitness and the like.
When the preset assembly rules and placement rules are constructed, domain knowledge such as building design principles, style characteristics and cultural elements can be introduced, so as to reduce the complexity of the search space and improve search efficiency.
For example, a set of assembly rules and placement rules may be predefined with reference to the characteristics and principles of various architectural styles (e.g., Gothic, Baroque, Modern, etc.), covering aspects including but not limited to architectural structure, ornamental design and material use. In this way, the model can be generated according to the corresponding architectural style and principles when assembling and placing the prefabricated member module models, thereby reducing the search space and improving efficiency.
In addition, the assembly and placement rules can be further refined according to regional characteristics and cultural elements. For example, when generating traditional Chinese buildings, knowledge of the traditional Chinese building field, such as mortise-and-tenon structures and upturned eaves, can be introduced, and specific assembly and placement rules can be set on this basis.
The method has the advantages that the prior domain knowledge can be fully utilized, the style and the characteristics of the model can be maintained while the building model is generated on a large scale, and the diversity and the authenticity of the model are increased.
The embodiment of the present disclosure enables the characteristics of various types of buildings to be accurately captured and described through the classification of building data, enhancing the diversity and authenticity of the models. Second, by generating prefabricated member module models and setting corresponding names for them, the scheme realizes modular management of the models and improves the efficiency and flexibility of model generation. Finally, model rendering is performed according to preset assembly rules and placement rules, so that the generated building models better match actual urban building styles, improving their realism and fidelity.
Fig. 6 is a schematic structural diagram of a processing apparatus 60 for large-scale generation of building models according to an embodiment of the present disclosure. Referring to fig. 6, the apparatus 60 includes:
The classification module 61 is configured to classify the obtained original building data according to preset building features, and obtain classified building data;
the processing module 62 is configured to generate corresponding prefabricated member module models according to the classified building data, and set corresponding names;
and the rendering module 63 is configured to load the corresponding prefabricated member module model for rendering according to preset assembly rules and placement rules, and output a rendering result.
Optionally, the classifying module 61 is configured to classify the obtained raw building data according to the preset building characteristics, including: classifying the acquired original building data by adopting a deep learning algorithm, wherein the deep learning algorithm comprises the following steps:
training a deep neural network model through a large amount of marked building data to obtain a trained deep neural network model, wherein the deep neural network model learns the characteristic relation between building characteristics and categories thereof through a loss function;
after training is completed, inputting the acquired original building data into the trained deep neural network model, wherein the model performs nonlinear transformations on the data through activation functions to obtain nonlinearly expressed data;
and the model applies the learned characteristic relation to the nonlinearly expressed data through forward propagation, calculates the probability distribution of the output layer, and takes the category with the highest probability as the classification result of the original building data.
Optionally, the classifying module 61 is configured to determine the target vector annotation layer according to the vector graphic parameter, the preset text annotation information and/or the preset graphic annotation information, and includes:
and carrying out vectorization description on preset text annotation information and/or preset graphic annotation information according to the vector graphic parameters to obtain a target vector annotation layer with a transparent channel.
Optionally, the classification module 61 is configured to perform training of the deep neural network model through a large amount of marked building data, and obtain a trained deep neural network model, where the deep neural network model learns a feature relation between a building feature and its category through a loss function, and includes:
the marked building data comprises the characteristic data of the area, the height and the shape of the building and the category data of the residence, the commercial building and the industrial building;
inputting the characteristic data and the category data into a preset deep neural network model for training, optimizing a cross entropy loss function through a back propagation algorithm and a preset optimizer, and obtaining a trained deep neural network model through iterative learning.
Optionally, the preset optimizer in the classification module 61 includes:
m=β1*m+(1-β1)*g;
v=β2*v+(1-β2)*g^2;
m_hat=m/(1-β1^t);
v_hat=v/(1-β2^t);
w=w-α*m_hat/(sqrt(v_hat)+ε);
wherein g represents a gradient, β1 and β2 represent momentum factors, α represents a learning rate, m and v represent first-order moment estimation and second-order moment estimation of the gradient g, t represents the number of steps of the current iteration, ε represents a constant, and w represents a weight to be updated.
Optionally, the processing module 62 is configured to generate corresponding prefabricated member module models according to the categorized building data, and set corresponding names, including:
generating a corresponding prefabricated member module model according to the classified building data, wherein the prefabricated member module model comprises: the main parts of the main structure, the outer wall material, the window, the door and the roof of the building, wherein each type of building generates a set of corresponding prefabricated member module models;
setting a unique name for each generated prefabricated member module model, wherein the name is encoded based on the category and characteristics of the module;
and storing the generated prefabricated member module model and the corresponding name in a database or a file system, and establishing an index.
Optionally, the rendering module 63 is configured to load the corresponding prefabricated member module model for rendering according to preset assembly rules and placement rules, and output a rendering result, including:
the assembly rules include the positions of and connection modes between prefabricated member module models;
the placement rules include the orientation and placement positions of the prefabricated member module models.
Optionally, the rendering module 63 is configured to generate the preset assembly rules and placement rules by:
initializing a set of random assembly rules and placement rules as an initial solution;
evaluating each group of assembly rules and placement rules according to a preset fitness function, and calculating their fitness, wherein the fitness function includes an aesthetic degree index and a practicability index of the assembled building model;
performing selection according to the fitness of each solution, where solutions whose fitness exceeds a preset threshold enter the next generation;
in the next generation of solutions, two solutions are randomly selected, and then part of the rules of the solutions are exchanged to generate new solutions, and/or part of the rules in the next generation of solutions are randomly changed;
and repeating the iterative operation until the preset iterative times are reached.
Embodiments of the present disclosure also provide a communication device including: a processor, a memory storing a computer program which, when executed by the processor, performs the method as in the above embodiments. All the implementation manners in the method embodiment are applicable to the embodiment, and the same technical effect can be achieved.
Embodiments of the present disclosure also provide a computer-readable storage medium comprising instructions that, when run on a computer, cause the computer to perform a method as in the above embodiments. All the implementation manners in the method embodiment are applicable to the embodiment, and the same technical effect can be achieved.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or a part of the technical solution, or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present disclosure. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, etc.
Furthermore, it should be noted that in the apparatus and method of the present disclosure, it is apparent that the components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered equivalent to the present disclosure. Also, the steps of performing the series of processes described above may naturally be performed in chronological order in the order of description, but are not necessarily performed in chronological order, and some steps may be performed in parallel or independently of each other. It will be appreciated by those of ordinary skill in the art that all or any of the steps or components of the methods and apparatus of the present disclosure may be implemented in hardware, firmware, software, or a combination thereof in any computing device (including processors, storage media, etc.) or network of computing devices, as would be apparent to one of ordinary skill in the art upon reading the present disclosure.
Thus, the objects of the present disclosure may also be achieved by running a program or set of programs on any computing device. The computing device may be a well-known general-purpose device. The objects of the present disclosure may also be achieved merely by providing a program product containing program code for implementing the method or apparatus. That is, such a program product also constitutes the present disclosure, and a storage medium storing such a program product also constitutes the present disclosure. Obviously, the storage medium may be any known storage medium or any storage medium developed in the future.
While the foregoing is directed to the preferred embodiments of the present disclosure, it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present disclosure and are intended to be comprehended within the scope of the present disclosure.
Claims (10)
1. A method for large scale generation of building models, comprising:
classifying the acquired original building data according to preset building characteristics to acquire classified building data;
respectively generating corresponding prefabricated member module models according to the classified building data, and setting corresponding names;
and loading the corresponding prefabricated member module model for rendering according to preset assembly rules and placement rules, and outputting a rendering result.
2. The method of claim 1, wherein classifying the acquired raw building data according to the preset building characteristics comprises: classifying the acquired raw building data by a deep learning algorithm, the deep learning algorithm comprising:
training a deep neural network model through a large amount of marked building data to obtain a trained deep neural network model, wherein the deep neural network model learns the characteristic relation between building characteristics and categories thereof through a loss function;
after the training is completed, inputting the obtained original building data into the trained deep neural network model, wherein the model performs nonlinear transformations on the data through activation functions to obtain nonlinearly expressed data;
and the model applies the learned characteristic relation to the nonlinearly expressed data through forward propagation, calculates the probability distribution of the output layer, and takes the category with the highest probability as the classification result of the original building data.
3. The method of claim 2, wherein training the deep neural network model with a large amount of labeled building data to obtain a trained deep neural network model, wherein the deep neural network model learns the feature relationship between building features and their categories through a loss function, comprises:
the labeled building data comprise feature data of building area, height and shape, and category data of residential, commercial and industrial buildings;
and inputting the feature data and the category data into a preset deep neural network model for training, optimizing a cross-entropy loss function through a back-propagation algorithm and a preset optimizer, and obtaining the trained deep neural network model through iterative learning.
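A minimal training sketch corresponding to claim 3 is given below, assuming PyTorch as the deep-learning framework (the claim does not name one) and synthetic data standing in for the labeled building data; the network shape, epoch count and feature scaling are illustrative choices only.

```python
import torch
from torch import nn

# Features: (area, height, shape_code); labels: 0=residential, 1=commercial, 2=industrial.
model = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 3))
loss_fn = nn.CrossEntropyLoss()                    # cross-entropy loss function
optimizer = torch.optim.Adam(model.parameters())   # preset optimizer (cf. claim 4)

# Synthetic labeled building data standing in for the real training set.
features = torch.rand(256, 3) * torch.tensor([500.0, 100.0, 5.0])
labels = torch.randint(0, 3, (256,))

for epoch in range(100):                           # iterative learning
    optimizer.zero_grad()
    logits = model(features)
    loss = loss_fn(logits, labels)                 # compare predictions with labels
    loss.backward()                                # back-propagation of gradients
    optimizer.step()                               # optimizer updates the weights
```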
4. The method according to claim 3, wherein the preset optimizer comprises the following update rules:
m=β1*m+(1-β1)*g;
v=β2*v+(1-β2)*g^2;
m_hat=m/(1-β1^t);
v_hat=v/(1-β2^t);
w=w-α*m_hat/(sqrt(v_hat)+ε);
wherein g represents a gradient, β1 and β2 represent momentum factors, α represents a learning rate, m and v represent first-order moment estimation and second-order moment estimation of the gradient g, t represents the number of steps of the current iteration, ε represents a constant, and w represents a weight to be updated.
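The update equations of claim 4 are the standard Adam optimizer rules. A direct NumPy transcription is sketched below; the hyperparameter values are common defaults assumed for illustration and are not specified by the claim.

```python
import numpy as np

def adam_step(w, g, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One weight update following the equations of claim 4."""
    m = beta1 * m + (1 - beta1) * g          # first-moment estimate of the gradient
    v = beta2 * v + (1 - beta2) * g**2       # second-moment estimate of the gradient
    m_hat = m / (1 - beta1**t)               # bias correction for m at step t
    v_hat = v / (1 - beta2**t)               # bias correction for v at step t
    w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0])
m = np.zeros_like(w)
v = np.zeros_like(w)
for t in range(1, 501):
    g = 2 * w
    w, m, v = adam_step(w, g, m, v, t)
```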
5. The method according to claim 1, wherein generating corresponding prefabricated member module models from the classified building data and setting corresponding names, respectively, comprises:
generating a corresponding prefabricated member module model according to the classified building data, wherein the prefabricated member module model comprises the main building parts of the main structure, the exterior wall material, the windows, the doors and the roof, and a set of corresponding prefabricated member module models is generated for each building category;
setting a unique name for each generated prefabricated member module model, wherein the name is encoded based on the category and features of the model;
and storing the generated prefabricated member module model and the corresponding name in a database or a file system, and establishing an index.
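A minimal sketch of the naming and storage step of claim 5 is shown below. The name-encoding scheme (`<category>-<part>-<running id>`), the list of building parts and the SQLite schema are illustrative assumptions; the claim only requires that names encode the model's category and features and that the stored models are indexed.

```python
import json
import sqlite3

PARTS = ["main_structure", "exterior_wall", "window", "door", "roof"]

def make_name(category: str, part: str, index: int) -> str:
    return f"{category}-{part}-{index:04d}"       # unique, category/feature-encoded name

conn = sqlite3.connect(":memory:")                # stands in for the model database
conn.execute("CREATE TABLE prefab_models (name TEXT PRIMARY KEY, category TEXT, "
             "part TEXT, geometry TEXT)")
conn.execute("CREATE INDEX idx_category ON prefab_models (category)")  # lookup index

def store_models(category: str, classified_data: dict) -> list[str]:
    """Generate one prefab module model per building part and store it under its name."""
    names = []
    for i, part in enumerate(PARTS):
        name = make_name(category, part, i)
        geometry = json.dumps(classified_data.get(part, {}))  # placeholder model payload
        conn.execute("INSERT INTO prefab_models VALUES (?, ?, ?, ?)",
                     (name, category, part, geometry))
        names.append(name)
    return names

store_models("residential", {"roof": {"type": "gable"}})
```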
6. The method according to claim 1, wherein loading the corresponding prefabricated member module model for rendering according to the preset splicing rules and assembly rules, and outputting a rendering result, comprises:
the splicing rules comprise the positions of and connection modes among the prefabricated member module models;
and the assembly rules comprise the orientation and placement position of each prefabricated member module model.
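The two rule types distinguished in claim 6 could be represented, for example, with plain data classes as sketched below; the field names and the vocabulary of connection modes are assumptions made for illustration, not definitions from the patent.

```python
from dataclasses import dataclass

@dataclass
class SplicingRule:
    """Position and connection mode between two prefabricated member module models."""
    model_a: str                              # name of the first module model (cf. claim 5)
    model_b: str                              # name of the second module model
    offset: tuple[float, float, float]        # relative position of model_b w.r.t. model_a
    connection_mode: str                      # e.g. "bolted", "welded" (assumed vocabulary)

@dataclass
class AssemblyRule:
    """Orientation and placement position of a single prefabricated member module model."""
    model: str
    position: tuple[float, float, float]
    rotation_deg: tuple[float, float, float]

# Example: place a roof module on top of the main structure.
rules = [
    SplicingRule("residential-main_structure-0000", "residential-roof-0004",
                 offset=(0.0, 0.0, 12.0), connection_mode="bolted"),
    AssemblyRule("residential-roof-0004", position=(0.0, 0.0, 12.0),
                 rotation_deg=(0.0, 0.0, 90.0)),
]
```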
7. The method of claim 6, wherein the preset splicing rules and assembly rules are generated by:
initializing groups of random splicing rules and assembly rules as initial solutions;
evaluating each group of splicing rules and assembly rules according to a preset fitness function to calculate its fitness, wherein the fitness function comprises an aesthetics index and a practicality index of the assembled building model;
performing selection according to the fitness of each solution, wherein solutions whose fitness exceeds a preset threshold enter the next generation;
in the next generation, randomly selecting two solutions and exchanging part of their rules to generate new solutions, and/or randomly changing part of the rules of the next-generation solutions;
and repeating the above operations until a preset number of iterations is reached.
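The procedure of claim 7 is essentially a genetic algorithm over rule sets. A minimal sketch is given below, in which a solution is modelled as a flat list of numeric rule parameters; the fitness function, threshold and operator rates are illustrative assumptions rather than values given by the claim.

```python
import random

RULE_LEN, POP_SIZE, GENERATIONS, THRESHOLD = 8, 20, 50, 0.5

def fitness(solution):
    # Stand-in for the aesthetics + practicality indices of the assembled model.
    aesthetics = 1.0 - abs(sum(solution) / len(solution) - 0.5)
    practicality = 1.0 - max(solution) + min(solution)
    return 0.5 * aesthetics + 0.5 * practicality

def crossover(a, b):
    cut = random.randrange(1, RULE_LEN)          # exchange part of the rules of two solutions
    return a[:cut] + b[cut:]

def mutate(solution, rate=0.1):
    return [random.random() if random.random() < rate else x for x in solution]

population = [[random.random() for _ in range(RULE_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):                     # repeat for a preset number of iterations
    # Selection: solutions whose fitness exceeds the threshold enter the next generation.
    survivors = [s for s in population if fitness(s) > THRESHOLD] or population
    population = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
```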
8. A processing apparatus for large-scale generation of building models, comprising:
the classification module is used for classifying the acquired original building data according to preset building characteristics to acquire classified building data;
The processing module is used for respectively generating corresponding prefabricated member module models according to the classified building data and setting corresponding names;
and the rendering module is used for loading the corresponding prefabricated member module model for rendering according to the preset splicing rules and assembly rules, and outputting a rendering result.
9. A computing device, comprising a processor and a memory storing a computer program which, when executed by the processor, performs the method of any one of claims 1 to 7.
10. A computer readable storage medium storing instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310870407.9A CN116958498A (en) | 2023-07-14 | 2023-07-14 | Method, device and equipment for large-scale generation of building model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116958498A (en) | 2023-10-27 |
Family
ID=88452276
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310870407.9A Pending CN116958498A (en) | 2023-07-14 | 2023-07-14 | Method, device and equipment for large-scale generation of building model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116958498A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |