CN116579051A - Two-dimensional house type information identification and extraction method based on house type data augmentation - Google Patents

Two-dimensional house type information identification and extraction method based on house type data augmentation

Info

Publication number
CN116579051A
Authority
CN
China
Prior art keywords
house type
wall
data
dimensional
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310387120.0A
Other languages
Chinese (zh)
Other versions
CN116579051B (en)
Inventor
王兵
柯建生
戴振军
陈学斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Pole 3d Information Technology Co ltd
Original Assignee
Guangzhou Pole 3d Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Pole 3d Information Technology Co ltd filed Critical Guangzhou Pole 3d Information Technology Co ltd
Priority to CN202310387120.0A priority Critical patent/CN116579051B/en
Publication of CN116579051A publication Critical patent/CN116579051A/en
Application granted granted Critical
Publication of CN116579051B publication Critical patent/CN116579051B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/10 Geometric CAD
    • G06F30/13 Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The application discloses a two-dimensional house type information identification and extraction method based on house type data augmentation, which comprises the following steps: combining house type modeling data and prior house type diagram example maps to construct an AI model training data set; processing the training data set through a custom configuration file according to the reliability index of the AI model, and outputting house type diagram training samples and annotation files; training an AI model on the house type diagram training samples and annotation files to obtain segmentation prediction results for house type doors, windows and walls; and iteratively optimizing the segmentation prediction results to obtain the center-line position information of doors, windows and walls for house type reconstruction. The application is efficient, can improve the performance of the model, and can be widely applied in the field of computer technology.

Description

Two-dimensional house type information identification and extraction method based on house type data augmentation
Technical Field
The application relates to the field of computer technology, and in particular to a two-dimensional house type (floor plan) information identification and extraction method based on house type data augmentation.
Background
Whole-house customization is a solution that integrates services such as house design, customization and installation. It uses environmentally friendly furniture materials, can meet owners' requirements in an all-round way, and can be tailored to the owners' living habits, making it highly user-oriented.
The house type is the first and most critical step of whole-house custom design: reproducing the house type structure, the layout of the environment and the overall dimensions influences the design effect, and more directly determines the direction of the whole-house soft-furnishing design and the style selection of cabinets and furniture. Constructing a model of the consumer's actual house type is therefore the basis of all subsequent work.
A basic house type is composed of three elements: walls, windows and doors. In the past, a designer had to trace the house type drawing provided by the user, which is inefficient. In existing three-dimensional house type reconstruction techniques, the wall drawing is preprocessed and thinned to obtain the wall center lines, which are then corrected to obtain the structural information of the walls; the positional relationship between doors/windows and walls is then obtained from the wall drawing, door and window sub-images are cropped at the corresponding positions in the house type diagram using this relationship, and the cropped sub-images are recognized to obtain the structural information of doors and windows. Such methods can only obtain the structural information of doors, windows and walls through the combined use of a wall drawing and a house type diagram. In some application scenarios, however, the features of doors, windows and walls must be obtained by intelligently recognizing the house type diagram alone, followed by post-processing to obtain the structural information of house type doors, windows and walls for house type reconstruction.
Disclosure of Invention
Therefore, embodiments of the application provide a two-dimensional house type information identification and extraction method based on house type data augmentation, which is efficient and improves the performance of the model.
An aspect of the embodiments of the application provides a two-dimensional house type information identification and extraction method based on house type data augmentation, which comprises the following steps:
combining house type modeling data and prior house type diagram example maps to construct an AI model training data set;
processing the training data set through a custom configuration file according to the reliability index of the AI model, and outputting house type diagram training samples and annotation files;
training an AI model on the house type diagram training samples and annotation files to obtain segmentation prediction results for house type doors, windows and walls;
and iteratively optimizing the segmentation prediction results to obtain the center-line position information of doors, windows and walls for house type reconstruction.
Optionally, the method further comprises a step of creating a house type identification training data set, the step comprising:
analyzing and preprocessing historical house type three-dimensional model data;
drawing house type diagrams;
performing uncertainty checking and data optimization.
Optionally, analyzing and preprocessing the historical house type three-dimensional model data comprises:
organizing house type information from the files associated with the historical house type modeling three-dimensional data using a self-developed parsing tool, the house type information comprising the house type area, walls, door and window structures, and furniture in the space;
preprocessing the house type information in combination with a custom configuration file;
wherein preprocessing the house type information in combination with the custom configuration file comprises:
unifying the dimensions of object sizes and absolute positions according to the configuration;
summarizing common furniture categories, including cabinets, beds, sofas and tables, with reference to prior house type diagram examples;
performing same-direction and closed-loop processing on the wall lines, so that the outer enclosing wall contours run in a consistent head-to-tail direction and each region forms a closed loop;
converting the parsed wall information into wall block contour endpoints to form independent color block areas; if the configuration file specifies a changed wall thickness, performing an interference check between walls and furniture and determining whether the wall thickness is reasonable based on the furniture positions; and calculating the full-image minimum bounding box, which is used to calculate the full-image scaling factor;
generating door and window structure areas;
generating furniture regions and ordering them by visual priority.
Optionally, drawing the house type diagram comprises:
calculating the scaling factor from the relation between size units and pixels in the configuration file, the unit pixel scale and the specified full-image resolution, and then globally scaling the data;
matching all structure data to be drawn against a pre-prepared style library by category, object name and proportion, obtaining the image tag in the SVG file linked by the best matching item, and specifying fill or tile and zoom options;
placing, rotating and aligning the OBB of each mapped structure as required and generating an SVG-format label; for structures without a map, directly encoding their key points into SVG format;
annotating the house type diagram according to the color block areas generated during data preprocessing.
Optionally, performing uncertainty checking and data optimization comprises:
using uncertainty as one of the reliability indexes for measuring the model's house type diagram recognition results, to judge the credibility of the model's recognition;
for places with high uncertainty, generating a batch of samples for optimization by setting drawing requirements at the specified positions of the configuration file.
Optionally, training the AI model on the house type diagram training samples and annotation files to obtain the segmentation prediction results for house type doors, windows and walls comprises:
recognizing and segmenting the input house type drawing with the DeepLabv3+ algorithm to obtain low-dimensional features;
applying atrous (dilated) convolutions with different dilation rates in series and in parallel to obtain context and multi-scale information, introducing a larger receptive field while controlling the resolution of the feature map, and passing the extracted feature map into an ASPP (Atrous Spatial Pyramid Pooling) module to obtain high-dimensional features;
concatenating the low-dimensional and high-dimensional features to obtain the semantic segmentation prediction result.
Optionally, iteratively optimizing the segmentation prediction results and obtaining the center-line position information of doors, windows and walls for house type reconstruction comprises:
optimizing the room contours from the segmentation prediction results;
searching for horizontal and vertical wall combinations within a 20-degree offset from the horizontal and vertical directions, and obtaining combinations of same-direction, aligned walls by union-find (disjoint-set) search, so that inclined walls that may be produced by connection are straightened.
Another aspect of the embodiments of the application further provides a two-dimensional house type information identification and extraction device based on house type data augmentation, comprising:
a first module for combining house type modeling data and prior house type diagram example maps to construct an AI model training data set;
a second module for processing the training data set through a custom configuration file according to the reliability index of the AI model and outputting house type diagram training samples and annotation files;
a third module for training an AI model on the house type diagram training samples and annotation files to obtain segmentation prediction results for house type doors, windows and walls;
a fourth module for iteratively optimizing the segmentation prediction results and obtaining the center-line position information of doors, windows and walls for house type reconstruction.
Another aspect of the embodiment of the application also provides an electronic device, which includes a processor and a memory;
the memory is used for storing programs;
the processor executes the program to implement the method as described above.
Another aspect of the embodiments of the present application also provides a computer-readable storage medium storing a program that is executed by a processor to implement a method as described above.
Embodiments of the present application also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the foregoing method.
Embodiments of the application combine house type modeling data and prior house type diagram example maps to construct an AI model training data set; process the training data set through a custom configuration file according to the reliability index of the AI model and output house type diagram training samples and annotation files; train an AI model on the house type diagram training samples and annotation files to obtain segmentation prediction results for house type doors, windows and walls; and iteratively optimize the segmentation prediction results to obtain the center-line position information of doors, windows and walls for house type reconstruction. The application improves efficiency and can improve the performance of the model.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1 is a flowchart illustrating the overall steps of an embodiment of the present application;
FIG. 2 is a schematic diagram comparing the uncertainty of background areas before and after fine-tuning;
FIG. 3 is a schematic visualization of the wall segmentation prediction result;
fig. 4 is a schematic diagram of the house type reconstruction result.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Aiming at the small-sample problems of current house type diagram data (few samples, insufficient sample diversity, low efficiency of house type annotation, and the like), a method is provided for constructing an AI model training data set by combining house type modeling data with example maps of house type diagrams found on the market. Combined with the AI model reliability index, house type diagram training samples and annotation files can be output in a targeted manner through a custom configuration file, effectively improving the performance of the model.
After the AI model is trained, the input house type drawing is predicted to obtain the features of the house type doors, windows, walls and background. The door, window and wall features are processed separately to calculate the wall thickness. The polygon vertex coordinates are then refined by iterative optimization to reduce the number of vertices, and the wall skeleton and contour are extracted. Through joint alignment, protruding points of the wall contour are deleted, walls are aligned horizontally and vertically, right-angle wall corners are squared, and redundant points on the wall center lines are deleted to obtain the center-line position information of doors, windows and walls, which is imported into a three-dimensional modeling program to complete the house type modeling.
Specifically, this embodiment describes a method for house type recognition and reconstruction from pictures; the flow chart of the method is shown in FIG. 1. First, house type modeling data are combined with example maps of common house type diagrams on the market to construct an AI model training data set; then, combined with the AI model reliability index, house type diagram training samples and annotation files are output in a targeted manner through a custom configuration file. An AI model is then trained to obtain the segmentation prediction results for house type doors, windows and walls. Finally, the segmentation prediction results are iteratively optimized to obtain the center-line position information of doors, windows and walls for house type reconstruction. Specifically, the method comprises the following steps:
1. House type identification training data set creation:
I. Analyzing and preprocessing historical house type three-dimensional model data:
a. From the files associated with the historical house type modeling three-dimensional data, the house type information is organized using a self-developed parsing tool; the information includes the house type area, walls, door and window structures, furniture and other objects in the space.
b. Data preprocessing is performed on the house type information in combination with a custom configuration file.
a) Dimension unification. Absolute positions and object sizes are usually set in different units, so the dimensions are unified for convenience in subsequent scaling.
b) Furniture filtering. Considering the limitations of manually collected pattern maps, and with reference to common house type diagram examples on the current network, common furniture categories are summarized, including cabinets, beds, sofas and tables. Categories whose display effect on a two-dimensional house type diagram is not obvious or whose probability of occurrence is small, such as wall hangings, are filtered out in combination with the product database, so as to reduce the data processing cost.
c) Indoor sub-area contour processing. To facilitate contour drawing and floor mapping, the wall lines are processed for consistent direction and closure, so that the outer enclosing wall contours run in a consistent head-to-tail direction and each region forms a closed loop (see the sketch below).
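A minimal Python sketch of this same-direction and closed-loop processing, assuming each contour is a list of (x, y) points; the signed (shoelace) area decides the winding direction, and the function names are illustrative:

    def signed_area(poly):
        # Shoelace formula: positive for counter-clockwise vertex order.
        s = 0.0
        for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
            s += x1 * y2 - x2 * y1
        return s / 2.0

    def normalize_contour(poly, close=True):
        # Force a consistent (counter-clockwise) head-to-tail direction and
        # close the region explicitly so that it forms a loop.
        pts = list(poly)
        if signed_area(pts) < 0:
            pts.reverse()
        if close and pts[0] != pts[-1]:
            pts.append(pts[0])
        return pts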
d) Wall block generation. The parsed wall information (endpoints and wall widths) is converted into wall block contour endpoints to form independent color block areas; if the configuration file specifies a changed wall thickness, an interference check between walls and furniture is also carried out, and whether the wall thickness is reasonable is determined based on the furniture positions. In addition, the full-image minimum bounding box is calculated so that the full-image scaling factor can be computed at a later stage (a sketch of the conversion follows).
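A sketch of this conversion under the assumption that each wall is given by its two center-line endpoints and its width; the center line is offset by half the width on each side to obtain the four contour endpoints, and an axis-aligned full-image bounding box is accumulated for the later scaling step (function names are assumptions):

    import math

    def wall_block_corners(p0, p1, width):
        # Offset the wall center line by +/- width/2 along its normal to get
        # the four contour endpoints of the wall block.
        (x0, y0), (x1, y1) = p0, p1
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy) or 1.0
        nx, ny = -dy / length, dx / length        # unit normal
        h = width / 2.0
        return [(x0 + nx * h, y0 + ny * h), (x1 + nx * h, y1 + ny * h),
                (x1 - nx * h, y1 - ny * h), (x0 - nx * h, y0 - ny * h)]

    def full_image_bbox(blocks):
        # Axis-aligned minimum bounding box over all wall-block corners,
        # used later to compute the full-image scaling factor.
        xs = [x for block in blocks for x, _ in block]
        ys = [y for block in blocks for _, y in block]
        return min(xs), min(ys), max(xs), max(ys)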
e) Door and window structure area generation. Unlike walls, doors, windows and furniture have orientations, and doors additionally open to the left or to the right, so an oriented bounding box (Oriented Bounding Box, OBB) must be generated for these objects to determine their areas. A door or window is generally drawn as two parts: an area overlapping the wall and a non-wall area, such as a threshold-stone part or a flat window area. Because the overlapping part coincides with the wall, the wall is split for the overlapping area, i.e., the original wall is divided into two parts according to the overlap, which simplifies subsequent rendering; for the remaining areas, the local OBB is rotated about a fixed point to obtain the final area (an OBB sketch follows).
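A sketch of OBB construction under the assumption that the object's orientation angle is known from the modeling data: the key points are rotated into the object's local frame, an axis-aligned box is taken there, and its corners are mapped back to world coordinates.

    import math

    def oriented_bounding_box(points, angle_deg):
        # Rotate key points into the object's local frame, take the
        # axis-aligned box there, then map the four corners back.
        a = math.radians(angle_deg)
        c, s = math.cos(a), math.sin(a)
        local = [(x * c + y * s, -x * s + y * c) for x, y in points]
        xmin, xmax = min(p[0] for p in local), max(p[0] for p in local)
        ymin, ymax = min(p[1] for p in local), max(p[1] for p in local)
        corners = [(xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax)]
        return [(x * c - y * s, x * s + y * c) for x, y in corners]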
f) Furniture region generation and visual priority ordering. As in e), an OBB is generated for each piece of furniture. In addition, because the two-dimensional house type diagram is a top view, objects may occlude one another, so the data must be sorted into a rendering order to keep the top-view coverage relationships reasonable.
II. House type diagram drawing:
c. Global scaling. The scaling factor is calculated from the relation between size units and pixels in the configuration file, the unit pixel scale and the specified full-image resolution, and the data are then globally scaled (a sketch follows).
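The exact formula is not given in the text; the following sketch assumes the configuration provides a unit-to-pixel scale and a target full-image resolution, and that the factor must keep the full-image bounding box inside the target canvas (parameter names and the margin are assumptions):

    def global_scale_factor(bbox, px_per_unit, target_res, margin=0.95):
        # bbox: (xmin, ymin, xmax, ymax) in model units; target_res: (w, h) px.
        xmin, ymin, xmax, ymax = bbox
        w_px = (xmax - xmin) * px_per_unit
        h_px = (ymax - ymin) * px_per_unit
        # Shrink the nominal unit-to-pixel scale if it would overflow the canvas.
        fit = min(target_res[0] / w_px, target_res[1] / h_px, 1.0)
        return px_per_unit * fit * margin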
d. and (5) drawing.
a) All structure data to be drawn are matched against a pre-prepared style library by category, object name and proportion; the image tag in the SVG file linked by the best matching item is obtained, and fill or tile and zoom options are specified.
b) The OBB of each mapped structure is placed, rotated and aligned as required, and an SVG-format label is generated; for structures without a map, the key points are encoded directly into SVG format. In both cases, specific drawing information such as strokes and fills must be specified, which completes the house type drawing (a minimal SVG-encoding sketch follows).
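A minimal sketch of the SVG encoding; the element choices and attribute names are illustrative and not necessarily the patent's exact output format:

    def polygon_to_svg(points, stroke="black", fill="none"):
        # Encode structure key points directly as an SVG <polygon> element.
        pts = " ".join(f"{x:.1f},{y:.1f}" for x, y in points)
        return f'<polygon points="{pts}" stroke="{stroke}" fill="{fill}"/>'

    def image_obb_to_svg(href, cx, cy, w, h, angle_deg):
        # Place a style-library image on a structure's OBB: translate to the
        # OBB center, rotate, and draw the image centered on the origin.
        return (f'<g transform="translate({cx:.1f},{cy:.1f}) rotate({angle_deg:.1f})">'
                f'<image href="{href}" x="{-w / 2:.1f}" y="{-h / 2:.1f}" '
                f'width="{w:.1f}" height="{h:.1f}"/></g>')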
e. House type diagram annotation. The color block areas generated during data preprocessing are, in essence, exactly the areas that need to be annotated, so annotation files can be generated automatically according to the specified annotation type, such as a minimum bounding box (BBOX) or an OBB.
III. Uncertainty checking and data optimization:
f. and (5) adopting the uncertainty as one of reliability indexes of the house type graph recognition result of the measurement model, and judging the credibility of the model recognition. The method is characterized in that a model dropout layer is opened in a model verification stage, the same data are used for carrying out n times of prediction to obtain a plurality of predicted values, at the moment, the mean value is used as a final predicted value of the model, and the variance of the plurality of predicted values is uncertainty.
g. High model uncertainty can, in theory, be compensated for by providing more training data. Therefore, for places with high uncertainty, drawing requirements are set at the specified positions of the configuration file to generate a batch of samples for further fine-tuning, which improves the recognition performance of the model to a certain extent. For example, if results are poor on house type diagrams with changed wall thicknesses, a batch of house type diagrams with random wall thicknesses is generated; for house type diagrams with annotation wire frames, diagrams can be generated by inserting various annotation wire-frame distractor items. FIG. 2 compares the uncertainty of the background area before and after fine-tuning.
2. Semantic segmentation prediction on the house type drawing using a semantic segmentation algorithm:
Based on existing deep learning semantic segmentation technology, the neural network model and loss function are improved, and training is carried out on the collected house type identification data set.
I. The input house type drawing is recognized and segmented using the DeepLabv3+ algorithm.
The house type identification data set is fed into a pretrained ResNet-50 network to extract the corresponding feature maps.
II. Atrous (dilated) convolutions with different dilation rates are applied in series and in parallel to obtain more context and multi-scale information, introducing a larger receptive field while controlling the resolution of the feature map.
The extracted feature map F0 is passed into an ASPP (Atrous Spatial Pyramid Pooling) module, which comprises two parts:
h. one 1×1 convolution and three 3×3 atrous convolutions with dilation rates of 6, 12 and 18, respectively, each followed by a corresponding BN layer;
i. global average pooling over all input channels of F0, construction of a new feature map by a 1×1 convolution, and bilinear interpolation back to the required resolution.
The outputs of parts h and i are concatenated, and a 1×1 convolution is then applied to obtain the final feature map F1.
III. The high-dimensional feature F1 obtained in step II is concatenated with the low-dimensional feature F0 obtained in step I to improve the accuracy of the segmentation boundary:
j. the high-dimensional feature is first upsampled 4x by bilinear interpolation to obtain feature F2;
k. a 1×1 convolution is then used to reduce the dimensionality of the low-dimensional feature F0, giving F3;
l. F2 and F3 are concatenated;
m. the concatenated features are further fused by a 3×3 convolution, and a segmentation prediction P with the same size as the original picture is finally obtained by bilinear interpolation; the wall segmentation result is visualized in FIG. 3. A sketch of this ASPP-plus-decoder pipeline is given below.
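The following PyTorch-style sketch is one illustrative reading of steps II and III; the channel widths, the low-level tap point supplying F0, and the module names are assumptions, and the ResNet-50 backbone is omitted:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ASPP(nn.Module):
        # One 1x1 conv plus three 3x3 atrous convs (rates 6, 12, 18) and an
        # image-pooling branch, each with BN/ReLU, fused by a 1x1 conv.
        def __init__(self, in_ch, out_ch=256, rates=(6, 12, 18)):
            super().__init__()
            def branch(k, r):
                pad = 0 if k == 1 else r
                return nn.Sequential(
                    nn.Conv2d(in_ch, out_ch, k, padding=pad, dilation=r, bias=False),
                    nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            self.branches = nn.ModuleList([branch(1, 1)] + [branch(3, r) for r in rates])
            self.image_pool = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(in_ch, out_ch, 1, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            self.project = nn.Sequential(
                nn.Conv2d(out_ch * 5, out_ch, 1, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

        def forward(self, f0):
            h, w = f0.shape[-2:]
            feats = [b(f0) for b in self.branches]
            pooled = F.interpolate(self.image_pool(f0), size=(h, w),
                                   mode="bilinear", align_corners=False)
            return self.project(torch.cat(feats + [pooled], dim=1))   # F1

    class Decoder(nn.Module):
        # Upsample F1 by 4, reduce the low-level feature with a 1x1 conv,
        # concatenate, fuse with 3x3 convs, then upsample to the input size.
        def __init__(self, low_ch, num_classes, aspp_ch=256, low_out=48):
            super().__init__()
            self.reduce = nn.Sequential(
                nn.Conv2d(low_ch, low_out, 1, bias=False),
                nn.BatchNorm2d(low_out), nn.ReLU(inplace=True))
            self.fuse = nn.Sequential(
                nn.Conv2d(aspp_ch + low_out, 256, 3, padding=1, bias=False),
                nn.BatchNorm2d(256), nn.ReLU(inplace=True),
                nn.Conv2d(256, num_classes, 1))

        def forward(self, f1, low_level, out_size):
            f2 = F.interpolate(f1, size=low_level.shape[-2:],
                               mode="bilinear", align_corners=False)   # 4x upsample
            f3 = self.reduce(low_level)
            p = self.fuse(torch.cat([f2, f3], dim=1))
            return F.interpolate(p, size=out_size, mode="bilinear", align_corners=False)

With four classes (door, window, wall, background), the prediction P would be, for example, Decoder(low_ch=256, num_classes=4)(f1, f0, image.shape[-2:]).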
3. The segmentation results are iteratively optimized to obtain the door, window and wall center-line position information, which is converted into a house type file format to complete the house type reconstruction, as shown in FIG. 4.
The door, window and wall results are separated from the segmentation prediction, the wall thickness is calculated by traversing pixels, the skeleton is extracted to obtain the contour of the whole house, and iterative optimization is carried out to obtain the center-line position information of the doors, windows and walls.
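A sketch of this step under the assumption that the wall class is available as a binary mask; OpenCV's distance transform approximates half of the local wall width, and scikit-image's skeletonize extracts the center line (the averaging of the thickness is an assumption):

    import cv2
    import numpy as np
    from skimage.morphology import skeletonize

    def wall_skeleton_and_thickness(wall_mask):
        # wall_mask: binary array, nonzero where the model predicted "wall".
        mask = (wall_mask > 0).astype(np.uint8)
        # Distance to the nearest background pixel ~ half of the local width.
        dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
        skeleton = skeletonize(mask.astype(bool))
        # Mean thickness sampled along the center-line (skeleton) pixels.
        thickness = 2.0 * float(dist[skeleton].mean()) if skeleton.any() else 0.0
        return skeleton, thickness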
I. Room contour optimization:
n. Polygon vertex coordinate optimization and reduction. When the distance between two vertices of the polygon is below a threshold, the two points are fused into one point at the average of their coordinates. When two adjacent edges of the polygon are nearly collinear (the offset of the middle vertex from the line through its neighbours is below a threshold), the middle point is removed, the two end points are connected, and the two edges are flattened into one edge.
o. Wall intersection coordinate optimization and reduction. When the distance between an endpoint of one wall and an endpoint of another wall is below a threshold, the endpoints are fused into a wall intersection point at the average of their coordinates. When the distance from a point to an edge is below a threshold, the point is merged into the edge, and that edge becomes two edges. The threshold is half the wall thickness. A sketch of the vertex-reduction rules in n follows.
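A minimal sketch of the two vertex-reduction rules (thresholds and function names are illustrative):

    import math

    def merge_close_vertices(poly, dist_thresh):
        # Fuse consecutive vertices closer than the threshold into their mean.
        out = []
        for p in poly:
            if out and math.dist(out[-1], p) < dist_thresh:
                out[-1] = ((out[-1][0] + p[0]) / 2.0, (out[-1][1] + p[1]) / 2.0)
            else:
                out.append(p)
        return out

    def drop_flat_vertices(poly, offset_thresh):
        # Remove a middle vertex whose offset from the segment joining its
        # neighbours is below the threshold, flattening two edges into one.
        out, n = [], len(poly)
        for i, p in enumerate(poly):
            a, b = poly[i - 1], poly[(i + 1) % n]
            ab = math.dist(a, b) or 1.0
            offset = abs((b[0] - a[0]) * (a[1] - p[1])
                         - (a[0] - p[0]) * (b[1] - a[1])) / ab
            if offset >= offset_thresh:
                out.append(p)
        return out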
II. Joint alignment:
Horizontal wall combinations and vertical wall combinations are searched for within a 20-degree offset from the horizontal and vertical directions, and combinations of same-direction, aligned walls are obtained by union-find (disjoint-set) merging according to the following three rules, so that inclined walls that may be produced by connection are straightened (a union-find sketch follows the rules):
p. same direction, sharing a common point;
q. same direction, with a common diagonal point (no threshold);
r. same direction, with both the perpendicular distance and the shortest distance smaller than a threshold (for intersection points of three edges the threshold must be small, otherwise stepped surfaces would be filtered out).
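The text gives no pseudocode for the grouping; the following Python sketch is an assumption-laden illustration that groups near-horizontal walls with a disjoint-set (union-find) structure and straightens each group to its mean y-coordinate. The simplified _mergeable test stands in for rules p to r, and the vertical case is symmetric.

    import math

    def _find(parent, i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path compression
            i = parent[i]
        return i

    def _union(parent, a, b):
        parent[_find(parent, a)] = _find(parent, b)

    def _near_horizontal(p, q, max_angle=20.0):
        ang = abs(math.degrees(math.atan2(q[1] - p[1], q[0] - p[0])))
        return min(ang, 180.0 - ang) <= max_angle

    def _mergeable(w1, w2, dist_thresh):
        # Simplified stand-in for rules p-r: the segments share an endpoint,
        # or two of their endpoints lie within the distance threshold.
        return any(e1 == e2 or math.dist(e1, e2) <= dist_thresh
                   for e1 in w1 for e2 in w2)

    def straighten_horizontal_walls(walls, dist_thresh):
        # walls: list of ((x0, y0), (x1, y1)) center-line segments.
        walls = list(walls)
        idx = [i for i, (p, q) in enumerate(walls) if _near_horizontal(p, q)]
        parent = {i: i for i in idx}
        for a in idx:
            for b in idx:
                if a < b and _mergeable(walls[a], walls[b], dist_thresh):
                    _union(parent, a, b)
        groups = {}
        for i in idx:
            groups.setdefault(_find(parent, i), []).append(i)
        for members in groups.values():
            # Snap every segment of the group to the group's mean y-coordinate.
            y = sum(walls[i][0][1] + walls[i][1][1] for i in members) / (2 * len(members))
            for i in members:
                (x0, _), (x1, _) = walls[i]
                walls[i] = ((x0, y), (x1, y))
        return walls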
In summary, the method of the application has the following characteristics:
1. a customizable house type diagram data augmentation module based on historical house type modeling data and example maps;
2. an uncertainty-based house type recognition evaluation index and training data optimization method;
3. a method for obtaining center-line position information by iteratively optimizing room vertices and wall intersection points;
4. a post-processing method that combines union-find with straightening rules to straighten walls that may otherwise be inclined.
The application has the following advantages:
after the method is applied, a designer does not need to model the house type according to the house type drawing provided by the user, the wall line drawing function and the door and window design function are utilized to model the house type one by one according to the marked size on the drawing, the house type drawing can be directly uploaded, the scale information is set, and the recognition and the modeling of the house type drawing are completed by one key. The operation is simple, the technical requirements of designers are reduced, and the design efficiency of the house type modeling of the designers is greatly improved.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present application are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the application is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the described functions and/or features may be integrated in a single physical device and/or software module or one or more functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present application. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the application as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the application, which is to be defined in the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present application have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the application, the scope of which is defined by the claims and their equivalents.
While the preferred embodiment of the present application has been described in detail, the present application is not limited to the embodiments described above, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present application, and these equivalent modifications or substitutions are included in the scope of the present application as defined in the appended claims.

Claims (10)

1. A two-dimensional house type information identification and extraction method based on house type data augmentation, characterized by comprising the following steps:
combining house type modeling data and prior house type diagram example maps to construct an AI model training data set;
processing the training data set through a custom configuration file according to the reliability index of the AI model, and outputting house type diagram training samples and annotation files;
training an AI model on the house type diagram training samples and annotation files to obtain segmentation prediction results for house type doors, windows and walls;
and iteratively optimizing the segmentation prediction results to obtain the center-line position information of doors, windows and walls for house type reconstruction.
2. The method for identifying and extracting two-dimensional house type information based on house type data augmentation according to claim 1, characterized in that the method further comprises a step of creating a house type identification training data set, the step comprising:
analyzing and preprocessing historical house type three-dimensional model data;
drawing house type diagrams;
performing uncertainty checking and data optimization.
3. The method for identifying and extracting two-dimensional house type information based on house type data augmentation according to claim 2, characterized in that analyzing and preprocessing the historical house type three-dimensional model data comprises:
organizing house type information from the files associated with the historical house type modeling three-dimensional data using a self-developed parsing tool, the house type information comprising the house type area, walls, door and window structures, and furniture in the space;
preprocessing the house type information in combination with a custom configuration file;
wherein preprocessing the house type information in combination with the custom configuration file comprises:
unifying the dimensions of object sizes and absolute positions according to the configuration;
summarizing common furniture categories, including cabinets, beds, sofas and tables, with reference to prior house type diagram examples;
performing same-direction and closed-loop processing on the wall lines, so that the outer enclosing wall contours run in a consistent head-to-tail direction and each region forms a closed loop;
converting the parsed wall information into wall block contour endpoints to form independent color block areas; if the configuration file specifies a changed wall thickness, performing an interference check between walls and furniture and determining whether the wall thickness is reasonable based on the furniture positions; and calculating the full-image minimum bounding box, which is used to calculate the full-image scaling factor;
generating door and window structure areas;
generating furniture regions and ordering them by visual priority.
4. The method for identifying and extracting two-dimensional house type information based on house type data augmentation according to claim 2, characterized in that drawing the house type diagram comprises:
calculating the scaling factor from the relation between size units and pixels in the configuration file, the unit pixel scale and the specified full-image resolution, and then globally scaling the data;
matching all structure data to be drawn against a pre-prepared style library by category, object name and proportion, obtaining the image tag in the SVG file linked by the best matching item, and specifying fill or tile and zoom options;
placing, rotating and aligning the OBB of each mapped structure as required and generating an SVG-format label; for structures without a map, directly encoding their key points into SVG format;
annotating the house type diagram according to the color block areas generated during data preprocessing.
5. The method for identifying and extracting two-dimensional house type information based on house type data augmentation according to claim 1, characterized in that performing uncertainty checking and data optimization comprises:
using uncertainty as one of the reliability indexes for measuring the model's house type diagram recognition results, to judge the credibility of the model's recognition;
for places with high uncertainty, generating a batch of samples for optimization by setting drawing requirements at the specified positions of the configuration file.
6. The method for identifying and extracting two-dimensional house type information based on house type data augmentation according to claim 1, characterized in that training the AI model on the house type diagram training samples and annotation files to obtain the segmentation prediction results for house type doors, windows and walls comprises:
recognizing and segmenting the input house type drawing with the DeepLabv3+ algorithm to obtain low-dimensional features;
applying atrous (dilated) convolutions with different dilation rates in series and in parallel to obtain context and multi-scale information, introducing a larger receptive field while controlling the resolution of the feature map, and passing the extracted feature map into an ASPP (Atrous Spatial Pyramid Pooling) module to obtain high-dimensional features;
concatenating the low-dimensional and high-dimensional features to obtain the semantic segmentation prediction result.
7. The method for identifying and extracting two-dimensional house type information based on house type data augmentation according to claim 1, characterized in that iteratively optimizing the segmentation prediction results and obtaining the center-line position information of doors, windows and walls for house type reconstruction comprises:
optimizing the room contours from the segmentation prediction results;
searching for horizontal and vertical wall combinations within a 20-degree offset from the horizontal and vertical directions, and obtaining combinations of same-direction, aligned walls by union-find (disjoint-set) search, so that inclined walls that may be produced by connection are straightened.
8. A two-dimensional house type information identification and extraction device based on house type data augmentation, characterized by comprising:
a first module for combining house type modeling data and prior house type diagram example maps to construct an AI model training data set;
a second module for processing the training data set through a custom configuration file according to the reliability index of the AI model and outputting house type diagram training samples and annotation files;
a third module for training an AI model on the house type diagram training samples and annotation files to obtain segmentation prediction results for house type doors, windows and walls;
a fourth module for iteratively optimizing the segmentation prediction results and obtaining the center-line position information of doors, windows and walls for house type reconstruction.
9. An electronic device comprising a processor and a memory;
the memory is used for storing programs;
the processor executing the program implements the method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the storage medium stores a program that is executed by a processor to implement the method of any one of claims 1 to 7.
CN202310387120.0A 2023-04-11 2023-04-11 Two-dimensional house type information identification and extraction method based on house type data augmentation Active CN116579051B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310387120.0A CN116579051B (en) 2023-04-11 2023-04-11 Two-dimensional house type information identification and extraction method based on house type data augmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310387120.0A CN116579051B (en) 2023-04-11 2023-04-11 Two-dimensional house type information identification and extraction method based on house type data augmentation

Publications (2)

Publication Number Publication Date
CN116579051A true CN116579051A (en) 2023-08-11
CN116579051B CN116579051B (en) 2024-05-07

Family

ID=87542200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310387120.0A Active CN116579051B (en) 2023-04-11 2023-04-11 Two-dimensional house type information identification and extraction method based on house type data augmentation

Country Status (1)

Country Link
CN (1) CN116579051B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111814575A (en) * 2020-06-12 2020-10-23 上海品览数据科技有限公司 Household pattern recognition method based on deep learning and image processing
CN113449355A (en) * 2021-09-01 2021-09-28 江苏华邦工程造价咨询有限公司 Building house type graph automatic generation method based on artificial intelligence
CN114612923A (en) * 2022-02-08 2022-06-10 百安居信息技术(上海)有限公司 House type graph wall processing method, system, medium and equipment based on target detection
CN115410218A (en) * 2022-08-15 2022-11-29 智云数创(洛阳)数字科技有限公司 Household pattern recognition and modeling method based on artificial intelligence image recognition
CN115908900A (en) * 2022-10-31 2023-04-04 奥格科技股份有限公司 AI-based building plan parameterization identification method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
李昌华; 田思敏; 周方晓: "Research on BIM wall contour extraction and 3D reconstruction based on adaptive blocking", Journal of Frontiers of Computer Science and Technology, no. 03 *
祖朋达; 李晓敏; 陈更生; 许薇: "DODNet: an image semantic segmentation model optimized with dilated convolution", Journal of Fudan University (Natural Science), no. 05 *
Anonymous web user: "Overview of DeepLabv3+", HTTPS://BLOG.CSDN.NET/M0_64524798/ARTICLE/DETAILS/129206808 *

Also Published As

Publication number Publication date
CN116579051B (en) 2024-05-07

Similar Documents

Publication Publication Date Title
EP3506211B1 (en) Generating 3d models representing buildings
EP3506160B1 (en) Semantic segmentation of 2d floor plans with a pixel-wise classifier
Qiu et al. Pipe-run extraction and reconstruction from point clouds
Nguyen et al. A robust 3d-2d interactive tool for scene segmentation and annotation
Wang et al. Vision-assisted BIM reconstruction from 3D LiDAR point clouds for MEP scenes
Wu et al. Stereo matching with fusing adaptive support weights
Pizarro et al. Automatic floor plan analysis and recognition
Silva et al. Efficient propagation of light field edits
Holzmann et al. Semantically aware urban 3d reconstruction with plane-based regularization
CN112700529A (en) Method and system for generating three-dimensional model according to standard document
Liu An adaptive process of reverse engineering from point clouds to CAD models
Yu et al. A global energy optimization framework for 2.1 D sketch extraction from monocular images
Koch et al. Real estate image analysis: A literature review
Oskouie et al. Automated recognition of building façades for creation of As-Is Mock-Up 3D models
Demir et al. Guided proceduralization: Optimizing geometry processing and grammar extraction for architectural models
Parente et al. Integration of convolutional and adversarial networks into building design: A review
Guo et al. SGLBP: Subgraph‐based local binary patterns for feature extraction on point clouds
CN116579051B (en) Two-dimensional house type information identification and extraction method based on house type data augmentation
Martens et al. VOX2BIM+-A Fast and Robust Approach for Automated Indoor Point Cloud Segmentation and Building Model Generation
Mahmoud et al. Automated BIM generation for large-scale indoor complex environments based on deep learning
CN113744350B (en) Cabinet structure identification method, device, equipment and medium based on single image
Collins et al. Towards applicable Scan-to-BIM and Scan-to-Floorplan: an end-to-end experiment
Dekkers et al. A sketching interface for feature curve recovery of free-form surfaces
Roth et al. Shape Analysis and Visualization in Building Floor Plans
Biadgie et al. Speed-up feature detector using adaptive accelerated segment test

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant