CN108804815B - Method and device for deep-learning-assisted identification of walls in CAD (computer-aided design) drawings - Google Patents

Info

Publication number: CN108804815B
Application number: CN201810587788.9A
Authority: CN (China)
Prior art keywords: wall, layer, CAD, deep learning
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN108804815A (application publication)
Inventors: 王宇涵, 唐睿
Current and original assignee: Hangzhou Qunhe Information Technology Co Ltd
Application filed by Hangzhou Qunhe Information Technology Co Ltd
Priority to CN201810587788.9A
Publication of application CN108804815A, granted as CN108804815B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00: Computer-aided design [CAD]
    • G06F 30/10: Geometric CAD
    • G06F 30/13: Architectural design, e.g. computer-aided architectural design [CAAD], related to design of buildings, bridges, landscapes, production plants or roads
    • G06F 30/20: Design optimisation, verification or simulation

Abstract

The invention discloses a method for deep-learning-assisted identification of walls in CAD (computer-aided design) drawings, comprising the following steps: acquiring and parsing the CAD file data corresponding to a floor plan; acquiring first walls obtained by identifying the CAD file data; identifying the floor plan corresponding to the CAD file data with a wall-recognition model to obtain second walls; and cross-validating the first walls against the second walls to obtain the final walls. The wall-recognition model is trained on a deep-learning network. The invention also discloses a device for deep-learning-assisted identification of walls in CAD drawings. The method and device improve the accuracy of wall identification.

Description

Method and device for deep-learning-assisted identification of walls in CAD (computer-aided design) drawings
Technical Field
The invention belongs to the technical field of architectural interior design, and in particular relates to a method and a device for deep-learning-assisted identification of walls in CAD (computer-aided design) drawings.
Background
At present, architectural interior designs are mainly drawn with computer-aided design (CAD) software. For interior design drawings produced in CAD, recognising the interior walls is of great importance for indoor area calculation and for the design of heating, air-supply, and ventilation and air-conditioning systems.
Most existing wall-identification methods identify walls from the image features of a design drawing, typically as follows: first, straight lines in the drawing are matched against the geometric structures of interfering objects such as furniture, and lines whose similarity exceeds a preset threshold are deleted as interference; next, the remaining lines are matched into pairs of parallel lines, and collinear lines with small gaps are merged; finally, walls are generated from the paired parallel lines and the expected wall-width range. Such methods, which rely on line-segment constraints and geometric structure, involve complex computation and are inefficient. Moreover, abstract or complex geometric structures are difficult to remove, so the identified walls deviate considerably from the true walls and identification accuracy is low.
The patent application with publication number CN103971098A discloses a method for identifying walls in a floor plan. The method comprises: preprocessing the floor plan; detecting its outline; segmenting the floor plan into a binary image with a wall-threshold segmentation method; applying erosion, dilation and edge detection to the binary image; and applying a Hough transform to the edge image to obtain line coordinates, from which the wall coordinates are derived. The segmentation threshold T of the wall-threshold segmentation method is determined from the average grey value of the walls and the average grey value of the non-wall area. Because this method does not exclude furniture and other interfering objects in the floor plan, its identification accuracy is low.
The patent application with publication number CN106156438A discloses a wall-identification method and device. The method comprises: acquiring a floor plan uploaded by a user; rasterizing the floor plan; displaying the rasterized floor plan to the user; acquiring position points, selected by the user, that belong to a wall region; and identifying the wall region in the rasterized floor plan from the information of those position points. This method identifies walls through a simple interaction between a web front end and a back-end server. Although the computation is simple and the accuracy reasonable, the required interaction between the front end (the user) and the server prevents full automation, and the user's selections are error-prone, so identification accuracy remains low.
Disclosure of Invention
The invention aims to provide a method and a device for deep-learning-assisted identification of walls in CAD (computer-aided design) drawings, so as to solve the problem of low wall-identification accuracy.
To solve the above problem, the present invention provides the following technical solutions:
In one aspect, an embodiment of the present invention provides a method for deep-learning-assisted identification of walls in CAD drawings, comprising the following steps:
acquiring and parsing the CAD file data corresponding to a floor plan;
acquiring first walls obtained by identifying the CAD file data;
identifying the floor plan corresponding to the CAD file data with a wall-recognition model to obtain second walls;
cross-validating the first walls against the second walls to obtain the final walls;
wherein the wall-recognition model is trained on a deep-learning network.
In another aspect, an embodiment of the present invention provides a device for deep-learning-assisted identification of walls in CAD drawings, comprising:
one or more processors, a memory, and one or more computer programs stored in the memory and executable on the one or more processors, the one or more processors implementing the steps of the above method when executing the one or more computer programs.
With the above method and device, the floor plan is recognised with the constructed wall-recognition model to obtain the second walls, and the first walls corresponding to the floor plan are cross-validated against them to obtain the final walls. Because the wall-recognition model has a strong capacity to learn and memorise, recognising walls with it excludes the large amount of furniture and other interfering information in a floor plan, yielding second walls that are almost free of interference. Cross-validating the first walls against the second walls then removes the interfering information from the first walls, so the accuracy of the final wall-identification result is improved.
Drawings
To illustrate the embodiments of the present invention and the prior-art solutions more clearly, the drawings used in their description are briefly introduced below. The following drawings obviously show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort. Like reference symbols indicate like elements throughout. In the drawings:
FIG. 1 is a flowchart of a method for deep-learning-assisted identification of walls in CAD drawings according to an embodiment of the present invention;
FIG. 2 is a diagram of the deep neural network constructed in another embodiment of the present invention;
FIG. 3 is a flowchart of a method for deep-learning-assisted identification of walls in CAD drawings according to another embodiment of the present invention;
FIG. 4 is a floor plan after rasterization and scaling in another embodiment of the present invention;
FIG. 5 shows the first walls obtained by applying a prior-art wall-identification method to the floor plan of FIG. 4;
FIG. 6 shows the final walls obtained by cross-validating the first walls of FIG. 5 against the second walls recognised from the floor plan of FIG. 4 with the wall-recognition model.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the invention are described below clearly and completely in conjunction with the drawings. The described embodiments are obviously only a part of the embodiments of the invention, not all of them. All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort fall within the protection scope of the invention.
When existing methods are used to identify walls, interfering information such as furniture cannot be excluded accurately, so the identified walls contain some of that interference and identification accuracy is low.
To improve wall-identification accuracy, an embodiment of the present invention provides a method for deep-learning-assisted identification of walls in CAD drawings which, as shown in fig. 1, comprises the following steps:
and S101, acquiring and analyzing CAD file data corresponding to the house type graph.
CAD file data refers to data stored in formats such as .dwg and .dxf which, after parsing and rendering, correspond to one or more floor plans.
To facilitate preprocessing, the CAD file data must be parsed into a data format suitable for rasterization, i.e. a vector drawing represented by vectorized data. Specifically, the parsed data comprises straight lines, arcs and circles; data in other formats, such as polylines, spline curves and ellipses, is approximated by straight lines and arcs. A straight line is represented by a pair ((start-point coordinates), (end-point coordinates)); an arc by a triple ((start-point coordinates), (end-point coordinates), arc value); and a circle by a pair ((centre coordinates), radius).
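As a purely illustrative sketch (the patent does not specify an implementation language), the parsed primitives described above can be written in Python; the tuple layouts follow the description, while the helper name `line_length` is hypothetical:

```python
from math import hypot

# Parsed primitives, laid out as described in the text:
line = ((0.0, 0.0), (3.0, 4.0))      # ((start x, y), (end x, y))
arc = ((0.0, 0.0), (1.0, 1.0), 0.5)  # ((start), (end), arc value)
circle = ((2.0, 2.0), 1.5)           # ((centre x, y), radius)

def line_length(seg):
    """Length of a line segment stored as ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = seg
    return hypot(x2 - x1, y2 - y1)
```

These tuples mirror the pair/triple representations of the parsed vector data; any real parser would also carry layer and style attributes, which are omitted here.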
S102: acquiring the first walls obtained by identifying the CAD file data.
The first walls are obtained by identifying the CAD file data with a prior-art method; they are stored in memory and can be retrieved directly when needed.
Specifically, the first walls may be obtained with the method disclosed in application publication CN103971098A, with the method disclosed in application publication CN106156438A, or with the following method: first, straight lines in the drawing are matched against the geometric structures of interfering objects such as furniture, and lines whose similarity exceeds a preset threshold are deleted as interference; next, the remaining lines are matched into pairs of parallel lines, and collinear lines with small gaps are merged; finally, walls are generated from the paired parallel lines and the wall-width range.
Some of these methods do not remove interfering information at all, so their wall-identification results are inaccurate. Others include a step for removing interference, but remove it imprecisely; abstract or complex interfering structures that are hard to remove are not removed cleanly, so the identified walls still contain some interference.
S103: identifying the floor plan corresponding to the CAD file data with the wall-recognition model to obtain the second walls.
The wall-recognition model is obtained by training on a large number of samples with a deep-learning network, specifically a prediction model obtained by training a deep neural network. When a floor plan to be recognised is input into the model, a one-dimensional vector is obtained that indicates whether the image-content feature corresponding to the vector is a wall.
A deep neural network captures the content features of two-dimensional data and is mainly applied in the field of image recognition. In this embodiment, therefore, the CAD file data must be converted into an image, i.e. data in two-dimensional-array form, so that the deep neural network can extract its content features.
Specifically, the parsed CAD file data is rasterized to obtain the floor plan corresponding to the CAD file data.
As described above, the parsed CAD file data is a vector drawing composed of vector data; rasterizing it converts the vector drawing into a bitmap. A bitmap is an image made up of a series of pixels, represented and stored as a two-dimensional array.
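The vector-to-bitmap conversion can be sketched minimally in NumPy by sampling points along each segment; the function name and the point-sampling approach are illustrative assumptions (a production rasterizer would use Bresenham's algorithm or a graphics library), and coordinates are assumed to already be in pixel units:

```python
import numpy as np

def rasterize_lines(lines, width, height):
    """Burn line segments into a binary bitmap (a two-dimensional
    array) by sampling points along each segment."""
    img = np.zeros((height, width), dtype=np.uint8)
    for (x1, y1), (x2, y2) in lines:
        # Sample at least one point per pixel of extent.
        n = int(max(abs(x2 - x1), abs(y2 - y1))) + 1
        for t in np.linspace(0.0, 1.0, n):
            x = int(round(x1 + t * (x2 - x1)))
            y = int(round(y1 + t * (y2 - y1)))
            if 0 <= x < width and 0 <= y < height:
                img[y, x] = 255
    return img
```

The resulting `uint8` array is exactly the two-dimensional form the deep neural network consumes.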
The deep neural network constructed in this embodiment, shown in fig. 2, consists of two parts: the first extracts wall features from the two-dimensional data, and the second generates a corresponding reconstructed image from the extracted wall features. Specifically, the first part comprises a data-input layer, convolution layers, down-sampling layers, activation-function layers and fully-connected layers; the second part comprises transposed-convolution layers, activation-function layers and a data-output layer. The last fully-connected layer of the first part feeds the first transposed-convolution layer of the second part, whose input is the fully-connected layer's result, so that the first part extracts the wall features and the second part reconstructs a wall image from them.
Once the basic architecture of the network is fixed, the category dimension and size of each layer are determined. The category dimension is the number of categories into which a lower layer is abstracted at the next layer; the window size is the size of the two-dimensional image at each layer.
For the data-input layer the category dimension is 3, i.e. the input is a two-dimensional image represented by its RGB colour values, and the window size is the size of that image. For the convolution, activation-function, down-sampling, transposed-convolution and fully-connected layers, the category dimension is the number of feature maps in the layer and the window size is the size of those feature maps. For the data-output layer the category dimension is 3 and the window size is the size of the reconstructed image.
After the category dimensions and sizes are determined, the kernel size, kernel stride and window-padding value are set for the convolution, fully-connected, activation-function and transposed-convolution layers. The kernel size determines the size of the feature region abstracted into the next layer; for the first convolution layer it matches the region shape of the two-dimensional array. The kernel stride, set mainly for convolution layers, determines how far the kernel moves at each step and is usually set to 1. The window-padding value, set mainly for convolution layers, determines the size of the region covered beyond the edge of the two-dimensional array; with padding 0, no information outside the array edge enters the kernel region.
As shown in fig. 2, the first part of the network established in this embodiment comprises a data-input layer 201, eight combined layers 202 to 209 and fully-connected layers 210 and 211. Each combined layer contains a convolution layer, an activation-function layer and a down-sampling layer: the convolution layer maps pixel blocks (groups of adjacent pixels) of the image to feature points in the next layer's feature map; the activation-function layer applies the ReLU function to the feature-point data; the down-sampling layer sub-samples the convolved feature map, merging several adjacent feature points into one feature point of the next layer's map, which reduces the amount of data to process while preserving the feature information. In the notation size = a × b below, "a" means a window size of a × a and "b" is the category dimension of the layer.
Size = 256 × 3 for the data-input layer 201 indicates that the input is a 256 × 256 image represented by RGB colour values. Size = 128 × 64 for combined layer 202 indicates a convolution layer of size 256 × 64, an activation-function layer of size 256 × 64 and a down-sampling layer of size 128 × 64: the convolution layer keeps the previous window size while changing the category dimension, the activation-function layer processes the feature-point data, and the down-sampling layer keeps the category dimension while halving the window size. In combined layers 203 (size = 64 × 128), 204 (size = 32 × 256) and 205 (size = 16 × 512), the convolution layers double the category dimension, the activation-function layers process the feature-point data and the down-sampling layers halve the window size to obtain the corresponding feature maps. For example, size = 32 × 256 for combined layer 204 indicates a convolution layer of size 64 × 256, an activation-function layer of size 64 × 256 and a down-sampling layer of size 32 × 256, yielding 256 feature maps of size 32 × 32. In combined layers 206 (size = 8 × 512), 207 (size = 4 × 512), 208 (size = 2 × 512) and 209 (size = 1 × 512), the convolution leaves the category dimension unchanged, the activation-function layers process the feature-point data and the down-sampling layers halve the window size to obtain the corresponding feature maps.
Size = 1 × 512 for fully-connected layer 210 indicates that all 512 feature maps of combined layer 209 are fully connected to 512 feature maps, each of size 1 × 1. Size = 1 × 512 for fully-connected layer 211 indicates that the output of layer 210 undergoes a further full connection, again yielding 512 feature maps of size 1 × 1.
The second part of the network comprises eight transposed-convolution-and-activation combined layers 212 to 219, with layer 212 connected to fully-connected layer 211. Each of these layers contains a transposed-convolution layer and an activation-function layer: the transposed-convolution layer restores the features, mapping each feature point of a low-resolution feature map to several feature points of a high-resolution map and superimposing the contributions of multiple feature points at the same position to build the high-resolution features; the activation-function layer processes the data of each feature point in the map.
In transposed-convolution-and-activation layers 212 (size = 2 × 512), 213 (size = 4 × 512), 214 (size = 8 × 512) and 215 (size = 16 × 512), the category dimension is unchanged, the transposed convolution doubles the window size and the activation-function layer processes the data of each feature point to obtain the corresponding feature map. In layers 216 (size = 32 × 256), 217 (size = 64 × 128) and 218 (size = 128 × 64), the transposed convolution halves the category dimension while doubling the window size, and the activation-function layer processes the feature-point data. For example, size = 64 × 128 for layer 217 indicates a transposed-convolution layer of size 64 × 128 and an activation-function layer of size 64 × 128, yielding 128 feature maps of size 64 × 64. Layer 219 has size = 256 × 3, the same as input layer 201: its transposed convolution converts the category dimension to 3 and doubles the window size to 256 × 256, producing a 256 × 256 reconstructed image in RGB colour values. The result of layer 219 is the data-output layer.
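The window-size and channel schedule described above can be checked with a short, purely illustrative Python sketch; the function names are hypothetical and no deep-learning framework is involved, only the arithmetic of the layer sizes:

```python
def encoder_sizes():
    """(window, channels) after each of the 8 encoder combined layers:
    the window halves each layer (256 -> 1), channels follow the
    schedule 64..512 described for layers 202-209."""
    sizes, window = [], 256
    for ch in [64, 128, 256, 512, 512, 512, 512, 512]:
        window //= 2  # down-sampling layer halves the window
        sizes.append((window, ch))
    return sizes

def decoder_sizes():
    """(window, channels) after each of the 8 transposed-convolution
    layers 212-219: the window doubles back to 256, channels mirror
    the encoder and end at 3 (the RGB reconstructed image)."""
    sizes, window = [], 1
    for ch in [512, 512, 512, 512, 256, 128, 64, 3]:
        window *= 2  # transposed convolution doubles the window
        sizes.append((window, ch))
    return sizes
```

Running both functions reproduces exactly the size = a × b values listed for layers 202 to 209 and 212 to 219, which is a quick sanity check on the symmetry of the encoder-decoder design.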
After the deep neural network is constructed, it is trained with the training samples, adjusting its internal parameters to obtain the wall-recognition model.
Specifically, each training sample comprises a rasterized image and wall-annotation information labelling the walls in that image. Training is a supervised learning process: after the wall probability values for a sample are computed, they are compared with the wall annotation. If the training result agrees with the annotation, the network parameters are set appropriately; once many different samples have been trained and the results meet the required accuracy and stability, the wall-recognition model is obtained. If the computed wall probabilities disagree with the annotation, the error is fed back layer by layer according to its magnitude, and the parameters of each layer are adjusted in turn so that the training result approaches or equals the annotation; this process is backpropagation. After the parameters have been adjusted many times over a large number of training samples, the wall-recognition model is obtained.
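The layer-by-layer parameter adjustment described above can be illustrated, in a deliberately minimal form, with a single-parameter gradient step; this is a toy stand-in for the patent's multi-layer backpropagation, and every name and value in it is hypothetical:

```python
def sgd_step(w, x, target, lr=0.1):
    """One supervised update on a toy one-parameter 'network':
    prediction w * x, squared-error loss, gradient step. A full
    network repeats this for every layer's parameters, with the
    error propagated backwards through the layers."""
    pred = w * x
    grad = 2.0 * (pred - target) * x  # d(loss)/d(w)
    return w - lr * grad

w = 0.0
for _ in range(50):
    # target 1.0 plays the role of a "this is wall" annotation
    w = sgd_step(w, x=1.0, target=1.0)
```

After repeated updates the parameter converges towards the value that reproduces the annotation, which is the essence of the "adjust until the training result approaches the labelling" loop described above.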
Once the wall-recognition model is obtained, it can be used to recognise the walls in an input image. Specifically, identifying the floor plan corresponding to the CAD file data with the wall-recognition model to obtain the second walls comprises:
inputting the floor plan corresponding to the CAD file data into the wall-recognition model, computing a wall-feature matrix, generating a reconstructed image from that matrix, and then cutting the reconstructed image at a preset wall-type threshold to generate the second walls.
The reconstructed image is in effect a wall-confidence matrix of the same size as the input image: the value at each position point is the probability, i.e. the wall confidence, that the corresponding position of the input image is a wall.
The wall-type threshold may be an RGB value: pixels above it are treated as wall, pixels below it as background, and the second walls are generated accordingly. The RGB value is set according to the actual situation and is not limited here.
The rasterized image obtained from the CAD file data is input into the wall-recognition model; the network structure and parameters of the model extract the wall features of the image to obtain a wall-feature recognition result, which is expanded into a reconstructed image. The reconstructed image contains not only wall information but also some other information. After the reconstructed image is obtained, it is processed with the preset wall-type threshold to generate the second walls, i.e. the wall positions are rendered in white.
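The thresholding step can be sketched in NumPy; the cut-off value 128 is an illustrative assumption, since the patent leaves the exact RGB threshold to the practitioner:

```python
import numpy as np

def extract_wall_mask(reconstructed, threshold=128):
    """Cut the reconstructed image at a preset wall-type threshold:
    pixels at or above the threshold are kept as wall (white, 255),
    the rest become background (0)."""
    return np.where(reconstructed >= threshold, 255, 0).astype(np.uint8)
```

Applied to the (greyscale) confidence values of the reconstructed image, this yields the binary second-wall image with walls in white.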
Because interfering information (furniture, irregular structures, abstract structures) in the input images is not learned when the wall-recognition model is built, only wall features are extracted when the model processes a floor plan. The reconstructed image obtained from the model therefore contains almost no interference, so cutting it at the wall-type threshold yields second walls from which the interference has been removed.
Since the input-layer size of the wall-recognition model is fixed, the input data must be resized to meet its format requirement. Specifically, the floor plan corresponding to the CAD file data is scaled to the model's input-image size before being input; in this embodiment it is resized to 256 × 256. So that the size of the resulting second walls is unchanged, the reconstructed image is scaled back to the original floor-plan size, i.e. the size of the floor plan before adjustment. Specifically, after the reconstructed image is scaled back to the original floor-plan size, it is cut at the preset wall-type threshold to generate the second walls.
S104: cross-validating the first walls against the second walls to obtain the final walls.
Because the first wall body is identified from the house type graph by using the existing method, the identified first wall body contains some interference information (for example, the first wall body is identified as a wall body which is furniture in the original house type graph), and the second wall body is used for carrying out cross validation on the first wall body, the interference information in the first wall body can be removed, and an accurate wall body can be obtained.
Specifically, the cross-validation of the first wall by using the second wall to obtain the final wall includes:
and counting the overlapping proportion or the overlapping area of the first wall body on the second wall body, eliminating the first wall body with the overlapping proportion lower than an overlapping threshold value, and taking the rest first wall body as a final wall body.
For a floor plan, each wall generated by the existing method is called a first wall, and each wall generated by the wall identification model is called a second wall. The step of counting the overlapping proportion of the first wall bodies on the second wall bodies refers to counting the overlapping proportion of each first wall body on the second wall bodies at the same position as the first wall body. The overlapping proportion refers to the proportion of the overlapping area of the first wall body on the second wall body in the whole area of the second wall body. The overlap threshold may be a numerical value representing a ratio, and when a certain first wall is not overlapped with the second wall, or the overlap ratio of the first wall to the second wall is smaller than a preset overlap threshold, it is indicated that the first wall has a high possibility of interfering with information, and the first wall is removed. And the rest first walls are the first walls which are overlapped with the second walls to a larger extent, and the walls are regarded as real walls and reserved as the identified final walls.
Of course, the overlapping area of the first wall on the second wall can also be counted directly, first walls whose overlapping area is below the overlap threshold are removed, and the remaining first walls are the final walls. In this case, the overlap threshold is a value representing an area. The overlap threshold is set according to actual conditions and is not limited here.
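The cross-validation step above can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: it assumes walls are represented as axis-aligned rectangles `(x0, y0, x1, y1)` and that the overlap ratio is measured, as the description states, against the area of the second wall; the 0.5 threshold is an arbitrary example value.

```python
def rect_overlap(a, b):
    # a, b: (x0, y0, x1, y1) axis-aligned rectangles; returns intersection area
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def area(r):
    return (r[2] - r[0]) * (r[3] - r[1])

def cross_validate(first_walls, second_walls, overlap_threshold=0.5):
    """Keep each first wall whose best overlap ratio against any second
    wall (intersection area / second-wall area) reaches the threshold;
    first walls with little or no overlap are treated as interference."""
    kept = []
    for fw in first_walls:
        best = max((rect_overlap(fw, sw) / area(sw) for sw in second_walls),
                   default=0.0)
        if best >= overlap_threshold:
            kept.append(fw)
    return kept
```

A first wall coinciding with a model-predicted wall survives, while an isolated rectangle (e.g. a piece of furniture misread as a wall) is rejected.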
In another embodiment, on the basis of the method for identifying a wall in a CAD based on deep learning assistance, the method for identifying a wall in a CAD based on deep learning assistance further includes: and carrying out regularization treatment on the final wall body so as to meet the communication relation of the wall body.
Although the final wall obtained after cross validation satisfies the various semantic and geometric constraints of walls in an actual floor plan, there may still be closely spaced collinear walls and perpendicular walls whose corners are not connected; these walls therefore need regularization.
Specifically, the regularizing the final wall includes:
combining collinear walls with the distance smaller than a first distance threshold, taking a longer wall of the two collinear walls as a main wall, and extending the main wall towards a shorter wall until the main wall covers the shorter wall, so as to realize the combination of the collinear walls;
calculating intersection points of the mutually vertical walls, and extending the walls with the distance less than a second distance threshold value from the intersection points to form wall corners; alternatively, two walls perpendicular to each other are simultaneously extended to the intersection point to form a wall corner.
In this way, closely spaced collinear walls are merged and unconnected perpendicular walls are joined, satisfying the connectivity of walls in an actual floor plan.
In another embodiment, on the basis of the method for assisting in identifying a wall in CAD based on deep learning, the method further includes screening and processing the basic vectorized data: after the CAD file data is parsed, line segments shorter than a length threshold are removed, and line segments whose inclination angle falls within a preset angle range are converted into horizontal or vertical line segments.
The vectorized data obtained after parsing the CAD file contains line segments that are shorter than required or have large inclination angles. Such segments are unlikely to be walls in the floor plan, so short segments are deleted and steeply inclined segments are adjusted before recognition, which also reduces the computation of the wall identification model.
The length threshold is set relative to the overall structure and size of the house type diagram, and the line segment below the length threshold is generally not considered to be a wall, and is deleted. The size of the length threshold is set according to actual conditions, and is not limited herein.
The preset angle range is also a relative concept, and generally, a line segment with an angle of less than 45 degrees with the horizontal direction is converted into a horizontal line segment, and a line segment with an angle of less than 45 degrees with the vertical direction is converted into a vertical line segment. The preset angle range is also set according to the actual situation, and is not limited herein.
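The screening step can be sketched as follows, as a hedged illustration: segments are assumed to be endpoint pairs, and the length threshold and snap angle (here 5 degrees rather than the 45-degree split mentioned above) are example values chosen for the sketch.

```python
import math

def screen_segments(segments, length_threshold=2.0, snap_deg=5.0):
    """segments: list of ((x0, y0), (x1, y1)) endpoint pairs.
    Drop segments shorter than length_threshold; snap segments within
    snap_deg of horizontal/vertical onto the nearest axis direction."""
    out = []
    for (x0, y0), (x1, y1) in segments:
        dx, dy = x1 - x0, y1 - y0
        if math.hypot(dx, dy) < length_threshold:
            continue  # too short to be a wall edge
        ang = math.degrees(math.atan2(abs(dy), abs(dx)))
        if ang < snap_deg:            # nearly horizontal: level the y coordinate
            ym = (y0 + y1) / 2
            out.append(((x0, ym), (x1, ym)))
        elif 90 - ang < snap_deg:     # nearly vertical: level the x coordinate
            xm = (x0 + x1) / 2
            out.append(((xm, y0), (xm, y1)))
        else:
            out.append(((x0, y0), (x1, y1)))
    return out
```

A slightly tilted long segment becomes exactly horizontal, while a short stub is discarded before recognition.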
In another embodiment, on the basis of the method for identifying a wall in a CAD based on deep learning assistance, the method for identifying a wall in a CAD based on deep learning assistance further includes:
after the CAD file data is analyzed, according to the dependency relationship of adjacent line segments, taking an area where no line segment with the dependency relationship exists in the original floor plan or an area where the number of the line segments with the dependency relationship is less than a dependency threshold as a boundary, splitting to obtain a plurality of sub-areas, and taking each sub-area as a floor plan.
When a CAD file contains multiple units, a clustering operation must be performed on the line segments in the CAD file data to identify them. Specifically, the dependency of neighboring line segments is detected: neighboring segments may share endpoints, intersect, or be parallel but close together. In one case, when a region of the original floor plan contains no dependent line segments, that region is used as a boundary to split the original floor plan into several sub-regions, each of which is treated as a floor plan. In another case, when a region does contain dependent line segments but their number is below a preset dependency threshold (the dependency threshold being a count of dependent line segments), that region is likewise used as a boundary to split the original floor plan into sub-regions, each treated as a floor plan.
Before the wall identification model identifies the second wall, the multi-unit drawing is split and each floor plan is fed into the wall identification model as a separate input image, which reduces the model's computation and improves identification accuracy.
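The splitting step can be sketched with a simple union-find clustering, as an illustration only: it models dependency solely as "endpoints within eps of each other" (the description also allows intersecting or close parallel segments), and omits the dependency-threshold variant.

```python
def split_units(segments, eps=1.0):
    """Group line segments into connected clusters: two segments are
    dependent when any pair of their endpoints lies within eps.
    Each cluster is returned as one candidate floor plan."""
    parent = list(range(len(segments)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    def close(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= eps * eps

    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            if any(close(p, q) for p in segments[i] for q in segments[j]):
                union(i, j)

    clusters = {}
    for i, seg in enumerate(segments):
        clusters.setdefault(find(i), []).append(seg)
    return list(clusters.values())
```

Two segments sharing a corner land in one cluster, while a distant segment forms its own cluster, i.e. a separate unit.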
Another embodiment of the present invention provides a method for identifying a wall in a CAD based on deep learning assistance, as shown in fig. 3, including the following steps:
and S301, acquiring and analyzing CAD file data corresponding to the user type graph.
S302, removing line segments with the length smaller than a length threshold value from the CAD file data, and converting the line segments with the inclination angles within a preset angle range into horizontal line segments or vertical line segments.
And S303, clustering the analyzed CAD file data to realize multi-family splitting.
Specifically, after the CAD file data is analyzed, according to the dependency relationship of adjacent line segments, taking an area where no line segment with the dependency relationship exists in the original floor plan or an area where the number of the line segments with the dependency relationship is less than a dependency threshold as a boundary, splitting to obtain a plurality of sub-areas, and taking each sub-area as a floor plan.
S304, acquiring a first wall obtained by identifying the CAD file data.
S305, rasterizing the analyzed CAD file data to obtain a house type graph corresponding to the CAD file data, and zooming the house type graph;
and S306, identifying the scaled house type graph by using the wall body identification model to obtain a second wall body.
Specifically, the floor plan corresponding to the CAD file data is input into the wall identification model, a wall feature matrix is computed, a reconstructed image is generated from the wall feature matrix and scaled back to the original floor plan size, and the reconstructed image is then thresholded with a preset wall type threshold to generate the second wall.
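The final thresholding of the reconstructed image can be sketched as below. This assumes, per the claims, that the reconstructed image is a wall confidence matrix of the same size as the input floor plan; the 0.5 threshold is an example value, not one given by the patent.

```python
def walls_from_confidence(conf, wall_threshold=0.5):
    """conf: 2-D list of per-pixel wall confidences (same size as the
    input floor plan).  Returns a binary mask marking pixels whose
    confidence reaches the wall type threshold as second-wall pixels."""
    return [[1 if c >= wall_threshold else 0 for c in row] for row in conf]
```

Pixels the model is confident about survive as wall; low-confidence pixels (furniture, text, noise) are dropped.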
S307, the first wall is subjected to cross validation by using the second wall, and a final wall is obtained.
Specifically, the overlapping proportion or the overlapping area of the first wall body on the second wall body is counted, the first wall body with the overlapping proportion lower than the overlapping threshold value is removed, and the remaining first wall body is the final wall body.
And S308, carrying out regularization treatment on the final wall body so as to meet the communication relation of the wall body.
Specifically, collinear walls with a distance smaller than a first distance threshold are combined, a longer wall of the two collinear walls is taken as a main wall, the main wall is extended towards a shorter wall until the main wall covers the shorter wall, and the collinear walls are combined;
and calculating intersection points of the mutually perpendicular walls, and extending the walls with the distance less than a second distance threshold value from the intersection points to form wall corners.
In this embodiment, because the wall identification model has strong learning and memory capability, using it to identify walls eliminates a large amount of interference information such as furniture in the floor plan, so the second wall contains almost no interference. When the second wall is then used to cross-validate the first wall, the interference in the first wall is removed and the final wall is obtained, improving the accuracy of the wall identification result.
Another embodiment of the present invention provides an apparatus for assisting in identifying a wall in a CAD based on deep learning, including:
one or more processors, a memory, and one or more computer programs stored in the memory and executable on the one or more processors, where the one or more processors implement any step of the method provided in any of the foregoing embodiments when executing the one or more computer programs, and details are not described herein again.
The processor and memory may be any processor and memory known in the art and are not limited thereto.
The accuracy of the wall identified by the method is explained in the following by combining with a specific house type diagram.
First, the obtained CAD file is parsed, its vector data screened and processed, the multi-unit drawing split, and the result rasterized and scaled using the present method, producing the floor plan shown in FIG. 4.
Then, the existing method is used to identify the floor plan shown in fig. 4 and obtain a first wall map. The specific process is as follows: (1) straight lines in the drawing are matched against the geometric structures of interfering objects such as furniture, and lines whose similarity reaches a preset threshold are deleted as interference; (2) the remaining lines are matched to generate pairs of parallel lines, and collinear lines with small gaps are merged; (3) a first wall is generated from the paired parallel lines and the wall width range, as shown in fig. 5.
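Step (3) of the existing method, pairing parallel lines into walls, can be sketched as follows. This is a simplified illustration restricted to horizontal lines, with an assumed wall-width range; the real method also handles vertical and combined cases.

```python
def pair_parallel_walls(h_lines, min_width=0.1, max_width=0.5):
    """h_lines: horizontal lines given as (y, x0, x1).  Pair lines whose
    vertical spacing falls within the wall-width range and whose x-spans
    overlap, emitting one wall rectangle (x0, y0, x1, y1) per pair."""
    walls = []
    lines = sorted(h_lines)  # sort by y so width = y2 - y1 is non-negative
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (y1, a0, a1), (y2, b0, b1) = lines[i], lines[j]
            width = y2 - y1
            if not (min_width <= width <= max_width):
                continue  # spacing outside the plausible wall width
            x0, x1 = max(a0, b0), min(a1, b1)
            if x1 > x0:  # spans must actually overlap
                walls.append((x0, y1, x1, y2))
    return walls
```

Two closely spaced parallel lines yield one wall rectangle over their common span; lines spaced wider than a wall produce nothing.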
Next, the house type graph shown in fig. 4 is identified by using the wall identification model, and a second wall is obtained.
Finally, the second wall is used to cross-validate the first wall of fig. 5, producing the final wall map shown in fig. 6.
Comparing figs. 4-6, it is clear that the A and B areas in the floor plan of fig. 4 are identified as walls by the existing method, as shown in the C and D areas of fig. 5. The walls in areas C and D of fig. 5 are not real walls but interference information. After the first wall is corrected by the second wall, the interference in areas C and D is removed, guaranteeing the accuracy of the final wall identification, as can be clearly seen in fig. 6.
The above-mentioned embodiments are intended to illustrate the technical solutions and advantages of the present invention. It should be understood that they are merely preferred embodiments of the present invention and are not intended to limit it; any modifications, additions, equivalents, etc. made within the scope of the principles of the present invention shall fall within the scope of the present invention.

Claims (7)

1. A method for assisting in identifying a wall body in CAD based on deep learning comprises the following steps:
acquiring and analyzing CAD file data corresponding to the house type graph, and rasterizing the analyzed CAD file data to acquire the house type graph corresponding to the CAD file data;
acquiring a first wall obtained by identifying the CAD file data;
the method comprises the steps of constructing a wall body recognition model on the basis of a deep learning network, wherein the deep learning network comprises a first part and a second part, the first part is used for extracting wall body characteristics of two-dimensional data, the second part is used for generating corresponding reconstructed images according to the extracted wall body characteristics, the first part comprises a data input layer, a convolution layer, a down-sampling layer, an activation function layer and a full connection layer, the second part comprises a data transposition convolution layer, an activation function layer and a data output layer, the full connection layer of the first part is connected with the transposition convolution layer of the second part, the result of the full connection layer is used as processing data of the transposition convolution layer, the wall body characteristics are obtained by using the deep neural network of the first part, the image reconstruction image is generated by using the deep neural network of the second part, the supervised training is carried out on the deep learning network by using training samples, the internal parameters of the deep neural network are changed, so that the wall body recognition model is obtained, and each training sample comprises a rasterized image and wall body labeling information for labeling the wall body in the rasterized image;
utilizing the wall body recognition model to recognize the house type graph to obtain a second wall body, comprising the following steps: inputting the house type graph corresponding to the CAD file data into the wall body recognition model, obtaining a wall body characteristic matrix through calculation, and generating a reconstructed image according to the wall body characteristic matrix, wherein the reconstructed image is actually a wall body confidence matrix whose size is the same as that of the input house type graph, each position point in the wall body confidence matrix giving the probability, namely the wall body confidence, that the corresponding position in the input house type graph is a wall body, and then thresholding the reconstructed image according to a preset wall body type threshold value to generate the second wall body;
and performing cross validation on the first wall by using the second wall, and rejecting the first wall with the overlapping proportion lower than an overlapping threshold value by counting the overlapping proportion or the overlapping area of the first wall on the second wall, wherein the rest first walls are final walls.
2. The method for identifying the wall body in the CAD based on the deep learning assistance as claimed in claim 1, wherein the house type graph corresponding to the CAD file data is input into the wall body identification model after being scaled to meet the size of the input image of the wall body identification model;
after obtaining the reconstructed image, scaling the reconstructed image to the original floor plan size.
3. The method for assisting in identifying a wall in a CAD based on deep learning according to claim 1, wherein the method further comprises:
and carrying out regularization treatment on the final wall body so as to meet the communication relation of the wall body.
4. The method for identifying the wall body in the CAD based on the deep learning assistance as claimed in claim 3, wherein the regularizing the final wall body comprises:
combining collinear walls with the distance smaller than a first distance threshold value, taking a longer wall of the two collinear walls as a main wall, and extending the main wall towards a shorter wall until the main wall covers the shorter wall, so as to realize the combination of the collinear walls;
calculating intersection points of the mutually vertical walls, and extending the walls with the distance less than a second distance threshold value from the intersection points to form wall corners; alternatively, two walls perpendicular to each other are simultaneously extended to the intersection point to form a wall corner.
5. The method for assisting in identifying a wall in a CAD based on deep learning according to claim 1, wherein the method further comprises:
and after analyzing the CAD file data, eliminating line segments with the length smaller than a length threshold value, and converting the line segments with the inclination angles within a preset angle range into horizontal line segments or vertical line segments.
6. The method for assisted identification of walls in CAD according to claim 1 or 3, wherein said method further comprises:
after the CAD file data is analyzed, according to the dependency relationship of adjacent line segments, taking an area without the line segments with the dependency relationship in the original floor plan or an area with the number of the line segments with the dependency relationship less than a dependency threshold value as a boundary, splitting to obtain a plurality of sub-areas, and taking each sub-area as a floor plan.
7. An apparatus for assisting in identifying a wall in a CAD based on deep learning, comprising: one or more processors, memory, and one or more computer programs stored in the memory and executable on the one or more processors,
the one or more processors, when executing the one or more computer programs, implement the steps of the method of any of claims 1-6.
CN201810587788.9A 2018-06-08 2018-06-08 Method and device for assisting in identifying wall body in CAD (computer aided design) based on deep learning Active CN108804815B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810587788.9A CN108804815B (en) 2018-06-08 2018-06-08 Method and device for assisting in identifying wall body in CAD (computer aided design) based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810587788.9A CN108804815B (en) 2018-06-08 2018-06-08 Method and device for assisting in identifying wall body in CAD (computer aided design) based on deep learning

Publications (2)

Publication Number Publication Date
CN108804815A CN108804815A (en) 2018-11-13
CN108804815B true CN108804815B (en) 2023-04-07

Family

ID=64088136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810587788.9A Active CN108804815B (en) 2018-06-08 2018-06-08 Method and device for assisting in identifying wall body in CAD (computer aided design) based on deep learning

Country Status (1)

Country Link
CN (1) CN108804815B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109615679B (en) * 2018-12-05 2022-07-08 江苏艾佳家居用品有限公司 Identification method of house type component
CN109785435A (en) * 2019-01-03 2019-05-21 东易日盛家居装饰集团股份有限公司 A kind of wall method for reconstructing and device
CN110096949A (en) * 2019-03-16 2019-08-06 平安城市建设科技(深圳)有限公司 Floor plan intelligent identification Method, device, equipment and computer readable storage medium
CN110059690A (en) * 2019-03-28 2019-07-26 广州智方信息科技有限公司 Floor plan semanteme automatic analysis method and system based on depth convolutional neural networks
CN109993797B (en) * 2019-04-04 2021-03-02 广东三维家信息科技有限公司 Door and window position detection method and device
CN110176057A (en) * 2019-04-12 2019-08-27 平安城市建设科技(深圳)有限公司 Three-dimensional house type model generating method, device, equipment and storage medium
CN110020502A (en) * 2019-04-18 2019-07-16 广东三维家信息科技有限公司 The generation method and device of floor plan
CN110188495B (en) * 2019-06-04 2023-05-02 中住(北京)数据科技有限公司 Method for generating three-dimensional house type graph based on two-dimensional house type graph of deep learning
CN110909602A (en) * 2019-10-21 2020-03-24 广联达科技股份有限公司 Two-dimensional vector diagram sub-domain identification method and device
CN113095109A (en) * 2019-12-23 2021-07-09 中移(成都)信息通信科技有限公司 Crop leaf surface recognition model training method, recognition method and device
CN113742810B (en) * 2020-05-28 2023-08-15 杭州群核信息技术有限公司 Scale identification method and three-dimensional model building system based on copy
CN111815602B (en) * 2020-07-06 2022-10-11 清华大学 Building PDF drawing wall identification device and method based on deep learning and morphology
CN112417538B (en) * 2020-11-10 2024-04-16 杭州群核信息技术有限公司 Window identification method and device based on CAD drawing and window three-dimensional reconstruction method
CN113536408B (en) * 2021-07-01 2022-12-13 华蓝设计(集团)有限公司 Residential core tube area calculation method based on CAD external reference collaborative mode
CN113808192B (en) * 2021-09-23 2024-04-09 深圳须弥云图空间科技有限公司 House pattern generation method, device, equipment and storage medium
CN115797962B (en) * 2023-01-13 2023-05-02 深圳市大乐装建筑科技有限公司 Wall column identification method and device based on assembly type building AI design

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971098B (en) * 2014-05-19 2017-05-10 北京明兰网络科技有限公司 Method for recognizing wall in house type image and method for automatically correcting length ratio of house type image
CN104821011A (en) * 2015-05-20 2015-08-05 郭小虎 Method of generating 3D house type model by 2D house type model based on camera shooting
CN106156438A (en) * 2016-07-12 2016-11-23 杭州群核信息技术有限公司 Body of wall recognition methods and device
CN107122528B (en) * 2017-04-13 2021-11-19 广州乐家数字科技有限公司 House type graph parameterization re-editable modeling method
CN107194938A (en) * 2017-04-17 2017-09-22 上海大学 Image outline detection method based on depth convolutional neural networks

Also Published As

Publication number Publication date
CN108804815A (en) 2018-11-13

Similar Documents

Publication Publication Date Title
CN108804815B (en) Method and device for assisting in identifying wall body in CAD (computer aided design) based on deep learning
CN108763813B (en) Method and device for identifying wall in copy picture based on deep learning
Dodge et al. Parsing floor plan images
EP3506160B1 (en) Semantic segmentation of 2d floor plans with a pixel-wise classifier
US20110069890A1 (en) Fast line linking
Dal Poz et al. Automated extraction of road network from medium-and high-resolution images
Lacoste et al. Unsupervised line network extraction in remote sensing using a polyline process
CN112419202B (en) Automatic wild animal image recognition system based on big data and deep learning
CN113435240A (en) End-to-end table detection and structure identification method and system
Kim et al. Accurate segmentation of land regions in historical cadastral maps
Liu et al. LB-LSD: A length-based line segment detector for real-time applications
Liu An adaptive process of reverse engineering from point clouds to CAD models
Yang et al. Semantic segmentation in architectural floor plans for detecting walls and doors
JPH07220090A (en) Object recognition method
KR102535054B1 (en) Automatic extraction method of indoor spatial information from floor plan images through patch-based deep learning algorithms and device thereof
Li et al. Revisiting spectral clustering for near-convex decomposition of 2D shape
Huang et al. Symmetrization of 2D Polygonal Shapes Using Mixed-Integer Programming
US11687886B2 (en) Method and device for identifying number of bills and multiple bill areas in image
Gan et al. How many bedrooms do you need? A real-estate recommender system from architectural floor plan images
CN116579051B (en) Two-dimensional house type information identification and extraction method based on house type data augmentation
CN117095423B (en) Bank bill character recognition method and device
CN116403269B (en) Method, system, equipment and computer storage medium for analyzing occlusion human face
US20230252198A1 (en) Stylization-based floor plan generation
Xia et al. Vectorizing historical maps with topological consistency: A hybrid approach using transformers and contour-based instance segmentation
CN117370591A (en) Vector diagram identification method, device, terminal and storage medium based on point set representation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant