CN108763813B - Method and device for identifying wall in copy picture based on deep learning - Google Patents

Method and device for identifying wall in copy picture based on deep learning

Info

Publication number
CN108763813B
Authority
CN
China
Prior art keywords
wall
line segment
line segments
copy
wall body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810586660.0A
Other languages
Chinese (zh)
Other versions
CN108763813A (en)
Inventor
王宇涵
唐睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Qunhe Information Technology Co Ltd
Original Assignee
Hangzhou Qunhe Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Qunhe Information Technology Co Ltd filed Critical Hangzhou Qunhe Information Technology Co Ltd
Priority to CN201810586660.0A priority Critical patent/CN108763813B/en
Publication of CN108763813A publication Critical patent/CN108763813A/en
Application granted granted Critical
Publication of CN108763813B publication Critical patent/CN108763813B/en
Current legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/10 Geometric CAD
    • G06F30/13 Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model based on distances to training or reference patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 Document-oriented image-based pattern recognition
    • G06V30/42 Document-oriented image-based pattern recognition based on the type of document
    • G06V30/422 Technical drawings; Geographical maps

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Civil Engineering (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Structural Engineering (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for assisting wall identification in a copy picture based on deep learning, which comprises the following steps: acquiring an uploaded copy picture; identifying the copy picture with a wall identification model to obtain an initial wall, the model being trained on a deep learning network; performing polygon fitting on each initial wall in turn to obtain polygons formed by fitted line segments; filtering and adjusting the fitted line segments of the polygons; and generating the wall from the processed line segments and a preset wall width. The method and device reduce the user's workload and draw the walls automatically from the copy picture.

Description

Method and device for identifying wall in copy picture based on deep learning
Technical Field
The invention belongs to the technical field of interior building design, and in particular relates to a method and a device for identifying a wall in a copy picture based on deep learning.
Background
In the field of interior building design, walls are of great significance for indoor area calculation, heating design, air-supply design, ventilation and air-conditioning system design, and the like. It is therefore necessary to identify or draw the walls in a design drawing or copy picture.
At present, drawing walls from a copy picture requires manual operation with drawing tools. The process is as follows: based on the uploaded copy picture, the user traces or copies the walls by hand with an online drawing tool, and any merging or splitting of walls must likewise be done manually. Manual wall drawing increases the user's workload and consumes considerable time, and the lack of automation makes for a poor user experience.
Patent application publication No. CN103971098A discloses a method for identifying walls in a floor plan. The method comprises: preprocessing the floor plan; detecting the outline of the floor plan; segmenting the floor plan with a wall threshold segmentation method to obtain a binary image; applying erosion, dilation, and edge detection to the binary image; and performing a Hough transform on the edge image to obtain straight-line coordinate information, from which the wall coordinates are derived. The segmentation threshold T of the wall threshold segmentation method is determined from the average gray value of the wall and the average gray value of the area outside the wall. The data source processed by this method is a floor plan drawn in CAD, so the method is not applicable to a copy picture.
Patent application publication No. CN106156438A discloses a wall identification method and device. The method comprises: acquiring the floor plan uploaded by a user; rasterizing the floor plan; displaying the rasterized floor plan to the user; obtaining a position point, selected by the user, that belongs to a wall area; and identifying the wall area from the rasterized floor plan based on the information of that point. This method identifies building walls through simple interaction between the web front end and a back-end server; although the computation is simple and the identification accuracy good, the required interaction between the front end (user) and the server means it cannot be fully automatic.
Disclosure of Invention
The invention aims to provide a method and a device for identifying a wall in a copy picture based on deep learning.
To achieve this aim, the invention provides the following technical solutions:
In one aspect, an embodiment of the invention provides a method for identifying a wall in a copy picture based on deep learning, comprising the following steps:
acquiring an uploaded copy picture;
identifying the copy picture with a wall identification model to obtain an initial wall, the model being trained on a deep learning network;
performing polygon fitting on each initial wall in turn to obtain polygons formed by fitted line segments;
filtering and adjusting the fitted line segments of the polygons;
and generating walls from the processed line segments and a preset wall width, then applying normalization adjustment and filtering to the walls to obtain the final walls.
In another aspect, an embodiment of the invention provides a device for assisting wall identification in a copy picture based on deep learning, comprising:
one or more processors, a memory, and one or more computer programs stored in the memory and executable on the one or more processors; the one or more processors implement the steps of the above method when executing the one or more computer programs.
With the method and device for assisting wall identification in a copy picture based on deep learning provided by the embodiments of the invention, the constructed wall identification model automatically recognizes the copy picture to obtain an initial wall, and the initial wall is post-processed to generate the walls. The user only needs to upload the copy picture for the walls to be drawn automatically, which reduces the user's workload and realizes automatic wall drawing from the copy picture.
Drawings
To illustrate the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings needed for describing them are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for assisting wall identification in a copy picture based on deep learning according to an embodiment of the present invention;
FIG. 2 is a diagram of the deep neural network constructed according to another embodiment of the present invention;
FIG. 3 is a diagram illustrating a group of parallel line segments paired into double line segments according to another embodiment of the present invention;
FIG. 4 is a flowchart of a method for assisting wall identification in a copy picture based on deep learning according to another embodiment of the present invention;
FIG. 5 is a copy picture provided by another embodiment of the present invention;
FIG. 6 is the initial wall obtained by applying the wall identification model to the copy picture of FIG. 5;
FIG. 7 is the wall obtained by post-processing the initial wall of FIG. 6.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described here serve only to illustrate the invention and are not intended to limit its scope.
To draw walls automatically from a copy picture, an embodiment of the present invention provides a method for identifying walls in a copy picture based on deep learning which, as shown in FIG. 1, comprises the following steps:
s101, obtaining the uploaded copy picture.
The copy picture can be a floor plan drawn by hand by a designer or a floor plan downloaded from a third-party website. Besides the floor plan itself, the copy picture contains information such as scale markings and supplier information. The copy picture is stored as pixels.
S102, identifying the copy picture with the wall identification model to obtain an initial wall.
The wall identification model is obtained by training on a large number of training samples based on a deep learning network; specifically, it is a prediction model obtained by training a deep neural network. When a copy picture is input into the wall identification model, a one-dimensional vector result is obtained that indicates whether the image content features corresponding to the vector are a wall.
The deep neural network constructed in this embodiment is shown in FIG. 2 and comprises two parts: the first extracts the wall features from the copy picture, and the second generates a corresponding reconstructed image from the extracted wall features. Specifically, the first part comprises a data input layer, convolution layers, down-sampling layers, activation function layers, and fully connected layers; the second part comprises transposed convolution layers, activation function layers, and a data output layer. The fully connected layer of the first part is connected to the transposed convolution layer of the second part, whose processing data is the fully connected layer's result, so that the first part derives the wall features and the second part reconstructs an image of the walls from them.
After the basic structure of the deep neural network is determined, the category dimension and window size of each layer are determined. The category dimension determines the number of categories abstracted from the lower layer to the upper layer; the window size is the size of the two-dimensional image at each layer.
For the data input layer, the category dimension is 3, i.e., the input is a two-dimensional image represented by the three RGB color values, and the window size is the size of that image. For the convolution, activation function, down-sampling, transposed convolution, and fully connected layers, the category dimension is the number of feature maps in the layer and the window size is the size of those feature maps. For the data output layer, the category dimension is 3 and the window size is the size of the reconstructed image.
After the category dimensions and sizes of the layers are determined, the kernel mapping area size, the kernel stride, and the window edge padding are set for the convolution, fully connected, activation function, and transposed convolution layers. The kernel mapping area size determines the size of the feature unit abstracted toward the upper layer; for the first convolution layer it is kept consistent with the copy picture. The kernel stride, set mainly for convolution layers, determines how far the kernel mapping area moves at each step and is usually set to 1. The window edge padding, also set mainly for convolution layers, determines the size of the area covered beyond the edge of the two-dimensional array; when it is set to 0, no information beyond the edge is included in the kernel mapping area.
As shown in FIG. 2, the first part of the deep neural network established in this embodiment comprises a data input layer 201, eight combination layers 202 to 209, and fully connected layers 210 and 211. Each combination layer contains a convolution layer, an activation function layer, and a down-sampling layer: the convolution layer maps pixel blocks of the image (several adjacent pixels) to feature points of the upper-layer feature map through convolution; the activation function layer processes the feature-point data with the ReLU function; the down-sampling layer sub-samples the convolved feature map, merging several adjacent feature points into one feature point of the upper-layer map, which reduces the amount of data to process while retaining the feature information. In the notation size = a×b, a denotes the window size and b the category dimension of each layer.
Size = 256×3 of the data input layer 201 indicates that the input is a 256×256 image represented by the three RGB color values. Size = 128×64 of combination layer 202 indicates that it contains a convolution layer of size 256×64, an activation function layer of size 256×64, and a down-sampling layer of size 128×64: the convolution layer keeps the window size of the previous layer while changing the category dimension, the activation function layer processes the feature-point data, and the down-sampling layer keeps the category dimension unchanged while halving the window size. In combination layers 203 (size = 64×128), 204 (size = 32×256), and 205 (size = 16×512), the category dimension is doubled by the convolution layer, the feature-point data are processed by the activation function layer, and the window size is halved by the down-sampling layer to obtain the corresponding feature maps. For example, size = 32×256 of combination layer 204 indicates a convolution layer of size 64×256, an activation function layer of size 64×256, and a down-sampling layer of size 32×256, yielding 256 feature maps of size 32×32. In combination layers 206 (size = 8×512), 207 (size = 4×512), 208 (size = 2×512), and 209 (size = 1×512), the category dimension is unchanged by the convolution operation, the feature-point data are processed by the activation function layer, and the window size is halved by the down-sampling layer. Size = 1×512 of fully connected layer 210 indicates that all 512 feature maps of combination layer 209 are fully connected into 512 feature maps of size 1×1; size = 1×512 of fully connected layer 211 indicates that the result of layer 210 is fully connected again, yielding another 512 feature maps of size 1×1.
The second part of the deep neural network comprises eight transposed-convolution-and-activation combination layers 212 to 219, with layer 212 connected to fully connected layer 211. Each such layer contains a transposed convolution layer and an activation function layer: the transposed convolution layer restores the features, mapping each feature point of a low-resolution feature map to several feature points of a higher-resolution map, the features of the high-resolution map finally being built up by superimposing the information of several feature points at the same position; the activation function layer processes the data of each feature point in the feature map.
In transposed-convolution-and-activation combination layers 212 (size = 2×512), 213 (size = 4×512), 214 (size = 8×512), and 215 (size = 16×512), the category dimension is unchanged, the window size is doubled by the transposed convolution layer, and the data of each feature point are processed by the activation function layer to obtain the corresponding feature maps. In layers 216 (size = 32×256), 217 (size = 64×128), and 218 (size = 128×64), the category dimension is halved and the window size doubled by the transposed convolution, and the feature-point data are processed by the activation function layer. For example, size = 64×128 of layer 217 indicates a transposed convolution layer of size 64×128 and an activation function layer of size 64×128, yielding 128 feature maps of size 64×64. Layer 219 has size 256×3, the same as input layer 201: its transposed convolution converts the category dimension to 3 and doubles the window size to 256×256, producing a 256×256 reconstructed image in the three RGB color values; the result of layer 219 is the data output layer.
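For illustration only, the following is a minimal PyTorch sketch of the encoder-decoder shape described above; it is not part of the patent disclosure. The framework, kernel sizes, strides, and layer names are assumptions, while the window sizes and category dimensions follow layers 201 to 219:

```python
import torch
import torch.nn as nn

def down_block(c_in, c_out):
    # combination layer: convolution keeps the window size, ReLU activates,
    # down-sampling (max-pool) halves the window size
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

def up_block(c_in, c_out, final=False):
    # transposed convolution doubles the window size
    layers = [nn.ConvTranspose2d(c_in, c_out, kernel_size=4, stride=2, padding=1)]
    layers.append(nn.Sigmoid() if final else nn.ReLU(inplace=True))
    return nn.Sequential(*layers)

class WallNet(nn.Module):
    """256x256 RGB copy picture in, 256x256 RGB reconstructed image out."""
    def __init__(self):
        super().__init__()
        enc = [3, 64, 128, 256, 512, 512, 512, 512, 512]      # layers 201-209
        self.encoder = nn.Sequential(*[down_block(enc[i], enc[i + 1])
                                       for i in range(8)])
        # two full connections over the 1x1x512 bottleneck (layers 210, 211)
        self.fc = nn.Sequential(nn.Linear(512, 512), nn.ReLU(inplace=True),
                                nn.Linear(512, 512), nn.ReLU(inplace=True))
        dec = [512, 512, 512, 512, 512, 256, 128, 64, 3]      # layers 212-219
        self.decoder = nn.Sequential(*[up_block(dec[i], dec[i + 1],
                                                final=(i == 7))
                                       for i in range(8)])

    def forward(self, x):
        z = self.encoder(x)                            # (N, 512, 1, 1)
        z = self.fc(z.flatten(1)).view(-1, 512, 1, 1)
        return self.decoder(z)                         # (N, 3, 256, 256)

# sanity check on a random input
print(WallNet()(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 3, 256, 256])
```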
After the deep neural network is built, it is trained with training samples, and its internal parameters are adjusted to obtain the wall identification model.
Specifically, each training sample comprises a copy picture and wall annotation information labeling the walls in that copy picture. Network training is a supervised learning process: after the wall probability values of a training sample are obtained, they are compared with the wall annotations. If the training result is consistent with the annotations, the network parameters are set appropriately; once many different training samples have been trained and the results meet the criteria for accuracy, stability, and so on, the wall identification model is obtained. If the predicted wall probabilities are inconsistent with the annotations, the degree of difference is fed back layer by layer and the parameters of each layer are adjusted in turn so that the training result approaches or equals the annotations; this process is back-propagation. After the parameters of the deep neural network have been adjusted many times on a large number of training samples, the wall identification model is obtained.
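A minimal sketch of such a supervised training loop follows, reusing the WallNet sketch above. The optimizer, learning rate, and per-pixel binary cross-entropy loss are assumptions (the patent does not specify them), and `loader` is a hypothetical data loader yielding (copy picture, annotation mask) pairs with masks in the same 3-channel shape as the output, scaled to [0, 1]:

```python
import torch
import torch.nn as nn

def train(net, loader, epochs=50, lr=1e-4):
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    criterion = nn.BCELoss()               # per-pixel comparison with annotations
    for _ in range(epochs):
        for images, masks in loader:
            pred = net(images)             # predicted wall probabilities
            loss = criterion(pred, masks)  # difference from wall annotations
            optimizer.zero_grad()
            loss.backward()                # layer-by-layer feedback (back-propagation)
            optimizer.step()               # adjust the parameters of each layer
```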
After the wall identification model is obtained, the initial walls in the copy picture can be identified with it. Specifically, identifying the copy picture with the wall identification model to obtain an initial wall comprises:
inputting the copy picture into the wall identification model, obtaining a wall feature matrix through calculation, generating a reconstructed image from the wall feature matrix, and then cutting the reconstructed image at a preset wall-type threshold to generate the initial wall.
The reconstructed image is in fact a wall confidence matrix of the same size as the input copy picture; the value at each position point in the matrix is the probability that the corresponding position of the input image is a wall, i.e., the wall confidence.
The wall-type threshold can be an RGB value: pixels above it are treated as wall and pixels below it as background, and the initial wall is generated accordingly. The RGB value is set according to the actual situation and is not limited here.
When the copy picture is input into the wall identification model, wall features are extracted from it using the network structure and parameters in the model, a wall feature result is obtained, and the features are expanded into a reconstructed image. The reconstructed image contains not only wall information but also some other information; once obtained, it is processed with the preset wall-type threshold to generate the initial wall.
Because interference information in the copy picture (furniture, irregular structures, abstract structures) was not learned when the wall identification model was built, only wall features are extracted when the model processes the floor plan. The reconstructed image therefore contains almost no interference information; cutting it at the wall-type threshold yields an initial wall from which the interference is excluded.
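A minimal numpy sketch of the thresholding step, assuming the reconstructed image arrives as an H×W×3 uint8 array; the cut-off value 128 is an assumption, since the patent leaves the RGB threshold to be set per situation:

```python
import numpy as np

def initial_wall_mask(reconstructed, threshold=128):
    # collapse the RGB reconstruction to one confidence value per pixel,
    # then keep pixels above the wall-type threshold as wall (255)
    # and everything else as background (0)
    gray = reconstructed.mean(axis=2)
    return np.where(gray > threshold, 255, 0).astype(np.uint8)
```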
S103, performing polygon fitting on the initial walls in turn to obtain polygons formed by fitted line segments.
The initial wall produced by the wall identification model is an irregular shape bounded by arcs, which does not match the rectangular shape of a real wall, so polygon fitting must be applied to each irregular region (i.e., each initial wall). Specifically, performing polygon fitting on the initial walls in turn comprises:
performing edge extraction on the initial wall to obtain a boundary line between the wall and the space;
performing a Hough transform on the boundary line using the edge points on it to obtain line segments;
converting line segments whose inclination angle is within a preset angle range into horizontal or vertical line segments;
and extending mutually perpendicular line segments to generate intersections, so that a plurality of polygons are generated and stored.
There are many edge-extraction methods; in the present application a Canny operator or a Sobel operator is used to extract the edges of the initial wall and obtain the boundary line between the wall and the space. The Canny and Sobel operators are general-purpose edge-extraction algorithms and can extract this boundary line accurately.
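A sketch of the edge-extraction and Hough steps with OpenCV follows; the Canny operator and Hough transform are named above, but the specific parameter values here are assumptions to be tuned per drawing:

```python
import cv2
import numpy as np

def fit_segments(wall_mask):
    # boundary line between wall and space
    edges = cv2.Canny(wall_mask, 50, 150)
    # probabilistic Hough transform over the edge points yields line segments
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=20, maxLineGap=5)
    # each entry is the (x1, y1, x2, y2) endpoints of one fitted segment
    return [] if lines is None else [tuple(l[0]) for l in lines]
```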
Specifically, converting a line segment whose inclination angle is within the preset angle range into a horizontal or vertical line segment comprises:
for each such line segment, keeping its length and midpoint, and adjusting it to a horizontal or vertical line segment according to its included angle with the horizontal direction.
The preset angle range is a relative notion: in general, a segment at less than 45° to the horizontal is converted to a horizontal segment, and a segment at less than 45° to the vertical is converted to a vertical segment. The range is likewise set according to the actual situation and is not limited here.
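The conversion can be sketched as follows, keeping the segment's length and midpoint and using the 45° cut-off described above:

```python
import math

def snap_segment(x1, y1, x2, y2):
    # keep the midpoint and length, then lay the segment flat along the
    # nearer axis: within 45 degrees of horizontal -> horizontal, else vertical
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2
    half = math.hypot(x2 - x1, y2 - y1) / 2
    if abs(x2 - x1) >= abs(y2 - y1):
        return (mx - half, my, mx + half, my)
    return (mx, my - half, mx, my + half)
```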
S104, filtering and adjusting the fitted line segments of the polygons.
Some fitted segments are too short, some are collinear but not connected, and some are parallel but very close together; such segments do not meet the design requirements of walls and would affect wall generation, so the segments must be filtered and adjusted. Specifically, filtering and adjusting the fitted line segments of the polygons comprises:
screening out line segments shorter than a set line segment threshold;
for collinear line segments, merging collinear segments whose adjacent end points are closer than a first distance threshold;
for parallel line segments, merging parallel segments closer together than a second distance threshold;
clustering the line segments, matching each segment set obtained by clustering against specific templates, and excluding the segment sets that match a template;
and extending and connecting mutually perpendicular line segments to form corners.
The line segment threshold is a length threshold: segments that are too short are deleted using it, filtering the segments by length. Its size is related to the size of the floor plan and is set according to the actual situation; it is not limited here.
Collinear segments are segments that lie on the same straight line but with a gap between the end points of two adjacent segments. In this embodiment collinear segments are merged using a set first distance threshold. Specifically, for collinear segments whose adjacent end points are closer than the first distance threshold, either one segment is extended toward the other, starting from its end point adjacent to the other segment, until the two adjacent end points join into a new segment; or both segments are extended toward each other from their respective adjacent end points until those end points join into a new segment. The first distance threshold is set according to the actual situation and is not limited here.
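A sketch of this merge for one line of collinear horizontal segments, each given as (x1, y, x2, y) with x1 < x2; the threshold value is an assumption:

```python
def merge_collinear(segments, first_distance_threshold=10):
    # sort along the line, then extend the previous segment over any gap
    # smaller than the first distance threshold
    if not segments:
        return []
    segments = sorted(segments)
    merged = [segments[0]]
    for x1, y, x2, _ in segments[1:]:
        px1, py, px2, _ = merged[-1]
        if x1 - px2 < first_distance_threshold:   # small gap (or overlap): join
            merged[-1] = (px1, py, max(px2, x2), py)
        else:
            merged.append((x1, y, x2, y))
    return merged
```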
Parallel segments are several mutually parallel segments. The distance between some parallel segments is too small; such a distance does not meet the design requirements of walls and would affect the generated wall, so the parallel segments must be adjusted. Specifically, for parallel segments closer together than a second distance threshold, either one segment is deleted and the other kept, or a new segment parallel to both and of similar length is generated between them and the original two are deleted; in both cases the parallel segments are merged. The second distance threshold is set according to the actual situation and is not limited here.
Clustering the line segments means grouping them according to the geometric relations formed by adjacent segments; the segments in each resulting cluster may form some geometric structure. Each structure is matched against specific templates, which are predefined, relatively small rectangles, circles, and the like that represent furniture such as sofas and cupboards; any cluster matching a template is excluded, eliminating the interference that such furniture would otherwise cause in the generated walls.
Some of the processed mutually perpendicular segments are still not connected, so they need further processing. Specifically, the intersection of two mutually perpendicular segments is calculated and both segments are extended to that intersection to form a corner.
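For a horizontal segment and a vertical segment, the corner formation reduces to extending both to their common intersection, as in this sketch:

```python
def form_corner(h, v):
    # h = (x1, y, x2, y) horizontal, v = (x, y1, x, y2) vertical;
    # their intersection lies at (v's x, h's y), so extend both to reach it
    hx1, hy, hx2, _ = h
    vx, vy1, _, vy2 = v
    h = (min(hx1, vx), hy, max(hx2, vx), hy)
    v = (vx, min(vy1, hy), vx, max(vy2, hy))
    return h, v
```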
S105, generating walls from the processed line segments and the preset wall width, then applying normalization adjustment and filtering to obtain the final walls.
After the processed line segments are obtained, the wall can be generated. Specifically, generating the wall from the processed line segments and the preset wall width comprises:
first, pairing parallel line segments into double line segments;
then, detecting the overlap relation between the double line segments and merging double line segments that overlap;
and finally, generating the wall at the preset wall width on the basis of the paired double line segments.
As shown in FIG. 3, segments a, b, and c are mutually parallel and form one group of parallel segments. Pairing them can produce three double line segments: ab (from segments a and b), ac (from a and c), and bc (from b and c). Detection shows that double segment ac overlaps both ab and bc, so double segments ab, ac, and bc must be merged into one new group of double segments, which may be ab, ac, bc, or a regenerated double segment.
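A sketch of the pairing and wall-generation steps for one group of parallel horizontal segments; the overlap test on the x projection and the `max_gap` bound on the perpendicular distance are assumptions about how pairing is decided, and the rectangle representation of the wall is hypothetical:

```python
def pair_double_segments(group, max_gap):
    # pair two parallel segments into a double segment when their x spans
    # overlap and they are close enough across the gap
    pairs = []
    for i, (ax1, ay, ax2, _) in enumerate(group):
        for bx1, by, bx2, _ in group[i + 1:]:
            if min(ax2, bx2) > max(ax1, bx1) and abs(ay - by) <= max_gap:
                pairs.append(((ax1, ay, ax2, ay), (bx1, by, bx2, by)))
    return pairs

def wall_from_double_segment(pair, wall_width):
    # emit a wall rectangle over the overlapping span of the double segment,
    # centered on the pair's centerline at the preset wall width
    (ax1, ay, ax2, _), (bx1, by, bx2, _) = pair
    x1, x2 = max(ax1, bx1), min(ax2, bx2)
    cy = (ay + by) / 2
    return (x1, cy - wall_width / 2, x2, cy + wall_width / 2)
```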
Tests show that some generated walls intersect, close parallel walls remain unconnected, close perpendicular walls form no corners, and isolated walls appear, none of which matches how real walls exist, so the generated walls need normalization adjustment and filtering. Specifically, this comprises:
normalization adjustment of the walls, comprising:
for collinear walls, merging collinear walls whose adjacent ends are closer than a third distance threshold;
for parallel walls, merging parallel walls closer together than a fourth distance threshold;
for intersecting walls, splitting them into several walls at the intersection points;
for perpendicular walls, extending and connecting two mutually perpendicular walls whose adjacent ends are closer than a fifth distance threshold to form a corner;
filtering the walls after the normalization adjustment, comprising:
if the area enclosed by a closed wall is smaller than an area threshold, deleting the closed wall;
deleting isolated walls not connected to any other wall;
and deleting the overlapping regions between walls.
Collinear walls are walls that lie on the same straight line but with a certain distance between the ends of two adjacent walls. In this embodiment collinear walls are merged using a set third distance threshold. Specifically, for collinear walls whose adjacent ends are closer than the third distance threshold, either one wall is extended toward the other, starting from its end adjacent to the other wall, until the two adjacent ends join into a new wall; or both walls are extended toward each other from their respective adjacent ends until those ends join into a new wall. The third distance threshold is set according to the actual situation and is not limited here.
Parallel walls are several mutually parallel walls. The distance between some parallel walls is too small; such a distance does not meet the design requirements of walls and affects the generated walls, so the parallel walls must be adjusted. Specifically, for parallel walls closer together than a fourth distance threshold, either one wall is deleted and the other kept, or a new wall parallel to both and of similar length is generated between them and the original two are deleted; in both cases the parallel walls are merged. The fourth distance threshold is set according to the actual situation and is not limited here.
Walls that enclose a closed region form a closed wall. If the enclosed area is too small, smaller than a normal room area, the walls are considered erroneous, and such closed walls are deleted using a set area threshold. In a real floor plan there is also no isolated wall connected to no other wall, so isolated walls are deleted. Wall generation can also produce overlapping walls that do not match reality, so the overlapping regions between walls are eliminated.
With the method and device for assisting wall identification in a copy picture based on deep learning provided by the embodiments of the invention, the constructed wall identification model automatically recognizes the copy picture to obtain an initial wall, and post-processing of the initial wall generates the walls. The user only needs to upload the copy picture for the walls to be drawn automatically, which reduces the user's workload and realizes automatic wall drawing from the copy picture.
In another embodiment, building on the above method for assisting wall identification in a copy picture based on deep learning, the method further comprises: before the copy picture is identified with the wall identification model, removing interference information from the acquired copy picture and applying size conversion.
The acquired copy picture contains text such as trademarks and dimensions; such text interferes with wall identification and counts as interference information, so it is removed, which can be done by framing and cropping the image.
Because the input layer of the wall identification model has a fixed size, the input data must be resized to meet its format requirement. Specifically, the copy picture is scaled to the input size of the wall identification model before being fed in; in this embodiment it is adjusted to 256×256. So that the size of the resulting walls is unchanged, the reconstructed image, once obtained, is scaled back to the size of the original copy picture, i.e., its size before adjustment. The rescaled reconstructed image is then cut at the preset wall-type threshold to generate the initial wall.
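A sketch of this size conversion with OpenCV; `run_model` stands for a hypothetical callable wrapping the wall identification model:

```python
import cv2

def recognize(run_model, copy_picture):
    h, w = copy_picture.shape[:2]
    small = cv2.resize(copy_picture, (256, 256))   # match the model input size
    reconstructed = run_model(small)               # wall confidence image
    # scale back to the original copy picture size so the generated wall
    # keeps the drawing's dimensions, then threshold as before
    return cv2.resize(reconstructed, (w, h))
```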
Another embodiment of the present invention provides a method for assisting wall identification in a copy picture based on deep learning which, as shown in FIG. 4, comprises the following steps:
S401, acquiring the uploaded copy picture.
S402, removing interference information from the acquired copy picture and applying size conversion.
S403, identifying the copy picture with the wall identification model to obtain an initial wall.
S404, performing polygon fitting on the initial walls in turn to obtain polygons formed by fitted line segments.
S405, filtering and adjusting the fitted line segments of the polygons.
S406, generating walls from the processed line segments and the preset wall width, then applying normalization adjustment and filtering to the generated walls.
Another embodiment of the present invention provides a device for assisting wall identification in a copy picture based on deep learning, comprising:
one or more processors, a memory, and one or more computer programs stored in the memory and executable on the one or more processors; the one or more processors implement the steps of the method of any of the foregoing embodiments when executing the one or more computer programs, and the details are not repeated here.
The processor and memory may be any processor and memory known in the art and are not limited thereto.
The accuracy of the walls identified by the method is illustrated below with a specific copy picture.
First, the acquired copy picture shown in FIG. 5 is identified with the wall identification model to obtain the initial wall shown in FIG. 6;
then polygon fitting is performed on the initial walls in turn, the fitted line segments are filtered and adjusted, walls are generated from the processed segments, and normalization adjustment and filtering of the walls produce the final walls shown in FIG. 7.
Comparing FIGS. 5 to 7, it can clearly be seen that the initial wall of FIG. 6, obtained with the wall identification model, contains some interference information in areas A and D and interrupted walls in areas B and C. After the initial wall of FIG. 6 is post-processed, the walls of FIG. 7 are generated: the interference information in areas A and D of FIG. 6 is eliminated and the walls in areas B and C are completely connected, ensuring the accuracy of the final wall identification. The user need only upload the copy picture for the walls to be drawn automatically by the method and device, reducing the user's workload.
The technical solutions and advantages of the present invention have been described in detail in the foregoing specific embodiments. It should be understood that the above description presents only the most preferred embodiments of the invention and is not intended to limit it; any modifications, additions, and equivalent substitutions made within the principles of the invention shall fall within its protection scope.

Claims (7)

1. A method for identifying a wall in a copy picture based on deep learning, comprising the following steps:
acquiring an uploaded copy picture;
identifying the copy picture with a wall identification model trained on a deep learning network to obtain an initial wall, comprising: inputting the copy picture into the wall identification model, obtaining a wall feature matrix through calculation, and generating a reconstructed image from the wall feature matrix, wherein the reconstructed image is a wall confidence matrix of the same size as the input copy picture, the value at each position point in the matrix being the probability that the corresponding position of the input image is a wall, i.e., the wall confidence; and cutting the reconstructed image at a preset wall-type threshold to generate the initial wall, the initial wall being an irregular shape bounded by arcs;
performing polygon fitting on the initial walls in turn to obtain polygons formed by fitted line segments;
filtering and adjusting the fitted line segments of the polygons;
generating a wall from the processed line segments and a preset wall width, comprising: first pairing parallel line segments into double line segments; then detecting the overlap relation between the double line segments and merging double line segments that overlap; and finally generating the wall at the preset wall width on the basis of the paired double line segments;
carrying out normalization adjustment and filtering on the wall to obtain a final wall, comprising:
carrying out normalization adjustment on the walls, comprising: for collinear walls, merging collinear walls whose adjacent ends are closer than a third distance threshold; for parallel walls, merging parallel walls closer together than a fourth distance threshold; for intersecting walls, splitting them into several walls at the intersection points; and for perpendicular walls, extending and connecting two mutually perpendicular walls whose adjacent ends are closer than a fifth distance threshold to form a corner;
filtering the walls after the normalization adjustment, comprising: deleting a closed wall if the area it encloses is smaller than an area threshold; deleting isolated walls not connected to any other wall; and deleting the overlapping regions between walls.
2. The method for identifying a wall in a copy picture based on deep learning of claim 1, wherein performing polygon fitting on the initial walls in turn comprises:
performing edge extraction on the initial wall to obtain a boundary line between the wall and the space;
performing a Hough transform on the boundary line using the edge points on it to obtain line segments;
converting line segments whose inclination angle is within a preset angle range into horizontal or vertical line segments;
and extending mutually perpendicular line segments to generate intersections, so that a plurality of polygons are generated and stored.
3. The method for identifying a wall in a copy picture based on deep learning of claim 2, wherein a Canny operator or a Sobel operator is used to perform edge extraction on the initial wall to obtain the boundary line between the wall and the space.
4. The method for identifying a wall in a copy picture based on deep learning of claim 2, wherein converting the line segment whose inclination angle is within the preset angle range into a horizontal or vertical line segment comprises:
for each line segment whose inclination angle is within the preset angle range, keeping the length and midpoint of the line segment, and adjusting it to a horizontal or vertical line segment according to its included angle with the horizontal direction.
5. The method for identifying a wall in a copy picture based on deep learning of claim 1, wherein filtering and adjusting the fitted line segments of the polygons comprises:
screening out line segments shorter than a set line segment threshold;
for collinear line segments, merging collinear segments whose adjacent end points are closer than a first distance threshold;
for parallel line segments, merging parallel segments closer together than a second distance threshold;
clustering the line segments, matching each segment set obtained by clustering against specific templates, and excluding the segment sets that match a template;
and extending and connecting mutually perpendicular line segments to form corners.
6. The method for identifying a wall in a copy picture based on deep learning according to any one of claims 1 to 5, further comprising:
before the copy picture is identified with the wall identification model, removing interference information from the acquired copy picture and applying size conversion.
7. An apparatus for assisting in identifying a wall in a copy picture based on deep learning, comprising: one or more processors, a memory, and one or more computer programs stored in the memory and executable on the one or more processors,
the one or more processors, when executing the one or more computer programs, implement the steps of the method of any of claims 1-6.
CN201810586660.0A 2018-06-08 2018-06-08 Method and device for identifying wall in copy picture based on deep learning Active CN108763813B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810586660.0A CN108763813B (en) 2018-06-08 2018-06-08 Method and device for identifying wall in copy picture based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810586660.0A CN108763813B (en) 2018-06-08 2018-06-08 Method and device for identifying wall in copy picture based on deep learning

Publications (2)

Publication Number Publication Date
CN108763813A (en) 2018-11-06
CN108763813B (en) 2022-11-15

Family

ID=63999648

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810586660.0A Active CN108763813B (en) 2018-06-08 2018-06-08 Method and device for identifying wall in copy picture based on deep learning

Country Status (1)

Country Link
CN (1) CN108763813B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109785435A (en) * 2019-01-03 2019-05-21 东易日盛家居装饰集团股份有限公司 A kind of wall method for reconstructing and device
CN110176057A (en) * 2019-04-12 2019-08-27 平安城市建设科技(深圳)有限公司 Three-dimensional house type model generating method, device, equipment and storage medium
CN112747734B (en) * 2019-10-31 2024-04-30 深圳拓邦股份有限公司 Method, system and device for adjusting direction of environment map
CN110990912A (en) * 2019-11-04 2020-04-10 上海吉舍云计算机技术有限公司 Calculation method and system for demolished wall and display system
CN112836554A (en) * 2019-11-25 2021-05-25 广东博智林机器人有限公司 Image verification model construction method, image verification method and device
CN110973859B (en) * 2019-12-19 2021-08-31 江苏艾佳家居用品有限公司 Method for customizing table top and front and back water retaining of kitchen cabinet in home decoration design
CN111815602B (en) * 2020-07-06 2022-10-11 清华大学 Building PDF drawing wall identification device and method based on deep learning and morphology
CN114842494B (en) * 2021-12-23 2024-04-05 华南理工大学 Method for automatically identifying connection relation of power system station wiring diagrams
CN114973297A (en) * 2022-06-17 2022-08-30 广州市圆方计算机软件工程有限公司 Wall area identification method, system, equipment and medium for planar house type graph
CN115797962B (en) * 2023-01-13 2023-05-02 深圳市大乐装建筑科技有限公司 Wall column identification method and device based on assembly type building AI design

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102750553A (en) * 2012-06-28 2012-10-24 北京中科广视科技有限公司 Recognizing method of wall plane profile
CN104732192A (en) * 2013-12-23 2015-06-24 中国移动通信集团设计院有限公司 Method and device for recognizing walls on architecture drawing

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2950725C (en) * 2009-05-15 2020-10-27 Eagle View Technologies, Inc. Pitch determination systems and methods for aerial roof estimation
CN103971098B (en) * 2014-05-19 2017-05-10 北京明兰网络科技有限公司 Method for recognizing wall in house type image and method for automatically correcting length ratio of house type image
KR20170001352A (en) * 2015-06-26 2017-01-04 삼성물산 주식회사 Construction method for outside wall and inside wall
JP6730121B2 (en) * 2016-07-25 2020-07-29 鹿島建設株式会社 Underground wall construction method
CN107194938A (en) * 2017-04-17 2017-09-22 上海大学 Image outline detection method based on depth convolutional neural networks

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102750553A (en) * 2012-06-28 2012-10-24 北京中科广视科技有限公司 Recognizing method of wall plane profile
CN104732192A (en) * 2013-12-23 2015-06-24 中国移动通信集团设计院有限公司 Method and device for recognizing walls on architecture drawing

Also Published As

Publication number Publication date
CN108763813A (en) 2018-11-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant