CN110197153B - Automatic wall identification method in house type graph - Google Patents


Info

Publication number
CN110197153B
CN110197153B (application CN201910460783.4A)
Authority
CN
China
Prior art keywords
wall
house type
type graph
gray
wall body
Prior art date
Legal status
Active
Application number
CN201910460783.4A
Other languages
Chinese (zh)
Other versions
CN110197153A (en)
Inventor
王庆利
黄雨琪
Current Assignee
Nanjing Weilijia Intelligent Technology Co ltd
Original Assignee
Nanjing Weilijia Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Weilijia Intelligent Technology Co ltd filed Critical Nanjing Weilijia Intelligent Technology Co ltd
Priority claimed from application CN201910460783.4A
Publication of CN110197153A
Application granted
Publication of CN110197153B
Status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/20 — Image preprocessing
    • G06V 10/26 — Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 — Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/40 — Extraction of image or video features
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 — Scenes; Scene-specific elements
    • G06V 20/20 — Scenes; Scene-specific elements in augmented reality scenes
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 — Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 — Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for automatically identifying walls in a house type graph (floor plan), comprising the following steps: analyze the positions of interference information in the house type graph and remove it using the maximum outer contour of the wall; set a segmentation threshold according to the bimodal feature of the wall gray histogram, segment the wall region from the grayscale house type graph with this threshold, and filter interference items by connected-domain area; and generate a vector data structure of the wall by directional straight-line fitting, thereby identifying the wall in the house type graph. In the preprocessing stage, the method removes interference information from the house type graph, which reduces the difficulty of subsequent wall identification and improves its accuracy. The wall is segmented with an improved histogram bimodal method whose threshold selection accounts for uneven gray levels caused by damaged edge information, so that wall and background can be reliably distinguished.

Description

Automatic wall identification method in house type graph
Technical Field
The invention relates to a method for identifying house type graphs (floor plans), and in particular to a method for automatically identifying walls in a house type graph.
Background
As modern demands on house purchasing and decoration change, people increasingly want to know a house's overall structure and expected decoration effect in advance. However, neither traditional building plans, house type drawings, nor construction drawings can give the user an immersive experience. At present, an emerging building display mode based on virtual reality technology, namely three-dimensional house type display, is gradually replacing the traditional two-dimensional house type graph in the house-buying market.
Because a simple two-dimensional plan cannot convey the structure of a whole building or produce a realistic visual effect, and panoramic images are complex to obtain, research on image recognition still focuses mainly on traditional images. Traditional raster house type images (bitmaps) are recognized either by analyzing geometric elements in scanned raster images, or by vectorizing the raster image and then identifying components from vector features. Both approaches place high demands on drawing normalization in the image preprocessing stage, require manual interaction during the process, and their recognition accuracy depends on the quality of the vectorization algorithm.
The wall is the main frame of a building house type graph: it determines the layout of all rooms and is the key to identifying the other components. Recognition of walls in the house type graph is therefore the basis of house type graph recognition and reconstruction, and also the main difficulty in the whole recognition process. If the manual interaction of traditional wall recognition methods can be avoided and the recognition quality improved, reconstructing a three-dimensional building model from a two-dimensional house type graph becomes more efficient and intelligent.
The identification methods in the prior art are: (1) performing edge detection on the walls in the house type diagram and detecting the straight line segments of the walls; (2) identifying walls from their features by means of component recognition; (3) directly extracting the vector structure of the wall through a sparse-pixel vectorization algorithm; (4) segmenting the wall region by its flatness, based on the shape features of the wall. These four existing methods have the following disadvantages:
(1) Wall edge detection is strongly affected by the sharpness of image edges and requires highly distinguishable edges; the detected wall lines are often incomplete, and the edges of interfering elements such as furniture in the house type graph are easily confused with wall edges, making recognition difficult;
(2) Component recognition works well only on cleanly vectorized images; it detects walls from the fact that wall lines consist of parallel segments, and usually needs prior assumptions about wall shape to constrain the detection result, or drawing files containing layer information from which the walls can be extracted;
(3) Although sparse-pixel tracking can extract the wall skeleton directly, it extracts the skeletons of other background objects along with it, introducing the influence of structures such as doors, windows and furniture; intersections of the wall skeleton are easily distorted, which increases the post-processing effort;
(4) Segmenting the wall region by flatness depends on the shape of the wall, but scanned images are often affected by noise and edge information is easily lost, which corrupts the flatness computation.
Disclosure of Invention
The aim of the invention is an automatic wall identification method for house type graphs that exploits the gray-level difference between walls and other regions, avoids the influence of damaged edge information, and makes information extraction comprehensive.
In order to achieve the above purpose, the invention provides a method for automatically identifying a wall in a house type diagram, which comprises the following steps:
step 1, carrying out position analysis on interference information in a house type graph, and eliminating the interference information in the house type graph by utilizing the maximum outline of a wall body;
step 2, setting a segmentation threshold according to the bimodal characteristic of the wall gray histogram in the house type graph, segmenting a wall region from the gray house type graph according to the segmentation threshold, and filtering interference items according to the size of the connected region area;
and step 3, generating a vector data structure of the wall body in a directional linear fitting mode, so that the wall body is identified from the house type graph.
Further, in step 1, the interference information includes size labeling, picture description information and internal text information.
Further, in step 1, the specific step of removing the interference information in the house type graph by using the maximum outer contour of the wall body is as follows:
step 1.1, carrying out edge detection on a color house type graph by using a Canny operator so as to eliminate the influence of the background color of the color house type graph on contour detection;
step 1.2, finding each contour in the color house type graph using the findContours function of the OpenCV library;
step 1.3, interference information can be removed by utilizing a contour area limiting rule;
and 1.4, carrying out gray processing on the color house type graph with the interference information removed, and converting the color house type graph into a gray house type graph.
Further, in step 1.3, the specific step of removing the interference information by using the contour area limiting rule is as follows:
step 1.3.1, traversing all detected contours, calculating the areas of all contours, and finding the contour with the largest area;
step 1.3.2, reserving the outline with the largest area as the outline of the house main body, and deleting other parts outside the outline of the house main body;
and 1.3.3, filling the internal color of the outline of the house main body, and setting the pixel values of the black background part outside the outline of the house main body corresponding to the positions in the original color house type graph as white background.
Further, in step 2, the specific steps of setting the segmentation threshold according to the bimodal feature of the wall gray histogram in the house type graph are as follows:
step 2.1, smoothing the gray histogram of the grayscale house type graph with a double exponential smoothing algorithm to eliminate spurious spike peaks;
step 2.2, counting a gray information set S2 after the gray house type graph is smoothed;
step 2.3, calculating the global average gray value T_avg from the gray information set S2:
T_avg = (1/N) · Σ f(i,j)
where N is the total number of elements in S2 and f(i,j) is the gray value of an element of S2;
step 2.4, calculating the black and white gray sets:
S(A) = { f(i,j) ∈ S2 | f(i,j) < T_avg },  S(B) = { f(i,j) ∈ S2 | f(i,j) ≥ T_avg }
where S(A) and S(B) are the black and white gray sets respectively and T_avg is the global average gray value of step 2.3;
step 2.5, calculating the two peak gray values:
T1 = argmax of h(g) over g ∈ S(A),  T2 = argmax of h(g) over g ∈ S(B)
where h(g) is the smoothed histogram count of gray level g, and T1 and T2 are the peak gray values of the black and white regions respectively;
step 2.6, counting the trough values between T1 and T2 to obtain the trough threshold set B;
step 2.7, sorting the elements of the trough threshold set B by gray value in ascending order;
step 2.8, traversing the elements of the sorted trough threshold set B; if |Bi - T1| < 10, stop, and take that Bi as the segmentation threshold T.
Further, in step 2, the wall region is segmented from the grayscale house type graph according to the segmentation threshold T as:
g(i,j) = 0 (wall), if f(i,j) ≤ T;  g(i,j) = 255 (background), if f(i,j) > T
where f(i,j) ∈ S2 and T is the segmentation threshold.
Further, in step 2, the specific steps of filtering the interference term according to the size of the connected domain area are as follows:
step 2.9, processing the segmented wall area image by using a closing operation and an opening operation in morphology, and counting the area size of the image connected domain;
step 2.10, filtering out the connected domains whose area is below an area threshold, where the area threshold is the mean connected-domain area:
T_A = (1/M) · Σ A_i
where A_i is the area of each connected domain, M is the number of connected domains, and T_A is the mean connected-domain area.
Further, in step 3, the specific steps of generating the vector data structure of the wall body by means of directional straight line fitting are as follows:
step 3.1, refining the partitioned wall body area by using a rapid parallel refining algorithm, and extracting a framework of the wall body;
step 3.2, projecting the skeleton pixel points in the horizontal and vertical directions to obtain the positions of skeleton projection datum lines;
step 3.3, correcting the positions of the skeleton pixel points to skeleton projection datum lines;
and 3.4, defining an eight-neighborhood model of skeleton pixel points, directionally fitting a wall body central line according to the eight-neighborhood model, counting the width of each section of wall body in a central line scanning mode, and constructing a vector data structure of the wall body by utilizing the central line starting point coordinates, the end point coordinates and the wall body width of the wall body.
Further, in step 3.4, in the eight-neighborhood model P1 is located at the center; P2, P6, P8 and P4 are directly above, below, to the left of and to the right of P1 respectively; and P9, P3, P7 and P5 are at the upper-left, upper-right, lower-left and lower-right of P1 respectively. The specific steps of directionally fitting the wall centerline according to the eight-neighborhood model are:
step 3.4.1, if P1 is a skeleton point, it becomes the starting point of a new line segment; if P4 is also a skeleton point while P2 and P6 are not, delete P1, otherwise keep P1; if P1 is not a skeleton point and the current line segment already has a starting point, P1 becomes the end point of the line segment, and the segment is stored in the vectorized wall centerline data structure;
step 3.4.2, move the scanning point from P1 to P4 and repeat step 3.4.1 until the whole picture has been scanned;
step 3.4.3, scan the picture again; if P1 is a skeleton point, it starts a new line segment; if P6 is also a skeleton point, delete P1, otherwise keep P1; if P1 is not a skeleton point and the current line segment already has a starting point, P1 becomes the end point of the line segment, and the segment is stored in the vectorized wall centerline data structure;
step 3.4.4, move the scanning point from P1 to P6 and repeat step 3.4.3 until the whole picture has been scanned; the resulting vectorized wall centerline effect diagram is shown in fig. 10.
Further, in step 3.4, the specific step of counting the width of each section of wall body by means of centerline scanning is as follows:
step 3.4.5, for each centerline of length L, scanning outward from both sides of the centerline simultaneously in the perpendicular direction;
step 3.4.6, at each scan position, counting the number x of wall pixels on the scanning path;
step 3.4.7, if x > 50%·L, adding 1 to the single-side wall width count and continuing the scan; if x < 50%·L, ending the scan;
step 3.4.8, returning the width Wi of the single wall as w1 + w2 + 1, where w1 and w2 are the single-side widths on each side;
step 3.4.9, calculating the average wall width:
W_avg = (1/n) · Σ Wi
where Wi is the width of a single wall and n is the total number of walls.
The beneficial effects of the invention are: (1) the object of the method is the raster house type graph, i.e. an ordinary bitmap; since the house type graphs on the market and on the internet are currently mostly raster images, the method is widely applicable; (2) in the preprocessing stage, the method removes common interference information such as dimensions and icons from the house type graph, reducing the difficulty of subsequent wall identification and improving its accuracy; (3) based on the gray difference between wall regions and background, the wall is segmented with an improved histogram bimodal method whose threshold selection accounts for uneven gray levels caused by damaged edge information, so wall and background can be distinguished; in addition, the segmented wall region is corrected by connected-domain and morphological processing, giving a better segmentation result; (4) during wall vectorization, projection correction mitigates the line distortion and omissions caused by traditional vectorization methods, and straight-line fitting in the horizontal and vertical directions guarantees the straightness of the wall lines.
Drawings
FIG. 1 is a flow chart of an identification method of the present invention;
FIG. 2 is a graph of a smoothed gray level histogram of the present invention;
FIG. 3 is a connected domain area statistics diagram of the present invention;
FIG. 4 is a view showing the projected position of the wall skeleton of the present invention;
FIG. 5 is a diagram of an eight neighborhood model of the present invention;
FIG. 6 is a schematic view of the thickness of a wall body scanned by a central line in accordance with the present invention;
FIG. 7 is an unprocessed original house type diagram of the present invention;
FIG. 8 is a diagram showing the effect of removing redundant information of a house type diagram by adopting the maximum outline of a wall body;
FIG. 9 is a graph showing the effect of the improved histogram double peak method for dividing wall areas according to the present invention;
fig. 10 shows the effect of vectorized wall centerline straight-line fitting according to the present invention.
Detailed Description
The technical scheme of the present invention will be described in detail with reference to the accompanying drawings, but the scope of the present invention is not limited to the embodiments.
As shown in fig. 1, the method for automatically identifying the wall in the house type graph disclosed by the invention comprises the following steps:
step 1, carrying out position analysis on interference information in a house type graph, and eliminating the interference information in the house type graph by utilizing the maximum outline of a wall body;
step 2, setting a segmentation threshold according to the bimodal characteristic of the wall gray histogram in the house type graph, segmenting a wall region from the gray house type graph according to the segmentation threshold, and filtering interference items according to the size of the connected region area;
and step 3, generating a vector data structure of the wall body in a directional linear fitting mode, so that the wall body is identified from the house type graph.
Further, in step 1, the interference information includes dimension annotations, picture description information and internal text information. The dimension annotations, picture descriptions and similar items lie outside the maximum outer contour of the wall, and their contours are disconnected and independent of one another, as shown in fig. 7; the interference information can therefore be removed according to these position characteristics.
Further, in step 1, the specific step of removing the interference information in the house type graph by using the maximum outer contour of the wall body is as follows:
step 1.1, carrying out edge detection on a color house type graph by using a Canny operator so as to eliminate the influence of the background color of the color house type graph on contour detection;
step 1.2, finding each contour in the color house type graph using the findContours(image, contours, hierarchy, mode, method, offset) function of the OpenCV library;
step 1.3, removing the interference information with a contour-area limiting rule: the detected outermost contour delimits the house body region as a whole, while the redundant information to be removed is scattered around it and has a much smaller area than the house body;
step 1.4, converting the color house type graph with interference removed to a grayscale house type graph, as shown in fig. 8. Before the grayscale conversion, contrast enhancement is applied to the color image:
g(i,j) = α·f(i,j) + β
where α is a gain parameter controlling contrast (α > 1 enhances contrast, 0 < α < 1 reduces it) and β is a bias parameter controlling brightness (β > 0 increases brightness, β < 0 reduces it). To highlight the difference between wall and background without unbalancing the image colors, the contrast gain α takes the empirical value 2 and the brightness is left unchanged, i.e. β = 0.
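The contrast transform of step 1.4 can be sketched in a few lines of NumPy. This is an illustrative implementation, not the patent's code; the clipping to [0, 255] is an assumption needed to keep the result a valid 8-bit image, and the function name is hypothetical:

```python
import numpy as np

def adjust_contrast(img, alpha=2.0, beta=0.0):
    """Linear transform g = alpha*f + beta, clipped to the 8-bit range."""
    out = alpha * img.astype(np.float64) + beta
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: mid and high gray levels are pushed apart (and saturate at 255)
f = np.array([[10, 100], [128, 200]], dtype=np.uint8)
g = adjust_contrast(f, alpha=2.0, beta=0.0)
```

With α = 2 the dark wall pixels stay dark while everything above gray level 127 saturates to white, which is exactly the wall-versus-background emphasis the text describes.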
Further, in step 1.3, the specific step of removing the interference information by using the contour area limiting rule is as follows:
step 1.3.1, traversing all detected contours, calculating the areas of all contours, and finding the contour with the largest area;
step 1.3.2, reserving the outline with the largest area as the outline of the house main body, and deleting other parts outside the outline of the house main body;
and 1.3.3, filling the internal color of the outline of the house main body, and setting the pixel values of the black background part outside the outline of the house main body corresponding to the positions in the original color house type graph as white background.
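Steps 1.3.1 to 1.3.3 keep the largest outline and blank everything else. The patent does this with OpenCV contours; the sketch below illustrates the same area rule with a plain breadth-first connected-region labeling so it runs without OpenCV. All names are hypothetical and the 4-connectivity choice is an assumption:

```python
import numpy as np
from collections import deque

def keep_largest_region(mask):
    """Keep only the largest 4-connected foreground region of a binary
    mask (1 = drawing ink), clearing everything else to background (0)."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    sizes, cur = {}, 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not labels[sy, sx]:
                cur += 1
                labels[sy, sx] = cur
                q, n = deque([(sy, sx)]), 0
                while q:
                    y, x = q.popleft()
                    n += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not labels[ny, nx]:
                            labels[ny, nx] = cur
                            q.append((ny, nx))
                sizes[cur] = n
    if not sizes:
        return mask
    biggest = max(sizes, key=sizes.get)
    return (labels == biggest).astype(mask.dtype)

# Toy "page": a 3x4 house body plus a lone 1-pixel dimension annotation
page = np.zeros((5, 7), dtype=np.uint8)
page[1:4, 1:5] = 1      # house body (12 px)
page[0, 6] = 1          # stray annotation (1 px)
clean = keep_largest_region(page)
```

The stray annotation disappears while the house body survives, mirroring the "largest contour wins" rule of step 1.3.2.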
Further, as shown in fig. 2, the gray-level curve of a house type graph has more than two peaks, so a simple bimodal algorithm cannot be applied directly. The pixels of a house type graph are concentrated in two regions: the region dominated by wall gray levels forms one obvious peak in the low-gray "black" range, and the high-gray "white" range forms the other obvious peak. The intermediate gray levels also contain peaks, but their amplitudes are negligible compared with the two main peaks, so they can simply be treated as the valley region between the two peaks. In general, the gray histogram of a house type graph therefore satisfies the bimodal property, and the calculation only needs to count the maximum peaks T1 and T2 of the black and white regions and the valley value T between them. In step 2, the specific steps of setting the segmentation threshold according to the bimodal feature of the wall gray histogram are as follows:
step 2.1, smoothing the gray histogram of the grayscale house type graph with a double exponential smoothing algorithm to eliminate spurious spike peaks;
step 2.2, counting a gray information set S2 after the gray house type graph is smoothed;
step 2.3, calculating the global average gray value T_avg from the gray information set S2:
T_avg = (1/N) · Σ f(i,j)
where N is the total number of elements in S2 and f(i,j) is the gray value of an element of S2;
step 2.4, calculating the black and white gray sets:
S(A) = { f(i,j) ∈ S2 | f(i,j) < T_avg },  S(B) = { f(i,j) ∈ S2 | f(i,j) ≥ T_avg }
where S(A) and S(B) are the black and white gray sets respectively and T_avg is the global average gray value of step 2.3;
step 2.5, calculating the two peak gray values:
T1 = argmax of h(g) over g ∈ S(A),  T2 = argmax of h(g) over g ∈ S(B)
where h(g) is the smoothed histogram count of gray level g, and T1 and T2 are the peak gray values of the black and white regions respectively. When selecting the trough threshold, the minimum between the two peaks is not simply taken; instead, in order to segment the wall part as completely as possible, a trough close to the peak T1 is chosen, because damaged edge information may make the wall gray levels uneven, so gray levels adjacent to T1 may also belong to the wall;
step 2.6, counting the trough values between T1 and T2 to obtain the trough threshold set B;
step 2.7, sorting the elements of the trough threshold set B by gray value in ascending order;
step 2.8, traversing the elements of the sorted trough threshold set B; if |Bi - T1| < 10, stop, and take that Bi as the segmentation threshold T.
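Steps 2.3 to 2.8 can be condensed into one small function. The sketch below is illustrative only: it assumes the histogram has already been smoothed (a raw bincount stands in for the smoothed h(g)), troughs are taken as local minima between the two peaks, and the 10-gray-level margin comes from step 2.8:

```python
import numpy as np

def bimodal_threshold(gray, margin=10):
    """Pick a segmentation threshold from a bimodal gray histogram:
    split gray levels at the global mean, take the peak of each half
    (T1 dark, T2 light), collect troughs between the peaks, and choose
    the first trough within `margin` gray levels of the dark peak T1."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    mean = int(gray.mean())
    t1 = int(np.argmax(hist[:mean + 1]))            # dark ("wall") peak
    t2 = int(np.argmax(hist[mean + 1:])) + mean + 1  # light ("background") peak
    between = hist[t1 + 1:t2]
    troughs = sorted(t1 + 1 + i for i in range(1, len(between) - 1)
                     if between[i] <= between[i - 1] and between[i] <= between[i + 1])
    for b in troughs:
        if abs(b - t1) < margin:
            return b
    # fall back to the global minimum between the two peaks
    return t1 + 1 + int(np.argmin(between))

# Synthetic bimodal image: wall pixels near 40, background near 220
gray = np.concatenate([np.full(500, 40), np.full(50, 45),
                       np.full(500, 220)]).astype(np.uint8)
t = bimodal_threshold(gray)
```

On this synthetic histogram the chosen threshold sits just above the dark peak, keeping slightly lightened wall pixels on the wall side, which is the motivation the text gives for not simply using the global minimum.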
Further, in step 2, the wall region is segmented from the grayscale house type graph according to the segmentation threshold T as:
g(i,j) = 0 (wall), if f(i,j) ≤ T;  g(i,j) = 255 (background), if f(i,j) > T
where f(i,j) ∈ S2 and T is the segmentation threshold.
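The segmentation rule itself is a one-line threshold. A minimal sketch (illustrative, assuming the convention that wall pixels are kept black and background is set white):

```python
import numpy as np

def segment_wall(gray, T):
    """Binarize: gray values at or below threshold T become wall (0,
    i.e. black), everything else background (255)."""
    return np.where(gray <= T, 0, 255).astype(np.uint8)

img = np.array([[30, 30, 240], [30, 200, 240]], dtype=np.uint8)
seg = segment_wall(img, T=42)
```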
Further, the result of threshold segmentation shows that the wall has been separated from the image, but it cannot be excluded that a small number of background pixels whose gray values are close to the wall's are wrongly segmented as target pixels. Analysis and observation show that the wall forms one or a few large connected regions, while these introduced interference parts form small, essentially independent regions whose width and area are much smaller than the wall regions. The small interference parts can therefore be filtered out by morphological processing together with a connected-domain area criterion, further correcting the wall region. In step 2, the specific steps of filtering the interference items by connected-domain area are as follows:
step 2.9, processing the segmented wall area image by using a closing operation and an opening operation in morphology, and counting the area size of the image connected domain;
step 2.10, filtering out the connected domains whose area is below an area threshold, where the area threshold is the mean connected-domain area:
T_A = (1/M) · Σ A_i
where A_i is the area of each connected domain, M is the number of connected domains, and T_A is the mean connected-domain area.
Further, in step 3, the specific steps of generating the vector data structure of the wall body by means of directional straight line fitting are as follows:
step 3.1, refining the partitioned wall body area by using a rapid parallel refining algorithm, and extracting a framework of the wall body;
step 3.2, performing horizontal and vertical projection on the skeleton pixel points to obtain the position of a skeleton projection datum line, as shown in fig. 4;
step 3.3, correcting the positions of the skeleton pixel points to skeleton projection datum lines;
and 3.4, defining an eight-neighborhood model of skeleton pixel points, directionally fitting a wall body central line according to the eight-neighborhood model, counting the width of each section of wall body in a central line scanning mode, and constructing a vector data structure of the wall body by utilizing the central line starting point coordinates, the end point coordinates and the wall body width of the wall body as shown in fig. 5.
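Step 3.4 produces, for each wall, a vector record made of the centerline start point, end point and measured width. A minimal sketch of such a record (all field names are hypothetical; the patent does not specify the layout):

```python
from dataclasses import dataclass

@dataclass
class WallSegment:
    """One vectorized wall: centerline start/end coordinates plus width."""
    x0: int
    y0: int
    x1: int
    y1: int
    width: int

    def length(self) -> int:
        # centerlines are fitted only horizontally or vertically,
        # so Manhattan distance equals segment length
        return abs(self.x1 - self.x0) + abs(self.y1 - self.y0)

w = WallSegment(10, 20, 10, 120, width=9)
```

A list of such records is all a downstream 3D reconstruction step needs to extrude walls from the two-dimensional house type graph.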
Further, in step 3.1, the fast parallel refinement algorithm is an improved thinning algorithm, with the following steps:
step 3.1.1, binarize the wall region image;
step 3.1.2, for each target pixel with value P1 = 1, test whether it satisfies four conditions simultaneously:
condition 1: 2 ≤ N(P1) ≤ 6;
condition 2: B(P1) ∈ {65, 5, 20, 80, 13, 22, 52, 133, 141, 54};
condition 3: P2·P4·P6 = 0;
condition 4: P4·P6·P8 = 0;
if all four conditions of step 3.1.2 hold, delete P1 (set P1 = 0);
step 3.1.3, for each target pixel with value P1 = 1, test whether it satisfies four conditions simultaneously:
condition 1: 2 ≤ N(P1) ≤ 6;
condition 2: B(P1) ∈ {65, 5, 20, 80, 13, 22, 52, 133, 141, 54};
condition 3: P2·P4·P8 = 0;
condition 4: P2·P6·P8 = 0;
if all four conditions of step 3.1.3 hold, delete P1 (set P1 = 0).
Here N(P1) is the number of nonzero pixels in the eight-neighborhood of the target point P1, and B(P1) is the binary code corresponding to that eight-neighborhood, satisfying:
B(P1) = Σ (k = 2..9) Pk · 2^(k-2)
The values in the set to which B(P1) must belong are the binary codes of the neighborhood pixel combinations that would be omitted by thinning conditions based on S(P1) = 1 (the number of 0-to-1 transitions around P1) but should nevertheless be handled.
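For orientation, the classical Zhang-Suen parallel thinning iteration that this improved algorithm builds on can be sketched as below. Note this baseline uses the standard S(P1) = 1 transition test rather than the patent's B(P1) exemption set, so it is an illustration of the two-phase structure, not the invention's exact algorithm:

```python
import numpy as np

def zhang_suen_thin(img):
    """Classical Zhang-Suen two-phase parallel thinning.
    img: binary array, 1 = foreground; border pixels are left untouched."""
    img = img.copy().astype(np.uint8)

    def neighbours(y, x):
        # P2..P9 clockwise from the pixel directly above P1
        return [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]

    changed = True
    while changed:
        changed = False
        for phase in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    p = neighbours(y, x)
                    n = sum(p)                                        # N(P1)
                    s = sum(p[i] == 0 and p[(i + 1) % 8] == 1
                            for i in range(8))                        # S(P1)
                    if phase == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= n <= 6 and s == 1 and cond:
                        to_delete.append((y, x))
            for y, x in to_delete:                # parallel: delete after scan
                img[y, x] = 0
                changed = True
    return img

# Thin a 3-pixel-thick horizontal bar toward its 1-pixel skeleton
bar = np.zeros((7, 12), dtype=np.uint8)
bar[2:5, 1:11] = 1
skel = zhang_suen_thin(bar)
```

The patent's improvement replaces the S(P1) = 1 test with membership of B(P1) in the listed code set, catching neighborhood patterns the classical conditions miss.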
Further, in step 3.4, in the eight-neighborhood model P1 is located at the center, P2, P6, P8 and P4 are at the upper, lower, left and right positions of P1, respectively, and P9, P3, P7 and P5 are at the four diagonal positions of P1. The specific steps of directionally fitting the wall center line according to the eight-neighborhood model are as follows:
step 3.4.1, if P1 is a skeleton point, the starting point of a new line segment is P1; if P4 is also a skeleton point while P2 and P6 are not skeleton points, P1 is deleted, otherwise P1 is retained; if P1 is not a skeleton point and the current line segment already has a starting point, the end point of the line segment is P1, and the segment is stored in the vectorized wall center-line data structure;
step 3.4.2, moving the scanning point from P1 to P4 and repeating step 3.4.1 until the whole picture has been scanned;
step 3.4.3, scanning the picture again; if P1 is a skeleton point, the starting point of a new line segment is P1; if P6 is also a skeleton point, P1 is deleted, otherwise P1 is retained; if P1 is not a skeleton point and the current line segment already has a starting point, the end point of the line segment is P1, and the segment is stored in the vectorized wall center-line data structure;
step 3.4.4, moving the scanning point from P1 to P6 and repeating step 3.4.3 until the whole picture has been scanned.
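A minimal sketch of the horizontal (P1 → P4) scanning pass of steps 3.4.1-3.4.2, simplified to emit one segment per maximal run of skeleton points in each row; the function name is illustrative and the deletion rule for points that belong to vertical runs is omitted.

```python
def fit_horizontal_segments(skel):
    """Scan each row left to right (the P1 -> P4 direction) and emit one
    (row, col_start, col_end) triple per maximal run of skeleton points,
    i.e. the horizontal half of the vectorized center-line structure."""
    segments = []
    for r, row in enumerate(skel):
        start = None
        for c, v in enumerate(row):
            if v and start is None:
                start = c                           # step 3.4.1: new segment starts at P1
            elif not v and start is not None:
                segments.append((r, start, c - 1))  # P1 not a skeleton point: close segment
                start = None
        if start is not None:                       # run touches the right image edge
            segments.append((r, start, len(row) - 1))
    return segments
```

The vertical pass of steps 3.4.3-3.4.4 is the same loop with rows and columns exchanged.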
Further, in step 3.4, the specific steps of counting the width of each section of wall by center-line scanning are as follows:
step 3.4.5, scanning outward from both sides of each center line of length L simultaneously, in the direction perpendicular to the center line;
step 3.4.6, counting the number x of pixel points on the scanning path perpendicular to the center-line position;
step 3.4.7, if x > 50%·L, adding 1 to the single-side wall width count and continuing the scan; if x < 50%·L, ending the scan;
step 3.4.8, returning the width W_i of the single wall as w_1 + w_2 + 1, where w_1 and w_2 are the two single-side width counts;
step 3.4.9, calculating the average wall width as:

W_avg = (1/n) · Σ_{i=1}^{n} W_i

where W_i is the width of a single wall and n is the total number of walls.
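Steps 3.4.8-3.4.9 reduce to two small formulas; a direct transcription (function names are illustrative, and reading the "+1" as the center-line pixel itself is an interpretation):

```python
def single_wall_width(w1, w2):
    # step 3.4.8: W_i = w1 + w2 + 1 (the two single-side counts plus,
    # presumably, the center-line pixel itself)
    return w1 + w2 + 1

def average_wall_width(widths):
    # step 3.4.9: W_avg = (1/n) * sum(W_i) over the n wall sections
    return sum(widths) / len(widths)
```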
As described above, although the present invention has been shown and described with reference to certain preferred embodiments, it is not to be construed as limiting the invention itself. Various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. The automatic wall identification method in the house type graph is characterized by comprising the following steps of:
step 1, carrying out position analysis on interference information in a house type graph, and eliminating the interference information in the house type graph by utilizing the maximum outline of a wall body;
step 2, setting a segmentation threshold according to the bimodal characteristic of the wall gray histogram in the house type graph, segmenting a wall region from the gray house type graph according to the segmentation threshold, and filtering interference items according to the size of the connected region area;
step 3, generating a vector data structure of the wall body in a directional linear fitting mode, so that the wall body is identified from the house type graph;
in step 3, the specific steps of generating the vector data structure of the wall body by the directional straight line fitting mode are as follows:
step 3.1, thinning the segmented wall area by using a fast parallel thinning algorithm, and extracting the skeleton of the wall;
step 3.2, projecting the skeleton pixel points in the horizontal and vertical directions to obtain the positions of skeleton projection datum lines;
step 3.3, correcting the positions of the skeleton pixel points to skeleton projection datum lines;
and 3.4, defining an eight-neighborhood model of skeleton pixel points, directionally fitting a wall body central line according to the eight-neighborhood model, counting the width of each section of wall body in a central line scanning mode, and constructing a vector data structure of the wall body by utilizing the central line starting point coordinates, the end point coordinates and the wall body width of the wall body.
2. The method for automatically identifying a wall in a house type drawing according to claim 1, wherein in step 1, the interference information includes size marking, picture description information and internal text information.
3. The method for automatically identifying the wall in the house type graph according to claim 1, wherein in the step 1, the specific step of eliminating the interference information in the house type graph by using the maximum outline of the wall is as follows:
step 1.1, carrying out edge detection on a color house type graph by using a Canny operator so as to eliminate the influence of the background color of the color house type graph on contour detection;
step 1.2, searching for each contour in the color house type graph by using the findContours function in the OpenCV library;
step 1.3, removing interference information by using a contour area restriction rule;
and 1.4, carrying out gray processing on the color house type graph with the interference information removed, and converting the color house type graph into a gray house type graph.
4. The method for automatically identifying a wall in a house type drawing according to claim 3, wherein in step 1.3, the specific step of removing interference information by using a contour area restriction rule is as follows:
step 1.3.1, traversing all detected contours, calculating the areas of all contours, and finding the contour with the largest area;
step 1.3.2, reserving the outline with the largest area as the outline of the house main body, and deleting other parts outside the outline of the house main body;
step 1.3.3, filling the interior of the house main body outline with color, and setting the pixels of the original color house type graph corresponding to the black background outside the house main body outline to white.
5. The method for automatically identifying a wall in a house type graph according to claim 3, wherein in step 2, the specific step of setting a segmentation threshold according to the bimodal feature of the gray histogram of the wall in the house type graph is as follows:
step 2.1, smoothing the gray histogram of the gray house type graph by using a double exponential smoothing algorithm to eliminate burr peaks;
step 2.2, counting a gray information set S2 after the gray house type graph is smoothed;
step 2.3, calculating the global average gray value from the gray information set S2 as:

f_avg = (1/N) · Σ f(i,j), f(i,j) ∈ S2

where N is the total number of elements in the set S2 and f(i,j) is the gray value of an element of S2;
step 2.4, calculating the black and white gray sets by partitioning S2 about the global average gray value:

[Equation image in the original: definitions of the sets S(A) and S(B).]

where S(A) and S(B) are the black and white gray sets, respectively;
step 2.5, calculating the two peak gray values of the bimodal histogram:

[Equation image in the original: T_1 and T_2 are the peak gray values of S(A) and S(B), respectively.]

where T_1 and T_2 are the two bimodal peak gray values;
step 2.6, counting the valley values between T_1 and T_2 to obtain the valley threshold set B;
step 2.7, arranging the elements of the valley threshold set B in ascending order of gray value;
step 2.8, traversing the elements of the sorted valley threshold set B; if |B_i − T_1| < 10, stopping and taking that B_i as the segmentation threshold T.
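A sketch of the smoothing and valley-selection logic of claim 5; the smoothing factor, the "local minimum" reading of a valley, and the fallback when no valley lies within 10 gray levels of T_1 are assumptions.

```python
def double_exponential_smooth(hist, alpha=0.3):
    """Step 2.1 (sketch): second-order exponential smoothing of the gray
    histogram to damp burr peaks; alpha is an assumed smoothing factor."""
    s1 = s2 = float(hist[0])
    out = []
    for v in hist:
        s1 = alpha * v + (1 - alpha) * s1   # first smoothing pass
        s2 = alpha * s1 + (1 - alpha) * s2  # second smoothing pass
        out.append(s2)
    return out

def valley_threshold(hist, t1, t2):
    """Steps 2.6-2.8 (sketch): collect valley gray values between the two
    peaks t1 < t2, traverse them in ascending order, and return the first
    one within 10 gray levels of t1. The final fallback is an assumption."""
    valleys = [g for g in range(t1 + 1, t2)
               if hist[g] <= hist[g - 1] and hist[g] <= hist[g + 1]]
    for b in valleys:                        # already ascending by construction
        if abs(b - t1) < 10:
            return b
    return valleys[0] if valleys else (t1 + t2) // 2
```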
6. The method for automatically identifying a wall in a house type graph according to claim 5, wherein in step 2, the wall area is segmented from the gray house type graph according to the segmentation threshold T as follows:

[Equation image in the original: the gray house type graph is binarized against the threshold T.]

where f(i,j) ∈ S2 and T is the segmentation threshold.
7. The method for automatically identifying the wall in the house type map according to claim 6, wherein in the step 2, the specific step of filtering the interference item according to the size of the area of the connected domain is as follows:
step 2.9, processing the segmented wall area image by using a closing operation and an opening operation in morphology, and counting the area size of the image connected domain;
step 2.10, filtering out the connected domains whose area is below an area threshold, the area threshold being the average connected-domain area:

T_A = (1/M) · Σ_{i=1}^{M} A_i

where A_i is the area of each connected domain, M is the number of connected domains, and T_A is the average connected-domain area.
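The area filter of steps 2.9-2.10 transcribed directly, operating on a precomputed list of connected-domain areas (the morphological open/close operations and connected-domain labeling are assumed done upstream):

```python
def filter_small_regions(areas):
    """Step 2.10: T_A is the mean connected-domain area; domains whose area
    falls below T_A are filtered out as interference."""
    t_a = sum(areas) / len(areas)            # T_A = (1/M) * sum(A_i)
    return [a for a in areas if a >= t_a]
```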
8. The method for automatically identifying a wall in a house type map according to claim 1, wherein in step 3.4, in the eight-neighborhood model P1 is located at the center, P2, P6, P8 and P4 are at the upper, lower, left and right positions of P1, respectively, P9, P3, P7 and P5 are at the four diagonal positions of P1, and the specific steps of directionally fitting the wall center line according to the eight-neighborhood model are as follows:
step 3.4.1, if P1 is a skeleton point, the starting point of a new line segment is P1; if P4 is also a skeleton point while P2 and P6 are not skeleton points, P1 is deleted, otherwise P1 is retained; if P1 is not a skeleton point and the current line segment already has a starting point, the end point of the line segment is P1, and the segment is stored in the vectorized wall center-line data structure;
step 3.4.2, moving the scanning point from P1 to P4 and repeating step 3.4.1 until the whole picture has been scanned;
step 3.4.3, scanning the picture again; if P1 is a skeleton point, the starting point of a new line segment is P1; if P6 is also a skeleton point, P1 is deleted, otherwise P1 is retained; if P1 is not a skeleton point and the current line segment already has a starting point, the end point of the line segment is P1, and the segment is stored in the vectorized wall center-line data structure;
step 3.4.4, moving the scanning point from P1 to P6 and repeating step 3.4.3 until the whole picture has been scanned.
9. The method for automatically identifying a wall in a house type map according to claim 8, wherein in step 3.4, the specific steps of counting the width of each section of wall by center-line scanning are as follows:
step 3.4.5, scanning outward from both sides of each center line of length L simultaneously, in the direction perpendicular to the center line;
step 3.4.6, counting the number x of pixel points on the scanning path perpendicular to the center-line position;
step 3.4.7, if x > 50%·L, adding 1 to the single-side wall width count and continuing the scan; if x < 50%·L, ending the scan;
step 3.4.8, returning the width W_i of the single wall as w_1 + w_2 + 1, where w_1 and w_2 are the two single-side width counts;
step 3.4.9, calculating the average wall width as:

W_avg = (1/n) · Σ_{i=1}^{n} W_i

where W_i is the width of a single wall and n is the total number of walls.
CN201910460783.4A 2019-05-30 2019-05-30 Automatic wall identification method in house type graph Active CN110197153B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910460783.4A CN110197153B (en) 2019-05-30 2019-05-30 Automatic wall identification method in house type graph

Publications (2)

Publication Number Publication Date
CN110197153A CN110197153A (en) 2019-09-03
CN110197153B true CN110197153B (en) 2023-05-02

Family

ID=67753392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910460783.4A Active CN110197153B (en) 2019-05-30 2019-05-30 Automatic wall identification method in house type graph

Country Status (1)

Country Link
CN (1) CN110197153B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111104879B (en) * 2019-12-09 2020-11-27 贝壳找房(北京)科技有限公司 Method and device for identifying house functions, readable storage medium and electronic equipment
CN111754526B (en) * 2020-06-23 2023-06-30 广东博智林机器人有限公司 House type graph dividing method, household type graph classifying method, household type graph dividing device, household type graph dividing equipment and storage medium
CN112001997B (en) * 2020-06-23 2022-02-18 北京城市网邻信息技术有限公司 Furniture display method and device
CN112561934A (en) * 2020-12-22 2021-03-26 上海有个机器人有限公司 Method and device for processing laser image, electronic equipment and computer storage medium
CN112926392B (en) * 2021-01-26 2022-07-08 杭州聚秀科技有限公司 Building plane drawing room identification method based on contour screening
CN113592976B (en) * 2021-07-27 2024-06-25 美智纵横科技有限责任公司 Map data processing method and device, household appliance and readable storage medium
CN115082850A (en) * 2022-05-23 2022-09-20 哈尔滨工业大学 Template support safety risk identification method based on computer vision
CN115714732B (en) * 2022-11-03 2024-01-26 巨擎网络科技(济南)有限公司 Method and equipment for detecting coverage condition of whole-house wireless network
CN116188480B (en) * 2023-04-23 2023-07-18 安徽同湃特机器人科技有限公司 Calculation method of AGV traveling path point during ceiling operation of spraying robot
CN116993462A (en) * 2023-09-26 2023-11-03 浙江小牛哥科技有限公司 Online automatic quotation system based on digital home decoration

Citations (9)

Publication number Priority date Publication date Assignee Title
JPH1083413A (en) * 1996-09-06 1998-03-31 Ricoh Co Ltd Method and device for recognizing building plan
CN103971098A (en) * 2014-05-19 2014-08-06 北京明兰网络科技有限公司 Method for recognizing wall in house type image and method for automatically correcting length ratio of house type image
CN104732235A (en) * 2015-03-19 2015-06-24 杭州电子科技大学 Vehicle detection method for eliminating night road reflective interference
CN105279787A (en) * 2015-04-03 2016-01-27 北京明兰网络科技有限公司 Method for generating three-dimensional (3D) building model based on photographed house type image identification
CN107122528A (en) * 2017-04-13 2017-09-01 广州乐家数字科技有限公司 A kind of floor plan parametrization can edit modeling method again
CN107274486A (en) * 2017-06-26 2017-10-20 广州天翌云信息科技有限公司 A kind of model 3D effect map generalization method
CN107330979A (en) * 2017-06-30 2017-11-07 电子科技大学中山学院 Vector diagram generation method and device for building house type and terminal
CN108399644A (en) * 2018-02-05 2018-08-14 北京居然之家家居连锁集团有限公司 A kind of wall images recognition methods and its device
CN108961152A (en) * 2018-05-30 2018-12-07 链家网(北京)科技有限公司 Plane house type drawing generating method and device

Non-Patent Citations (4)

Title
"3D Reconstruction of Detailed Buildings From ..."; Lu T. et al.; Computer-Aided Design & Applications; 2013-12-31 *
"Exploration of Methods for Drawing House Type Graphs with AutoCAD"; Wang Wenhua et al.; Journal of Shangqiu Vocational and Technical College; 2016-12-31; Vol. 15, No. 5 *
"Generating 3D Building Models From Architectural ..."; Yin X. et al.; IEEE Computer Graphics & Applications; 2009-12-31 *
"Research on House Type Graph Recognition Based on Shape and Edge Features"; Jiang Zhou; China Master's Theses Full-text Database, Information Science and Technology; 2018-02-15 *

Similar Documents

Publication Publication Date Title
CN110197153B (en) Automatic wall identification method in house type graph
CN110866924B (en) Line structured light center line extraction method and storage medium
CN109118500B (en) Image-based three-dimensional laser scanning point cloud data segmentation method
Kim et al. Adaptive smoothness constraints for efficient stereo matching using texture and edge information
CN104751142B (en) A kind of natural scene Method for text detection based on stroke feature
CN108830832A (en) A kind of plastic barrel surface defects detection algorithm based on machine vision
CN112233116B (en) Concave-convex mark visual detection method based on neighborhood decision and gray level co-occurrence matrix description
CN113689445B (en) High-resolution remote sensing building extraction method combining semantic segmentation and edge detection
CN110298344A (en) A kind of positioning of instrument knob and detection method based on machine vision
CN114067147B (en) Ship target confirmation method based on local shape matching
CN115147448A (en) Image enhancement and feature extraction method for automatic welding
CN109781737A (en) A kind of detection method and its detection system of hose surface defect
CN113033558A (en) Text detection method and device for natural scene and storage medium
Chen et al. Image segmentation based on mathematical morphological operator
CN108717699B (en) Ultrasonic image segmentation method based on continuous minimum segmentation
KR101693247B1 (en) A Method for Extracting Mosaic Blocks Using Boundary Features
Cheng et al. Power pole detection based on graph cut
CN112258534B (en) Method for positioning and segmenting small brain earthworm parts in ultrasonic image
CN112258536B (en) Integrated positioning and segmentation method for calluses and cerebellum earthworm parts
CN112085683B (en) Depth map credibility detection method in saliency detection
CN115049628A (en) Method and system for automatically generating house type structure
Zhu Moving Objects Detection and Segmentation Based on Background Subtraction and Image Over-Segmentation.
CN115187744A (en) Cabinet identification method based on laser point cloud
CN113554695A (en) Intelligent part hole site identification and positioning method
Hsu et al. Contour extraction in medical images using initial boundary pixel selection and segmental contour following

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant