CN110197153A - Method for automatically identifying walls in a floor plan - Google Patents

Method for automatically identifying walls in a floor plan

Info

Publication number
CN110197153A
CN110197153A (application CN201910460783.4A)
Authority
CN
China
Prior art keywords
wall
floor plan
area
follows
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910460783.4A
Other languages
Chinese (zh)
Other versions
CN110197153B (en)
Inventor
王庆利
黄雨琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Weilijia Intelligent Technology Co Ltd
Original Assignee
Nanjing Weilijia Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Weilijia Intelligent Technology Co Ltd filed Critical Nanjing Weilijia Intelligent Technology Co Ltd
Priority to CN201910460783.4A priority Critical patent/CN110197153B/en
Publication of CN110197153A publication Critical patent/CN110197153A/en
Application granted granted Critical
Publication of CN110197153B publication Critical patent/CN110197153B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/20 - Scenes; Scene-specific elements in augmented reality scenes
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 - Computing systems specially adapted for manufacturing

Abstract

The present invention provides a method for automatically identifying walls in a floor plan. The steps include: analyzing the position of the interference information in the floor plan and using the maximum outer contour of the walls to remove the interference information from the floor plan; setting a segmentation threshold according to the bimodal character of the wall gray-level histogram in the floor plan, segmenting the wall region from the grayscale floor plan with the segmentation threshold, and filtering out interfering elements according to connected-component area; and generating the vector data structure of the walls by directional straight-line fitting, thereby identifying the walls in the floor plan. The method adds, in the floor-plan preprocessing stage, a removal step for the interference information in the floor plan, which reduces the difficulty of the subsequent wall identification and improves its accuracy. The walls are segmented with an improved bimodal histogram method, and the threshold selection takes into account the gray-level non-uniformity caused by damaged edge information, so walls and background are distinguished more reliably.

Description

Method for automatically identifying walls in a floor plan
Technical field
The present invention relates to floor plan recognition methods, and in particular to a method for automatically identifying walls in a floor plan.
Background art
As modern requirements for home buying and interior decoration change, people increasingly wish to understand the overall structure of a house and the expected decoration effect in advance. However, neither of the two traditional kinds of architectural drawings, the floor plan and the architectural working drawing, gives users an immersive experience. At present, an emerging way of presenting architectural results based on virtual reality technology, three-dimensional floor plan display, has gradually been replacing the traditional two-dimensional floor plan in the home-buying market.
A simple two-dimensional plan cannot convey the information of the whole building or produce a lifelike visual effect, while acquiring panoramic images is comparatively complicated, so image recognition research still concentrates mainly on conventional images. Recognition of traditional raster (bitmap) floor plans mostly analyzes geometric elements in a scanned raster image, or vectorizes the raster image and then identifies components from the vector features. These approaches place high demands on drawing normalization in the image preprocessing stage, require human interaction during the process, and their recognition accuracy also depends on the quality of the vectorization algorithm.
Walls are the main structural element of a floor plan: they determine the layout of all rooms and are also the key to identifying the other component information. Wall identification is therefore the basis of floor plan recognition and reconstruction, and also the difficult point of the whole recognition process. If the human interaction required by conventional methods during wall identification can be avoided and the recognition quality improved, reconstructing a three-dimensional building model from a two-dimensional floor plan becomes more efficient and intelligent.
Recognition methods in the prior art include: (1) performing edge detection on the walls in the floor plan to detect wall line segments; (2) identifying walls from wall features by way of component recognition; (3) extracting the vector structure of the walls directly with a sparse-pixel vectorization algorithm; and (4) segmenting the wall region by wall straightness, based on wall shape features. The disadvantages of these four existing methods are:
(1) the edge-detection approach is strongly affected by the sharpness of image edges: it requires high boundary contrast for the detected wall lines to be sufficiently complete, and many interfering elements in the floor plan, such as furniture edges, are easily confused with wall edges, which makes identification difficult;
(2) component-recognition methods usually target images that have already been vectorized well; they detect walls from the feature that wall lines are composed of parallel segments, often also need prior experience and assumptions about wall patterns to constrain the detection result, or require the drawing information to contain a semantic layer from which walls can be extracted;
(3) although sparse-pixel tracking can extract the wall skeleton directly, it also extracts the skeletons of other background objects at the same time; structures such as doors, windows and furniture therefore interfere, the skeleton is easily distorted at intersections, and the post-processing becomes more difficult;
(4) methods that segment the wall region by wall straightness depend on wall shape, but scanned images are often affected by noise and edge information is easily lost, which disturbs the straightness computation.
Summary of the invention
The object of the invention is to provide a method for automatically identifying walls in a floor plan that analyzes the difference in gray level between walls and other regions and avoids the influence of damaged edge information, so that the information extraction is more complete.
To achieve the above object, the present invention provides a method for automatically identifying walls in a floor plan, comprising the following steps:
Step 1: analyze the position of the interference information in the floor plan, then use the maximum outer contour of the walls to remove the interference information from the floor plan;
Step 2: set a segmentation threshold according to the bimodal character of the wall gray-level histogram in the floor plan, segment the wall region from the grayscale floor plan with the segmentation threshold, and filter out interfering elements according to connected-component area;
Step 3: generate the vector data structure of the walls by directional straight-line fitting, thereby identifying the walls in the floor plan.
Further, in step 1, the interference information includes dimension annotations, caption information and internal text.
Further, in step 1, the specific steps of removing the interference information in the floor plan with the maximum outer contour of the walls are as follows:
Step 1.1: perform edge detection on the color floor plan with the Canny operator, so as to eliminate the influence of the background color of the color floor plan on contour detection;
Step 1.2: find every contour in the color floor plan with the findContours function of the OpenCV library;
Step 1.3: remove the interference information with a contour-area restriction rule;
Step 1.4: convert the color floor plan from which the interference information has been removed to a grayscale floor plan.
Further, in step 1.3, the specific steps of removing the interference information with the contour-area restriction rule are as follows:
Step 1.3.1: traverse all detected contours, compute the area of each contour and find the contour with the largest area;
Step 1.3.2: keep the contour with the largest area as the main house contour and delete everything outside the main house contour;
Step 1.3.3: fill the interior of the main house contour, and set the pixels of the original color floor plan corresponding to the black background outside the main house contour to white.
Further, in step 2, the specific steps of setting the segmentation threshold according to the bimodal character of the wall gray-level histogram in the floor plan are as follows:
Step 2.1: smooth the gray-level histogram of the grayscale floor plan with a double smoothing algorithm to eliminate spurious spike peaks;
Step 2.2: collect the gray-level information set S2 of the smoothed grayscale floor plan;
Step 2.3: compute the global average gray value from S2 as the mean (1/N)·Σ f(i, j), where N is the number of elements in S2 and f(i, j) is the gray value of an element of S2;
Step 2.4: compute the black and white gray-level sets, where S(A) and S(B) denote the black and white gray-level sets respectively;
Step 2.5: compute the two peak gray values, where T1 and T2 are the two peak gray values;
Step 2.6: collect the valley values between T1 and T2 to obtain the valley threshold set B;
Step 2.7: sort the elements of the valley threshold set B by gray value in ascending order;
Step 2.8: traverse the sorted elements of B and stop when |Bi - T1| < 10; the current Bi is taken as the segmentation threshold T.
Further, in step 2, the wall region is segmented from the grayscale floor plan according to the segmentation threshold T by binarizing the gray values f(i, j) ∈ S2 against the segmentation threshold T.
Further, in step 2, the specific steps of filtering interfering elements according to connected-component area are as follows:
Step 2.9: apply morphological closing and opening to the segmented wall-region image, then measure the size of each connected component in the image;
Step 2.10: filter out the connected components whose area is below the area threshold; the area threshold is the average component area, T_A = (1/M)·Σ A_i, where A_i is the area of each connected component, M is the number of connected components and T_A is the average connected-component area.
Further, in step 3, the specific steps of generating the vector data structure of the walls by directional straight-line fitting are as follows:
Step 3.1: thin the segmented wall region with a fast parallel thinning algorithm and extract the wall skeleton;
Step 3.2: project the skeleton pixels horizontally and vertically to obtain the positions of the skeleton projection reference lines;
Step 3.3: correct the skeleton pixels onto the skeleton projection reference lines;
Step 3.4: define an eight-neighborhood model of a skeleton pixel, fit the wall center lines directionally according to the eight-neighborhood model, then measure the width of each wall segment by scanning along the center line, and build the vector data structure of the walls from the start coordinates, end coordinates and widths of the wall center lines.
Further, in step 3.4, in the defined eight-neighborhood model, P1 lies at the center; P2, P6, P8 and P4 lie above, below, to the left of and to the right of P1 respectively; and P9, P3, P7 and P5 lie at the upper-left, upper-right, lower-left and lower-right corners of P1 respectively. The specific steps of fitting the wall center lines directionally according to the eight-neighborhood model are as follows:
Step 3.4.1: if P1 is a skeleton point, start a new line segment at P1; if P4 is also a skeleton point while P2 and P6 are not, delete P1, otherwise keep P1; if P1 is not a skeleton point and the current segment already has a start point, end the segment at P1 and save it into the vectorized wall center-line data structure;
Step 3.4.2: move the scan point from P1 towards P4 and repeat step 3.4.1 until the whole picture has been scanned;
Step 3.4.3: scan the picture again; if P1 is a skeleton point, start a new line segment at P1; if P6 is also a skeleton point, delete P1, otherwise keep P1; if P1 is not a skeleton point and the current segment already has a start point, end the segment at P1 and save it into the vectorized wall center-line data structure;
Step 3.4.4: move the scan point from P1 towards P6 and repeat step 3.4.3 until the whole picture has been scanned; the resulting vectorized wall center lines are shown in Fig. 10.
Further, in step 3.4, the specific steps of measuring the width of each wall segment by scanning along the center line are as follows:
Step 3.4.5: for each center line of length L, scan outwards from the center line on both sides along its perpendicular direction simultaneously;
Step 3.4.6: on the scan path, count the number x of foreground pixels at the current perpendicular offset from the center line;
Step 3.4.7: if x > 50% L, add 1 to the one-sided wall-width count and continue scanning; if x < 50% L, stop scanning;
Step 3.4.8: return the width of the single wall, W_i = w1 + w2 + 1, where w1 and w2 are the one-sided widths;
Step 3.4.9: compute the mean wall width as (1/n)·Σ W_i, where W_i is the width of a single wall and n is the total number of walls.
The beneficial effects of the present invention are: (1) the objects of the method are raster floor plans, i.e. ordinary bitmaps; since the floor plans currently found on the market and on the internet are mostly raster floor plans, the method has broad applicability; (2) in the floor-plan preprocessing stage the method adds a removal step for interference information such as dimensions and icons that are common in floor plans, which reduces the difficulty of the subsequent wall identification and improves its accuracy; (3) the segmentation of the walls is based on the gray-level difference between the wall region and the background and uses an improved bimodal histogram method; the threshold selection takes into account the gray-level non-uniformity caused by damaged edge information, so walls and background are distinguished more reliably, and the segmented wall region is further corrected with connected-component and morphological processing, which improves the segmentation result; (4) when converting the walls to vector form, projection-based correction alleviates the center-line distortion and missing parts produced by conventional vectorization methods, and the horizontally and vertically oriented straight-line fitting guarantees the straightness of the wall center lines.
Brief description of the drawings
Fig. 1 is the flow chart of the recognition method of the invention;
Fig. 2 shows the smoothed gray-level histogram curve of the invention;
Fig. 3 shows the connected-component area statistics of the invention;
Fig. 4 shows the wall skeleton projection positions of the invention;
Fig. 5 shows the eight-neighborhood model of the invention;
Fig. 6 is a schematic diagram of measuring wall thickness by center-line scanning according to the invention;
Fig. 7 is an original, unprocessed floor plan;
Fig. 8 shows the result of removing floor-plan redundancy using the maximum wall contour according to the invention;
Fig. 9 shows the wall region segmented with the improved bimodal histogram method according to the invention;
Fig. 10 shows the vectorized wall center lines obtained according to the invention.
Specific embodiment
The technical solution of the present invention is described in detail below with reference to the accompanying drawings, but the scope of protection of the present invention is not limited to the embodiments.
As shown in Fig. 1, the method disclosed by the invention for automatically identifying walls in a floor plan comprises the following steps:
Step 1: analyze the position of the interference information in the floor plan, then use the maximum outer contour of the walls to remove the interference information from the floor plan;
Step 2: set a segmentation threshold according to the bimodal character of the wall gray-level histogram in the floor plan, segment the wall region from the grayscale floor plan with the segmentation threshold, and filter out interfering elements according to connected-component area;
Step 3: generate the vector data structure of the walls by directional straight-line fitting, thereby identifying the walls in the floor plan.
Further, in step 1, the interference information includes dimension annotations, caption information and internal text. The dimension annotations, marginal notes and similar elements lie outside the maximum outer contour of the walls, and the contours of these elements are not connected to one another and are independent of each other, as shown in Fig. 7, so they can be removed according to these positional characteristics.
Further, in step 1, the specific steps of removing the interference information in the floor plan with the maximum outer contour of the walls are as follows:
Step 1.1: perform edge detection on the color floor plan with the Canny operator, so as to eliminate the influence of the background color of the color floor plan on contour detection;
Step 1.2: find every contour in the color floor plan with the findContours(image, contours, hierarchy, mode, method, offset) function of the OpenCV library;
Step 1.3: remove the interference information with the contour-area restriction rule; this is possible because the main house region, as a whole, is bounded by the outermost detected contour, while the redundant information to be removed is scattered around it and its area is much smaller than that of the main house body;
Step 1.4: convert the color floor plan from which the interference information has been removed to a grayscale floor plan, as shown in Fig. 8. Before the grayscale conversion, the contrast of the color image is enhanced according to
g(i, j) = α·f(i, j) + β
where α is the gain parameter that controls contrast (contrast is increased when α > 1 and reduced when 0 < α < 1) and β is the bias parameter that controls brightness (brightness is increased when β > 0 and decreased when β < 0). To emphasize the difference between walls and background without unbalancing the color rendering of the image, the contrast gain α is set to the empirical value 2 and the brightness is left unchanged, i.e. β = 0.
Further, in step 1.3, the specific steps of removing the interference information with the contour-area restriction rule are as follows:
Step 1.3.1: traverse all detected contours, compute the area of each contour and find the contour with the largest area;
Step 1.3.2: keep the contour with the largest area as the main house contour and delete everything outside the main house contour;
Step 1.3.3: fill the interior of the main house contour, and set the pixels of the original color floor plan corresponding to the black background outside the main house contour to white. A code sketch of this preprocessing step is given below.
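The following is a minimal Python/OpenCV sketch of the step-1 preprocessing described above. It is an illustration under stated assumptions, not the patented implementation: the Canny thresholds, the OpenCV 4.x findContours signature and the file name in the usage comment are assumptions; only the gain α = 2, β = 0 comes from the text.

```python
import cv2
import numpy as np

def remove_interference(color_plan):
    """Step 1 (sketch): keep only the largest (main house) contour, whiten the rest,
    enhance contrast with g = alpha*f + beta, and convert to grayscale."""
    edges = cv2.Canny(color_plan, 50, 150)               # step 1.1; thresholds are illustrative
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # step 1.2 (OpenCV 4.x signature)
    largest = max(contours, key=cv2.contourArea)          # step 1.3.1: largest-area contour

    # Steps 1.3.2-1.3.3: fill the main house contour and whiten everything outside it.
    mask = np.zeros(color_plan.shape[:2], dtype=np.uint8)
    cv2.drawContours(mask, [largest], -1, 255, thickness=cv2.FILLED)
    cleaned = color_plan.copy()
    cleaned[mask == 0] = (255, 255, 255)

    # Step 1.4: contrast enhancement with alpha = 2, beta = 0 (from the text), then grayscale.
    enhanced = cv2.convertScaleAbs(cleaned, alpha=2.0, beta=0)
    return cv2.cvtColor(enhanced, cv2.COLOR_BGR2GRAY)

# Usage (file name is hypothetical):
# gray_plan = remove_interference(cv2.imread("floor_plan.png"))
```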
Further, as shown in Fig. 2, the gray-level curve of a floor plan has more than two peaks, so a plain bimodal algorithm cannot be used for segmentation directly. The pixels of a floor plan are concentrated mainly in two regions: the region dominated by the wall gray levels, which forms an obvious peak in the low-gray ("black") part, and the region dominated by the background, which forms another obvious peak in the high-gray ("white") part. The remaining middle gray levels also produce peaks, but compared with the two obvious peaks their amplitude is negligible, so they can simply be regarded as the valley region between the two main peaks. In general, the gray-level histogram of a floor plan therefore satisfies the bimodal property, and the calculation only needs to find the maximum peaks T1 and T2 of the black and white regions and the valley value T between them. Accordingly, in step 2, the specific steps of setting the segmentation threshold according to the bimodal character of the wall gray-level histogram in the floor plan are as follows:
Step 2.1: smooth the gray-level histogram of the grayscale floor plan with a double smoothing algorithm to eliminate spurious spike peaks;
Step 2.2: collect the gray-level information set S2 of the smoothed grayscale floor plan;
Step 2.3: compute the global average gray value from S2 as the mean (1/N)·Σ f(i, j), where N is the number of elements in S2 and f(i, j) is the gray value of an element of S2;
Step 2.4: compute the black and white gray-level sets, where S(A) and S(B) denote the black and white gray-level sets respectively;
Step 2.5: compute the two peak gray values, where T1 and T2 are the two peak gray values. The valley threshold cannot simply be taken as the minimum between the two peaks; a valley value close to the wall peak T1 should be chosen so that the wall part is separated out as completely as possible. Moreover, since the edge information of the wall image may be damaged, making the gray levels non-uniform, the gray range adjacent to T1 may also belong to the walls;
Step 2.6: collect the valley values between T1 and T2 to obtain the valley threshold set B;
Step 2.7: sort the elements of the valley threshold set B by gray value in ascending order;
Step 2.8: traverse the sorted elements of B and stop when |Bi - T1| < 10; the current Bi is taken as the segmentation threshold T. A code sketch of this threshold selection is given below.
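A minimal Python sketch of the threshold selection, under stated assumptions: the double smoothing is approximated by two moving-average passes, the dark/bright split is taken about the global mean gray value, and the window size and the fallback return value are illustrative choices that are not specified in the patent.

```python
import numpy as np

def select_threshold(gray):
    """Improved bimodal-histogram threshold selection (steps 2.1-2.8), as a sketch."""
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))

    # Step 2.1: "double smoothing", approximated here by two moving-average passes.
    kernel = np.ones(5) / 5.0                       # window size is an assumption
    smooth = np.convolve(hist, kernel, mode="same")
    smooth = np.convolve(smooth, kernel, mode="same")

    # Step 2.3: global average gray value.
    t_avg = int(gray.mean())

    # Steps 2.4-2.5: dark/bright halves about the mean and their peaks T1, T2.
    t1 = int(np.argmax(smooth[:t_avg]))             # dark ("black") peak
    t2 = t_avg + int(np.argmax(smooth[t_avg:]))     # bright ("white") peak

    # Step 2.6: local minima (valley values) between T1 and T2.
    valleys = [g for g in range(t1 + 1, t2)
               if smooth[g] <= smooth[g - 1] and smooth[g] <= smooth[g + 1]]

    # Steps 2.7-2.8: first valley, in ascending gray order, within 10 levels of T1.
    for b in sorted(valleys):
        if abs(b - t1) < 10:
            return b
    return (t1 + t2) // 2                           # fallback, not part of the patent
```

The binarization of the next step would then presumably keep the dark side of this threshold, e.g. wall = (gray <= T) in NumPy terms.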
Further, in step 2, the wall region is segmented from the grayscale floor plan according to the segmentation threshold T by binarizing the gray values f(i, j) ∈ S2 against T: pixels on the dark (wall) side of T are kept as wall region and the remaining pixels are treated as background.
Further, the thresholding result shows that the wall part is separated from the image, but it cannot be ruled out that a small part of the background, whose gray values are close to those of the walls, is wrongly segmented as target as well. Observation shows that the wall part consists of one or a few connected regions, whereas the interference introduced here mostly forms small isolated regions that are much smaller than the wall region in both width and area, so these small interfering regions can be filtered out with morphological processing and connected-component areas, further correcting the wall region. Accordingly, in step 2, the specific steps of filtering interfering elements according to connected-component area are as follows:
Step 2.9: apply morphological closing and opening to the segmented wall-region image, then measure the size of each connected component in the image;
Step 2.10: filter out the connected components whose area is below the area threshold; the area threshold is the average component area, T_A = (1/M)·Σ A_i, where A_i is the area of each connected component, M is the number of connected components and T_A is the average connected-component area. A code sketch of this cleanup follows.
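A minimal Python/OpenCV sketch of this cleanup, assuming the binary 8-bit wall mask produced by the thresholding step; the 3×3 structuring element is an assumed size, and cv2.connectedComponentsWithStats supplies the component areas used for the mean-area threshold T_A.

```python
import cv2
import numpy as np

def filter_small_components(wall_mask):
    """Steps 2.9-2.10 (sketch): morphological closing/opening, then drop connected
    components whose area is below the mean component area T_A."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))   # size is illustrative
    closed = cv2.morphologyEx(wall_mask, cv2.MORPH_CLOSE, kernel)
    opened = cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)

    num, labels, stats, _ = cv2.connectedComponentsWithStats(opened, connectivity=8)
    if num <= 1:
        return opened                                # nothing but background
    areas = stats[1:, cv2.CC_STAT_AREA]              # skip label 0 (background)
    t_a = areas.mean()                               # area threshold T_A = mean area

    cleaned = np.zeros_like(opened)
    for label in range(1, num):
        if stats[label, cv2.CC_STAT_AREA] >= t_a:    # keep components at or above T_A
            cleaned[labels == label] = 255
    return cleaned
```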
Further, in step 3, the specific steps of generating the vector data structure of the walls by directional straight-line fitting are as follows:
Step 3.1: thin the segmented wall region with a fast parallel thinning algorithm and extract the wall skeleton;
Step 3.2: project the skeleton pixels horizontally and vertically to obtain the positions of the skeleton projection reference lines, as shown in Fig. 4;
Step 3.3: correct the skeleton pixels onto the skeleton projection reference lines (a sketch of this projection correction is given below);
Step 3.4: define an eight-neighborhood model of a skeleton pixel, as shown in Fig. 5, fit the wall center lines directionally according to the eight-neighborhood model, then measure the width of each wall segment by scanning along the center line, and build the vector data structure of the walls from the start coordinates, end coordinates and widths of the wall center lines.
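The patent does not spell out the projection correction of steps 3.2-3.3 in detail, so the following Python sketch is only one plausible reading: rows and columns that collect many skeleton pixels are taken as reference lines, and nearby skeleton pixels are snapped onto them. The parameters min_votes and tol are hypothetical.

```python
import numpy as np

def snap_to_reference_lines(skeleton, min_votes=20, tol=3):
    """Steps 3.2-3.3 (one plausible reading): rows/columns that collect many skeleton
    pixels become reference lines; nearby skeleton pixels are snapped onto them."""
    ys, xs = np.nonzero(skeleton)
    row_votes = np.bincount(ys, minlength=skeleton.shape[0])
    col_votes = np.bincount(xs, minlength=skeleton.shape[1])
    ref_rows = np.flatnonzero(row_votes >= min_votes)    # horizontal reference lines
    ref_cols = np.flatnonzero(col_votes >= min_votes)    # vertical reference lines

    corrected = np.zeros_like(skeleton)
    for y, x in zip(ys, xs):
        ny, nx = y, x
        if ref_rows.size and np.abs(ref_rows - y).min() <= tol:
            ny = int(ref_rows[np.abs(ref_rows - y).argmin()])   # snap to nearest row
        if ref_cols.size and np.abs(ref_cols - x).min() <= tol:
            nx = int(ref_cols[np.abs(ref_cols - x).argmin()])   # snap to nearest column
        corrected[ny, nx] = 255
    return corrected
```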
Further, in step 3.1, the fast parallel thinning algorithm is a modified thinning algorithm; its specific steps are as follows:
Step 3.1.1: binarize the wall-region image;
Step 3.1.2: for each target pixel with value P1 = 1, test whether the following four conditions hold simultaneously:
Condition 1: 2 ≤ N(P1) ≤ 6;
Condition 2: B(P1) ∈ {65, 5, 20, 80, 13, 22, 52, 133, 141, 54};
Condition 3: P2·P4·P6 = 0;
Condition 4: P4·P6·P8 = 0;
if the four conditions of step 3.1.2 are all met, delete P1 (set P1 = 0);
Step 3.1.3: for each target pixel with value P1 = 1, test whether the following four conditions hold simultaneously:
Condition 1: 2 ≤ N(P1) ≤ 6;
Condition 2: B(P1) ∈ {65, 5, 20, 80, 13, 22, 52, 133, 141, 54};
Condition 3: P2·P4·P8 = 0;
Condition 4: P2·P6·P8 = 0;
if the four conditions of step 3.1.3 are all met, delete P1 (set P1 = 0).
Here N(P1) denotes the number of non-zero pixels in the eight-neighborhood of P1, and B(P1) is the binary code of the eight-neighborhood of the target point P1. The values in the set of condition 2 are the binary codes of the neighborhood pixel combinations that do not satisfy the thinning condition S(P1) = 1 and would therefore be missed for deletion.
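The exact bit ordering used to form B(P1), and therefore the code set of condition 2, is not given here, so the modified test cannot be reproduced faithfully. For orientation, the Python sketch below implements the standard Zhang-Suen parallel thinning iteration on which such algorithms are based; it uses the usual transition-count test A(P1) = 1 in place of the patent's condition 2, while conditions 1, 3 and 4 match the text above.

```python
import numpy as np

def zhang_suen_thinning(binary):
    """Standard Zhang-Suen parallel thinning (foreground = non-zero)."""
    img = (binary > 0).astype(np.uint8)

    def neighbours(y, x):
        # [P2, P3, P4, P5, P6, P7, P8, P9], clockwise from the pixel above P1.
        return [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    p = neighbours(y, x)
                    n = sum(p)                                    # N(P1): non-zero neighbours
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1     # A(P1): 0 -> 1 transitions
                            for i in range(8))
                    if not (2 <= n <= 6 and a == 1):
                        continue
                    if step == 0 and p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0:
                        to_delete.append((y, x))                  # P2*P4*P6 = 0, P4*P6*P8 = 0
                    if step == 1 and p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0:
                        to_delete.append((y, x))                  # P2*P4*P8 = 0, P2*P6*P8 = 0
            for y, x in to_delete:
                img[y, x] = 0
                changed = True
    return img * 255
```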
Further, in step 3.4, in the defined eight-neighborhood model, P1 lies at the center; P2, P6, P8 and P4 lie above, below, to the left of and to the right of P1 respectively; and P9, P3, P7 and P5 lie at the upper-left, upper-right, lower-left and lower-right corners of P1 respectively. The specific steps of fitting the wall center lines directionally according to the eight-neighborhood model are as follows (a simplified code sketch follows the list):
Step 3.4.1: if P1 is a skeleton point, start a new line segment at P1; if P4 is also a skeleton point while P2 and P6 are not, delete P1, otherwise keep P1; if P1 is not a skeleton point and the current segment already has a start point, end the segment at P1 and save it into the vectorized wall center-line data structure;
Step 3.4.2: move the scan point from P1 towards P4 and repeat step 3.4.1 until the whole picture has been scanned;
Step 3.4.3: scan the picture again; if P1 is a skeleton point, start a new line segment at P1; if P6 is also a skeleton point, delete P1, otherwise keep P1; if P1 is not a skeleton point and the current segment already has a start point, end the segment at P1 and save it into the vectorized wall center-line data structure;
Step 3.4.4: move the scan point from P1 towards P6 and repeat step 3.4.3 until the whole picture has been scanned.
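A simplified Python sketch of the directional fitting: horizontal runs of skeleton pixels are collected in a row-by-row pass (scan point moving towards P4) and vertical runs in a column-by-column pass (towards P6). It omits the patent's point-deletion bookkeeping, and the minimum segment length min_len is a hypothetical parameter.

```python
import numpy as np

def extract_centerlines(skeleton, min_len=5):
    """Simplified directional fitting: horizontal and vertical runs of skeleton pixels
    become center-line segments ((y0, x0), (y1, x1)). min_len is a hypothetical filter."""
    sk = skeleton > 0
    segments = []

    # Horizontal pass (scan point moving from P1 towards P4, i.e. to the right).
    for y in range(sk.shape[0]):
        x = 0
        while x < sk.shape[1]:
            if sk[y, x]:
                start = x
                while x < sk.shape[1] and sk[y, x]:
                    x += 1
                if x - start >= min_len:
                    segments.append(((y, start), (y, x - 1)))
            else:
                x += 1

    # Vertical pass (scan point moving from P1 towards P6, i.e. downwards).
    for x in range(sk.shape[1]):
        y = 0
        while y < sk.shape[0]:
            if sk[y, x]:
                start = y
                while y < sk.shape[0] and sk[y, x]:
                    y += 1
                if y - start >= min_len:
                    segments.append(((start, x), (y - 1, x)))
            else:
                y += 1
    return segments
```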
Further, in step 3.4, the specific steps of measuring the width of each wall segment by scanning along the center line are as follows (a code sketch is given after the list):
Step 3.4.5: for each center line of length L, scan outwards from the center line on both sides along its perpendicular direction simultaneously;
Step 3.4.6: on the scan path, count the number x of foreground pixels at the current perpendicular offset from the center line;
Step 3.4.7: if x > 50% L, add 1 to the one-sided wall-width count and continue scanning; if x < 50% L, stop scanning;
Step 3.4.8: return the width of the single wall, W_i = w1 + w2 + 1, where w1 and w2 are the one-sided widths;
Step 3.4.9: compute the mean wall width as (1/n)·Σ W_i, where W_i is the width of a single wall and n is the total number of walls.
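A minimal Python sketch of the width measurement for a horizontal center-line segment, as one reading of steps 3.4.5-3.4.8: at each perpendicular offset the number of wall pixels spanning the segment is compared with 50% of the segment length. The segment representation matches the sketch above; vertical segments would be handled symmetrically.

```python
import numpy as np

def wall_width(wall_mask, segment):
    """Steps 3.4.5-3.4.8 (sketch) for a horizontal center-line segment: scan outwards
    on both sides and count how many offset rows still look like wall."""
    (y, x0), (_, x1) = segment
    length = x1 - x0 + 1
    mask = wall_mask > 0

    def side_width(direction):
        width, offset = 0, direction
        while 0 <= y + offset < mask.shape[0]:
            x = int(mask[y + offset, x0:x1 + 1].sum())  # wall pixels over the segment span
            if x > 0.5 * length:                        # x > 50% L: still inside the wall
                width += 1
                offset += direction
            else:                                       # x < 50% L: stop scanning
                break
        return width

    w1, w2 = side_width(-1), side_width(+1)
    return w1 + w2 + 1                                  # W_i = w1 + w2 + 1

# Mean wall width (step 3.4.9), given horizontal segments from the previous sketch:
# widths = [wall_width(mask, s) for s in segments if s[0][0] == s[1][0]]
# w_mean = sum(widths) / len(widths)
```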
As described above, although the present invention has been shown and described with reference to certain preferred embodiments, this must not be construed as limiting the invention itself. Various changes in form and detail may be made without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method for automatically identifying walls in a floor plan, characterized by comprising the following steps:
Step 1: analyze the position of the interference information in the floor plan, then use the maximum outer contour of the walls to remove the interference information from the floor plan;
Step 2: set a segmentation threshold according to the bimodal character of the wall gray-level histogram in the floor plan, segment the wall region from the grayscale floor plan with the segmentation threshold, and filter out interfering elements according to connected-component area;
Step 3: generate the vector data structure of the walls by directional straight-line fitting, thereby identifying the walls in the floor plan.
2. The method for automatically identifying walls in a floor plan according to claim 1, characterized in that, in step 1, the interference information includes dimension annotations, caption information and internal text.
3. The method for automatically identifying walls in a floor plan according to claim 1, characterized in that, in step 1, the specific steps of removing the interference information in the floor plan with the maximum outer contour of the walls are:
Step 1.1: perform edge detection on the color floor plan with the Canny operator, so as to eliminate the influence of the background color of the color floor plan on contour detection;
Step 1.2: find every contour in the color floor plan with the findContours function of the OpenCV library;
Step 1.3: remove the interference information with a contour-area restriction rule;
Step 1.4: convert the color floor plan from which the interference information has been removed to a grayscale floor plan.
4. The method for automatically identifying walls in a floor plan according to claim 3, characterized in that, in step 1.3, the specific steps of removing the interference information with the contour-area restriction rule are:
Step 1.3.1: traverse all detected contours, compute the area of each contour and find the contour with the largest area;
Step 1.3.2: keep the contour with the largest area as the main house contour and delete everything outside the main house contour;
Step 1.3.3: fill the interior of the main house contour, and set the pixels of the original color floor plan corresponding to the black background outside the main house contour to white.
5. The method for automatically identifying walls in a floor plan according to claim 3, characterized in that, in step 2, the specific steps of setting the segmentation threshold according to the bimodal character of the wall gray-level histogram in the floor plan are:
Step 2.1: smooth the gray-level histogram of the grayscale floor plan with a double smoothing algorithm to eliminate spurious spike peaks;
Step 2.2: collect the gray-level information set S2 of the smoothed grayscale floor plan;
Step 2.3: compute the global average gray value from S2 as the mean (1/N)·Σ f(i, j), where N is the number of elements in S2 and f(i, j) is the gray value of an element of S2;
Step 2.4: compute the black and white gray-level sets, where S(A) and S(B) denote the black and white gray-level sets respectively;
Step 2.5: compute the two peak gray values, where T1 and T2 are the two peak gray values;
Step 2.6: collect the valley values between T1 and T2 to obtain the valley threshold set B;
Step 2.7: sort the elements of the valley threshold set B by gray value in ascending order;
Step 2.8: traverse the sorted elements of B and stop when |Bi - T1| < 10; the current Bi is the segmentation threshold T.
6. The method for automatically identifying walls in a floor plan according to claim 5, characterized in that, in step 2, the wall region is segmented from the grayscale floor plan according to the segmentation threshold T, where f(i, j) ∈ S2 and T is the segmentation threshold.
7. The method for automatically identifying walls in a floor plan according to claim 6, characterized in that, in step 2, the specific steps of filtering interfering elements according to connected-component area are:
Step 2.9: apply morphological closing and opening to the segmented wall-region image, then measure the size of each connected component in the image;
Step 2.10: filter out the connected components whose area is below the area threshold, the area threshold being the average component area T_A = (1/M)·Σ A_i, where A_i is the area of each connected component, M is the number of connected components and T_A is the average connected-component area.
8. The method for automatically identifying walls in a floor plan according to claim 1, characterized in that, in step 3, the specific steps of generating the vector data structure of the walls by directional straight-line fitting are:
Step 3.1: thin the segmented wall region with a fast parallel thinning algorithm and extract the wall skeleton;
Step 3.2: project the skeleton pixels horizontally and vertically to obtain the positions of the skeleton projection reference lines;
Step 3.3: correct the skeleton pixels onto the skeleton projection reference lines;
Step 3.4: define an eight-neighborhood model of a skeleton pixel, fit the wall center lines directionally according to the eight-neighborhood model, then measure the width of each wall segment by scanning along the center line, and build the vector data structure of the walls from the start coordinates, end coordinates and widths of the wall center lines.
9. The method for automatically identifying walls in a floor plan according to claim 8, characterized in that, in step 3.4, in the defined eight-neighborhood model, P1 lies at the center, P2, P6, P8 and P4 lie above, below, to the left of and to the right of P1 respectively, and P9, P3, P7 and P5 lie at the upper-left, upper-right, lower-left and lower-right corners of P1 respectively, and the specific steps of fitting the wall center lines directionally according to the eight-neighborhood model are:
Step 3.4.1: if P1 is a skeleton point, start a new line segment at P1; if P4 is also a skeleton point while P2 and P6 are not, delete P1, otherwise keep P1; if P1 is not a skeleton point and the current segment already has a start point, end the segment at P1 and save it into the vectorized wall center-line data structure;
Step 3.4.2: move the scan point from P1 towards P4 and repeat step 3.4.1 until the whole picture has been scanned;
Step 3.4.3: scan the picture again; if P1 is a skeleton point, start a new line segment at P1; if P6 is also a skeleton point, delete P1, otherwise keep P1; if P1 is not a skeleton point and the current segment already has a start point, end the segment at P1 and save it into the vectorized wall center-line data structure;
Step 3.4.4: move the scan point from P1 towards P6 and repeat step 3.4.3 until the whole picture has been scanned.
10. The method for automatically identifying walls in a floor plan according to claim 9, characterized in that, in step 3.4, the specific steps of measuring the width of each wall segment by scanning along the center line are:
Step 3.4.5: for each center line of length L, scan outwards from the center line on both sides along its perpendicular direction simultaneously;
Step 3.4.6: on the scan path, count the number x of foreground pixels at the current perpendicular offset from the center line;
Step 3.4.7: if x > 50% L, add 1 to the one-sided wall-width count and continue scanning; if x < 50% L, stop scanning;
Step 3.4.8: return the width of the single wall, W_i = w1 + w2 + 1, where w1 and w2 are the one-sided widths;
Step 3.4.9: compute the mean wall width as (1/n)·Σ W_i, where W_i is the width of a single wall and n is the total number of walls.
CN201910460783.4A 2019-05-30 2019-05-30 Automatic wall identification method in house type graph Active CN110197153B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910460783.4A CN110197153B (en) 2019-05-30 2019-05-30 Automatic wall identification method in house type graph


Publications (2)

Publication Number Publication Date
CN110197153A true CN110197153A (en) 2019-09-03
CN110197153B CN110197153B (en) 2023-05-02

Family

ID=67753392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910460783.4A Active CN110197153B (en) 2019-05-30 2019-05-30 Automatic wall identification method in house type graph

Country Status (1)

Country Link
CN (1) CN110197153B (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1083413A (en) * 1996-09-06 1998-03-31 Ricoh Co Ltd Method and device for recognizing building plan
CN103971098A (en) * 2014-05-19 2014-08-06 北京明兰网络科技有限公司 Method for recognizing wall in house type image and method for automatically correcting length ratio of house type image
CN104732235A (en) * 2015-03-19 2015-06-24 杭州电子科技大学 Vehicle detection method for eliminating night road reflective interference
CN105279787A (en) * 2015-04-03 2016-01-27 北京明兰网络科技有限公司 Method for generating three-dimensional (3D) building model based on photographed house type image identification
CN107122528A (en) * 2017-04-13 2017-09-01 广州乐家数字科技有限公司 A kind of floor plan parametrization can edit modeling method again
CN107274486A (en) * 2017-06-26 2017-10-20 广州天翌云信息科技有限公司 A kind of model 3D effect map generalization method
CN107330979A (en) * 2017-06-30 2017-11-07 电子科技大学中山学院 Vector diagram generation method and device for building house type and terminal
CN108399644A (en) * 2018-02-05 2018-08-14 北京居然之家家居连锁集团有限公司 A kind of wall images recognition methods and its device
CN108961152A (en) * 2018-05-30 2018-12-07 链家网(北京)科技有限公司 Plane house type drawing generating method and device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LU T. ET AL: "3D Reconstruction of Detailed Buildings From", Computer-Aided Design & Applications *
YIN X. ET AL: "Generating 3D Building Models From Architectural", Computer Graphics & Applications, IEEE *
JIANG ZHOU (江州): "Research on floor plan recognition based on shape and edge features", China Master's Theses Full-text Database, Information Science and Technology *
WANG WENHUA (王文华) ET AL: "An analysis of methods for drawing floor plans with AutoCAD", Journal of Shangqiu Vocational and Technical College *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111104879B (en) * 2019-12-09 2020-11-27 贝壳找房(北京)科技有限公司 Method and device for identifying house functions, readable storage medium and electronic equipment
CN111104879A (en) * 2019-12-09 2020-05-05 贝壳技术有限公司 Method and device for identifying house functions, readable storage medium and electronic equipment
CN112001997B (en) * 2020-06-23 2022-02-18 北京城市网邻信息技术有限公司 Furniture display method and device
CN112001997A (en) * 2020-06-23 2020-11-27 北京城市网邻信息技术有限公司 Furniture display method and device
CN111754526B (en) * 2020-06-23 2023-06-30 广东博智林机器人有限公司 House type graph dividing method, household type graph classifying method, household type graph dividing device, household type graph dividing equipment and storage medium
CN112926392A (en) * 2021-01-26 2021-06-08 杭州聚秀科技有限公司 Building plane drawing room identification method based on contour screening
CN113592976A (en) * 2021-07-27 2021-11-02 美智纵横科技有限责任公司 Map data processing method and device, household appliance and readable storage medium
CN115082850A (en) * 2022-05-23 2022-09-20 哈尔滨工业大学 Template support safety risk identification method based on computer vision
CN115714732A (en) * 2022-11-03 2023-02-24 巨擎网络科技(济南)有限公司 Whole-house wireless network coverage condition detection method and equipment
CN115714732B (en) * 2022-11-03 2024-01-26 巨擎网络科技(济南)有限公司 Method and equipment for detecting coverage condition of whole-house wireless network
CN116188480A (en) * 2023-04-23 2023-05-30 安徽同湃特机器人科技有限公司 Calculation method of AGV traveling path point during ceiling operation of spraying robot
CN116188480B (en) * 2023-04-23 2023-07-18 安徽同湃特机器人科技有限公司 Calculation method of AGV traveling path point during ceiling operation of spraying robot
CN116993462A (en) * 2023-09-26 2023-11-03 浙江小牛哥科技有限公司 Online automatic quotation system based on digital home decoration

Also Published As

Publication number Publication date
CN110197153B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN110197153A (en) Wall automatic identifying method in a kind of floor plan
CN104751142B (en) A kind of natural scene Method for text detection based on stroke feature
Parker et al. An approach to license plate recognition
CN109886896B (en) Blue license plate segmentation and correction method
CN110717489B (en) Method, device and storage medium for identifying text region of OSD (on Screen display)
CN111179243A (en) Small-size chip crack detection method and system based on computer vision
CN108416766B (en) Double-side light-entering type light guide plate defect visual detection method
US9135722B2 (en) Perceptually lossless color compression
CN109784344A (en) A kind of non-targeted filtering method of image for ground level mark identification
CN108830832A (en) A kind of plastic barrel surface defects detection algorithm based on machine vision
CN102629322B (en) Character feature extraction method based on stroke shape of boundary point and application thereof
CN105374015A (en) Binary method for low-quality document image based on local contract and estimation of stroke width
Paunwala et al. A novel multiple license plate extraction technique for complex background in Indian traffic conditions
CN107633253B (en) Accurate extraction and positioning method based on rectangular surrounding frame in noisy image
CN102842037A (en) Method for removing vehicle shadow based on multi-feature fusion
He et al. A new automatic extraction method of container identity codes
CN105139410B (en) The brain tumor MRI image dividing method projected using aerial cross sectional
CN114972575A (en) Linear fitting algorithm based on contour edge
CN116052152A (en) License plate recognition system based on contour detection and deep neural network
Ingle et al. Adaptive thresholding to robust image binarization for degraded document images
CN110047041B (en) Space-frequency domain combined traffic monitoring video rain removing method
Ali et al. Robust window detection from 3d laser scanner data
CN107122757A (en) A kind of unstructured road detection method of real-time robust
CN109584212A (en) A kind of SLM powder bed powder laying image scratch defect identification method based on MATLAB
CN113284158B (en) Image edge extraction method and system based on structural constraint clustering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant