CN108804815A - A kind of method and apparatus assisting in identifying wall in CAD based on deep learning - Google Patents
- Publication number
- CN108804815A (application CN201810587788.9A)
- Authority
- CN
- China
- Prior art keywords
- wall
- cad
- deep learning
- floor plan
- file data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/13—Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
Abstract
The invention discloses a method for assisting wall identification in CAD based on deep learning, comprising the following steps: obtaining and parsing the CAD file data corresponding to a floor plan; obtaining a first wall identified from the CAD file data; identifying the floor plan corresponding to the CAD file data using a wall identification model to obtain a second wall; cross-validating the first wall against the second wall to obtain the final wall; the wall identification model is trained on a deep learning network. The invention also discloses a device for assisting wall identification in CAD based on deep learning. The method and device can improve the accuracy of wall identification.
Description
Technical field
The invention belongs to the technical field of architectural interior design, and in particular relates to a method and apparatus for assisting wall identification in CAD based on deep learning.
Background art
At present, architectural interior space design is mainly drawn with CAD (Computer Aided Design). For architectural interior design drawings completed in CAD, identifying the interior walls of the building is of great significance for floor-area calculation, heating design, gas-supply design, ventilation and air-conditioning system design, and so on.
Most existing wall identification methods identify walls from the image features of the design drawing. Specifically: first, the straight lines in the drawing are matched against the geometry of interfering objects such as furniture, a line whose similarity reaches a preset threshold is considered an interfering object, and it is deleted; then line-segment matching is performed on the remaining set of lines to generate multiple pairs of parallel lines, and collinear lines with small gaps are merged; finally, walls are generated from the paired parallel lines according to a wall-width range. This kind of wall identification method is constrained by line segments and geometry, its calculation process is complex, and its computational efficiency is low. In addition, such methods have difficulty rejecting abstract or complex geometry, which causes large deviations in the identified walls and low identification accuracy.
The patent application with publication number CN103971098A discloses a method for identifying the walls in a floor plan. The method includes: preprocessing the floor plan; detecting the outer contour of the floor plan; processing the floor plan with a wall threshold segmentation method to obtain a binary map; applying erosion, dilation, and edge detection to the binary map; and applying a Hough transform to the edge image to obtain line coordinate information, from which the coordinates of the walls are obtained. The segmentation threshold T of the wall threshold segmentation method is determined from the average gray value of the walls and the average gray value of the region outside the walls. This method does not exclude interfering objects such as the furniture in the floor plan, so its identification accuracy is low.
The patent application with publication number CN106156438A discloses a wall identification method and device. The wall identification method includes: obtaining a floor plan uploaded by a user; rasterizing the floor plan; showing the rasterized floor plan to the user; obtaining the location points, selected by the user, that belong to wall areas; and identifying the wall areas in the rasterized floor plan according to the information of the location points. This wall identification method identifies building walls through simple interaction between a web front end and a background server. Although its calculation is simple and its accuracy is better, it requires the web front end (the user) to interact with the background server and cannot be automated, and the user's selection may contain errors; it therefore also suffers from low identification accuracy.
Summary of the invention
The object of the present invention is to provide a method and apparatus for assisting wall identification in CAD based on deep learning, so as to solve the problem of low wall identification accuracy.

To solve the above problem, the present invention provides the following technical solutions:
In one aspect, an embodiment of the present invention provides a method for assisting wall identification in CAD based on deep learning, comprising the following steps:

obtaining and parsing the CAD file data corresponding to a floor plan;

obtaining a first wall identified from the CAD file data;

identifying the floor plan corresponding to the CAD file data using a wall identification model to obtain a second wall;

cross-validating the first wall against the second wall to obtain the final wall;

wherein the wall identification model is trained on a deep learning network.
In another aspect, an embodiment of the present invention provides a device for assisting wall identification in CAD based on deep learning, comprising: one or more processors, a memory, and one or more computer programs stored in the memory and executable on the one or more processors, wherein the one or more processors, when executing the one or more computer programs, implement the steps of the above method.
In the method and apparatus for assisting wall identification in CAD based on deep learning provided by the embodiments of the present invention, the constructed wall identification model identifies the floor plan to obtain a second wall, and the second wall is used to cross-validate the first wall obtained for the same floor plan, thereby obtaining the final wall. Because the wall identification model has a strong capacity to learn and memorize, identifying walls with it can exclude interference such as the large amount of furniture in the floor plan, yielding a second wall almost free of interference; when the second wall is used to cross-validate the first wall, the interference in the first wall can be rejected to obtain the final wall, thereby improving the accuracy of the wall identification result.
Description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort. Throughout the drawings, the same reference numbers are used to refer to the same parts. In the drawings:
Fig. 1 is a flow chart of a method for assisting wall identification in CAD based on deep learning according to an embodiment of the invention;
Fig. 2 is a structure diagram of the deep neural network constructed in another embodiment of the invention;
Fig. 3 is a flow chart of a method for assisting wall identification in CAD based on deep learning according to another embodiment of the invention;
Fig. 4 is a floor plan after rasterization and scaling in another embodiment of the invention;
Fig. 5 is the first-wall figure obtained by applying an existing wall identification method to the floor plan shown in Fig. 4;
Fig. 6 is the final-wall figure obtained by cross-validating the first wall shown in Fig. 5 against the second wall obtained by applying the wall identification model to the floor plan shown in Fig. 4.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
When walls are identified with existing methods, interference such as furniture cannot be excluded accurately, so the identified walls contain some interference and the identification accuracy is low.
To improve wall identification accuracy, an embodiment of the present invention provides a method for assisting wall identification in CAD based on deep learning which, as shown in Fig. 1, comprises the following steps:
S101: obtain and parse the CAD file data corresponding to a floor plan.

CAD file data refer to data stored in formats such as dwg and dxf; after parsing and display, one file may correspond to one or more floor plans.
To facilitate preprocessing of the CAD file data, they need to be parsed into a data form convenient for rasterization, which is generally a vector plot represented by vector data. Specifically, the parsed data comprise straight lines, arcs, and circles; data structures of other formats, such as polylines, splines, and ellipses, are approximately converted into straight lines and arcs. A straight line is represented by the two-tuple formed by its start and end coordinates, ((start coordinate), (end coordinate)); an arc is represented by the triple formed by its start coordinate, end coordinate, and radian, ((start coordinate), (end coordinate), radian); a circle is represented by the two-tuple formed by its center coordinate and radius, ((center coordinate), radius).
S102: obtain a first wall identified from the CAD file data.

The first wall is identified from the CAD file data with the prior art and stored in the memory; when it is needed, it is retrieved directly.
Specifically, the first wall may be obtained with the method disclosed in application publication CN103971098A, with the method disclosed in application publication CN106156438A, or with the following method: first, the straight lines in the drawing are matched against the geometry of interfering objects such as furniture, a line whose similarity reaches a preset threshold is considered an interfering object, and it is deleted; then line-segment matching is performed on the remaining set of lines to generate multiple pairs of parallel lines, and collinear lines with small gaps are merged; finally, walls are generated from the paired parallel lines according to a wall-width range.
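The parallel-line pairing step of this prior-art method can be sketched for the special case of horizontal segments (the wall-width range is an assumed example value; a real implementation must also handle arbitrary orientations and the similarity-based deletion of interfering objects):

```python
def pair_parallel(horizontals, min_width, max_width):
    """Pair horizontal segments whose vertical distance falls within
    the wall-width range; each pair of parallel lines yields one wall.
    A horizontal segment is ((x1, y), (x2, y))."""
    walls = []
    for i, a in enumerate(horizontals):
        for b in horizontals[i + 1:]:
            width = abs(a[0][1] - b[0][1])
            if min_width <= width <= max_width:
                walls.append((a, b))
    return walls

walls = pair_parallel([((0, 0), (10, 0)), ((0, 2), (10, 2)),
                       ((0, 50), (10, 50))], 1, 5)
```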
Among these methods, some do not exclude interference at all, which makes the wall identification result inaccurate. Others contain a step for excluding interference, but the exclusion is inaccurate: abstract and complex interfering structures are difficult to reject, the exclusion is incomplete, and the identification result still contains some interference. Statistically, therefore, the walls identified by existing methods contain some interference, and the wall results are inaccurate.
S103: identify the floor plan corresponding to the CAD file data using a wall identification model to obtain a second wall.

The wall identification model is trained with a large number of training samples on a deep learning network; specifically, it is the prediction model obtained by training a deep neural network. When a floor plan to be identified is input into the wall identification model, a one-dimensional vector is obtained as the result, indicating whether the image content feature corresponding to the vector is a wall.
A deep neural network is a neural network that captures the content features of two-dimensional data and is mainly applied in the field of image recognition. This embodiment therefore requires converting the CAD file data into an image, i.e., data in the form of a two-dimensional array, so that two-dimensional content features can be extracted with the deep neural network.
Specifically, rasterization is applied to the parsed CAD file data to obtain the floor plan corresponding to the CAD file data. As mentioned above, the parsed CAD file data form a vector plot containing vector data; rasterizing the parsed CAD file data converts the vector plot into a bitmap. A bitmap is a recognizable image formed from a series of pixels and is represented and stored in the form of a two-dimensional array.
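A minimal pure-Python sketch of the rasterization step, converting vector line segments into a two-dimensional pixel array (the sampling scheme and grid size are illustrative assumptions; a production system would rely on a graphics library):

```python
def rasterize(segments, width, height):
    """Convert vector line segments into a bitmap (2-D array):
    0 = background, 1 = drawn pixel."""
    bitmap = [[0] * width for _ in range(height)]
    for (x1, y1), (x2, y2) in segments:
        steps = max(abs(x2 - x1), abs(y2 - y1), 1)
        for i in range(steps + 1):  # sample evenly along the segment
            t = i / steps
            x = round(x1 + (x2 - x1) * t)
            y = round(y1 + (y2 - y1) * t)
            if 0 <= x < width and 0 <= y < height:
                bitmap[y][x] = 1
    return bitmap

bitmap = rasterize([((0, 2), (7, 2))], 8, 4)
```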
The structure of the deep neural network constructed in this embodiment is shown in Fig. 2 and comprises two parts: the first part extracts wall features from the two-dimensional data, and the other part generates a corresponding reconstructed image from the extracted wall features. Specifically, the first-part network includes a data input layer, convolutional layers, down-sampling layers, activation-function layers, and fully connected layers; the second-part network includes transposed-convolution layers, activation-function layers, and a data output layer. The fully connected layer of the first part is connected to the transposed-convolution layer of the second part, so that the result of the fully connected layer serves as the data processed by the transposed convolution. In this way the first-part network obtains the wall features, and the second-part network performs image reconstruction of the walls, generating the reconstructed image.
After the basic structure of the deep neural network is determined, the classification dimension and size of each layer also need to be determined. The classification dimension determines the number of categories that the upper layer abstracts from the lower layer; the window size is the size of the two-dimensional image in each layer.
For the data input layer, the classification dimension is 3, indicating that the input is a two-dimensional image represented by the three RGB color channels, and the window size is the size of the two-dimensional image. For the convolutional, activation-function, down-sampling, transposed-convolution, and fully connected layers, the classification dimension indicates the number of feature maps in each layer, and the window size indicates the size of that layer's feature maps. For the data output layer, the classification dimension is 3 and the window size is the size of the reconstructed image.
After the classification dimension and size of each layer are determined, the kernel-mapping region size, the kernel-mapping step value, and the window-edge padding value also need to be set for the convolutional, fully connected, activation-function, and transposed-convolution layers. The kernel-mapping region size determines the size of the feature region used as the unit of abstraction to the upper layer; for the first convolutional layer, the kernel-mapping region size corresponds to the region shape of the aforementioned two-dimensional array. The kernel-mapping step value is set mainly for the convolutional layers and determines the step by which the kernel-mapping region moves; it is usually set to "1". The window-edge padding value is set mainly for the convolutional layers and determines the size of the region padded outside the edges of the two-dimensional array; when it is set to "0", information outside the edges of the two-dimensional array is not included in the kernel-mapping region.
As shown in Fig. 2, the first part of the deep neural network established in this embodiment includes a data input layer 201, eight combination layers 202-209, a fully connected layer 210, and a fully connected layer 211. Each combination layer includes a convolutional layer, an activation-function layer, and a down-sampling layer. The convolutional layer maps a pixel block in the image (comprising multiple neighboring pixels) to a feature point in the upper-layer feature map through convolution; the activation-function layer processes the data of the feature points with the relu function; the down-sampling layer sub-samples the feature map after convolution, sampling several adjacent feature points into one feature point in the upper-layer feature map, which reduces the amount of data to process while retaining the feature information. The size of each layer is written as size=a*a*b, where "a*a" is the window size and "b" is the classification dimension.
The size of the data input layer 201 is 256*256*3, indicating that the input data is a 256*256 image represented by RGB color values. The size of combination layer 202 is 128*128*64, comprising a convolutional layer of size 256*256*64, an activation-function layer of size 256*256*64, and a down-sampling layer of size 128*128*64; the window size of the convolutional layer equals that of the previous layer while the classification dimension changes, the activation-function layer processes the feature-point data, and the down-sampling layer keeps the classification dimension constant while reducing the window size. The sizes of combination layers 203, 204, and 205 are 64*64*128, 32*32*256, and 16*16*512 respectively; in combination layers 203-205 the convolutional layer doubles the classification dimension, the activation-function layer processes the feature-point data, and the down-sampling layer halves the window size, yielding the corresponding feature maps. For example, combination layer 204 with size 32*32*256 comprises a convolutional layer of size 64*64*256, an activation-function layer of size 64*64*256, and a down-sampling layer of size 32*32*256, yielding 256 feature maps of size 32*32. The sizes of combination layers 206, 207, 208, and 209 are 8*8*512, 4*4*512, 2*2*512, and 1*1*512 respectively; in combination layers 206-209 the classification dimension stays constant through the convolution operation, the activation-function layer processes the feature-point data, and the down-sampling layer halves the window size, yielding the corresponding feature maps. The size of fully connected layer 210 is 1*1*512, indicating that the 512 feature maps in combination layer 209 are fully connected to 512 feature maps, where "1*1" means each feature map in layer 210 has size 1*1. The size of fully connected layer 211 is 1*1*512, indicating that the result of layer 210 undergoes another full connection, yielding 512 feature maps of size 1*1.
The second part of the deep neural network includes eight transposed-convolution-and-activation combination layers 212-219; layer 212 is connected to fully connected layer 211. Each such combination layer includes a transposed-convolution layer and an activation-function layer. The transposed-convolution layer restores the features by mapping each feature point in a low-resolution feature map to multiple feature points in a high-resolution feature map; the features on the high-resolution feature map are finally constructed by superimposing the information of multiple feature points at the same position. The activation-function layer processes the data of each feature point in the feature map.
The sizes of transposed-convolution-and-activation combination layers 212, 213, 214, and 215 are 2*2*512, 4*4*512, 8*8*512, and 16*16*512 respectively; in layers 212-215 the classification dimension stays constant, the transposed-convolution layer doubles the window size, and the activation-function layer processes the data of each feature point in the feature map, yielding the corresponding feature maps. The sizes of layers 216, 217, and 218 are 32*32*256, 64*64*128, and 128*128*64 respectively; in layers 216-218 the transposed convolution halves the classification dimension and doubles the window size, and the activation-function layer processes the data of each feature point, yielding the corresponding feature maps. For example, layer 217 with size 64*64*128 comprises a transposed-convolution layer of size 64*64*128 and an activation-function layer of size 64*64*128, yielding 128 feature maps of size 64*64. The size of layer 219 is 256*256*3, identical to that of input layer 201, indicating that its transposed-convolution layer of size 256*256*3 converts the classification dimension to 3 and doubles the window size to 256*256, yielding a reconstructed image of size 256*256 represented by the three RGB color values; the result of transposed-convolution-and-activation combination layer 219 is the data output layer.
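The size bookkeeping of the network in Fig. 2 can be checked with a short sketch that transcribes the schedule described above: the down-sampling layers halve the window size, the transposed-convolution layers double it, and the classification dimensions follow the lists for layers 202-209 and 212-219:

```python
def encoder_sizes():
    """Window size and classification dimension after each combination
    layer 202-209, starting from the 256*256*3 input (layer 201)."""
    win = 256
    dims = [64, 128, 256, 512, 512, 512, 512, 512]  # layers 202-209
    sizes = []
    for d in dims:
        win //= 2          # down-sampling layer halves the window
        sizes.append((win, win, d))
    return sizes

def decoder_sizes():
    """Sizes after transposed-convolution layers 212-219, starting
    from the 1*1*512 fully connected layer 211."""
    win = 1
    dims = [512, 512, 512, 512, 256, 128, 64, 3]  # layers 212-219
    sizes = []
    for d in dims:
        win *= 2           # transposed convolution doubles the window
        sizes.append((win, win, d))
    return sizes
```

Running the two functions reproduces the stated sizes, e.g. (32, 32, 256) for combination layer 204 and (64, 64, 128) for layer 217, ending at the 256*256*3 output that matches the input layer.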
After the deep neural network is built, it is trained with training samples and its internal parameters are adjusted, so as to obtain the wall identification model.

Specifically, each training sample includes a rasterized image and wall annotation information labeling the walls in that image. The network training process is a kind of supervised learning: after the wall probability values of a training sample are obtained, they are compared with the wall annotation information. If the training result is consistent with the annotation, the network parameters are set appropriately; after repeated training on different samples, if the results meet the standard in accuracy, stability, and so on, the wall identification model is obtained. If the wall probability values obtained by training are inconsistent with the annotation, the difference between the two is fed back layer by layer and the parameters of each layer are adjusted in turn, so that the training result approaches or equals the wall annotation information; this process is back-propagation. After the parameters of the deep neural network have been adjusted many times with a large number of training samples, the wall identification model is obtained.
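The feedback loop described above, comparing the predicted wall probability with the annotation and feeding the difference back to adjust parameters, can be illustrated with a toy one-parameter model (everything here, including the learning rate and the linear "network", is an illustrative assumption; the real network propagates the same kind of error through every layer's parameters):

```python
def train_toy(samples, lr=0.1, epochs=200):
    """Toy supervised training: fit w so that the prediction w * x
    approaches the wall annotation y by repeatedly feeding the error
    back into the parameter (squared-error gradient descent)."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x               # forward pass: wall probability
            grad = (pred - y) * x      # error fed back
            w -= lr * grad             # parameter adjustment
    return w

# x = pixel intensity, y = 1 if the pixel is annotated as wall
w = train_toy([(1.0, 1.0), (0.2, 0.0)])
```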
After the wall identification model is obtained, the walls in an input image can be identified with it. Specifically, identifying the floor plan corresponding to the CAD file data with the wall identification model to obtain the second wall includes: inputting the floor plan corresponding to the CAD file data into the wall identification model, computing a wall feature matrix, generating a reconstructed image from the wall feature matrix, and then intercepting the reconstructed image according to a preset wall-type threshold to generate the second wall.
The reconstructed image is in fact a wall confidence matrix whose size is the same as that of the input image; the value at each location in the matrix corresponds to the probability that the corresponding position of the input image is a wall, i.e., the wall confidence. The wall-type threshold may be an RGB value: pixels above the RGB value are considered wall, and pixels not above it are considered background; the second wall is generated from this RGB value accordingly. The magnitude of the RGB value is set according to the actual situation and is not limited here.
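Intercepting the confidence matrix with the wall-type threshold can be sketched as follows (the 0.5 threshold and the 0/1 encoding, with 1 standing for the white wall pixels, are illustrative assumptions):

```python
def threshold_walls(confidence, wall_threshold):
    """Pixels above the wall-type threshold become wall (1, drawn
    white); the rest become background (0)."""
    return [[1 if v > wall_threshold else 0 for v in row]
            for row in confidence]

mask = threshold_walls([[0.9, 0.2], [0.6, 0.4]], 0.5)
```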
The rasterized image obtained by rasterizing the CAD file data is input into the wall identification model, and the network structure and network parameters of the model are used to extract wall features from the rasterized image. After the wall-feature recognition result is obtained, feature expansion is performed according to that result to generate the reconstructed image; the reconstructed image contains not only wall information but also some other information. After the reconstructed image is obtained, it is processed with the preset wall-type threshold to generate the second wall, in which the wall positions are indicated in white.

Because no interference in the input images (furniture, irregular structures, abstract structures) was learned when the wall identification model was built, the model extracts only wall features when performing feature extraction on a floor plan, and the reconstructed image it produces contains hardly any interference. Interference can therefore be excluded well: intercepting the reconstructed image with the wall-type threshold generates a second wall from which the interference has been eliminated.
Since the size of the input layer of the wall identification model is fixed, the input data must be resized to meet the input format. Specifically, the floor plan corresponding to the CAD file data is scaled to the input image size of the wall identification model before being input into it; in this embodiment, the floor plan is adjusted to 256*256 before it is input into the wall identification model. So that the size of the obtained second wall is unchanged, the reconstructed image, once obtained, is scaled back to the original floor-plan size, i.e., the size of the floor plan before adjustment. Specifically, after the reconstructed image is scaled to the original floor-plan size, it is intercepted with the preset wall-type threshold to generate the second wall.
S104: cross-validate the first wall against the second wall to obtain the final wall.

Since the first wall is identified from the floor plan with an existing method, it may contain interference (for example, furniture in the original floor plan that is identified as a wall; that wall is interference). Cross-validating the first wall against the second wall can weed out the interference in the first wall and yield accurate walls.
Specifically, cross-validating the first wall against the second wall to obtain the final wall includes: counting the overlap proportion or overlap region of the first wall on the second wall, and rejecting the first walls whose overlap proportion is below an overlap threshold; the remaining first walls are the final walls.
For a floor plan, every wall generated with the existing method is called a first wall, and every wall generated with the wall identification model is called a second wall. Counting the overlap proportion of the first wall on the second wall means counting, for each first wall, its overlap proportion with the second wall at the same position; the overlap proportion is the ratio of the overlap region of the first wall on the second wall to the whole region of that second wall. The overlap threshold may be a value expressing a ratio: when a first wall does not overlap the second wall at all, or its overlap proportion with the second wall is below the preset overlap threshold, the first wall is very likely interference and is rejected. The remaining first walls, which all overlap the second wall by a large proportion, are considered true walls and are retained, thereby identifying the final walls.

Of course, the overlap area of the first wall on the second wall may also be counted directly, the first walls whose overlap is below the overlap threshold rejected, and the remaining first walls taken as the final walls; in this case the overlap threshold is a value expressing an area. The overlap threshold is set according to the actual situation and is not limited here.
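The overlap check can be sketched with each wall simplified to a set of pixel coordinates. For brevity the proportion below is measured against the first wall's own area, whereas the embodiment measures it against the matching second wall's region, and the 0.5 threshold is an assumed example:

```python
def cross_validate(first_walls, second_wall_pixels, overlap_threshold=0.5):
    """Keep each first wall whose overlap proportion on the second wall
    reaches the overlap threshold; reject the rest as interference."""
    final = []
    for wall in first_walls:
        overlap = len(wall & second_wall_pixels)
        proportion = overlap / len(wall) if wall else 0.0
        if proportion >= overlap_threshold:
            final.append(wall)
    return final

first = [{(0, 0), (0, 1), (0, 2), (0, 3)},   # real wall
         {(5, 5), (5, 6)}]                   # furniture misread as wall
second = {(0, 0), (0, 1), (0, 2), (0, 3), (9, 9)}
final = cross_validate(first, second)
```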
In another embodiment, on the basis of the foregoing method for assisting wall identification in CAD based on deep learning, the method further includes: regularizing the final walls to satisfy the connectivity relations of walls.

Although the final walls after cross validation already satisfy the various semantic and geometric constraints of real interior walls, there may still be close collinear walls that are not connected, and vertical walls at corners that are not connected; these walls therefore need regularization.
Specifically, described to include to the final wall progress regularization processing:
The conllinear wall less than the first distance threshold of adjusting the distance merges, and takes longer wall conduct in two conllinear walls
Main wall body, and main wall body is extended towards shorter wall, until main wall body covers shorter wall, realize conllinear wall
Merge;
Intersection point is calculated to mutually perpendicular wall, and intersection point described in extended distance is less than the wall of second distance threshold value to institute
Intersection point is stated, to form wall turning;Alternatively, orthogonal two walls are extended to the intersection point simultaneously, to form wall
Turning.
Some closer conllinear walls of distance and not connected mutually perpendicular wall can be connected to by above method, it is full
The connected relation of wall in full border house type.
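The two regularization operations can be sketched as follows, assuming axis-aligned walls. The tuple representations ((y, x_start, x_end) for horizontal walls, (x, y_start, y_end) for vertical walls), the helper names, and the thresholds are illustrative assumptions, not taken from the specification:

```python
def merge_collinear(walls, dist_threshold):
    """Merge collinear horizontal walls whose gap is under dist_threshold.
    Each wall is (y, x_start, x_end) with x_start < x_end. Extending the
    longer (main) wall over the shorter one is equivalent to taking the
    union of the two x-extents, which is what is done here."""
    walls = sorted(walls)
    merged = []
    for y, x0, x1 in walls:
        if merged and merged[-1][0] == y and x0 - merged[-1][2] < dist_threshold:
            prev = merged.pop()
            merged.append((y, prev[1], max(prev[2], x1)))
        else:
            merged.append((y, x0, x1))
    return merged

def form_corner(h_wall, v_wall, dist_threshold):
    """h_wall: (y, x0, x1) horizontal; v_wall: (x, y0, y1) vertical.
    If both walls end within dist_threshold of the intersection (x, y)
    of their carrier lines, extend both to it, forming a corner."""
    y, hx0, hx1 = h_wall
    x, vy0, vy1 = v_wall
    h_gap = min(abs(hx0 - x), abs(hx1 - x))
    v_gap = min(abs(vy0 - y), abs(vy1 - y))
    if h_gap < dist_threshold and v_gap < dist_threshold:
        h_wall = (y, min(hx0, x), max(hx1, x))
        v_wall = (x, min(vy0, y), max(vy1, y))
    return h_wall, v_wall
```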
In another embodiment, on the basis of the foregoing method of assisting wall identification in CAD based on deep learning, the method further includes screening and processing the vector data. Specifically, after the CAD file data is parsed, segments shorter than a length threshold are rejected, and segments whose tilt angle lies within a predetermined angle range are converted into horizontal or vertical segments.
The vectorized data obtained by parsing the CAD file may contain short segments or strongly tilted segments. Such segments cannot be walls in the floor plan, so deleting the short segments and adjusting the strongly tilted segments before recognition reduces the computation of the wall identification model.
The length threshold is set relative to the overall structure and size of the floor plan: segments below it are generally not walls and are deleted. Its value is set according to actual conditions and is not limited here.
The predetermined angle range is likewise a relative concept. Typically, a segment whose angle to the horizontal is below 45° is converted into a horizontal segment, and a segment whose angle to the vertical is below 45° is converted into a vertical segment. The predetermined angle range is also set according to actual conditions and is not limited here.
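A minimal sketch of this screening step, assuming segments given as endpoint pairs. Snapping a tilted segment onto its mean coordinate is an illustrative assumption, since the specification does not say how the conversion is performed:

```python
import math

def filter_and_snap(segments, length_threshold, angle_threshold_deg=45.0):
    """Drop segments shorter than length_threshold; snap the survivors to
    horizontal or vertical depending on their tilt angle.
    A segment is ((x0, y0), (x1, y1)); thresholds are illustrative."""
    out = []
    for (x0, y0), (x1, y1) in segments:
        dx, dy = x1 - x0, y1 - y0
        if math.hypot(dx, dy) < length_threshold:
            continue  # too short to be a wall
        angle = math.degrees(math.atan2(abs(dy), abs(dx)))
        if angle < angle_threshold_deg:
            ym = (y0 + y1) / 2          # closer to horizontal: flatten y
            out.append(((x0, ym), (x1, ym)))
        else:
            xm = (x0 + x1) / 2          # closer to vertical: flatten x
            out.append(((xm, y0), (xm, y1)))
    return out
```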
In another embodiment, on the basis of the foregoing method of assisting wall identification in CAD based on deep learning, the method further includes:
after parsing the CAD file data, splitting the drawing into multiple sub-regions according to the dependency relations of neighbouring segments, using as boundaries the regions of the original floor plan that contain no dependent segments, or whose number of dependent segments is below a dependency threshold; each sub-region is treated as one floor plan.
A single CAD file may contain multiple floor plans, in which case the segments in the CAD file data need to be clustered to identify the multiple layouts. Specifically, the dependency relations of neighbouring segments are detected; a dependency may be neighbouring segments sharing an endpoint, intersecting, or being parallel and close together. In one case, some region of the original floor plan contains no dependent segments; that region serves as a boundary along which the drawing is split into several sub-regions, each treated as one floor plan. In another case, a region does contain dependent segments, but fewer than a preset dependency threshold, the dependency threshold denoting a number of dependent segments; that region likewise serves as a boundary along which the drawing is split into several sub-regions, each treated as one floor plan.
Splitting a multi-plan drawing before identifying the second walls, so that each floor plan is fed to the wall identification model as an independent input image, reduces the model's computation and improves recognition accuracy.
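One way to realize this clustering is a union-find over segments whose endpoints are close, as sketched below. The "shared or nearby endpoint" dependency test, the thresholds, and the function names are assumptions made for the example; the specification also allows intersection and near-parallelism as dependencies:

```python
def cluster_floor_plans(segments, near=1.0, min_cluster=2):
    """Group segments into connected clusters of mutually dependent
    segments; each surviving cluster stands for one floor plan.
    A segment is a pair of (x, y) endpoints."""
    parent = list(range(len(segments)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    def dependent(a, b):
        # simplest dependency: some pair of endpoints is close
        return any(abs(p[0] - q[0]) + abs(p[1] - q[1]) <= near
                   for p in a for q in b)

    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            if dependent(segments[i], segments[j]):
                union(i, j)

    groups = {}
    for i in range(len(segments)):
        groups.setdefault(find(i), []).append(segments[i])
    # clusters with fewer dependent segments than the dependency
    # threshold (min_cluster here) are discarded as noise
    return [g for g in groups.values() if len(g) >= min_cluster]
```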
Another embodiment of the present invention provides a method of assisting wall identification in CAD based on deep learning, which, as shown in Figure 3, includes the following steps:
S301: obtain and parse the CAD file data corresponding to a floor plan.
S302: in the CAD file data, reject segments shorter than the length threshold, and convert segments whose tilt angle lies within the predetermined angle range into horizontal or vertical segments.
S303: cluster the parsed CAD file data to split multiple floor plans.
Specifically, after parsing the CAD file data, split the drawing into multiple sub-regions according to the dependency relations of neighbouring segments, using as boundaries the regions of the original floor plan that contain no dependent segments or whose number of dependent segments is below the dependency threshold; each sub-region is treated as one floor plan.
S304: obtain the first walls identified from the CAD file data.
S305: rasterize the parsed CAD file data to obtain the floor plan corresponding to the CAD file data, and scale the floor plan.
S306: identify the scaled floor plan using the wall identification model to obtain the second walls.
Specifically, the floor plan corresponding to the CAD file data is input to the wall identification model, which computes a wall feature matrix; a reconstructed image is generated from the wall feature matrix and scaled back to the original floor plan size, and the reconstructed image is then intercepted according to a preset wall type threshold to generate the second walls.
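The post-processing of the model output in S306 might be sketched as follows, treating the reconstructed image as a per-pixel wall-probability map that is resized back to the original plan size and then intercepted by the wall type threshold. The probability-map interpretation and the nearest-neighbour resize are assumptions for the example; the model itself is out of scope here:

```python
def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of a 2D list back to the original size."""
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
            for r in range(out_h)]

def threshold_wall_map(prob_map, wall_threshold=0.5):
    """Intercept a per-pixel wall-probability map with the wall type
    threshold, yielding a binary second-wall mask."""
    return [[1 if p >= wall_threshold else 0 for p in row] for row in prob_map]
```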
S307: cross-validate the first walls against the second walls to obtain the final walls.
Specifically, count the overlap proportion or overlap region of each first wall on the second walls, and reject first walls whose overlap proportion is below the overlap threshold; the remaining first walls are the final walls.
S308: perform regularization processing on the final walls to satisfy the connectivity relations of walls.
Specifically, merge collinear walls whose distance is below the first distance threshold, taking the longer of the two collinear walls as the main wall and extending it toward the shorter wall until the main wall covers it, thereby merging the collinear walls;
and compute the intersection point of mutually perpendicular walls, extending each wall whose distance to the intersection is below the second distance threshold to that intersection to form a wall corner.
In this embodiment, because the wall identification model has a strong capacity to learn and memorize, using it to identify walls excludes a large amount of interference, such as furniture, in the floor plan and yields second walls almost free of interference information. Cross-validating the first walls against these second walls then rejects the interference information in the first walls and produces the final walls, thereby improving the accuracy of the wall recognition result.
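Taken together, steps S301 to S308 amount to the following control flow. Every stage is passed in as a hypothetical callable, since only the ordering of the stages, not their implementations, is fixed by the text:

```python
def recognize_walls(cad_path,
                    parse_cad, filter_segments, split_plans,
                    detect_first_walls, rasterize, model_second_walls,
                    cross_validate, regularize):
    """Pipeline sketch of S301-S308; every argument after cad_path is a
    stage supplied by the caller."""
    segments = filter_segments(parse_cad(cad_path))   # S301-S302
    final = []
    for plan in split_plans(segments):                # S303: one plan each
        first = detect_first_walls(plan)              # S304: existing method
        second = model_second_walls(rasterize(plan))  # S305-S306: model
        final.extend(regularize(cross_validate(first, second)))  # S307-S308
    return final
```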
Another embodiment of the present invention provides a device for assisting wall identification in CAD based on deep learning, including:
one or more processors, a memory, and one or more computer programs stored in the memory and executable on the one or more processors, wherein the one or more processors, when executing the one or more computer programs, implement the steps of the method provided by any of the foregoing embodiments, which are not repeated here.
The processor and memory can be any existing processor and memory and are not limited here.
The accuracy of the walls identified by the above method is illustrated below with a specific floor plan.
First, the acquired CAD file is parsed using the above method, the vectorized data is screened and processed, the multiple floor plans are split, and rasterization and scaling are applied, yielding the floor plan shown in Figure 4.
Then, the floor plan shown in Figure 4 is identified using the existing method to obtain the first wall figure. The specific process is: (1) match the straight lines in the design drawing against the geometry of interfering objects such as furniture, treat lines whose similarity reaches a predetermined threshold as interfering objects, and delete them; (2) perform segment matching on the remaining set of lines to generate pairs of parallel lines, and merge collinear lines with small gaps; (3) generate the first walls from the paired parallel lines according to the wall-width range, as shown in Figure 5.
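Steps (2) and (3) of this existing method can be sketched as pairing parallel lines whose spacing falls inside the wall-width range. Restricting the example to horizontal lines and the tuple representation are simplifying assumptions:

```python
def pair_parallel_walls(h_lines, min_width, max_width):
    """h_lines: horizontal lines as (y, x_start, x_end).
    Pair lines whose vertical spacing lies in [min_width, max_width] and
    whose x-extents overlap; each pair yields one first-wall rectangle
    (x0, y0, x1, y1)."""
    walls = []
    lines = sorted(h_lines)
    for i, (y1, a0, a1) in enumerate(lines):
        for y2, b0, b1 in lines[i + 1:]:
            width = y2 - y1
            if not (min_width <= width <= max_width):
                continue  # spacing outside the wall-width range
            x0, x1 = max(a0, b0), min(a1, b1)
            if x1 > x0:   # extents overlap: emit the wall rectangle
                walls.append((x0, y1, x1, y2))
    return walls
```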
Next, the floor plan shown in Figure 4 is identified using the above wall identification model to obtain the second walls.
Finally, the second walls are used to cross-validate the first walls shown in Figure 5, giving the final wall figure shown in Figure 6.
Comparing Figures 4 to 6, it is clear that the existing method recognizes regions A and B of the floor plan in Figure 4 as walls, shown as regions C and D in Figure 5. The walls in regions C and D of Figure 5 are not true walls but interference information. After the first walls are corrected using the second walls, Figure 6 clearly shows that the interference information in regions C and D of Figure 5 has been removed, ensuring the accuracy of the finally identified walls.
The specific implementations described above set out the technical solution and advantageous effects of the present invention in detail. It should be understood that the foregoing is merely the presently preferred embodiment of the invention and is not intended to restrict it; any modification, supplement, or equivalent replacement made within the scope of the principles of the present invention shall be included in its protection scope.
Claims (10)
1. A method of assisting wall identification in CAD based on deep learning, including the following steps:
obtaining and parsing the CAD file data corresponding to a floor plan;
obtaining the first walls identified from the CAD file data;
identifying the floor plan corresponding to the CAD file data using a wall identification model to obtain the second walls;
cross-validating the first walls against the second walls to obtain the final walls;
the wall identification model being trained on the basis of a deep learning network.
2. The method of assisting wall identification in CAD based on deep learning according to claim 1, characterized in that identifying the floor plan corresponding to the CAD file data using the wall identification model to obtain the second walls includes:
inputting the floor plan corresponding to the CAD file data to the wall identification model, computing a wall feature matrix, generating a reconstructed image from the wall feature matrix, and then intercepting the reconstructed image according to a preset wall type threshold to generate the second walls.
3. The method of assisting wall identification in CAD based on deep learning according to claim 2, characterized in that the floor plan corresponding to the CAD file data is scaled to meet the input image size of the wall identification model before being input to the wall identification model;
and after the reconstructed image is obtained, the reconstructed image is scaled to the original floor plan size.
4. The method of assisting wall identification in CAD based on deep learning according to claim 1, characterized in that cross-validating the first walls against the second walls to obtain the final walls includes:
counting the overlap proportion or overlap region of each first wall on the second walls, and rejecting first walls whose overlap proportion is below an overlap threshold, the remaining first walls being the final walls.
5. The method of assisting wall identification in CAD based on deep learning according to claim 1, characterized in that the parsed CAD file data is rasterized to obtain the floor plan corresponding to the CAD file data.
6. The method of assisting wall identification in CAD based on deep learning according to claim 1, characterized in that the method further includes:
performing regularization processing on the final walls to satisfy the connectivity relations of walls.
7. The method of assisting wall identification in CAD based on deep learning according to claim 6, characterized in that performing regularization processing on the final walls includes:
merging collinear walls whose distance is below a first distance threshold, taking the longer of the two collinear walls as the main wall and extending it toward the shorter wall until the main wall covers the shorter wall, thereby merging the collinear walls;
computing the intersection point of mutually perpendicular walls, and extending each wall whose distance to the intersection is below a second distance threshold to the intersection, forming a wall corner; alternatively, extending both perpendicular walls to the intersection simultaneously to form the wall corner.
8. The method of assisting wall identification in CAD based on deep learning according to claim 1 or 6, characterized in that the method further includes:
after parsing the CAD file data, rejecting segments shorter than a length threshold, and converting segments whose tilt angle lies within a predetermined angle range into horizontal or vertical segments.
9. The method of assisting wall identification in CAD based on deep learning according to claim 1 or 6, characterized in that the method further includes:
after parsing the CAD file data, splitting the drawing into multiple sub-regions according to the dependency relations of neighbouring segments, using as boundaries the regions of the original floor plan that contain no dependent segments or whose number of dependent segments is below a dependency threshold, each sub-region being treated as one floor plan.
10. A device for assisting wall identification in CAD based on deep learning, including one or more processors, a memory, and one or more computer programs stored in the memory and executable on the one or more processors, characterized in that the one or more processors, when executing the one or more computer programs, implement the steps of the method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810587788.9A CN108804815B (en) | 2018-06-08 | 2018-06-08 | Method and device for assisting in identifying wall body in CAD (computer aided design) based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108804815A true CN108804815A (en) | 2018-11-13 |
CN108804815B CN108804815B (en) | 2023-04-07 |
Family
ID=64088136
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810587788.9A Active CN108804815B (en) | 2018-06-08 | 2018-06-08 | Method and device for assisting in identifying wall body in CAD (computer aided design) based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108804815B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103971098A (en) * | 2014-05-19 | 2014-08-06 | 北京明兰网络科技有限公司 | Method for recognizing wall in house type image and method for automatically correcting length ratio of house type image |
CN104821011A (en) * | 2015-05-20 | 2015-08-05 | 郭小虎 | Method of generating 3D house type model by 2D house type model based on camera shooting |
CN106156438A (en) * | 2016-07-12 | 2016-11-23 | 杭州群核信息技术有限公司 | Body of wall recognition methods and device |
CN107122528A (en) * | 2017-04-13 | 2017-09-01 | 广州乐家数字科技有限公司 | A kind of floor plan parametrization can edit modeling method again |
CN107194938A (en) * | 2017-04-17 | 2017-09-22 | 上海大学 | Image outline detection method based on depth convolutional neural networks |
Non-Patent Citations (3)
Title |
---|
Zhang Hongxin et al., "Block-wise vectorization of indoor floor plans and efficient 3D building modeling", Journal of Frontiers of Computer Science and Technology * |
Zhu Junfang, "Research on a 3D reconstruction algorithm for floor plans based on structural component recognition", China Masters' Theses Full-text Database, Information Science and Technology * |
Jiang Zhou, "Research on floor plan recognition based on shape and edge features", China Masters' Theses Full-text Database, Information Science and Technology * |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109615679A (en) * | 2018-12-05 | 2019-04-12 | 江苏艾佳家居用品有限公司 | A kind of recognition methods of house type component |
CN109785435A (en) * | 2019-01-03 | 2019-05-21 | 东易日盛家居装饰集团股份有限公司 | A kind of wall method for reconstructing and device |
CN110096949A (en) * | 2019-03-16 | 2019-08-06 | 平安城市建设科技(深圳)有限公司 | Floor plan intelligent identification Method, device, equipment and computer readable storage medium |
CN110059690A (en) * | 2019-03-28 | 2019-07-26 | 广州智方信息科技有限公司 | Floor plan semanteme automatic analysis method and system based on depth convolutional neural networks |
CN109993797A (en) * | 2019-04-04 | 2019-07-09 | 广东三维家信息科技有限公司 | Door and window method for detecting position and device |
CN109993797B (en) * | 2019-04-04 | 2021-03-02 | 广东三维家信息科技有限公司 | Door and window position detection method and device |
CN110176057A (en) * | 2019-04-12 | 2019-08-27 | 平安城市建设科技(深圳)有限公司 | Three-dimensional house type model generating method, device, equipment and storage medium |
CN110020502A (en) * | 2019-04-18 | 2019-07-16 | 广东三维家信息科技有限公司 | The generation method and device of floor plan |
CN110188495A (en) * | 2019-06-04 | 2019-08-30 | 中住(北京)数据科技有限公司 | A method of the two-dimentional floor plan based on deep learning generates three-dimensional floor plan |
CN110909602A (en) * | 2019-10-21 | 2020-03-24 | 广联达科技股份有限公司 | Two-dimensional vector diagram sub-domain identification method and device |
CN113095109A (en) * | 2019-12-23 | 2021-07-09 | 中移(成都)信息通信科技有限公司 | Crop leaf surface recognition model training method, recognition method and device |
CN113742810B (en) * | 2020-05-28 | 2023-08-15 | 杭州群核信息技术有限公司 | Scale identification method and three-dimensional model building system based on copy |
CN113742810A (en) * | 2020-05-28 | 2021-12-03 | 杭州群核信息技术有限公司 | Scale identification method and three-dimensional model building system based on copy graph |
CN111815602A (en) * | 2020-07-06 | 2020-10-23 | 清华大学 | Building PDF drawing wall recognition device and method based on deep learning and morphology |
CN111815602B (en) * | 2020-07-06 | 2022-10-11 | 清华大学 | Building PDF drawing wall identification device and method based on deep learning and morphology |
CN112417538A (en) * | 2020-11-10 | 2021-02-26 | 杭州群核信息技术有限公司 | Window identification method and device based on CAD drawing and window three-dimensional reconstruction method |
CN112417538B (en) * | 2020-11-10 | 2024-04-16 | 杭州群核信息技术有限公司 | Window identification method and device based on CAD drawing and window three-dimensional reconstruction method |
CN113536408B (en) * | 2021-07-01 | 2022-12-13 | 华蓝设计(集团)有限公司 | Residential core tube area calculation method based on CAD external reference collaborative mode |
CN113536408A (en) * | 2021-07-01 | 2021-10-22 | 华蓝设计(集团)有限公司 | Residential core tube area calculation method based on CAD external reference collaborative mode |
CN113808192A (en) * | 2021-09-23 | 2021-12-17 | 深圳须弥云图空间科技有限公司 | Method, device and equipment for generating house type graph and storage medium |
CN113808192B (en) * | 2021-09-23 | 2024-04-09 | 深圳须弥云图空间科技有限公司 | House pattern generation method, device, equipment and storage medium |
CN115797962A (en) * | 2023-01-13 | 2023-03-14 | 深圳市大乐装建筑科技有限公司 | Wall column identification method and device based on assembly type building AI design |
Also Published As
Publication number | Publication date |
---|---|
CN108804815B (en) | 2023-04-07 |
Legal Events
Date | Code | Title | Description
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||