CN103390088A - Full-automatic three-dimensional conversion method aiming at grating architectural plan - Google Patents

Full-automatic three-dimensional conversion method aiming at grating architectural plan

Info

Publication number
CN103390088A
Authority
CN
China
Prior art keywords
point
line segment
wall
grating
line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310329371XA
Other languages
Chinese (zh)
Inventor
张宏鑫
李嫄姝
郑文庭
鲍虎军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201310329371XA priority Critical patent/CN103390088A/en
Publication of CN103390088A publication Critical patent/CN103390088A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a fully automatic method for converting a raster architectural floor plan into a three-dimensional model. The method comprises the following steps: (1) binarizing and deskewing the raster floor plan to obtain a preprocessed image; (2) extracting the image regions that contain wall lines from the preprocessed image to obtain a number of sub-images; (3) vectorizing each sub-image to obtain a set of line segments carrying wall width, and extracting wall positions and wall widths from this set; (4) obtaining sub-images of the wall attachments (such as doors and windows) from the line segment set of step (3) and classifying each attachment with a linear discriminant analysis algorithm; and (5) converting the data of steps (3) and (4) into three-dimensional structure data according to a preset building height.

Description

A fully automatic three-dimensional conversion method for raster architectural floor plans
Technical field
The present invention relates to raster images of architectural floor plans and draws on and improves techniques from vectorization, image recognition and shape grammars; it relates in particular to a fully automatic method for converting a raster architectural floor plan into a three-dimensional model.
Background art
With the rapid development and structural adjustment of the Chinese economy, urbanization has become an irresistible historical trend. To manage cities whose scale grows day by day, including handling all kinds of safety emergencies, preventing illegal construction during urbanization and avoiding the many defects that arise in building, "digital city" technology has received wide attention and application. This technology takes computer graphics, multimedia and large-scale cluster computing and storage as its foundation and networks as its link, and integrates remote sensing and telemetry, global positioning, geographic information systems and virtual simulation to describe a city in three dimensions at multiple resolutions, scales and spatial levels. These information technologies effectively turn a city's past, present and future into a digital, virtual representation. Among them the most critical problem is scene modeling; besides large-scale modeling of outdoor building complexes, fine-scale processing of indoor architectural information is gradually becoming a research focus in the digital-city field.
Architectural floor plans play an important role in the design, construction and use of indoor architecture. A floor plan is usually drawn as the top view of each storey, with architectural elements marked by standardized symbols. With the development of information technology, computer graphics has gradually replaced manual drafting and is changing the way designers work. Building a 3D model from a 2D floor plan with a computer lets designers and architects combine drawing and modeling into one task, so that they can inspect their own work more intuitively; at the same time, simulations of structural strength, lighting, acoustics, fire and other properties can be run on the model to verify the design, so that it can be revised or adjusted as needed before construction starts.
Building 3D models from architectural floor plans is therefore widely used in virtual city roaming, games, real estate, public safety and other fields. In some applications a building model must capture a large amount of detail and has to be constructed manually, which consumes considerable human resources. In most cases, however, a large number of "expressive" (schematic) 3D building models is all that is needed, so building 3D models automatically and efficiently from 2D floor plans is both useful and necessary.
For a long time industrial drawings were drawn by hand, so a large number of engineering drawings are preserved on paper. Scanning them into a computer yields raster images; at the same time, many drawings produced by CAD software are eventually converted into raster images so that they can be browsed and distributed over the Internet. These raster images, however, cannot directly provide the parameters required for reconstructing a 3D building model.
Mainstream commercial 3D modeling software such as AutoCAD, 3DMAX and MAYA uses interactive modeling: it provides a 3D interactive modeling platform on which the user manipulates geometric primitives with the mouse and keyboard and performs various geometric edits until the target model is formed. Although this approach is powerful, gives the user great flexibility and produces highly accurate models, it also demands a high level of expertise from the user and has low modeling efficiency, so it is not widely used for building the large building scenes needed in practice.
Summary of the invention
The object of the invention is to overcome the complexity of existing 3D building model construction by proposing a lightweight method that automatically reconstructs a 3D building model from a raster architectural floor plan. The method improves the efficiency of constructing "expressive" 3D building models and makes interaction with the user more intuitive and convenient.
A fully automatic three-dimensional conversion method for raster architectural floor plans comprises the following steps:
1) binarize and deskew the raster floor plan to obtain a preprocessed image;
A raster floor plan obtained by scanning, photographing or downloading from the Internet inevitably contains noise or is tilted. We therefore first use binarization to remove objects other than building components, such as furniture, text annotations and other image noise, and then correct tilted images with a Hough transform, so as to obtain a low-noise black-and-white raster image in which the walls are as horizontal and vertical as possible, which facilitates the subsequent vectorization of the image.
The binarization uses the cvThreshold function of the OpenCV library to obtain the binarized image.
The deskewing is: the cvHoughLines2 function of the OpenCV library performs Hough line-segment detection on the binarized image and extracts the long straight segments in the binarized image; the slopes of these segments are collected; and the cvGetQuadrangleSubPix function of the OpenCV library is then used to rotate the binarized image so that the long straight segments become parallel to the rectangular coordinate system of the raster floor plan.
Taking the rectangular coordinate system of the floor plan as reference, the slopes of the long straight segments give their angles with respect to the coordinate axes; rotating the binarized image by these angles completes the correction.
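As an illustration only, not part of the patented method, the following Python sketch shows how this preprocessing could be implemented with the modern OpenCV Python API; cv2.threshold, cv2.HoughLinesP and cv2.warpAffine stand in for the legacy C functions cvThreshold, cvHoughLines2 and cvGetQuadrangleSubPix named above, and all threshold and parameter values are assumptions.

```python
import cv2
import numpy as np

def preprocess_plan(path, thresh=200):
    """Binarize a raster floor plan and deskew it from a Hough-based tilt estimate."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Binarization: wall ink becomes 0 (black), background becomes 255 (white).
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)

    # Detect long straight segments (HoughLinesP expects a white foreground,
    # so the binary image is inverted) and collect their tilt angles.
    lines = cv2.HoughLinesP(cv2.bitwise_not(binary), 1, np.pi / 180, threshold=100,
                            minLineLength=gray.shape[1] // 4, maxLineGap=5)
    angles = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
            # Fold into [-45, 45) so horizontal and vertical walls vote together.
            angles.append((angle + 45.0) % 90.0 - 45.0)
    skew = float(np.median(angles)) if angles else 0.0

    # Rotate the binarized image so the long segments align with the image axes.
    h, w = binary.shape
    rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), skew, 1.0)
    return cv2.warpAffine(binary, rot, (w, h),
                          flags=cv2.INTER_NEAREST, borderValue=255)
```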
2) extract the image regions that contain wall lines from the preprocessed image to obtain a number of sub-images;
In step 2) the average integral projection function is used to extract the image regions that contain wall lines. When the mean gray value of a column (or row) of the image changes, the change is reflected in the average integral projection value of that column (or row).
To quickly discard the empty regions of the floor plan and further speed up vectorization, we use the average integral projection function (AIPF) to extract the image regions that contain wall lines. This step can be accelerated with parallel computation. It extracts sub-images that contain only a small number of complete straight segments and thereby gives a preliminary partition of the spatial structure of the floor plan, which speeds up the search for vectorization start points; in the later line-merging step, only the connectivity of the complete segments within a sub-image along the projection direction has to be checked, instead of traversing the segments of the whole image, which makes vectorization more efficient.
3) vectorize each sub-image to obtain a set of line segments carrying wall width, and extract wall positions and wall widths from this set;
After the integral-projection step the whole image has been decomposed into sub-images that contain building-component information; vectorizing them yields a set of line segments with width, from which the basic geometric structure of the walls is obtained. Since the existing SPV method is advantageous in shape preservation and efficiency, we simplify and improve it, train more suitable empirical thresholds for the application, and apply it to the image blocks, which improves recognition ability and vectorization efficiency. Through four steps (finding the initial axis point and tracking direction, sparse pixel tracking, junction splitting and line-segment merging) the wall positions and widths are extracted, and the sub-images of the doors and windows embedded in the walls are obtained for subsequent recognition and analysis.
For each sub-image the vectorization proceeds as follows:
Step one: find the initial axis point and the tracking direction
Step 1): scan the sub-image from top to bottom with a step 1 to 3 pixels smaller than the wall line width; on each scan line read the gray value of every pixel from left to right, and take the current pixel as a reference point if it satisfies the following three conditions simultaneously:
a) its gray value is 0, i.e. the current pixel is black;
b) the previous pixel has gray value 1, i.e. the previous pixel is white;
c) the current pixel has never been visited;
Step 2): within the connected region containing reference point x(i) (all pixels of a connected region are black), iteratively search for the initial axis point and the tracking direction. The iterative search is:
Iteration i:
Odd operation: draw the horizontal reference line m(i) through reference point x(i) between the left and right boundaries of the connected region, and take its midpoint as reference point y(i);
Even operation: draw the vertical reference line n(i) through reference point y(i) between the upper and lower boundaries of the connected region, and take its midpoint as reference point x(i+1);
Iteration i+1:
Odd operation: draw the horizontal reference line m(i+1) through reference point x(i+1) between the left and right boundaries of the connected region, and take its midpoint as reference point y(i+1);
Even operation: draw the vertical reference line n(i+1) through reference point y(i+1) between the upper and lower boundaries of the connected region, and take its midpoint as reference point x(i+2);
Starting from the even operation of the first iteration, in any iteration k:
Odd operation: compute the distance between x(k) and y(k); if it is below the threshold, take x(k) as the initial axis point and take the direction of the longer of the horizontal reference line m(k) and the vertical reference line n(k-1) as the tracking direction; otherwise continue with the next operation;
Even operation: compute the distance between x(k+1) and y(k); if it is below the threshold, take y(k) as the initial axis point and take the direction of the longer of the horizontal reference line m(k) and the vertical reference line n(k) as the tracking direction; otherwise continue with the next iteration;
Step 3): repeat steps 1) and 2) until all connected regions of the sub-image have been processed (the connected regions are independent of one another), so that each connected region has its own initial axis point and tracking direction;
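A minimal Python sketch of this iterative midpoint search is given below; it assumes a binary image in which wall pixels are 0, approximates the connected-region boundaries by the black run through the current point, and the distance threshold and iteration cap are illustrative values, not taken from the patent.

```python
import numpy as np

def find_axis_point(img, start, dist_thresh=1.0, max_iter=100):
    """Iterative midpoint search for an initial axis point and tracking direction.

    img: 2D array, 0 = black (wall) pixel, 255 = white background.
    start: (row, col) of a black reference pixel found by the scan-line search.
    Returns (axis_point, direction) with direction 'h' or 'v'.
    """
    def h_run(r, c):
        # Left/right extent of the black run on row r through column c.
        left, right = c, c
        while left - 1 >= 0 and img[r, left - 1] == 0:
            left -= 1
        while right + 1 < img.shape[1] and img[r, right + 1] == 0:
            right += 1
        return left, right

    def v_run(r, c):
        # Top/bottom extent of the black run on column c through row r.
        top, bottom = r, r
        while top - 1 >= 0 and img[top - 1, c] == 0:
            top -= 1
        while bottom + 1 < img.shape[0] and img[bottom + 1, c] == 0:
            bottom += 1
        return top, bottom

    x = start                    # x(k): current reference point
    prev_v_len = 0
    for _ in range(max_iter):
        r, c = x
        left, right = h_run(r, c)          # horizontal reference line m(k)
        h_len = right - left
        y = (r, (left + right) // 2)       # its midpoint y(k)
        if abs(y[1] - c) <= dist_thresh:   # odd operation: x(k) close to y(k)?
            return x, ('h' if h_len >= prev_v_len else 'v')
        top, bottom = v_run(*y)            # vertical reference line n(k)
        v_len = bottom - top
        x_next = ((top + bottom) // 2, y[1])      # its midpoint x(k+1)
        if abs(x_next[0] - y[0]) <= dist_thresh:  # even operation
            return y, ('h' if h_len >= v_len else 'v')
        x, prev_v_len = x_next, v_len
    return x, ('h' if h_len >= v_len else 'v')
```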
Step two: sparse pixel tracking
Within each connected region, start from the initial axis point and, moving along the tracking direction with a fixed tracking step, query successive pixels as new axis points (sparse pixel tracking), until one of the following two conditions is violated, at which point the junction-splitting step is executed:
a) the line-width difference between the current new axis point and the previous new axis point is within the allowed threshold;
b) the new axis point has not been visited before;
Step three: junction splitting
The junction-splitting step divides corners and cross intersections into joints of several straight segments; the iteration is as follows:
a) step back by one new axis point;
b) halve the tracking step;
c) continue sparse pixel tracking along the original tracking direction;
d) at each new axis point obtained with the reduced step, check whether it satisfies the two conditions of step two; if it does, continue tracking with the reduced step; if not, repeat sub-steps a) to d) of step three;
The iteration stops when the tracking step approaches zero; the last point found that satisfies all conditions is taken as the final new axis point;
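The following sketch, again illustrative only, combines sparse pixel tracking with junction splitting by step halving; the width tolerance and the way the line width is measured (length of the black run perpendicular to the tracking direction) are simplifying assumptions.

```python
def sparse_track(img, start, direction, step, width_tol=2, visited=None):
    """Sparse pixel tracking with junction splitting by step halving.

    img: 2D array, 0 = black. start: initial axis point (row, col).
    direction: 'h' or 'v'. Returns the ordered list of axis points found.
    """
    if visited is None:
        visited = set()
    dr, dc = (0, 1) if direction == 'h' else (1, 0)
    pr, pc = (1, 0) if direction == 'h' else (0, 1)

    def width_at(r, c):
        # Line width = length of the black run perpendicular to the tracking direction.
        w = 1
        rr, cc = r + pr, c + pc
        while 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1] and img[rr, cc] == 0:
            w, rr, cc = w + 1, rr + pr, cc + pc
        rr, cc = r - pr, c - pc
        while 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1] and img[rr, cc] == 0:
            w, rr, cc = w + 1, rr - pr, cc - pc
        return w

    points = [start]
    visited.add(start)
    prev_w = width_at(*start)
    while step >= 1:
        r, c = points[-1]
        nxt = (r + dr * step, c + dc * step)
        inside = 0 <= nxt[0] < img.shape[0] and 0 <= nxt[1] < img.shape[1]
        ok = (inside and img[nxt] == 0 and nxt not in visited
              and abs(width_at(*nxt) - prev_w) <= width_tol)
        if ok:
            points.append(nxt)
            visited.add(nxt)
            prev_w = width_at(*nxt)
        else:
            # Junction splitting: step back one axis point and halve the step.
            if len(points) > 1:
                points.pop()
            step //= 2
    return points
```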
Step four: line-segment merging
Every pair of adjacent new axis points defines a vector line segment: the position of one axis point is the start coordinate, the position of the adjacent axis point is the end coordinate, their difference defines the vector segment, and the mean line width of the two axis points is the line width of the segment;
After all black pixels of a sub-image have been visited, the resulting vector segments are merged into long straight vector segments: the angle between any two long straight segments is computed, and if it is below 5 degrees the two segments are considered parallel; the distance between their nearest endpoints is then checked, and if it is below a preset value (the preset wall line width) the segments are merged. This yields the set of line segments carrying wall width, from which the wall positions and wall widths are extracted.
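A greedy merging sketch under the same assumptions is shown below; it uses the 5-degree angle test and the wall-width distance test described above, while the exact merging strategy of the patent (which endpoints are kept, how widths are combined) is not specified and is filled in here only for illustration.

```python
import numpy as np

def merge_segments(segments, wall_width, angle_thresh_deg=5.0):
    """Greedily merge nearly parallel, nearby vector segments into wall lines.

    segments: list of ((x1, y1), (x2, y2), line_width) tuples.
    Two segments are merged when the angle between them is below angle_thresh_deg
    and the distance between their nearest endpoints is below wall_width.
    """
    def angle(seg):
        (x1, y1), (x2, y2), _ = seg
        return np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0

    def endpoint_gap(a, b):
        return min(np.hypot(p[0] - q[0], p[1] - q[1])
                   for p in a[:2] for q in b[:2])

    merged = list(segments)
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                a, b = merged[i], merged[j]
                d = abs(angle(a) - angle(b))
                d = min(d, 180.0 - d)
                if d < angle_thresh_deg and endpoint_gap(a, b) < wall_width:
                    # Keep the two endpoints that are farthest apart and
                    # average the two line widths.
                    pts = list(a[:2]) + list(b[:2])
                    p, q = max(((p, q) for p in pts for q in pts),
                               key=lambda pq: np.hypot(pq[0][0] - pq[1][0],
                                                       pq[0][1] - pq[1][1]))
                    merged[i] = (p, q, (a[2] + b[2]) / 2.0)
                    del merged[j]
                    changed = True
                    break
            if changed:
                break
    return merged
```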
The raster floor plan is thus converted to vector form, the position and width of each building component are extracted, building attachments such as doors, windows and balconies are recognized, and finally shape-grammar-based procedural modeling automatically generates a JSON file describing the 3D building model.
One of the cores of the invention is vectorization. Vectorization, also called raster-to-vector conversion, is the process of recovering vector lines from a raster image. A good vectorization method should preserve shape information as far as possible, including line width, line geometry and junctions, to ease post-processing, and it should be fast enough for practical systems. Existing vectorization methods fall roughly into seven classes: based on the Hough transform, on thinning, on contours, on run graphs, on mesh models, on orthogonal direction search, and on sparse pixels; see L. Wenyin and D. Dori, "A Survey of Non-Thinning Based Vectorization Methods", Advances in Pattern Recognition, 1998: 230-241. After comparing these methods with respect to the completeness of the extracted parameter information, accuracy and efficiency, we adopt the sparse-pixel vectorization (SPV) method: it searches only the black-pixel regions of the original raster image, which reduces the amount of searching and helps avoid line defects, greatly speeding up line extraction; in addition, the algorithm preserves line widths and accurate medial axes and endpoint positions, which facilitates later processing of the image.
4) obtain the sub-images of the wall attachments from the line segment set of step 3), and determine the class of each wall-attachment sub-image with a linear discriminant analysis algorithm;
The building attachments in the floor plan, such as doors and windows, must be recognized correctly; since accuracy, efficiency and light weight are the ultimate goals of the invention, we adopt an improved LDA algorithm that uses the GSVD and a QR decomposition; see H. Park, B. L. Drake, S. Lee and C. H. Park, "Fast Linear Discriminant Analysis Using QR Decomposition and Regularization", Technical Report GT-CSE-07-21, 2007.
5) preset the building height and, combining the data of steps 3) and 4), convert them into three-dimensional structure data.
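Before the F-Wires formulation is introduced, the snippet below sketches the basic geometric idea behind step 5): a wall axis segment with its width, plus the preset height, determines a 3D box. It is an illustration only; the patent itself generates the 3D structure through the F-Wires rules described next.

```python
import math

def wall_to_box(p0, p1, width, height):
    """Turn one wall axis segment with its width into a 3D box of the given height.

    p0, p1: (x, y) endpoints of the wall axis from step 3); width: wall width;
    height: the preset storey height. Returns the 8 corner vertices (x, y, z).
    """
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    if length == 0:
        raise ValueError("degenerate wall segment")
    # Unit normal of the axis; offset the axis by half the wall width on each side.
    nx, ny = -dy / length, dx / length
    hw = width / 2.0
    base = [(p0[0] + nx * hw, p0[1] + ny * hw),
            (p1[0] + nx * hw, p1[1] + ny * hw),
            (p1[0] - nx * hw, p1[1] - ny * hw),
            (p0[0] - nx * hw, p0[1] - ny * hw)]
    # Bottom face at z = 0, top face at z = height.
    return [(x, y, 0.0) for x, y in base] + [(x, y, height) for x, y in base]
```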
In the procedural modeling method based on shape grammar and feature wires, the F-Wires method, the 3D building model is generated iteratively by interpreting a series of procedural modeling rules. Building components such as walls, doors, windows, balconies and passages are defined as shape units. Each shape unit is represented by a quadruple S = <N, G, B, P>, where N is the name of the shape unit; G is its geometric information; B is an oriented bounding box that specifies the position and size of the unit and makes later operations such as intersection tests convenient; and P holds other attributes of the shape unit such as texture and material. A feature wire (F-wire) is represented as a set of 3D vertices connected in order to form a polyline; in particular, an F-wire that is a closed rectangle formed by four coplanar vertices is called a feature rectangle, or F-rect for short.
When the F-Wires method is applied to generate the 3D building model, the existing 2D building-component data is extended with the preset third dimension (the height). Linear discriminant analysis (LDA) is a feature-extraction method widely used in pattern recognition: by weighting the between-class scatter matrix and centering the within-class scatter matrix on the class means, it maximizes the ratio of between-class scatter to within-class scatter of the discriminative features after dimensionality reduction. Because class information is used during the reduction, the low-dimensional data is well suited to multi-class image recognition; one seeks the optimal linear transformation matrix G^T that preserves good separability between classes in the low-dimensional space. In practical scenarios, however, the dimensionality m of the image training samples is far larger than the number of samples n, which causes the undersampled problem. We therefore introduce a QR decomposition, perform an SVD only on the n x n upper triangular factor R, and multiply the result by the orthogonal factor Q to obtain the target transformation matrix G. This not only resolves the undersampled problem but also reduces the size and number of SVD computations, greatly accelerating the calculation.
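For illustration, the sketch below follows the same idea (reduce the m-dimensional samples with an economy QR factorization and perform discriminant analysis on the small n x n factor) but uses a regularized generalized eigenproblem in place of the GSVD formulation of the cited report; all numerical details are assumptions.

```python
import numpy as np
from scipy.linalg import qr, eigh

def qr_lda(X, y, n_components=None):
    """LDA accelerated by an economy QR factorization for m >> n data.

    X: (m, n) matrix whose columns are training samples (m features, n samples).
    y: length-n array of integer class labels.
    Returns G of shape (m, d); a sample x is projected as G.T @ x.
    """
    y = np.asarray(y)
    m, n = X.shape
    classes = np.unique(y)
    if n_components is None:
        n_components = len(classes) - 1

    # Economy QR: X = Q @ R with Q (m, n) orthonormal and R (n, n).
    Q, R = qr(X, mode='economic')

    # Within-class and between-class scatter matrices in the reduced
    # n-dimensional coordinate space given by the columns of R.
    mean_all = R.mean(axis=1, keepdims=True)
    Sw = np.zeros((n, n))
    Sb = np.zeros((n, n))
    for c in classes:
        Rc = R[:, y == c]
        mean_c = Rc.mean(axis=1, keepdims=True)
        Sw += (Rc - mean_c) @ (Rc - mean_c).T
        diff = mean_c - mean_all
        Sb += Rc.shape[1] * (diff @ diff.T)

    # Small ridge keeps the generalized eigenproblem well posed (regularization).
    Sw += 1e-6 * max(np.trace(Sw) / n, 1.0) * np.eye(n)

    # Solve Sb w = lambda Sw w and keep the leading eigenvectors.
    vals, vecs = eigh(Sb, Sw)
    W = vecs[:, np.argsort(vals)[::-1][:n_components]]

    # Lift the reduced-space transformation back to the original feature space.
    return Q @ W
```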
The shape grammar proposed by George Stiny in 1972 is a computational method that can generate new shapes according to the designer's concepts and requirements by applying certain rules; its representation is compact and its generalization ability is strong. We incorporate architectural knowledge into the shape-grammar representation and combine it with the abstract modeling lines of the floor plan (the feature wires) to construct a procedural modeling method based on shape grammar and feature wires, called the F-Wires modeling method for short, and design the corresponding F-Wires generation rules, which simplify complex rules and better meet the demand for "expressive", "lightweight" modeling.
The advantages of the invention are as follows. For the characteristics of architectural floor plans, an effective image-blocking technique is proposed and the SPV algorithm is further optimized, which greatly increases vectorization speed. In the recognition of wall attachments, an LDA classification method suited to small training sets is introduced together with an acceleration scheme, which greatly improves processing efficiency while preserving recognition accuracy. Because the method is based on a procedural representation, the 3D building model can be edited and processed efficiently simply by modifying the parameters of the representation, which also makes storing and retrieving the information convenient.
Description of drawings
Fig. 1 is the overall flowchart of the algorithm of the invention.
Fig. 2 illustrates the average integral projection function.
Fig. 3 shows the F-Wires generation rules.
Embodiment
The invention proposes a lightweight computation method for fully automatically generating a 3D building model from a raster architectural floor plan. As shown in Fig. 1, the procedure comprises the following five steps:
(1) Binarize and deskew the raster floor plan to obtain a preprocessed image.
The binarization of the raster floor plan uses the cvThreshold function of the OpenCV library to obtain the binarized image. The deskewing is: the cvHoughLines2 function of the OpenCV library performs Hough line-segment detection on the binarized image and extracts the long straight segments in the binarized image; the slopes of these segments are collected; and the cvGetQuadrangleSubPix function of the OpenCV library is then used to rotate the binarized image so that the long straight segments become parallel to the rectangular coordinate system of the raster floor plan.
Taking the rectangular coordinate system of the floor plan as reference, the slopes of the long straight segments give their angles with respect to the coordinate axes; rotating the binarized image by these angles completes the correction.
(2) Use the average integral projection function to extract the image regions that contain wall lines from the preprocessed image, obtaining a number of sub-images.
The average integral projection function is expressed as:
M_v(x) = (1 / (y_2 - y_1)) · ∫_{y_1}^{y_2} I(x, y) dy,   M_h(y) = (1 / (x_2 - x_1)) · ∫_{x_1}^{x_2} I(x, y) dx
where (x, y) are the coordinates of a pixel, I(x, y) is the gray value at point (x, y), M_v(x) is the vertical average integral projection value over the interval [y_1, y_2], and M_h(y) is the horizontal average integral projection value over the interval [x_1, x_2].
To be robust against image noise and against incomplete alignment of the image, we normalize M_v(x) and M_h(y) to the interval [0, 1]:
M_v′(x) = (M_v(x) - min M_v(x)) / (max M_v(x) - min M_v(x)),   M_h′(y) = (M_h(y) - min M_h(y)) / (max M_h(y) - min M_h(y))
Fig. 2 shows an example of region partitioning with the average integral projection, where the abscissa is the horizontal (or vertical) image coordinate and the ordinate is the corresponding projection value. Figs. 2(a) and 2(c) show the functions obtained by vertical projection along the y axis and horizontal projection along the x axis, respectively; the corresponding threshold-segmentation results are shown in Figs. 2(b) and 2(d). The solid straight line in Fig. 2 is the median of all non-zero values of the horizontal (or vertical) integral projection, which is used as the threshold: the contiguous horizontal (or vertical) regions whose projection exceeds this threshold are collected, and if the width of such a region is greater than the preset wall line width, the region is extracted as a sub-image to be vectorized.
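A compact sketch of this projection-based segmentation is given below; the median threshold and the wall-width test follow the description above, while the handling of boundary runs is an assumption.

```python
import numpy as np

def aipf_regions(img, wall_width, axis=0):
    """Average integral projection (AIPF) segmentation of a binarized floor plan.

    img: 2D array with black wall pixels = 0 and white background = 255.
    axis=0 averages over rows, giving the vertical projection M_v(x);
    axis=1 averages over columns, giving the horizontal projection M_h(y).
    Returns a list of (start, end) index ranges to cut into sub-images.
    """
    ink = (img == 0).astype(float)        # 1 where there is a black pixel
    proj = ink.mean(axis=axis)            # average integral projection

    # Normalize to [0, 1] for robustness against noise and exposure differences.
    lo, hi = proj.min(), proj.max()
    proj = (proj - lo) / (hi - lo) if hi > lo else np.zeros_like(proj)

    # Threshold at the median of the non-zero projection values.
    nonzero = proj[proj > 0]
    thresh = float(np.median(nonzero)) if nonzero.size else 0.0

    # Keep contiguous runs above the threshold that are wider than a wall line.
    regions, start = [], None
    for i, v in enumerate(proj):
        if v > thresh and start is None:
            start = i
        elif v <= thresh and start is not None:
            if i - start > wall_width:
                regions.append((start, i))
            start = None
    if start is not None and len(proj) - start > wall_width:
        regions.append((start, len(proj)))
    return regions
```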
(3) Vectorize each sub-image to obtain a set of line segments carrying wall width, and extract the wall positions and wall widths from this set.
In this step an improved lightweight vectorization method based on non-thinning sparse-pixel vectorization (SPV) is used to extract vector information such as the position and size of the building components.
After the integral-projection step the whole image has been decomposed into sub-images that contain building-component information; vectorizing them yields a set of line segments with width, from which the basic geometric structure of the walls is obtained.
For each sub-image the vectorization proceeds as follows:
Step one: find the initial axis point and the tracking direction
Step 1): scan the sub-image from top to bottom with a step 2 pixels smaller than the wall line width; on each scan line read the gray value of every pixel from left to right, and take the current pixel as a reference point if it satisfies the following three conditions simultaneously:
a) its gray value is 0, i.e. the current pixel is black;
b) the previous pixel has gray value 1, i.e. the previous pixel is white;
c) the current pixel has never been visited;
Step 2): within the connected region containing reference point x(i) (all pixels of a connected region are black), iteratively search for the initial axis point and the tracking direction. The iterative search is:
Iteration i:
Odd operation: draw the horizontal reference line m(i) through reference point x(i) between the left and right boundaries of the connected region, and take its midpoint as reference point y(i);
Even operation: draw the vertical reference line n(i) through reference point y(i) between the upper and lower boundaries of the connected region, and take its midpoint as reference point x(i+1);
Iteration i+1:
Odd operation: draw the horizontal reference line m(i+1) through reference point x(i+1) between the left and right boundaries of the connected region, and take its midpoint as reference point y(i+1);
Even operation: draw the vertical reference line n(i+1) through reference point y(i+1) between the upper and lower boundaries of the connected region, and take its midpoint as reference point x(i+2);
Starting from the even operation of the first iteration, in any iteration k:
Odd operation: compute the distance between x(k) and y(k); if it is below the threshold, take x(k) as the initial axis point and take the direction of the longer of the horizontal reference line m(k) and the vertical reference line n(k-1) as the tracking direction; otherwise continue with the next operation;
Even operation: compute the distance between x(k+1) and y(k); if it is below the threshold, take y(k) as the initial axis point and take the direction of the longer of the horizontal reference line m(k) and the vertical reference line n(k) as the tracking direction; otherwise continue with the next iteration;
Step 3): repeat steps 1) and 2) until all connected regions of the sub-image have been processed (the connected regions are independent of one another), so that each connected region has its own initial axis point and tracking direction;
Step two: sparse pixel tracking
Within each connected region, start from the initial axis point and, moving along the tracking direction with a fixed tracking step, query successive pixels as new axis points (sparse pixel tracking), until one of the following two conditions is violated, at which point the junction-splitting step is executed:
a) the line-width difference between the current new axis point and the previous new axis point is within the allowed threshold;
b) the new axis point has not been visited before;
Step three: junction splitting
The junction-splitting step divides corners and cross intersections into joints of several straight segments; the iteration is as follows:
a) step back by one new axis point;
b) halve the tracking step;
c) continue sparse pixel tracking along the original tracking direction;
d) at each new axis point obtained with the reduced step, check whether it satisfies the two conditions of step two; if it does, continue tracking with the reduced step; if not, repeat sub-steps a) to d) of step three;
The iteration stops when the tracking step approaches zero; the last point found that satisfies all conditions is taken as the final new axis point;
Step four: line-segment merging
Every pair of adjacent new axis points defines a vector line segment: the position of one axis point is the start coordinate, the position of the adjacent axis point is the end coordinate, their difference defines the vector segment, and the mean line width of the two axis points is the line width of the segment;
After all black pixels of a sub-image have been visited, the resulting vector segments are merged into long straight vector segments: the angle between any two long straight segments is computed, and if it is close to 0 (in practice, below 5 degrees) the two segments are considered parallel; the distance between their nearest endpoints is then checked, and if it is below a preset value (the preset wall line width) the segments are merged. This yields the set of line segments carrying wall width, from which the wall positions and wall widths are extracted.
(4) Obtain the sub-images of the wall attachments from the line segment set of step (3), and determine the class of each wall-attachment sub-image with a linear discriminant analysis algorithm.
The building attachments in the floor plan, such as doors and windows, must be recognized correctly; since accuracy, efficiency and light weight are the ultimate goals of the invention, we adopt an improved LDA algorithm that uses the GSVD and a QR decomposition, so that doors, windows, openings and other building components can be recognized and located quickly and accurately; see H. Park, B. L. Drake, S. Lee and C. H. Park, "Fast Linear Discriminant Analysis Using QR Decomposition and Regularization", Technical Report GT-CSE-07-21, 2007.
(5) Convert the data of steps (3) and (4) into three-dimensional structure data according to the preset height.
In the procedural modeling method based on shape grammar and feature wires, the F-Wires method, illustrated in Fig. 3, the 3D building model is generated iteratively by interpreting a series of procedural modeling rules. The method combines the shape-grammar representation (see G. Stiny, "Introduction to shape and shape grammars", Environment and Planning B, 1980, 7(3): 343-351) with feature-wire modeling (see R. Gal, O. Sorkine, N. J. Mitra and D. Cohen-Or, "iWIRES: An analyze-and-edit approach to shape manipulation", ACM Transactions on Graphics, 2009, 28(3): 110-116), realizing an efficient "analyze + model" procedural modeling and interaction method.
Building components such as walls, doors, windows, balconies and passages are defined as shape units. Each shape unit is represented by a quadruple S = <N, G, B, P>, where N is the name of the shape unit; G is its geometric information; B is an oriented bounding box that specifies the position and size of the unit and makes later operations such as intersection tests convenient; and P holds other attributes of the shape unit such as texture and material. A feature wire (F-wire) is represented as a set of 3D vertices connected in order to form a polyline; in particular, an F-wire that is a closed rectangle formed by four coplanar vertices is called a feature rectangle, or F-rect for short.
Since the method targets lightweight, fast 3D model reconstruction, we store the model data annotated with the F-Wires generation rules in JSON format (see Table 1). When the F-Wires method is applied to generate the 3D building model, the existing 2D building-component data is extended with the preset third dimension (the height), and the F-Wires generation rules listed in Table 2 are applied to produce a grammar-based F-Wires description.
Table 1. JSON representation of the F-Wires generation rules (reproduced only as an image in the original publication).
Table 2. F-Wires generation rules (reproduced only as an image in the original publication).
When the F-Wires generation rules are invoked to reconstruct the model, an equivalent tree is built (see Fig. 3): nodes represent shape units and the directed edges between nodes represent F-Wires generation rules. When the user edits a shape-unit node, the system traces back from the edited node to its parent node and reconstructs only that parent and its subordinate shape units, which makes the system's interactive response faster and modification of the model very convenient. In addition, the attributes of every shape unit, such as textures and materials, can be modified directly in the 3D environment, meeting the requirement of expressive, personalized interactive modeling.
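The publication reproduces Table 1 only as an image, so the exact JSON schema is not available here; the following Python sketch merely illustrates how a shape-unit quadruple S = <N, G, B, P> containing an F-rect could be serialized to JSON under assumed field names, and every key and value is hypothetical.

```python
import json

# Hypothetical serialization of one shape unit S = <N, G, B, P>; every key and
# value below is an assumption, not the actual schema of Table 1.
wall_unit = {
    "N": "wall",                                       # name of the shape unit
    "G": {                                             # geometric information
        "f_rect": [[0.0, 0.0, 0.0], [4.2, 0.0, 0.0],   # an F-rect: a closed loop
                   [4.2, 0.0, 2.8], [0.0, 0.0, 2.8]],  # of 4 coplanar vertices
        "width": 0.24                                  # wall width from step (3)
    },
    "B": {"origin": [0.0, -0.12, 0.0],                 # oriented bounding box
          "size": [4.2, 0.24, 2.8],
          "axis": [1.0, 0.0, 0.0]},
    "P": {"texture": "plaster_white", "material": "concrete"}
}

building = {"units": [wall_unit], "rules": ["extrude_wall", "insert_window"]}
print(json.dumps(building, indent=2))
```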

Claims (6)

1. A fully automatic three-dimensional conversion method for raster architectural floor plans, characterized in that it comprises the following steps:
1) binarizing and deskewing the raster floor plan to obtain a preprocessed image;
2) extracting the image regions that contain wall lines from the preprocessed image to obtain a number of sub-images;
3) vectorizing each sub-image to obtain a set of line segments carrying wall width, and extracting wall positions and wall widths from the line segment set;
4) obtaining the sub-images of the wall attachments from the line segment set of step 3), and determining the class of each wall-attachment sub-image with a linear discriminant analysis algorithm;
5) converting the data of steps 3) and 4) into three-dimensional structure data according to a preset height.
2. The fully automatic three-dimensional conversion method for raster architectural floor plans as claimed in claim 1, characterized in that, in step 1), the binarization uses the cvThreshold function of the OpenCV library to obtain the binarized image.
3. The fully automatic three-dimensional conversion method for raster architectural floor plans as claimed in claim 1, characterized in that the deskewing is: the cvHoughLines2 function of the OpenCV library performs Hough line-segment detection on the binarized image and extracts the long straight segments in the binarized image; the slopes of these segments are collected; and the cvGetQuadrangleSubPix function of the OpenCV library is then used to rotate the binarized image so that the long straight segments become parallel to the rectangular coordinate system of the raster floor plan.
4. The fully automatic three-dimensional conversion method for raster architectural floor plans as claimed in claim 1, characterized in that, in step 2), the average integral projection function is used to extract the image regions containing wall lines from the preprocessed image.
5. The fully automatic three-dimensional conversion method for raster architectural floor plans as claimed in claim 4, characterized in that, in step 3), the vectorization of each sub-image is:
Step one: find the initial axis point and the tracking direction
Step 1): scan the sub-image from top to bottom with a step smaller than the wall line width; on each scan line read the gray value of every pixel from left to right, and take the current pixel as a reference point if it satisfies the following three conditions simultaneously:
a) its gray value is 0, i.e. the current pixel is black;
b) the previous pixel has gray value 1, i.e. the previous pixel is white;
c) the current pixel has never been visited;
Step 2): within the connected region containing reference point x(i), iteratively search for the initial axis point and the tracking direction;
Step 3): repeat steps 1) and 2) until all connected regions of the sub-image have been processed, so that each connected region has its own initial axis point and tracking direction;
Step two: sparse pixel tracking
Within each connected region, start from the initial axis point and, moving along the tracking direction with a fixed tracking step, query successive pixels as new axis points, until one of the following two conditions is violated, at which point the junction-splitting step is executed:
a) the line-width difference between the current new axis point and the previous new axis point is within the allowed threshold;
b) the new axis point has not been visited before;
Step three: junction splitting
The junction-splitting step divides corners and cross intersections into joints of several straight segments; the iteration is as follows:
a) step back by one new axis point;
b) halve the tracking step;
c) continue sparse pixel tracking along the original tracking direction;
d) at each new axis point obtained with the reduced step, check whether it satisfies the two conditions of step two; if it does, continue tracking with the reduced step; if not, repeat sub-steps a) to d) of step three;
The iteration stops when the tracking step approaches zero; the last point found that satisfies all conditions is taken as the final new axis point;
Step four: line-segment merging
Every pair of adjacent new axis points defines a vector line segment; after all black pixels of a sub-image have been visited, the resulting vector segments are merged into long straight vector segments: the angle between any two long straight segments is computed, and if it is close to 0 the two segments are considered parallel; the distance between their nearest endpoints is then checked, and if it is below a preset value the segments are merged into the set of line segments carrying wall width, from which the wall positions and wall widths are extracted.
6. The fully automatic three-dimensional conversion method for raster architectural floor plans as claimed in claim 5, characterized in that the iterative search is:
Iteration i comprises:
Odd operation: draw the horizontal reference line m(i) through reference point x(i) between the left and right boundaries of the connected region, and take its midpoint as reference point y(i);
Even operation: draw the vertical reference line n(i) through reference point y(i) between the upper and lower boundaries of the connected region, and take its midpoint as reference point x(i+1);
Starting from the even operation of the first iteration, in iteration k:
Odd operation: compute the distance between x(k) and y(k); if it is below the threshold, take x(k) as the initial axis point and take the direction of the longer of the horizontal reference line m(k) and the vertical reference line n(k-1) as the tracking direction; otherwise continue with the next operation;
Even operation: compute the distance between x(k+1) and y(k); if it is below the threshold, take y(k) as the initial axis point and take the direction of the longer of the horizontal reference line m(k) and the vertical reference line n(k) as the tracking direction; otherwise continue with the next iteration.
CN201310329371XA 2013-07-31 2013-07-31 Full-automatic three-dimensional conversion method aiming at grating architectural plan Pending CN103390088A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310329371XA CN103390088A (en) 2013-07-31 2013-07-31 Full-automatic three-dimensional conversion method aiming at grating architectural plan

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310329371XA CN103390088A (en) 2013-07-31 2013-07-31 Full-automatic three-dimensional conversion method aiming at grating architectural plan

Publications (1)

Publication Number Publication Date
CN103390088A true CN103390088A (en) 2013-11-13

Family

ID=49534357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310329371XA Pending CN103390088A (en) 2013-07-31 2013-07-31 Full-automatic three-dimensional conversion method aiming at grating architectural plan

Country Status (1)

Country Link
CN (1) CN103390088A (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104252715A (en) * 2014-09-05 2014-12-31 北京大学 Single line image-based three-dimensional reconstruction method
CN104484899A (en) * 2014-12-26 2015-04-01 天津恒达文博科技有限公司 Map generation system based on plate continuity judgment algorithm
CN106156438A (en) * 2016-07-12 2016-11-23 杭州群核信息技术有限公司 Body of wall recognition methods and device
CN107274486A (en) * 2017-06-26 2017-10-20 广州天翌云信息科技有限公司 A kind of model 3D effect map generalization method
CN109345454A (en) * 2018-09-18 2019-02-15 徐庆 Method, storage medium and the system of bitmap images vector quantization
CN109506632A (en) * 2018-11-12 2019-03-22 浙江省国土勘测规划有限公司 A kind of real estate plane survey method
CN109919958A (en) * 2019-01-14 2019-06-21 桂林航天工业学院 A kind of multiple constraint line segments extraction method based on multi-scale image space
CN110188616A (en) * 2019-05-05 2019-08-30 盎锐(上海)信息科技有限公司 Space modeling method and device based on 2D and 3D image
CN110210377A (en) * 2019-05-30 2019-09-06 南京维狸家智能科技有限公司 A kind of wall and door and window information acquisition method rebuild for three-dimensional house type
CN112070702A (en) * 2020-09-14 2020-12-11 中南民族大学 Image super-resolution reconstruction system and method for multi-scale residual error feature discrimination enhancement
CN112729167A (en) * 2020-12-21 2021-04-30 福建汇川物联网技术科技股份有限公司 Calculation method and device of plane equation
CN112767317A (en) * 2020-12-31 2021-05-07 上海易维视科技有限公司 Naked eye 3D display grating film detection method
CN113807325A (en) * 2021-11-17 2021-12-17 南京三叶虫创新科技有限公司 Line type identification method and system based on image processing
CN113987622A (en) * 2021-09-26 2022-01-28 长沙泛一参数信息技术有限公司 Method for automatically acquiring building layer height parameters from shaft elevation map
CN114048535A (en) * 2021-11-17 2022-02-15 北京蜂鸟视图科技有限公司 System and method for generating door and window in CAD wall map layer during map construction
CN117058338A (en) * 2023-07-07 2023-11-14 北京畅图科技有限公司 CAD-based three-dimensional building model construction method, system, equipment and medium
CN117252975A (en) * 2023-09-22 2023-12-19 北京唯得科技有限公司 Three-dimensional conversion method and system for plane diagrams, storage medium and electronic equipment
CN117475084A (en) * 2023-11-27 2024-01-30 五矿瑞和(上海)建设有限公司 Method and system for generating curtain wall three-dimensional wire frame model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060085529A1 (en) * 2004-10-05 2006-04-20 Rudy Ziegler Method and system for streaming images to wireless devices
CN101034436A (en) * 2007-04-20 2007-09-12 永凯软件技术(上海)有限公司 Multiple linewidth self-adapting preliminary vectorization method in vectorization process of engineering drawing

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060085529A1 (en) * 2004-10-05 2006-04-20 Rudy Ziegler Method and system for streaming images to wireless devices
CN101034436A (en) * 2007-04-20 2007-09-12 永凯软件技术(上海)有限公司 Multiple linewidth self-adapting preliminary vectorization method in vectorization process of engineering drawing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李嫄姝: "Efficient Three-Dimensional Building Modeling Based on Sparse-Pixel Vectorization" (基于稀疏像素矢量化的高效三维建筑建模), China Master's Theses Full-text Database, no. 7, 15 July 2013 (2013-07-15) *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104252715B (en) * 2014-09-05 2017-05-03 北京大学 Single line image-based three-dimensional reconstruction method
CN104252715A (en) * 2014-09-05 2014-12-31 北京大学 Single line image-based three-dimensional reconstruction method
CN104484899A (en) * 2014-12-26 2015-04-01 天津恒达文博科技有限公司 Map generation system based on plate continuity judgment algorithm
CN106156438A (en) * 2016-07-12 2016-11-23 杭州群核信息技术有限公司 Body of wall recognition methods and device
CN107274486A (en) * 2017-06-26 2017-10-20 广州天翌云信息科技有限公司 A kind of model 3D effect map generalization method
CN109345454A (en) * 2018-09-18 2019-02-15 徐庆 Method, storage medium and the system of bitmap images vector quantization
CN109345454B (en) * 2018-09-18 2023-01-06 徐庆 Bitmap image vectorization method, storage medium and system
CN109506632B (en) * 2018-11-12 2021-03-26 浙江省国土勘测规划有限公司 Real estate plane measurement method
CN109506632A (en) * 2018-11-12 2019-03-22 浙江省国土勘测规划有限公司 A kind of real estate plane survey method
CN109919958B (en) * 2019-01-14 2023-03-28 桂林航天工业学院 Multi-constraint line segment extraction method based on multi-scale image space
CN109919958A (en) * 2019-01-14 2019-06-21 桂林航天工业学院 A kind of multiple constraint line segments extraction method based on multi-scale image space
CN110188616B (en) * 2019-05-05 2023-02-28 上海盎维信息技术有限公司 Space modeling method and device based on 2D and 3D images
CN110188616A (en) * 2019-05-05 2019-08-30 盎锐(上海)信息科技有限公司 Space modeling method and device based on 2D and 3D image
CN110210377A (en) * 2019-05-30 2019-09-06 南京维狸家智能科技有限公司 A kind of wall and door and window information acquisition method rebuild for three-dimensional house type
CN110210377B (en) * 2019-05-30 2023-07-28 南京维狸家智能科技有限公司 Wall body and door and window information acquisition method for three-dimensional house type reconstruction
CN112070702A (en) * 2020-09-14 2020-12-11 中南民族大学 Image super-resolution reconstruction system and method for multi-scale residual error feature discrimination enhancement
CN112070702B (en) * 2020-09-14 2023-10-03 中南民族大学 Image super-resolution reconstruction system and method for multi-scale residual error characteristic discrimination enhancement
CN112729167B (en) * 2020-12-21 2022-10-25 福建汇川物联网技术科技股份有限公司 Calculation method and device of plane equation
CN112729167A (en) * 2020-12-21 2021-04-30 福建汇川物联网技术科技股份有限公司 Calculation method and device of plane equation
CN112767317A (en) * 2020-12-31 2021-05-07 上海易维视科技有限公司 Naked eye 3D display grating film detection method
CN113987622A (en) * 2021-09-26 2022-01-28 长沙泛一参数信息技术有限公司 Method for automatically acquiring building layer height parameters from shaft elevation map
CN113807325B (en) * 2021-11-17 2022-02-22 南京三叶虫创新科技有限公司 Line type identification method and system based on image processing
CN114048535A (en) * 2021-11-17 2022-02-15 北京蜂鸟视图科技有限公司 System and method for generating door and window in CAD wall map layer during map construction
CN113807325A (en) * 2021-11-17 2021-12-17 南京三叶虫创新科技有限公司 Line type identification method and system based on image processing
CN117058338A (en) * 2023-07-07 2023-11-14 北京畅图科技有限公司 CAD-based three-dimensional building model construction method, system, equipment and medium
CN117252975A (en) * 2023-09-22 2023-12-19 北京唯得科技有限公司 Three-dimensional conversion method and system for plane diagrams, storage medium and electronic equipment
CN117475084A (en) * 2023-11-27 2024-01-30 五矿瑞和(上海)建设有限公司 Method and system for generating curtain wall three-dimensional wire frame model
CN117475084B (en) * 2023-11-27 2024-05-31 五矿瑞和(上海)建设有限公司 Method and system for generating curtain wall three-dimensional wire frame model

Similar Documents

Publication Publication Date Title
CN103390088A (en) Full-automatic three-dimensional conversion method aiming at grating architectural plan
Zhang et al. A review of deep learning-based semantic segmentation for point cloud
Lin et al. Semantic decomposition and reconstruction of residential scenes from LiDAR data
Chen et al. Topologically aware building rooftop reconstruction from airborne laser scanning point clouds
WO2021048681A1 (en) Reality-based three-dimensional infrastructure reconstruction
CN111008422A (en) Building live-action map making method and system
Pantoja-Rosero et al. Generating LOD3 building models from structure-from-motion and semantic segmentation
Hensel et al. Facade reconstruction for textured LoD2 CityGML models based on deep learning and mixed integer linear programming
Kong et al. Enhanced facade parsing for street-level images using convolutional neural networks
Ibrahim et al. Deep learning-based masonry wall image analysis
Lotte et al. 3D façade labeling over complex scenarios: A case study using convolutional neural network and structure-from-motion
Remondino et al. 3D documentation of 40 kilometers of historical porticoes–the challenge
Wu et al. Automatic structural mapping and semantic optimization from indoor point clouds
Yang et al. Automated semantics and topology representation of residential-building space using floor-plan raster maps
Parente et al. Integration of convolutional and adversarial networks into building design: A review
Forlani et al. Building reconstruction and visualization from lidar data
CN113051654A (en) Indoor stair three-dimensional geographic entity model construction method based on two-dimensional GIS data
CN116206068B (en) Three-dimensional driving scene generation and construction method and device based on real data set
Komadina et al. Automated 3D urban landscapes visualization using open data sources on the example of the city of Zagreb
Liu et al. Texture-cognition-based 3D building model generalization
Gruen et al. An Operable System for LoD3 Model Generation Using Multi-Source Data and User-Friendly Interactive Editing
Chen et al. Ground material classification for UAV-based photogrammetric 3D data A 2D-3D Hybrid Approach
Cui et al. A Review of Indoor Automation Modeling Based on Light Detection and Ranging Point Clouds.
Xiong Reconstructing and correcting 3d building models using roof topology graphs
Jiang et al. Automated site planning using CAIN-GAN model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20131113