WO2024045826A1 - Spatial area calculation method and system for architectural drawings; processing method, device and storage medium for architectural drawings - Google Patents


Info

Publication number: WO2024045826A1
Application number: PCT/CN2023/102872
Authority: WIPO (PCT)
Prior art keywords: area, spatial, wall, image, features
Other languages: English (en), French (fr)
Inventors: 崔淼, 陈成才
Original assignees: 上海智臻智能网络科技股份有限公司; 智臻人工智能科技(上海)有限公司
Application filed by 上海智臻智能网络科技股份有限公司 and 智臻人工智能科技(上海)有限公司
Publication of WO2024045826A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06N3/02 — Neural networks; G06N3/08 — Learning methods
    • G06T7/11 — Region-based segmentation
    • G06T7/13 — Edge detection
    • G06T7/187 — Segmentation involving region growing, region merging, or connected component labelling
    • G06V10/77 — Processing image or video features in feature spaces; data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V10/80 — Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level
    • G06T2207/20081 — Training; Learning
    • G06T2207/20084 — Artificial neural networks [ANN]

Definitions

  • Embodiments of the present application relate to the field of image processing technology, and in particular to a spatial area calculation method and system for architectural drawings, and a processing method, device and storage medium for architectural drawings.
  • A building plan is the basic drawing of a set of building construction drawings and is used to guide the smooth and correct construction of a building.
  • Identification of building plane spaces, and calculation of the area and space-type size of those spaces, are often needed for design institutes' version verification of construction, decoration, landscape and other drawings, as well as for apartment design.
  • The area requirements for space functions are usually governed by specific national standards. For example, the usable area of a bedroom must meet certain requirements, and the usable area of a kitchen in a residential suite consisting of a bedroom, living room, kitchen and bathroom should not be less than 3.5 m².
  • Calculating the area of each spatial function of a building plan is therefore an important step for plan review and construction, and the accuracy of these areas affects the speed of both.
  • However, most architectural drawings do not indicate the area of each spatial function, making it difficult to guide the smooth and correct construction of the building.
  • The problem addressed by the embodiments of this application is to provide a spatial area calculation method and system for architectural drawings, and a processing method, device and storage medium for architectural drawings, so as to improve the accuracy of spatial functional area calculation.
  • Embodiments of the present application provide a method for calculating the spatial area of architectural drawings, which includes: extracting a candidate frame area from an architectural drawing whose components include walls; performing spatial function segmentation on the candidate frame area of the architectural drawing to achieve edge detection of each spatial function and obtain an initial image, where the initial image includes a plurality of spatial function segmentation areas corresponding to the spatial functions one-to-one; performing image processing on the initial image so that the walls of the same spatial function segmentation area are connected in sequence along the outline of the wall and the area between the inner wall line and the outer wall line of each wall is filled, to obtain a target image that includes a first connected domain corresponding to the walls of the same spatial function segmentation area, the first connected domain forming a closed spatial functional area along the outline of the wall; and obtaining the area of the spatial functional area according to the area of a second connected domain inside the spatial functional area.
  • Embodiments of the present application also provide a method for processing architectural drawings, which uses the spatial area calculation method described in the embodiments of the present application to calculate the area of each spatial functional area in an architectural drawing.
  • Embodiments of the present application also provide a spatial area calculation system for architectural drawings, including: a frame extraction module for extracting candidate frame areas from architectural drawings whose components include walls; a space segmentation module for performing spatial function segmentation on the candidate frame area to obtain an initial image; an image processing module for processing the initial image so that the walls of the same spatial function segmentation area are connected in sequence and the area between the inner wall line and the outer wall line of each wall is filled, yielding a target image that includes a first connected domain corresponding to the walls of the same spatial function segmentation area, the first connected domain forming a closed spatial functional area along the outline of the wall; and a calculation module for obtaining the area of the spatial functional area based on the area of the second connected domain inside it.
  • Embodiments of the present application also provide a device including at least one memory and at least one processor. The memory stores one or more computer instructions which, when executed by the processor, implement the spatial area calculation method described in the embodiments of this application.
  • Embodiments of the present application also provide a storage medium that stores one or more computer instructions, and the one or more computer instructions are used to implement the space area calculation method described in the embodiments of the present application.
  • Compared with the prior art, the candidate frame area of the architectural drawing is first segmented by spatial function to obtain an initial image containing multiple spatial function segmentation areas. Image processing is then performed on the initial image: the walls of the same spatial function segmentation area are connected in sequence along the outline of the wall, and the area between the inner wall line and the outer wall line of each wall is filled, so that the walls of the same spatial function segmentation area lie in the same first connected domain. The area of the spatial functional area is then obtained from the area of the second connected domain inside it.
  • Performing spatial function segmentation first helps to accurately determine the location and outline of each spatial function segmentation area, which reduces the probability of missed detection.
  • Because the walls of the same spatial function segmentation area lie in the same first connected domain, a closed spatial functional area is formed along the outline of the wall; calculating the area of the second connected domain inside that area then yields a more accurate spatial functional area. The accuracy of the calculated spatial functional area can exceed 98%.
  • Figure 1 is a schematic flow chart of an embodiment of the spatial area calculation method of the architectural drawings of the present application.
  • Figure 2 is a schematic diagram of an embodiment of the initial image in step S2.
  • FIG. 3 is a schematic flowchart of an embodiment of each step in step S2.
  • Figure 4 is a schematic structural diagram of an embodiment of the semantic segmentation model in step S2.
  • Figure 5 is a partial enlargement of the initial image.
  • Figure 6 is a schematic diagram of an embodiment of the target image in step S3.
  • Figure 7 is a schematic structural diagram of an embodiment of the spatial area calculation system for architectural drawings of the present application.
  • Figure 8 is a schematic structural diagram of an embodiment of the space segmentation module in Figure 7.
  • Figure 9 is a schematic structural diagram of equipment provided by an embodiment of the present application.
  • Most architectural drawings do not indicate the area of each spatial function (for example, balcony, bathroom, bedroom, living room, kitchen), which makes it difficult for the drawings to guide the smooth and correct construction of the building.
  • Designing a solution that uses artificial intelligence (AI) to calculate the area of each spatial function is therefore a pressing need in the construction industry.
  • Automatic calculation of spatial function areas in architectural drawings can be applied at different stages, such as design and review. In the design stage, automatically calculating and quickly marking the area of each spatial function can greatly improve the design efficiency of architectural drawings.
  • In the review stage, the spatial functional area can be automatically calculated and marked for non-standard drawings without marked areas, or recalculated for drawings with marked areas, thereby improving review efficiency.
  • To this end, the embodiments of the present application provide a method for calculating the spatial area of architectural drawings. Spatial function segmentation is performed first, which helps to accurately determine the position and outline of each spatial function segmentation area and reduces the probability of missed detection.
  • A spatial functional area is mainly the area enclosed by walls. Image processing is therefore performed on the initial image so that the walls of the same spatial function segmentation area are connected in sequence along the outline of the wall, and the area between the inner wall line and the outer wall line of each wall is filled, placing the walls of the same spatial function segmentation area in the same first connected domain.
  • The first connected domain forms a closed spatial functional area along the outline of the wall; calculating the area of the second connected domain inside that area then yields a more accurate spatial functional area.
  • A spatial functional area is a closed area enclosed by walls, doors, windows and other components. Different spatial functional areas correspond to different building functions, such as bedrooms, living rooms, kitchens and bathrooms.
  • Referring to FIG. 1, a schematic flow chart of an embodiment of the method for calculating the spatial area of architectural drawings of the present application is shown.
  • the calculation method of space area in architectural drawings includes the following steps:
  • Step S1: extract a candidate frame area from the architectural drawing, whose components include walls.
  • Step S2: perform spatial function segmentation on the candidate frame area of the architectural drawing to achieve edge detection of each spatial function and obtain an initial image; the initial image includes multiple spatial function segmentation areas corresponding to the spatial functions one-to-one.
  • Step S3: perform image processing on the initial image, connecting the walls of the same spatial function segmentation area in sequence along the outline of the wall and filling the area between the inner wall line and the outer wall line of each wall, to obtain a target image; the target image includes a first connected domain corresponding to the walls of the same spatial function segmentation area, and the first connected domain forms a closed spatial functional area along the outline of the wall.
  • Step S4: obtain the area of the spatial functional area based on the area of the second connected domain inside the spatial functional area.
  • step S1 is performed to extract a candidate frame area (not shown) from the architectural drawing (not shown) to be tested, and the components of the architectural drawing include walls.
  • the architectural drawings to be tested are building plans that require spatial functional area calculation.
  • the architectural drawings are CAD architectural drawings, which are used to reflect the plane shape, functional requirements, plane layout and plane composition relationships of the building.
  • the architectural drawings are architectural drawings of a residence.
  • Architectural drawings are made up of components, i.e. the various elements that constitute a building, such as walls, windows, doors, floors and beams.
  • the components of the architectural drawing include walls.
  • the components of the architectural drawings also include doors and windows embedded in the wall.
  • the architectural drawing has a drawing frame.
  • an architectural drawing is an architectural engineering drawing that includes multiple frames.
  • a frame refers to a wire frame that limits the drawing area in an architectural engineering drawing.
  • One frame usually contains one drawing.
  • a drawing frame represents the drawing of a residence on the same floor in a unit building.
  • The candidate frame area is the area of the architectural drawing where one of the frames to be extracted (that is, a frame to be reviewed) is located. It should be noted that architectural drawings usually include multiple frames. Therefore, before reviewing an architectural drawing, candidate frame areas are first extracted so that the review can be performed based on artificial intelligence: subsequent spatial function segmentation is applied specifically to the candidate frame area, and the spatial functional areas in the frame that needs review are then calculated in a targeted manner.
  • the step of extracting the candidate frame area from the architectural drawing to be tested includes: obtaining attribute information of the candidate frame to be extracted.
  • the attribute information includes one or more of text attribute information and layer attribute information.
  • This embodiment determines the area of the candidate frame to be extracted in the architectural drawing based on the attribute information of the candidate frame and the mapping relationship between attribute information and position. This helps avoid the missed detections that can occur when the image resolution is too high, and it accurately locates the position of the candidate frame area, thereby improving the effect of frame recognition. Specifically, compared with using deep learning for frame recognition, this embodiment can improve the frame recognition effect by 10%.
  • determining the area of the candidate frame to be extracted in the architectural drawing based on the attribute information of the candidate frame and the mapping relationship between the attribute information and the position is also conducive to improving the speed of extracting the candidate frame area.
  • the position of the candidate frame in the architectural drawing is the coordinate of the candidate frame in the architectural drawing.
  • the attribute information includes one or more of text attribute information and layer attribute information.
  • A frame usually has layer attribute information, such as a layer name, and text attribute information, such as building-unit keywords; this attribute information can therefore be used to identify the frame.
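  • As a rough illustration of this attribute-based approach, the sketch below filters drawing entities by layer name or text keywords to locate candidate frame bounding boxes. The entity records, layer names (`TK`, `TITLE_BLOCK`) and keywords are hypothetical placeholders, not values taken from the patent:

```python
# Hypothetical sketch: locating candidate frames by attribute information
# (layer name or text keywords) rather than by pixel-level detection.

FRAME_LAYERS = {"TK", "TITLE_BLOCK"}        # assumed layer names for frames
FRAME_KEYWORDS = ("单元", "unit", "floor")   # assumed building-unit keywords

def find_candidate_frames(entities):
    """Return bounding boxes of entities whose layer or text marks a frame."""
    frames = []
    for e in entities:
        layer_hit = e.get("layer") in FRAME_LAYERS
        text_hit = any(k in e.get("text", "").lower() for k in FRAME_KEYWORDS)
        if layer_hit or text_hit:
            frames.append(e["bbox"])  # (x_min, y_min, x_max, y_max)
    return frames

entities = [
    {"layer": "WALL", "text": "", "bbox": (0, 0, 50, 50)},
    {"layer": "TK", "text": "", "bbox": (0, 0, 420, 297)},
    {"layer": "TEXT", "text": "Unit 3 floor plan", "bbox": (10, 280, 120, 290)},
]
print(find_candidate_frames(entities))  # [(0, 0, 420, 297), (10, 280, 120, 290)]
```

  • In practice, the mapping from attribute information to position would come from the CAD file's entity table; the dictionaries above only stand in for that structure.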
  • Step S2 is executed to perform spatial function segmentation on the candidate frame area of the architectural drawing to achieve edge detection of each spatial function to obtain the initial image 100 , the initial image 100 includes a plurality of spatial function segmented areas 100a that correspond to the spatial functions one-to-one.
  • Through step S2, edge detection of each spatial function (such as balcony, bathroom, bedroom, living room, kitchen) can be achieved, so that different types of spatial functions are preliminarily segmented to obtain multiple spatial function segmentation areas 100a, for example a room area or a living-room area.
  • the edge contours of each component can be obtained.
  • The initial image 100 has the edge outline of the wall 110, the edge outline of the door 120, and the edge outline of the window 130; the door 120 may include one or both of a swing door and a sliding door.
  • Performing spatial function segmentation first is helpful to preliminarily determine the position and contour of each spatial function segmentation area 100a more accurately, thereby reducing the probability of missed detection and providing a better quality initial image 100 for subsequent image processing.
  • Spatial function segmentation is performed on the candidate frame area of the architectural drawing, so that the initial image 100 is obtained from the original image (that is, the architectural drawing to be tested). This improves the resolution of the initial image 100, which benefits subsequent image processing.
  • spatial function segmentation is based on deep learning methods, which is also beneficial to improving drawing review efficiency and construction speed.
  • a semantic segmentation model is used to perform spatial functional segmentation on candidate frame areas of architectural drawings.
  • FIG. 3 is a schematic flowchart of an embodiment of each step in step S2.
  • the step of performing spatial functional segmentation on the candidate frame area of the architectural drawing includes: performing step S21 to intercept the image of the candidate frame area from the architectural drawing as an image to be processed (not shown).
  • The architectural drawing is cropped to obtain the image to be processed corresponding to the candidate frame area, so that only that image is subsequently processed, reducing the amount of computation; moreover, cropping puts the image to be processed in a format on which the algorithm can perform its subsequent series of operations.
  • the resolution of the image to be processed is increased, which is beneficial to improving the quality of subsequent image features.
  • The multi-channel basic image features are low-dimensional image features. Low-dimensional image features contain fewer irrelevant and redundant features, which improves the accuracy of segmenting each spatial function and reduces the loss of detail (for example, thinner lines). This prepares for the subsequent extraction of higher-dimensional image features and thus for accurate segmentation of each spatial function.
  • by extracting basic image features from different channels it is helpful to make the extracted basic image features better characterize regions with different spatial functions.
  • The steps of extracting multi-channel basic image features from the image to be processed include: inputting the image to be processed into a backbone network and taking the outputs after multiple network blocks of the backbone network as basic image features with different second channel numbers. The backbone network includes multiple network blocks connected in series, each of which outputs a specific number of second-channel basic image features.
  • the backbone network By using the backbone network, the basic features of multiple channels in the image to be processed can be extracted.
  • the backbone network includes a residual network with deformable convolution.
  • the residual network By using the residual network, it is helpful to reduce the probability of overfitting in the feature extraction process.
  • the residual network has deformable convolution, which is conducive to increasing the receptive field of the network, thereby reducing the probability of missing some features.
  • The receptive field refers to the size of the region on the input image that is mapped to a pixel on the feature map output by each layer of a convolutional neural network.
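  • The growth of the receptive field through stacked convolutions can be computed with the standard recurrence (a general formula, not specific to this patent): each layer adds (k − 1) × jump to the receptive field, where jump is the product of the strides of all preceding layers.

```python
def receptive_field(layers):
    """Receptive field (in input pixels) of one output pixel of the last layer.

    layers: sequence of (kernel_size, stride) pairs, input-to-output order.
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump  # each layer widens the field by (k-1) * jump
        jump *= s             # stride compounds the step between output pixels
    return rf

# Three stacked 3x3 convolutions with stride 1: field grows 3 -> 5 -> 7.
print(receptive_field([(3, 1), (3, 1), (3, 1)]))  # 7
```

  • This is why both strided blocks and dilated kernels enlarge the receptive field, which the patent cites as the benefit of deformable/atrous convolution.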
  • the backbone network includes multiple network blocks in series, and therefore has deformable convolutions in each network block.
  • FIG 4 is a schematic structural diagram of an embodiment of the semantic segmentation model in step S2.
  • the backbone network is the ShuffleNetV2 network.
  • the backbone network includes multiple network blocks connected in series.
  • The multiple network blocks connected in series include a first network block (Shuffle_block1), second network block (Shuffle_block2), third network block (Shuffle_block3), fourth network block (Shuffle_block4), fifth network block (Shuffle_block5) and sixth network block (Shuffle_block6).
  • As the network deepens, the features extracted by each successive network block become richer, but redundant features also accumulate, which is prone to over-fitting (for example, a sofa may be mistakenly segmented as an independent spatial function). Conversely, if the output of a network block too close to the input end of the backbone network is used as the basic image features with different second channel numbers, some features are easily lost, reducing the effect of segmenting different types of spatial functions (for example, a bedroom area may fail to be split out).
  • For this reason, the output of the fifth network block (Shuffle_block5) of the backbone network is taken as the basic image features with different second channel numbers, thereby obtaining the multi-channel basic image features.
  • Step S23 is executed to perform atrous convolution operations with different numbers of first channels on the basic features of the multi-channel image to obtain multi-scale spatial region features.
  • the receptive fields of the spatial region features are larger than the receptive fields of the basic features of the multi-channel image.
  • first channel numbers are 8, 64, 128 and 256 respectively.
  • each atrous convolution operation with the first channel number uses multiple convolutions with different dilation rates.
  • Different atrous (dilation) rates yield different receptive fields, improving the extraction of multi-scale spatial region features. The atrous rate should not be too small, otherwise the probability of missed detection increases; for this reason, the atrous rate is any even number between 8 and 16.
  • In this embodiment, the atrous convolution operations for each first channel number use atrous rates of 8, 12 and 16 respectively.
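  • For intuition, a dilated k × k kernel with dilation d spans an effective extent of k + (k − 1)(d − 1) input pixels, so the rates 8, 12 and 16 above give progressively larger receptive fields. This is a standard property of atrous convolution, not code from the patent:

```python
def effective_kernel(k, dilation):
    """Effective spatial extent of a dilated (atrous) k x k convolution kernel."""
    return k + (k - 1) * (dilation - 1)

# Effective extents of a 3x3 kernel at the atrous rates used in this embodiment.
for d in (8, 12, 16):
    print(d, effective_kernel(3, d))  # 8 -> 17, 12 -> 25, 16 -> 33
```

  • A larger effective extent means each output pixel summarizes a wider stretch of wall or room boundary without adding parameters, which is the trade-off the atrous-rate choice is balancing.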
  • In some embodiments, the convolution kernel size of any one or two of the first-channel-number atrous convolution operations is 1*1.
  • Using a 1*1 kernel for only one or two of the first-channel-number atrous convolution operations helps remove redundant features, improving the accuracy of segmenting each spatial function, while keeping the number of 1*1 kernels small enough to avoid excessive dimensionality reduction.
  • the convolution kernel size of the atrous convolution operation with a first channel number of 8 is 1*1
  • the convolution kernel size of the remaining atrous convolution operations is 3*3.
  • The steps of spatial function segmentation of the candidate frame area also include step S24: upsampling the spatial region features at each of the multiple scales to enrich the high-dimensional features within them.
  • The spatial region features produced by the atrous convolution operation of each first channel number are upsampled separately. During upsampling, they can be set to one-to-one corresponding, matching dimensional proportions so that dimensions can be unified during subsequent feature fusion.
  • Upsampling the spatial region features of each first channel number separately also helps keep the network model small.
  • Step S25 is executed to perform feature fusion (concat) on the multi-channel image basic features and the upsampled spatial region features to obtain fused image features.
  • Image features of different scales contain different detailed information. Through feature fusion, feature information of different scales are fused together to obtain multi-channel fused image features, which is beneficial to improving the edge detection effect of each spatial function. This improves the accuracy of dividing each spatial function.
  • The steps for feature fusion of the multi-channel basic image features and the spatial region features include: inputting the multi-channel basic image features and the spatial region features into a fusion network to obtain initial fusion features; and sequentially performing upsampling and dimensionality reduction on the initial fusion features to obtain the fused image features.
  • the abstract feature information of the boundary of the spatial functional segmentation region is further extracted based on the initial fusion features.
  • the upsampling process includes 2x upsampling or 4x upsampling.
  • 2x or 4x upsampling keeps the convolution kernel used in the upsampling process small, which helps extract the boundaries of the spatial function segmentation areas; an even upsampling factor also makes sampling faster and the collected information more comprehensive; and because the factor is not too large, the memory occupied during data processing is reduced.
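  • A minimal sketch of even-factor nearest-neighbour upsampling follows; this is one of several possible interpolation schemes, chosen here only for illustration, since the patent does not specify the interpolation method:

```python
def upsample_nearest(img, factor):
    """Nearest-neighbour upsampling of a 2-D grid by an integer factor (2 or 4 here)."""
    out = []
    for row in img:
        # Repeat each value `factor` times horizontally...
        wide = [v for v in row for _ in range(factor)]
        # ...then repeat the widened row `factor` times vertically.
        out.extend([wide[:] for _ in range(factor)])
    return out

feat = [[1, 2],
        [3, 4]]
print(upsample_nearest(feat, 2))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

  • In a real model the same doubling/quadrupling would be applied per channel, typically with learned or bilinear interpolation rather than pure repetition.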
  • The step of performing dimensionality reduction on the upsampled initial fusion features includes: applying a convolution with a third channel number and a kernel size of 1*1 to the upsampled initial fusion features.
  • the number of third channels is greater than or equal to 128.
  • The third channel number is not too small, which reduces the probability of losing first-channel spatial region features while still removing redundant information.
  • the third channel number is 128 or 256.
  • Step S26 is executed to obtain multiple spatial functional segmentation areas 100a based on the fused image features.
  • frame space semantic segmentation is performed on the fused image features to obtain the semantic segmentation results of the image to be processed, thereby obtaining multiple spatial functional segmentation regions.
  • Figure 5 is a partial enlargement of the initial image.
  • Figure 6 is a schematic diagram of an embodiment of the target image in step S3.
  • Step S3 is executed to perform image processing on the initial image 100.
  • the target image 300 includes the first connected domain 310 corresponding to the wall 110 of the same spatial functional segmentation area 100a.
  • the first connected domain 310 A closed spatial functional area 320 is formed along the contour of the wall 110 .
  • Segmenting space functions is helpful to accurately determine the position and outline of each space function segmentation area 100a, thereby reducing the probability of missed detection.
  • Because the spatial functional area is mainly the area enclosed by the wall 110, connecting along the outline of the wall 110 places the walls 110 of the same spatial function segmentation area 100a in the same first connected domain 310. That is, for any side of the spatial function segmentation area 100a, the boundary of the first connected domain 310 in which the wall 110 lies is a straight line, and the first connected domain 310 forms a closed spatial functional area 320 along the outline of the wall 110. After calculating the area of the second connected domain (not labeled) inside the spatial functional area 320, the spatial functional area can therefore be calculated accurately.
  • the steps of image processing on the initial image 100 include: performing grayscale processing on the initial image to obtain a binary image.
  • The pixels corresponding to the inner wall line 110b and the outer wall line 110a of the wall 110 have the same attribute value.
  • Grayscale processing is performed to obtain a binary image with only two attribute values.
  • In this way, the pixels corresponding to the edge contours of each spatial function can be distinguished from the pixels in the remaining area.
  • Moreover, because the pixels corresponding to the inner wall line 110b and the outer wall line 110a of the wall 110 have the same attribute value, subsequent connected domain processing is also facilitated, so that the pixels corresponding to the walls 110 of the same spatial function segmentation area 100a end up in the same first connected domain 310.
  • The pixels corresponding to the inner wall line 110b and the outer wall line 110a of the wall 110 have the same attribute value; therefore, the colors of those pixels are the same.
  • The attribute values of the pixels in the binary image are 255 and 0 respectively.
  • A pixel with the attribute value 255 is a white pixel, and a pixel with the attribute value 0 is a black pixel.
  • Alternatively, the attribute values of pixels in the binary image can be represented by "0" and "1" respectively.
  • In that case, pixels with the attribute value "0" are black pixels, and pixels with the attribute value "1" are white pixels.
  • The color of the outline of each component is only for convenience of illustration. During the actual space area calculation, the color of the outline of each component can be set to white, and the color of the remaining parts can be set to black.
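The grayscale-to-binary step described above can be sketched in pure Python as follows. The threshold value 128 and the sample pixel values are hypothetical; an implementation would typically delegate this to a library routine such as OpenCV's thresholding.

```python
def binarize(gray, threshold=128):
    # Pixels at or above the threshold become white (255); the rest become
    # black (0), yielding a binary image with exactly two attribute values.
    return [[255 if px >= threshold else 0 for px in row] for row in gray]

gray = [
    [200,  30, 210],
    [ 10,  15,  20],
    [180,  25, 190],
]
binary = binarize(gray)
# binary[0] == [255, 0, 255]
```

After this step, component outlines (bright lines in the drawing) share one attribute value and everything else shares the other, which is what the later connected domain processing relies on.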
  • The image processing on the initial image 100 also includes: performing connected domain processing on the walls 110 of each spatial function segmentation area 100a in the binary image to obtain the target image 300.
  • The connected domain processing is used to place the pixels corresponding to the walls 110 of the same spatial function segmentation area 100a in the same first connected domain 310.
  • A connected component (Connected Component) refers to an image region formed by pixels that have the same attribute value (for example, gray value) and are adjacent in position.
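The definition above can be made concrete with a minimal pure-Python labeling routine (4-connectivity, breadth-first flood fill). This is an illustrative sketch; a real pipeline would more likely use a library function such as OpenCV's connectedComponents.

```python
from collections import deque

def label_components(binary):
    # Assign a distinct label to each 4-connected region of foreground
    # (non-zero) pixels; background pixels keep label 0.
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not labels[y][x]:
                current += 1
                labels[y][x] = current
                queue = deque([(y, x)])
                while queue:
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

grid = [
    [255, 255,   0],
    [  0,   0,   0],
    [  0, 255, 255],
]
labels, count = label_components(grid)
# count == 2: the two wall fragments are separate connected domains
```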
  • Filling along the outline of the wall 110 connects the inner wall lines 110b in sequence and the outer wall lines 110a in sequence, and fills the area between the inner wall lines 110b and the outer wall lines 110a. This prevents the area between the inner wall line 110b and the outer wall line 110a from being counted as part of the spatial functional area, and also prevents the area at a break between inner wall lines 110b or between outer wall lines 110a from being counted as part of the spatial functional area, thereby improving the calculation accuracy of the spatial functional area.
  • There are only two attribute values for pixels in the binary image, which makes the connected domain processing easy to implement.
  • The step of performing connected domain processing on the walls 110 of each spatial function segmentation area 100a in the binary image includes: identifying the walls 110 of the architectural drawing in the binary image to obtain the positions of the walls 110; and, after the walls 110 are identified, performing first connected domain processing on each spatial function segmentation area 100a. The first connected domain processing includes: performing a first attribute value conversion on the pixels between the inner wall line 110b and the outer wall line 110a of the wall 110, so that the attribute values of the converted pixels are the same as those of the inner wall line 110b and the outer wall line 110a.
  • the position of the wall 110 can be obtained by obtaining the color of the load-bearing wall.
  • The step of performing the first attribute value conversion on the pixels between the inner wall line 110b and the outer wall line 110a of the wall 110 includes: using a convolution kernel whose size equals the length L and width W of the wall 110 to be converted, and expanding (dilating) the area where the wall 110 is located.
  • Using a convolution kernel matched to the length L and width W of the wall 110 to be converted helps ensure that the pixels between the inner wall line 110b and the outer wall line 110a of that part of the wall 110 all undergo the first attribute value conversion, thereby improving the filling effect of the area between the inner wall line 110b and the outer wall line 110a and increasing the speed of the conversion.
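The expansion step is a morphological dilation with a rectangular structuring element. A pure-Python sketch follows; the 3x1 kernel size is a hypothetical stand-in for the wall's actual length L and width W (OpenCV's dilate with getStructuringElement would be the usual implementation).

```python
def dilate(binary, kh, kw):
    # Morphological dilation with a kh x kw rectangular structuring element:
    # an output pixel is foreground if any input pixel in its neighbourhood is.
    h, w = len(binary), len(binary[0])
    ry, rx = kh // 2, kw // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy in range(-ry, ry + 1):
                for dx in range(-rx, rx + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and binary[ny][nx]:
                        out[y][x] = 255
    return out

# Two parallel one-pixel wall lines with a one-pixel gap between them:
walls = [
    [255, 255, 255],
    [  0,   0,   0],
    [255, 255, 255],
]
filled = dilate(walls, 3, 1)
# filled[1] == [255, 255, 255]: the gap between the wall lines is filled
```

With the kernel height matched to the wall thickness, the gap between the inner and outer wall lines becomes foreground, merging both lines into one connected domain.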
  • The functional elements of each space mainly include a wall 110, a door 120 and a window 130.
  • The inner wall line 110b of the wall 110 is disconnected at the positions of the door 120 and the window 130, and the outer wall line 110a of the wall 110 is likewise disconnected at those positions. Therefore, in order to accurately calculate the spatial functional area, the wall 110 needs to be completed at the positions of the door 120 and the window 130, so that along the outline of the wall 110 the walls 110 of the same spatial function segmentation area 100a are connected in sequence.
  • The method for calculating the spatial area of the architectural drawing further includes: after identifying the walls of the architectural drawing in the binary image, identifying the doors and windows embedded in the walls in the binary image to obtain the positions of the door 120 and the window 130; and, after identifying the door 120 and the window 130, performing second connected domain processing and third connected domain processing on the spatial function segmentation area 100a respectively.
  • a target detection algorithm is used to identify the door 120 and the window 130 in the binary image.
  • the target detection algorithm can adopt the YOLOv5 network.
  • The second connected domain processing includes: determining the extension direction of the corresponding door line 125 as the first direction according to the opening direction of the door 120; determining the first endpoints 116, in the first direction, of the inner wall line 110b and the outer wall line 110a corresponding to the wall 110 adjacent to the door 120; obtaining, from the positions of the first endpoints 116, the first rectangular area 117 enclosed by the first endpoints 116, where the first rectangular area 117 connects the walls 110 on both sides of the door 120 along the first direction; and performing a second attribute value conversion on the pixels in the first rectangular area 117, so that the attribute values of the converted pixels are the same as those of the inner wall line 110b and the outer wall line 110a.
  • The door line 125 represents the reference line of the door 120 in the closed state. Therefore, for a wall 110 embedded with a door 120, the door 120, the inner wall line 110b and the outer wall line 110a all extend in the same direction, and determining the extension direction of the corresponding door line 125 makes it easy to determine the filling direction of the wall 110 where the door 120 is located.
  • The inner wall line 110b and the outer wall line 110a of the same wall 110 are two parallel lines. Therefore, the first endpoints 116, in the first direction, of the inner wall line 110b and the outer wall line 110a corresponding to the wall 110 adjacent to the door 120 are determined, and the first rectangular area 117 is determined from the first endpoints 116.
  • The first endpoints 116 are the vertices of the first rectangular area 117, and the attribute values of the converted pixels are the same as those of the inner wall line 110b and the outer wall line 110a. Therefore, by performing the second attribute value conversion on the pixels in the first rectangular area 117, the wall 110 can be completed at the position of the door 120, which is equivalent to extending the wall 110 along the first direction. Moreover, after the second attribute value conversion, the area between the inner wall line 110b and the outer wall line 110a at the position of the door 120 is filled.
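The second attribute value conversion amounts to filling an axis-aligned rectangle spanned by the endpoint coordinates. A minimal sketch, with illustrative coordinates (the endpoint positions would in practice come from the detected wall lines):

```python
def fill_rectangle(binary, y0, x0, y1, x1, value=255):
    # Convert every pixel inside the rectangle spanned by two corner
    # endpoints to the wall's attribute value (second attribute conversion).
    for y in range(min(y0, y1), max(y0, y1) + 1):
        for x in range(min(x0, x1), max(x0, x1) + 1):
            binary[y][x] = value
    return binary

# A two-pixel-thick horizontal wall with a door gap in columns 2-3:
img = [[255, 255, 0, 0, 255, 255],
       [255, 255, 0, 0, 255, 255]]
fill_rectangle(img, 0, 2, 1, 3)
# img[0] == [255, 255, 255, 255, 255, 255]: the door gap is closed
```

The same routine serves the third connected domain processing at window positions, with the rectangle oriented along the second direction.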
  • The third connected domain processing includes: determining the extension direction of the boundary line 131 of the window 130 as the second direction; determining the inner wall line 110b and the outer wall line 110a corresponding to the wall 110 adjacent to the window 130; and performing a third attribute value conversion on the pixels in the second rectangular area 133, so that the attribute values of the converted pixels are the same as those of the inner wall line 110b and the outer wall line 110a.
  • In this way, the wall 110 is completed at the position of the window 130, which is equivalent to extending the wall 110 along the second direction. Moreover, after the third attribute value conversion, the area between the inner wall line 110b and the outer wall line 110a at the position of the window 130 is filled.
  • The inner wall line 110b and the outer wall line 110a of the wall 110 are identified through the logical relationship between spatial functions.
  • The remaining spatial function segmentation areas 100a are external areas. Therefore, the step of identifying the inner wall line 110b and the outer wall line 110a of the wall 110 corresponding to any spatial function segmentation area 100a includes: determining, based on the external area adjacent to the spatial function segmentation area 100a currently to be identified, the outer wall line 110a of the wall 110 of that area that is exposed to the adjacent external area; the remaining boundary lines of the wall 110 are correspondingly the inner wall lines 110b.
  • For any spatial function segmentation area 100a, the boundary line of its wall 110 facing the external area is the outer wall line 110a. That is to say, the outer wall line 110a corresponds to one of the external areas, so once the location of the external area is determined, the outer wall line 110a of the wall 110 exposed to that external area can be determined.
  • the outer window boundary line (not labeled) and the inner window boundary line (not labeled) of the window 130 can also be identified.
  • the spatial functional division area 100a includes an outdoor area, which is the outdoor area of the unit building.
  • The step of identifying the outer window boundary line and the inner window boundary line of the window 130 includes: determining the outer wall line 110a exposed to the outdoor area; and determining the outer window boundary line of the window 130 based on that outer wall line 110a.
  • The outer window boundary line is connected to the outer wall line 110a, and the remaining boundary line of the window 130 is correspondingly the inner window boundary line.
  • One side of the outer wall line 110a corresponds to the outdoor area, and the window 130 is embedded in the wall 110; therefore, the outer window boundary line is connected to the outer wall line 110a.
  • After the outer wall line 110a and the outer window boundary line are identified, the inner wall line 110b and the inner window boundary line can be determined by elimination.
  • step S4 is performed to obtain the area of the spatial functional area based on the area of the second connected domain inside the spatial functional area 320 .
  • The walls 110 of the same spatial function segmentation area 100a are all located in the same first connected domain 310, and the first connected domain 310 forms a closed spatial functional area 320 along the outline of the walls 110.
  • The interior of the spatial functional area 320 is a second connected domain that can be distinguished from the first connected domain 310.
  • connected domain detection is performed on the target image 300, and after extracting the second connected domain within the spatial functional area 320, the area of the second connected domain is calculated to obtain the area of the spatial functional area 320.
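Once the second connected domain is labeled, its area is its pixel count scaled by the real-world area that one pixel represents. A sketch, where the label grid and the per-pixel scale of 0.05 units are illustrative:

```python
def region_area(labels, target, scale=1.0):
    # Area of one connected domain: pixel count times the real-world area
    # represented by a single pixel (scale ** 2).
    pixels = sum(row.count(target) for row in labels)
    return pixels * scale * scale

# Label 1 is the wall's first connected domain; label 2 is the enclosed
# second connected domain (the interior of the spatial functional area):
labels = [
    [1, 1, 1, 1],
    [1, 2, 2, 1],
    [1, 2, 2, 1],
    [1, 1, 1, 1],
]
area = region_area(labels, 2, scale=0.05)
# area == 0.01: four interior pixels, each covering 0.05 * 0.05 units
```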
  • An embodiment of the present application also provides a method for processing architectural drawings, which uses the above spatial area calculation method of architectural drawings to calculate the area of the spatial functional area in the architectural drawings.
  • the processing of architectural drawings includes at least one of the design of architectural drawings, the review of architectural drawings, and the inspection of architectural drawings.
  • Figure 7 is a schematic structural diagram of an embodiment of the spatial area calculation system for architectural drawings of the present application.
  • The space area calculation system includes: a frame extraction module 10, used to extract candidate frame areas in architectural drawings, where the components of the architectural drawings include walls; a space segmentation module 20, used to perform spatial function segmentation on the candidate frame area of the architectural drawing to achieve edge detection of each spatial function and obtain an initial image 100, where the initial image 100 includes a plurality of spatial function segmentation areas 100a corresponding one-to-one to the spatial functions; an image processing module 30, used to perform image processing on the initial image 100 so that the walls 110 of the same spatial function segmentation area 100a are connected in sequence and the area between the inner wall line 110b and the outer wall line 110a of the wall 110 is filled, thereby obtaining the target image 300, where the target image 300 includes the first connected domain 310 corresponding to the walls 110 of the same spatial function segmentation area 100a and the first connected domain 310 forms a closed spatial functional area 320 along the outline of the walls 110; and a calculation module 40, used to obtain the area of the spatial functional area 320 based on the area of the second connected domain inside the spatial functional area 320.
  • The space area calculation system provided by the embodiment of the present application first performs spatial function segmentation, which helps to accurately determine the position and outline of each spatial function segmentation area 100a, thereby reducing the probability of missed detection.
  • The spatial functional area is mainly the area enclosed by the walls 110. Therefore, placing the walls 110 of the same spatial function segmentation area 100a in the same first connected domain 310, which encloses a closed spatial functional area 320 along the outline of the walls 110, and then calculating the area of the second connected domain inside the spatial functional area 320, makes it possible to calculate the spatial functional area more accurately.
  • the architectural drawings are CAD architectural drawings.
  • the architectural drawings are architectural drawings of a residence.
  • Architectural drawings have components. Components are the various elements that constitute a building, such as walls, windows, doors, floors, beams, etc.
  • the components of the architectural drawing include a wall 110 and a door 120 and a window 130 embedded in the wall 110 .
  • The architectural drawing has frames, and the candidate frame area is the area where one of the frames to be intercepted from the architectural drawing is located, that is, the frame to be reviewed.
  • Extracting the frame to be reviewed facilitates subsequent targeted spatial function segmentation of the candidate frame area, and then targeted calculation of the spatial functional areas in that frame.
  • The frame extraction module 10 includes: an attribute information acquisition unit, used to obtain the attribute information of the candidate frame to be extracted, including one or more of text attribute information and layer attribute information, where the attribute information has a mapping relationship with the position of the candidate frame to be extracted in the architectural drawing; and a candidate frame area determination unit, used to determine the area of the candidate frame to be extracted in the architectural drawing as the candidate frame area based on the attribute information and the mapping relationship.
  • Determining the area of the candidate frame to be extracted in the architectural drawing in this way helps to avoid missed detection caused by excessive image resolution, and allows the position of the candidate frame area to be located accurately, thereby improving the effect of frame recognition.
  • The position of the candidate frame in the architectural drawing is the coordinates of the candidate frame in the architectural drawing.
  • The space segmentation module 20 is used to segment the candidate frame area of the architectural drawing by spatial function.
  • Through edge detection of each spatial function (for example, balcony, bathroom, bedroom, living room, kitchen, etc.), different types of spatial functions can be preliminarily divided to obtain multiple spatial function segmentation areas 100a, such as room areas, living room areas, etc.
  • the edge contours of each component can be obtained.
  • the initial image 100 has the edge outline of the wall 110 , the edge outline of the door 120 , and the edge outline of the window 130 .
  • Performing spatial function segmentation first is helpful to preliminarily determine the position and contour of each spatial function segmentation area 100a more accurately, thereby reducing the probability of missed detection and providing a better quality initial image 100 for subsequent image processing.
  • Spatial function segmentation is performed on the candidate frame area of the architectural drawing, so that the initial image 100 is obtained from the original image (that is, the architectural drawing to be examined), thereby improving the resolution of the initial image 100, which is beneficial to the effect of subsequent image processing.
  • the spatial function segmentation in this embodiment is based on deep learning methods, which is also beneficial to improving drawing review efficiency and construction speed.
  • the spatial segmentation module 20 uses a semantic segmentation model to perform spatial functional segmentation.
  • FIG. 8 is a schematic structural diagram of an embodiment of the spatial segmentation module 20 .
  • the space segmentation module 20 includes: an image interception unit 21, configured to intercept the image of the candidate frame area from the architectural drawing as an image to be processed (not shown).
  • The architectural drawing is cropped to obtain the image to be processed corresponding to the candidate frame area, so that only the image to be processed needs subsequent processing, reducing the amount of data computation; moreover, cropping makes the format of the image to be processed suitable for the algorithm to perform a series of operations on it.
  • the image to be processed is intercepted from the original image (that is, the architectural drawing to be tested), thereby improving the resolution of the image to be processed, which is beneficial to improving the quality of subsequent image features.
  • the spatial segmentation module 20 also includes: a feature extraction unit 22, used to extract multi-channel image basic features from the image to be processed.
  • The multi-channel basic image features are low-dimensional image features.
  • Low-dimensional image features contain fewer irrelevant and redundant features, which helps improve the accuracy of dividing each spatial function and reduces the loss of detailed information (for example, thinner lines), thus preparing for the subsequent extraction of higher-dimensional image features and the accurate division of each spatial function.
  • Extracting basic image features over different channels helps the extracted features better characterize regions with different spatial functions.
  • The feature extraction unit 22 is used to input the image to be processed into the backbone network and obtain the output results after passing through multiple network blocks of the backbone network, obtaining basic image features with different second channel numbers. The backbone network includes multiple network blocks connected in series, and each network block outputs basic image features with a specific second channel number.
  • the backbone network includes a residual network with deformable convolution, which is beneficial to reducing the probability of over-fitting during feature extraction.
  • the residual network has deformable convolution, which is conducive to increasing the receptive field of the network, thereby reducing the probability of missed detection of some features.
  • the backbone network includes multiple network blocks in series, and therefore has deformable convolutions in each network block.
  • FIG 4 is a schematic structural diagram of an embodiment of a semantic segmentation model.
  • the backbone network is the ShuffleNetV2 network.
  • the backbone network includes multiple network blocks in series.
  • the multiple network blocks in series include the first network block (Shuffle_block1), the second network block (Shuffle_block2), and the third network block (Shuffle_block3). ), the fourth network block (Shuffle_block4), the fifth network block (Shuffle_block5) and the sixth network block (Shuffle_block6).
  • The features extracted by successive network blocks become progressively richer, but redundant features are also easily generated, which is prone to over-fitting problems (for example, a sofa being mistakenly treated as an independent spatial function); conversely, if the output of a network block too close to the input end of the backbone network is selected as the basic image features with different second channel numbers, some features are easily lost, which degrades the segmentation of different types of spatial functions (for example, the bedroom area cannot be separated).
  • the feature extraction unit 22 obtains the output result of the fifth network block (that is, Shuffle_block5) of the backbone network as basic image features with different numbers of second channels, thereby obtaining basic multi-channel image features.
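The tap-the-fifth-block idea can be sketched in a framework-agnostic way: run the serial blocks and keep the intermediate output rather than the final one. The block functions below are hypothetical stand-ins for Shuffle_block1 through Shuffle_block6; a real implementation would register a forward hook or slice the module list in a deep learning framework.

```python
def run_backbone(x, blocks, tap_index=4):
    # Run every serial network block, recording each block's output, and
    # return the output of the block at tap_index (index 4 = fifth block).
    outputs = []
    for block in blocks:
        x = block(x)
        outputs.append(x)
    return outputs[tap_index]

# Toy "blocks" that just record their own name, to show which output is kept:
blocks = [lambda v, i=i: v + [f"block{i+1}"] for i in range(6)]
features = run_backbone([], blocks)
# features == ["block1", "block2", "block3", "block4", "block5"]
```

Taking the fifth block balances the trade-off described above: deep enough to be discriminative, not so deep that redundant features dominate.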
  • The space segmentation module 20 also includes: a dilated convolution operation unit 23, used to perform dilated convolution operations with different first channel numbers on the multi-channel basic image features to obtain multi-scale spatial region features.
  • The receptive field of the spatial region features is larger than the receptive field of the multi-channel basic image features.
  • first channel numbers are 8, 64, 128 and 256 respectively.
  • The dilated convolution operation for each first channel number uses multiple convolution layers with different dilation rates, where the dilation rate is any even number from 8 to 16. Setting different dilation rates yields different receptive fields, thereby improving the effect of obtaining multi-scale spatial region features.
  • The dilation rate should not be too small, otherwise the probability of missed detection easily increases. For this reason, the dilation rate is any even number between 8 and 16.
  • For example, the dilated convolution operations for each first channel number adopt dilation rates of 8, 12 and 16 respectively.
  • The convolution kernel size of any one or two of the dilated convolution operations with a first channel number is 1*1, which helps remove redundant features and thereby improves the accuracy of dividing each spatial function; only one or two of those operations use a 1*1 kernel, which avoids excessive dimensionality reduction.
  • For example, the convolution kernel size of the dilated convolution operation with a first channel number of 8 is 1*1, and the convolution kernel size of the remaining dilated convolution operations is 3*3.
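The effect of the dilation rate on the receptive field follows a standard formula: a k x k kernel with dilation rate d covers k + (k - 1)(d - 1) pixels per side. A quick check for the rates named above:

```python
def effective_kernel(k, dilation):
    # Effective side length of a dilated (atrous) convolution kernel:
    # the k taps are spread apart by (dilation - 1) zeros between them.
    return k + (k - 1) * (dilation - 1)

sizes = [effective_kernel(3, d) for d in (8, 12, 16)]
# sizes == [17, 25, 33]: the same 3x3 kernel spans ever larger regions
```

This is why stacking rates 8, 12 and 16 over the same input yields genuinely multi-scale spatial region features without increasing the parameter count.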
  • The space segmentation module 20 also includes: an upsampling processing unit 24, disposed between the dilated convolution operation unit 23 and the feature fusion unit 25. The upsampling processing unit 24 is used to upsample the spatial region features at each of the multiple scales, to add high-dimensional features to the spatial region features.
  • The spatial region features after the dilated convolution operation for each first channel number are upsampled separately. During upsampling, the spatial region features can be given one-to-one corresponding, matched dimensional proportions so that the dimensions can be unified during subsequent feature fusion.
  • Upsampling the spatial region features separately for each first channel number also helps keep the network model smaller in size.
  • the spatial segmentation module 20 also includes a feature fusion unit 25, which is used to perform feature fusion on multi-channel image basic features and upsampled spatial region features to obtain fused image features.
  • Image features of different scales contain different detailed information. Through feature fusion, feature information of different scales are fused together to obtain multi-channel fused image features, which is beneficial to improving the edge detection effect of each spatial function. This improves the accuracy of dividing each spatial function.
  • The feature fusion unit 25 includes: a fusion subunit, used to input the multi-channel basic image features and the spatial region features into the fusion network to obtain initial fusion features; and a processing subunit, used to sequentially upsample and reduce the dimensionality of the initial fusion features to obtain the fused image features, where the upsampling is 2x or 4x.
  • In this way, abstract feature information of the boundaries of the spatial function segmentation areas is further extracted from the initial fusion features.
  • 2x or 4x upsampling keeps the convolution kernel used in the upsampling process small, which helps better extract the boundaries of the spatial function segmentation areas; moreover, an even upsampling multiple makes sampling faster and the collected information more comprehensive; in addition, the upsampling multiple is not too large, reducing the memory occupied during data processing.
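The 2x upsampling step can be sketched as nearest-neighbour replication on one feature channel. This is a simplification: the model may use learned (transposed-convolution) or bilinear upsampling, but the resolution-matching role before fusion is the same.

```python
def upsample2x(feature):
    # Nearest-neighbour 2x upsampling: duplicate each pixel horizontally,
    # then duplicate each row vertically, doubling both spatial dimensions.
    out = []
    for row in feature:
        widened = [px for px in row for _ in range(2)]
        out.append(widened)
        out.append(list(widened))  # copy so rows stay independent
    return out

up = upsample2x([[1, 2],
                 [3, 4]])
# up == [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

A 4x upsampling is simply this applied twice, which is one reason even multiples keep the operation fast and simple.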
  • the processing subunit uses a convolution kernel with a third channel number and a convolution kernel size of 1*1 to perform convolution processing.
  • The number of third channels is greater than or equal to 128.
  • Keeping the third channel number from being too small reduces the probability of losing spatial region features of the first channels while redundant information is removed.
  • For example, the third channel number is 128 or 256.
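The 1*1 convolution used for this dimensionality reduction is, at every pixel, just a weighted sum across the input channels. A minimal sketch with illustrative channel counts and weights (a real layer would have, e.g., 128 output channels and learned weights):

```python
def conv1x1(feature, weights):
    # feature: [c_in][h][w]; weights: [c_out][c_in].
    # out[c_out][y][x] = sum over c_in of weights[c_out][c_in] * feature[c_in][y][x]
    c_in, h, w = len(feature), len(feature[0]), len(feature[0][0])
    out = []
    for w_row in weights:
        plane = [[sum(w_row[c] * feature[c][y][x] for c in range(c_in))
                  for x in range(w)] for y in range(h)]
        out.append(plane)
    return out

feature = [[[1.0, 2.0]],          # channel 0, 1x2 spatial grid
           [[3.0, 4.0]]]          # channel 1
reduced = conv1x1(feature, [[0.5, 0.5]])   # 2 channels -> 1 channel
# reduced == [[[2.0, 3.0]]]
```

Because the kernel has no spatial extent, this mixes channels without touching spatial structure, which is why it removes redundant features while preserving the segmentation boundaries.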
  • the spatial segmentation module 20 also includes: a spatial functional segmentation area acquisition unit 26, configured to obtain multiple spatial functional segmentation areas 100a according to the fused image features. Specifically, the spatial function segmentation area acquisition unit 26 performs frame space semantic segmentation on the fused image features to obtain a semantic segmentation result of the image to be processed, thereby obtaining multiple spatial function segmentation areas.
  • the spatial functional segmentation performed by the space segmentation module 20 is conducive to accurately determining the position and outline of each spatial functional segmentation area 100a, thereby reducing the probability of missed detection.
  • The spatial functional area is mainly the area enclosed by the walls 110. Filling along the contour of the walls 110 places the walls 110 of the same spatial function segmentation area 100a in the same first connected domain 310. That is to say, for any side of the spatial function segmentation area 100a, the boundary of the first connected domain 310 where the wall 110 is located is a straight line, and the first connected domain 310 encloses a closed spatial functional area 320 along the outline of the walls 110. Therefore, after calculating the area of the second connected domain (not labeled) inside the spatial functional area 320, the spatial functional area can be accurately calculated.
  • FIG. 5 is a partial enlarged view of the initial image.
  • FIG. 6 is a schematic diagram of an embodiment of the target image.
  • The image processing module 30 includes: a binary processing unit, used to perform grayscale processing on the initial image 100 to obtain a binary image.
  • The pixels corresponding to the inner wall line 110b and the outer wall line 110a of the wall 110 have the same attribute value.
  • Grayscale processing is performed to obtain a binary image with only two attribute values.
  • In this way, the pixels corresponding to the edge contours of each spatial function can be distinguished from the pixels in the remaining area.
  • Moreover, because the pixels corresponding to the inner wall line 110b and the outer wall line 110a of the wall 110 have the same attribute value, subsequent connected domain processing is also facilitated, so that the pixels corresponding to the walls 110 of the same spatial function segmentation area 100a end up in the same first connected domain 310.
  • The pixels corresponding to the inner wall line 110b and the outer wall line 110a of the wall 110 have the same attribute value; therefore, the colors of those pixels are the same.
  • The image processing module 30 also includes: a connected domain processing unit, used to perform connected domain processing on the walls 110 of each spatial function segmentation area 100a in the binary image to obtain the target image 300.
  • The connected domain processing is used to place the pixels corresponding to the walls 110 of the same spatial function segmentation area 100a in the same first connected domain 310.
  • Filling along the outline of the wall 110 connects the inner wall lines 110b in sequence and the outer wall lines 110a in sequence, and fills the area between the inner wall lines 110b and the outer wall lines 110a. This prevents the area between the inner wall line 110b and the outer wall line 110a from being counted as part of the spatial functional area, and also prevents the area at a break between inner wall lines 110b or between outer wall lines 110a from being counted as part of the spatial functional area, thereby improving the calculation accuracy of the spatial functional area.
  • there are two types of attribute values of pixels in the binary image making it easy to implement connected domain processing.
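The grayscale-to-binary step described above can be sketched in pure Python. This is a minimal illustration, not the patented implementation: the threshold value of 128 is an assumption (the text does not specify one), and 255/0 are the two attribute values used elsewhere in the text for contour and background pixels.

```python
def binarize(gray, threshold=128):
    """Map a 2D grayscale image to a binary image whose pixels take
    exactly two attribute values: 255 (component contour) and 0
    (background). The threshold is an assumed parameter."""
    return [[255 if px >= threshold else 0 for px in row] for row in gray]

# A toy 2x4 grayscale patch: bright pixels stand for wall-line contours.
gray = [
    [12, 200, 210, 15],
    [10, 190,  40, 20],
]
binary = binarize(gray)
```

Because the result holds only two attribute values, contour pixels and the remaining area are trivially distinguishable, which is what makes the later connected-domain steps simple.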
  • The connected-domain processing unit includes a first connected-domain processing unit.
  • The first connected-domain processing unit includes: a first identification subunit, used to identify the wall 110 of the architectural drawing in the binary image and obtain its position; and a first connected-domain processing subunit, used to perform the first connected-domain processing on each spatial function segmentation area 100a after the wall 110 has been identified. The first connected-domain processing converts the attribute value of the pixels between the inner wall line 110b and the outer wall line 110a of the wall 110 (the first attribute-value conversion), so that the converted pixels have the same attribute value as the pixels of the inner wall line 110b and the outer wall line 110a.
  • Since the wall 110 carries a load-bearing-wall attribute in the drawing, its position can be obtained from the load-bearing-wall color.
  • The first connected-domain processing subunit performs the first attribute-value conversion by dilating the region of the wall 110 with a convolution kernel whose size equals the length L and width W of the wall segment to be converted.
  • Using a kernel matched to the wall's length L and width W helps ensure that every pixel between the inner wall line 110b and the outer wall line 110a of that wall segment undergoes the first attribute-value conversion, which improves the filling of the area between the two lines and increases the speed of the conversion.
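The dilation described above, with a rectangular structuring element matched to the wall segment, can be sketched as follows. This is a hedged pure-Python illustration (a real implementation would more likely use a library routine such as OpenCV's `cv2.dilate`); the toy image and kernel sizes are invented.

```python
def dilate(binary, kh, kw):
    """Binary dilation with a kh x kw rectangular structuring element.
    A pixel becomes 255 if any 255-pixel falls inside the window
    centred on it; with kh/kw matched to the wall's dimensions, this
    fills the strip between the inner and outer wall lines."""
    h, w = len(binary), len(binary[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy in range(-(kh // 2), kh - kh // 2):
                for dx in range(-(kw // 2), kw - kw // 2):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w and binary[yy][xx] == 255:
                        out[y][x] = 255
    return out

# Two parallel 1-pixel wall lines with an empty strip between them.
wall = [
    [255, 255, 255, 255],
    [0,   0,   0,   0],
    [255, 255, 255, 255],
]
filled = dilate(wall, kh=3, kw=1)
```

After dilation with a 3x1 element (matching the 3-pixel wall thickness), the strip between the two lines shares their attribute value, so the whole wall segment becomes one connected region.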
  • The functional elements (i.e., components) of each space mainly include the wall 110, the door 120, and the window 130.
  • Both the inner wall line 110b and the outer wall line 110a of the wall 110 are broken at the positions of the door 120 and the window 130.
  • To calculate the spatial function area accurately, the wall 110 must therefore be completed at the positions of the door 120 and the window 130, so that the walls 110 of the same spatial function segmentation area 100a are connected in sequence along the outline of the wall 110.
  • The connected-domain processing unit also includes a second connected-domain processing unit.
  • The second connected-domain processing unit includes: a second identification subunit, used to identify the doors and windows embedded in the wall in the binary image, after the walls of the architectural drawing have been identified, and thereby obtain the positions of the door 120 and the window 130; a second connected-domain processing subunit, used to perform the second connected-domain processing on the spatial function segmentation area 100a after the door 120 is identified; and a third connected-domain processing subunit, used to perform the third connected-domain processing on the spatial function segmentation area 100a after the window 130 is identified.
  • The second identification subunit uses a target detection algorithm to identify the door 120 and the window 130 in the binary image.
  • The target detection algorithm may adopt the YOLOv5 network.
  • The second connected-domain processing includes: determining, from the opening direction of the door 120, the extension direction of the corresponding door line 125 as the first direction; determining the first endpoints 116, in the first direction, of the inner wall line 110b and outer wall line 110a of the wall 110 adjacent to the door 120; obtaining, from the positions of the first endpoints 116, the first rectangular area 117 enclosed by them, which connects the walls 110 on both sides of the door 120 along the first direction; and performing a second attribute-value conversion on the pixels in the first rectangular area 117, so that the converted pixels have the same attribute value as the pixels of the inner wall line 110b and the outer wall line 110a.
  • The door line 125 represents the reference line of the door 120 in the closed state. For a wall 110 with an embedded door 120, the door 120, the inner wall line 110b, and the outer wall line 110a all extend in the same direction, so determining the extension direction of the door line 125 makes it easy to determine the direction in which the wall 110 at the door 120 should be filled.
  • The inner wall line 110b and the outer wall line 110a of the same wall 110 are two parallel lines. The first endpoints 116, in the first direction, of the lines of the wall 110 adjacent to the door 120 therefore determine the first rectangular area 117, with the first endpoints 116 as its vertices. Making the converted pixels share the attribute value of the inner wall line 110b and the outer wall line 110a means that converting the pixels of the first rectangular area 117 completes the wall 110 at the position of the door 120. This is equivalent to extending the wall 110 along the first direction, and after the second attribute-value conversion, the area between the inner wall line 110b and the outer wall line 110a at the position of the door 120 is filled.
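The completion of the wall across a door opening can be sketched as filling the rectangle enclosed by the first endpoints 116. The toy image and endpoint coordinates below are invented for illustration; the sketch shows only the attribute-value conversion inside the rectangle, not how the endpoints themselves are detected.

```python
def fill_rectangle(binary, endpoints, value=255):
    """Fill the axis-aligned rectangle whose vertices are the four
    endpoints (given as (row, col) pairs), converting the attribute
    value of every pixel inside it. This models the 'attribute value
    conversion' that completes the wall across a door or window gap."""
    rows = [p[0] for p in endpoints]
    cols = [p[1] for p in endpoints]
    for y in range(min(rows), max(rows) + 1):
        for x in range(min(cols), max(cols) + 1):
            binary[y][x] = value
    return binary

# Wall lines (rows 0 and 2) broken by a door opening between cols 2-4.
img = [[0] * 7 for _ in range(3)]
for x in (0, 1, 5, 6):
    img[0][x] = img[2][x] = 255          # inner / outer wall lines
endpoints = [(0, 1), (2, 1), (0, 5), (2, 5)]  # endpoints flanking the door
fill_rectangle(img, endpoints)
```

After the fill, the gap between the two wall lines at the door position holds the same attribute value as the wall lines, so the wall reads as one continuous region.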
  • Similarly, the third connected-domain processing includes: determining the extension direction of the boundary line 131 of the window 130 as the second direction; determining the second endpoints 132, in the second direction, of the inner wall line 110b and outer wall line 110a of the wall 110 adjacent to the window 130; obtaining, from the positions of the second endpoints 132, the second rectangular area 133 enclosed by them, which connects the walls 110 on both sides of the window 130 along the second direction; and performing a third attribute-value conversion on the pixels in the second rectangular area 133, so that the converted pixels have the same attribute value as the pixels of the inner wall line 110b and the outer wall line 110a.
  • This completes the wall 110 at the position of the window 130, which is equivalent to extending the wall 110 along the second direction; after the third attribute-value conversion, the area between the inner wall line 110b and the outer wall line 110a at the position of the window 130 is filled.
  • The second identification subunit identifies the inner wall line 110b and the outer wall line 110a of the wall 110 through the logical relationship between spatial functions. Specifically, for any spatial function segmentation area 100a, the remaining spatial function segmentation areas 100a are external areas. The second identification subunit therefore determines, from the external areas adjacent to the segmentation area currently being identified, the outer wall line 110a of its wall 110, which is exposed to the adjacent external area; the remaining boundary lines of the wall 110 are determined to be the inner wall lines 110b.
  • For any segmentation area being identified, the boundary line of its wall 110 facing an external area is the outer wall line 110a; that is, the outer wall line 110a corresponds to one of the external areas, so once the position of an external area is determined, the outer wall line 110a of the wall 110 exposed to that area can be determined.
  • Through the same logical relationship between spatial functions, the second identification subunit can also identify the outer window boundary line (not labeled) and the inner window boundary line (not labeled) of the window 130.
  • The spatial function segmentation areas 100a include an outdoor area, i.e., the area outside the unit building.
  • The second identification subunit determines the outer wall line 110a exposed to the outdoor area and, based on it, determines the boundary line of the window 130 connected to that outer wall line 110a as the outer window boundary line; the remaining boundary line of the window 130 is determined to be the inner window boundary line.
  • Part of the wall 110 of a residence is exposed to the outdoors, so one area adjacent to the outer wall line 110a is the outdoor area; since the window 130 is embedded in the wall 110, the outer window boundary line is connected to the outer wall line 110a. Once the outer wall line 110a and the outer window boundary line are identified, the inner wall line 110b and the inner window boundary line can be determined by elimination.
  • After the above processing, the walls 110 of the same spatial function segmentation area 100a all lie in the same first connected domain 310.
  • The first connected domain 310 encloses a closed spatial functional area 320 along the outline of the wall 110; the spatial functional area 320 is a second connected domain that can be distinguished from the first connected domain 310.
  • The calculation module 40 performs connected-domain detection on the target image 300, extracts the second connected domain inside the spatial functional area 320, and calculates its area to obtain the area of the spatial functional area 320.
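The final connected-domain detection and area computation can be illustrated with a simple flood fill over the enclosed interior. This is a minimal sketch (a real system would more likely use a library routine such as OpenCV's `connectedComponents`); the 5x5 toy room is invented.

```python
from collections import deque

def region_area(binary, seed):
    """Count the pixels of the connected domain containing `seed`
    (4-connectivity, same attribute value). Seeded inside a closed
    wall ring, this returns the area of the enclosed second connected
    domain, i.e. the spatial functional area."""
    h, w = len(binary), len(binary[0])
    target = binary[seed[0]][seed[1]]
    seen, queue, area = {seed}, deque([seed]), 0
    while queue:
        y, x = queue.popleft()
        area += 1
        for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= yy < h and 0 <= xx < w and (yy, xx) not in seen
                    and binary[yy][xx] == target):
                seen.add((yy, xx))
                queue.append((yy, xx))
    return area

# A closed 5x5 wall ring (the first connected domain) enclosing a
# 3x3 interior (the second connected domain).
room = [[255 if y in (0, 4) or x in (0, 4) else 0
         for x in range(5)] for y in range(5)]
inner_area = region_area(room, (2, 2))
```

The count of interior pixels, scaled by the drawing's pixel-to-metre ratio, would give the physical area; the scale factor is outside the scope of this sketch.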
  • An embodiment of the present application also provides a device that implements the spatial area calculation method for architectural drawings by loading the above method in the form of a program.
  • The device of this embodiment includes: at least one processor 01, at least one communication interface 02, at least one memory 03, and at least one communication bus 04; the processor 01, the communication interface 02, and the memory 03 communicate with each other through the communication bus 04.
  • The communication interface 02 may be an interface of a communication module used for network communication, such as an interface of a GSM module.
  • The processor 01 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the spatial area calculation method of this embodiment.
  • The memory 03 may include high-speed RAM and may also include non-volatile memory, such as at least one disk memory.
  • The memory 03 stores one or more computer instructions, which are executed by the processor 01 to implement the spatial area calculation method for architectural drawings provided by the embodiments of the present application.
  • The device may also include other components (not shown) that are not necessary for understanding the disclosure of the embodiments of the present application and are therefore not introduced one by one.
  • Embodiments of the present application also provide a storage medium.
  • The storage medium stores one or more computer instructions, which are used to implement the spatial area calculation method for architectural drawings provided by the embodiments of the present application.
  • In the spatial area calculation method provided by the embodiments of the present application, spatial function segmentation is first performed on the candidate frame area of the architectural drawing to obtain an initial image including multiple spatial function segmentation areas.
  • Image processing then connects the walls of the same spatial function segmentation area in sequence along the outline of the wall and fills the area between the inner wall line and the outer wall line, placing the walls of the same spatial function segmentation area in the same first connected domain; the area of the spatial functional area is then obtained from the area of the second connected domain inside it.
  • Performing spatial function segmentation first helps accurately determine the position and outline of each spatial function segmentation area, reducing the probability of missed detection.
  • Since the spatial function area is mainly the area enclosed by the wall, placing the walls of the same segmentation area in the same first connected domain, which encloses a closed spatial functional area along the wall outline, and then calculating the area of the second connected domain inside it, helps calculate the spatial function area more accurately.
  • The embodiments of the present application described above are combinations of elements and features of the present application. Unless otherwise mentioned, elements or features may be considered optional, and each element or feature may be practiced without being combined with other elements or features. Additionally, embodiments of the present application may be constructed by combining some of the elements and/or features, and the sequence of operations described in the embodiments may be rearranged. Some configurations of any embodiment may be included in another embodiment, and may be replaced with corresponding configurations of another embodiment. It is obvious to a person skilled in the art that appended claims without an explicit reference relationship to each other may be combined into an embodiment of this application, or may be included as new claims in an amendment after the application is filed.
  • The embodiments of the present application may be implemented by various means such as hardware, firmware, software, or combinations thereof.
  • The method according to the exemplary embodiments of the present application may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and the like.
  • The embodiments of the present application may also be implemented in the form of modules, processes, functions, and the like.
  • The software code may be stored in a memory unit and executed by a processor.
  • The memory unit may be located inside or outside the processor and may send data to and receive data from the processor via various known means.


Abstract

A spatial area calculation method and system for architectural drawings, a processing method for architectural drawings, a device, and a storage medium. The method includes: extracting a candidate frame area from an architectural drawing; performing spatial function segmentation on the candidate frame area to obtain an initial image comprising multiple spatial function segmentation areas; performing image processing on the initial image so that the walls of the same spatial function segmentation area are connected in sequence and the area between the inner wall line and the outer wall line of the wall is filled, obtaining a target image that comprises a first connected domain corresponding to the walls of the same spatial function segmentation area, the first connected domain enclosing a closed spatial functional area; and obtaining the area of the spatial functional area from the area of the second connected domain inside it. By combining spatial function segmentation with image processing, the present application helps accurately determine the position and outline of each spatial function segmentation area and thereby calculate the spatial function area more precisely.

Description

Spatial area calculation method and system for architectural drawings, processing method for architectural drawings, device, and storage medium

Technical Field

The embodiments of the present application relate to the field of image processing, and in particular to a spatial area calculation method and system for architectural drawings, a processing method for architectural drawings, a device, and a storage medium.
Background

An architectural floor plan is a basic drawing within a set of construction documents, used to guide smooth and correct construction. Recognition of the spaces in a floor plan, together with calculation of their areas and dimensions, is frequently needed by design institutes for version checking of architectural, decoration, and landscape drawings, and for unit-layout design.

The areas of spatial functions are usually subject to specific national standards; for example, the usable area of a bedroom must meet a given requirement, and the usable area of a kitchen in certain dwelling layouts (e.g., a dwelling unit composed of a bedroom, living room, kitchen, and bathroom) must not be less than 3.5 m².

Calculating the area of each spatial function in a floor plan is therefore a very important step for drawing review and construction, and how these areas are determined affects review and construction speed. However, because drawings are produced in batches and different architects follow different habits, most architectural drawings do not label the area of each spatial function, which makes it difficult for them to guide smooth and correct construction.
Summary

The problem addressed by the embodiments of the present application is to provide a spatial area calculation method and system for architectural drawings, a processing method for architectural drawings, a device, and a storage medium that improve the calculation accuracy of spatial function areas.

To solve the above problem, an embodiment of the present application provides a spatial area calculation method for architectural drawings, including: extracting a candidate frame area from an architectural drawing whose components include walls; performing spatial function segmentation on the candidate frame area to detect the edges of each spatial function, obtaining an initial image that includes multiple spatial function segmentation areas in one-to-one correspondence with the spatial functions; performing image processing on the initial image so that, along the outline of the wall, the walls of the same spatial function segmentation area are connected in sequence and the area between the inner wall line and the outer wall line of the wall is filled, obtaining a target image that includes a first connected domain corresponding to the walls of the same spatial function segmentation area, the first connected domain enclosing a closed spatial functional area along the outline of the wall; and obtaining the area of the spatial functional area from the area of the second connected domain inside the spatial functional area.

An embodiment of the present application also provides a processing method for architectural drawings that uses the spatial area calculation method described in the embodiments of the present application to calculate the areas of the spatial functional areas in an architectural drawing.

An embodiment of the present application also provides a spatial area calculation system for architectural drawings, including: a frame extraction module for extracting a candidate frame area from an architectural drawing whose components include walls; a spatial segmentation module for performing spatial function segmentation on the candidate frame area to detect the edges of each spatial function and obtain an initial image including multiple spatial function segmentation areas in one-to-one correspondence with the spatial functions; an image processing module for performing image processing on the initial image so that the walls of the same spatial function segmentation area are connected in sequence and the area between the inner wall line and the outer wall line of the wall is filled, obtaining a target image including a first connected domain corresponding to the walls of the same spatial function segmentation area, the first connected domain enclosing a closed spatial functional area along the outline of the wall; and a calculation module for obtaining the area of the spatial functional area from the area of the second connected domain inside it.

An embodiment of the present application also provides a device including at least one memory and at least one processor, the memory storing one or more computer instructions that are executed by the processor to implement the spatial area calculation method described in the embodiments of the present application.

An embodiment of the present application also provides a storage medium storing one or more computer instructions used to implement the spatial area calculation method described in the embodiments of the present application.

The technical solutions of the embodiments of the present application have the following advantages:

In the spatial area calculation method provided by the embodiments of the present application, after the candidate frame area is extracted from the architectural drawing, spatial function segmentation is first performed on the candidate frame area to obtain an initial image including multiple spatial function segmentation areas; image processing is then performed on the initial image so that, along the outline of the wall, the walls of the same spatial function segmentation area are connected in sequence and the area between the inner wall line and the outer wall line is filled, placing the walls of the same spatial function segmentation area in the same first connected domain; the area of the spatial functional area is then obtained from the area of the second connected domain inside it. Performing spatial function segmentation first helps accurately determine the position and outline of each spatial function segmentation area and thereby reduces the probability of missed detection. At the same time, because the spatial function area is mainly the area of the region enclosed by the walls, placing the walls of the same segmentation area in the same first connected domain, which encloses a closed spatial functional area along the outline of the wall, and then calculating the area of the second connected domain inside that area, helps calculate the spatial function area more precisely; the accuracy of the calculated area can exceed 98%.
Brief Description of the Drawings

Fig. 1 is a flow diagram of an embodiment of the spatial area calculation method for architectural drawings of the present application.

Fig. 2 is a schematic diagram of an embodiment of the initial image in step S2.

Fig. 3 is a flow diagram of an embodiment of the sub-steps of step S2.

Fig. 4 is a schematic structural diagram of an embodiment of the semantic segmentation model in step S2.

Fig. 5 is a partial enlarged view of the initial image.

Fig. 6 is a schematic diagram of an embodiment of the target image in step S3.

Fig. 7 is a schematic structural diagram of an embodiment of the spatial area calculation system for architectural drawings of the present application.

Fig. 8 is a schematic structural diagram of an embodiment of the spatial segmentation module in Fig. 7.

Fig. 9 is a schematic structural diagram of a device provided by an embodiment of the present application.
Detailed Description

As noted in the background, architectural drawings do not label the area of each spatial function (e.g., balcony, bathroom, bedroom, living room, kitchen), so they can hardly guide smooth and correct construction. Designing an artificial intelligence (AI) solution that automatically calculates the area of each spatial function is therefore a crucial need in the construction industry. Automatic calculation of spatial function areas can be applied at different stages of an architectural drawing's life cycle, such as design and review. In the design stage, it can greatly improve design efficiency by quickly labeling the area of each spatial function and allows designers to promptly correct designs that do not meet the standards. In the review stage, it can automatically calculate and label spatial function areas on non-standard drawings that lack area labels, or recalculate the areas on labeled drawings for cross-checking, thereby improving review efficiency. Furthermore, once the areas are calculated, the drawing can be checked against standard requirements and a reminder mechanism can be set up to promptly flag non-compliant parts of the design, improving efficiency across the entire design, review, and construction process.

Deep learning is currently the main approach to calculating spatial function areas. However, spatial functions usually have only shape features rather than rich texture features, so using a deep learning segmentation method or object detection method alone tends to produce indistinct features, leading to large errors between the calculated spatial function area and the area actually expressed in the two-dimensional computer-aided design (CAD) drawing.

To solve this problem, an embodiment of the present application provides a spatial area calculation method for architectural drawings. Segmentation is performed first, which helps accurately determine the position and outline of each spatial function segmentation area and reduces the probability of missed detection. At the same time, because the spatial function area is mainly the area enclosed by walls, image processing is performed on the initial image: along the outline of the wall, the walls of the same spatial function segmentation area are connected in sequence and the area between the inner wall line and the outer wall line is filled, so that the walls of the same segmentation area lie in the same first connected domain, which encloses a closed spatial functional area along the wall outline; the area of the second connected domain inside that area is then calculated, which helps calculate the spatial function area more precisely.

A spatial functional area is a closed region enclosed by components such as walls, doors, and windows; different spatial functional areas correspond to different building uses, such as bedroom, living room, kitchen, or bathroom.
Referring to Fig. 1, which shows a flow diagram of an embodiment of the spatial area calculation method for architectural drawings of the present application, the method includes the following steps:

Step S1: extract a candidate frame area from the architectural drawing, the components of which include walls;

Step S2: perform spatial function segmentation on the candidate frame area of the architectural drawing to detect the edges of each spatial function, obtaining an initial image that includes multiple spatial function segmentation areas in one-to-one correspondence with the spatial functions;

Step S3: perform image processing on the initial image so that, along the outline of the wall, the walls of the same spatial function segmentation area are connected in sequence and the area between the inner wall line and the outer wall line of the wall is filled, obtaining a target image that includes a first connected domain corresponding to the walls of the same spatial function segmentation area, the first connected domain enclosing a closed spatial functional area along the outline of the wall;

Step S4: obtain the area of the spatial functional area from the area of the second connected domain inside the spatial functional area.
To make the above objects, features, and advantages of the present application clearer and easier to understand, specific embodiments of the present application are described in detail below with reference to the drawings.

Referring to Fig. 1, step S1 is executed: a candidate frame area (not shown) is extracted from the architectural drawing to be examined (not shown), the components of which include walls.

The architectural drawing to be examined is a floor plan whose spatial function areas need to be calculated. In this embodiment, the drawing is a CAD architectural drawing reflecting the plan shape, functional requirements, layout, and compositional relationships of the building. As an example, it is the architectural drawing of a residence.

An architectural drawing has components, i.e., the elements (graphical primitives) that make up the building, such as walls, windows, doors, floors, and beams. In this embodiment, the components of the architectural drawing include walls, and specifically also include doors and windows embedded in the walls.

In this embodiment, the architectural drawing has drawing frames. An architectural drawing is an engineering drawing containing multiple frames; a frame is the border that delimits a drawing region, and one frame usually contains one sheet. For example, for residential drawings, one frame represents the drawing of the dwellings on one floor of one unit building.

The candidate frame area is the region of one frame to be cropped from the drawing, i.e., the frame to be reviewed. An architectural drawing usually contains multiple frames, so the candidate frame area is extracted before the drawing is reviewed, enabling AI-based review; for example, it allows subsequent targeted spatial function segmentation of that candidate frame area and, in turn, targeted calculation of the spatial function areas within the frame to be reviewed.

In this embodiment, extracting the candidate frame area from the drawing includes: obtaining attribute information of the candidate frame to be extracted, the attribute information including one or more of text attribute information and layer attribute information and having a mapping relationship with the position of the candidate frame in the drawing; and determining, from the attribute information and the mapping relationship, the region of the candidate frame in the drawing as the candidate frame area.

The parsed image of a CAD drawing has a high resolution. If deep learning were used for frame recognition, the network's small receptive field would easily lose feature information and miss small frames when processing high-resolution images. This embodiment therefore determines the region of the candidate frame from its attribute information and the attribute-to-position mapping, which helps avoid missed detections caused by excessive image resolution and precisely locates the candidate frame area, improving frame recognition; compared with deep-learning-based frame recognition, this improves the recognition result by about 10%. It also speeds up extraction of the candidate frame area. As an example, the position of the candidate frame in the drawing is its coordinates in the drawing.
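The attribute-to-position mapping used for frame extraction can be sketched as a simple lookup. All layer names, text keywords, and coordinates below are invented placeholders; the patent does not specify how the mapping is stored.

```python
# Hypothetical index from (layer attribute, text attribute) to the
# frame's (x0, y0, x1, y1) coordinates in the drawing. The keys and
# coordinates are illustrative only.
frame_index = {
    ("layer:A-FRAME-3F", "text:XX Design Institute"): (1200, 800, 4800, 3200),
    ("layer:A-FRAME-4F", "text:XX Design Institute"): (5200, 800, 8800, 3200),
}

def locate_frame(layer_attr, text_attr):
    """Return the candidate frame's region via the attribute-to-position
    mapping, or None if no frame matches the given attributes."""
    return frame_index.get((layer_attr, text_attr))

region = locate_frame("layer:A-FRAME-3F", "text:XX Design Institute")
```

Compared with detecting frames in a rendered image, this kind of lookup works directly on drawing metadata, which is why it is unaffected by image resolution.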
In this embodiment, the attribute information includes one or more of text attribute information and layer attribute information. In architectural drawings, a frame usually has layer attribute information such as a layer name, and text attribute information such as keywords of the architectural firm, so the frame's attribute information can be used for frame recognition.

Referring also to Fig. 2, a schematic diagram of an embodiment of the initial image in step S2, step S2 is executed: spatial function segmentation is performed on the candidate frame area of the drawing to detect the edges of each spatial function, obtaining the initial image 100, which includes multiple spatial function segmentation areas 100a in one-to-one correspondence with the spatial functions.

Spatial function segmentation detects the edges of each spatial function (e.g., balcony, bathroom, bedroom, living room, kitchen), enabling a preliminary segmentation of the different types of spatial functions into multiple spatial function segmentation areas 100a, such as a room area or a living-room area.

In this embodiment, spatial function segmentation of the candidate frame area yields the edge contour of each component. For example, as shown in Fig. 2, the initial image 100 contains the edge contours of the wall 110, the door 120, and the window 130; the door 120 may include one or both of a hinged door and a sliding door.

Performing spatial function segmentation first helps determine, fairly accurately and preliminarily, the position and outline of each spatial function segmentation area 100a, reducing the probability of missed detection and providing a good-quality initial image 100 for subsequent image processing. Because the segmentation is performed on the original drawing (the drawing to be examined), the initial image 100 has an improved resolution, which in turn improves the subsequent image processing. In addition, in this embodiment the spatial function segmentation is based on deep learning, which also helps improve review efficiency and construction speed. In a specific embodiment, a semantic segmentation model is used to perform the spatial function segmentation.
Referring also to Fig. 3, a flow diagram of an embodiment of the sub-steps of step S2, performing spatial function segmentation on the candidate frame area includes: executing step S21, cropping the image of the candidate frame area from the drawing as the image to be processed (not shown).

Cropping the drawing yields the image to be processed corresponding to the candidate frame area, so that only this image needs subsequent processing, reducing the amount of computation; moreover, cropping puts the image into a format on which a series of algorithmic operations can be performed.

Because the image to be processed is cropped from the original drawing, its resolution is improved, which helps improve the quality of subsequent image features.

Step S22 is executed: multi-channel basic image features are extracted from the image to be processed.

The multi-channel basic image features are low-dimensional image features, which contain fewer irrelevant and redundant features; this improves the precision of dividing the spatial functions and reduces the loss of detail (e.g., thin lines), preparing for the subsequent extraction of higher-dimensional features and hence for precise division of the spatial functions. Extracting basic features from different channels also helps the extracted features better characterize the regions of different spatial functions.

Specifically, extracting the multi-channel basic image features includes: inputting the image to be processed into a backbone network and taking the output after several network blocks of the backbone, obtaining basic image features with different second channel numbers; the backbone comprises multiple network blocks in series, each of which outputs basic image features with a specific second channel number.

The backbone network extracts multi-channel basic features from the image to be processed. In this embodiment, the backbone includes a residual network with deformable convolutions. Using a residual network reduces the probability of overfitting during feature extraction, and the deformable convolutions enlarge the network's receptive field, reducing the probability that some features are missed; the receptive field is the size of the region of the input image mapped to a pixel of the feature map output by each layer of a convolutional neural network. Specifically, the backbone comprises multiple network blocks in series, each containing deformable convolutions. Referring also to Fig. 4, a schematic structural diagram of an embodiment of the semantic segmentation model in step S2, in this embodiment the backbone is a ShuffleNetV2 network whose serial blocks include a first block (Shuffle_block1), a second block (Shuffle_block2), a third block (Shuffle_block3), a fourth block (Shuffle_block4), a fifth block (Shuffle_block5), and a sixth block (Shuffle_block6).

After the image is input into the backbone, the features extracted by successive blocks become progressively richer, but redundant features also tend to arise, which easily causes overfitting (e.g., mistaking a sofa for an independent spatial function); conversely, taking the output of a block too close to the input of the backbone as the basic image features easily loses some features and degrades the segmentation of the different types of spatial functions (e.g., failing to segment the bedroom region). In this embodiment, therefore, the output of the fifth block (Shuffle_block5) of the backbone is taken as the basic image features with different second channel numbers, yielding the multi-channel basic image features.
Step S23 is executed: atrous (dilated) convolution operations with different first channel numbers are applied to the multi-channel basic image features, obtaining spatial region features at multiple scales whose receptive field is larger than that of the multi-channel basic image features.

Atrous convolution further enlarges the network's receptive field while reducing information loss and resolution loss, and also reduces computation. Setting different first channel numbers yields spatial region features at different scales.

Increasing the first channel number improves the dimension-raising of the basic features and hence the extraction of their high-dimensional features; however, an excessively large first channel number inflates the network model, slows the atrous convolution, and hurts detection speed on architectural drawings. In this embodiment, the first channel numbers are therefore 8, 64, 128, and 256; choosing these four values achieves a good dimension-raising effect while improving detection speed.

In this embodiment, the atrous convolution for each first channel number uses several convolutional layers with different dilation rates, each rate being an even number from 8 to 16. Different dilation rates yield different receptive fields, improving the acquisition of multi-scale spatial region features. The dilation rate should not be too small, or the probability of missed detection increases; hence the rates are even numbers from 8 to 16, for example 8, 12, and 16 for each first channel number.
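The effect of the dilation rates named above can be checked with the standard formula for the spatial extent of a dilated kernel, k + (k - 1)(d - 1). The 3x3 kernel size is an assumption here (the text assigns 3*3 kernels to most branches); the dilation rates 8, 12, and 16 come from the text.

```python
def dilated_kernel_extent(k, dilation):
    """Effective spatial extent (in pixels, per axis) covered by a
    k x k convolution kernel with the given dilation rate:
    k + (k - 1) * (dilation - 1)."""
    return k + (k - 1) * (dilation - 1)

# Extents for an assumed 3x3 kernel at the dilation rates in the text:
# the receptive field grows substantially with no extra weights.
extents = {d: dilated_kernel_extent(3, d) for d in (8, 12, 16)}
```

This is why combining several dilation rates in one branch produces features at several receptive-field scales from the same input.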
In this embodiment, in the atrous convolution operations, the kernel size of the atrous convolution for any one or two of the first channel numbers is 1*1. A 1*1 kernel helps remove redundant features and improves the precision of dividing the spatial functions; restricting the 1*1 kernel to only one or two of the first channel numbers keeps the number of 1*1 kernels from being excessive and avoids over-reduction of dimensionality. As an example, the atrous convolution with first channel number 8 uses a 1*1 kernel, and the remaining atrous convolutions use 3*3 kernels.

After the atrous convolutions with different first channel numbers, and before the basic features and spatial region features are fused, spatial function segmentation further includes: executing step S24, upsampling each multi-scale spatial region feature separately to increase the high-dimensional features within it.

Upsampling increases the high-dimensional features in the multi-scale spatial region features, yielding richer image semantic information.

Moreover, upsampling the spatial region features of each first channel number separately allows a matching, one-to-one dimension ratio to be set for each, so that the dimensions can be unified during the subsequent feature fusion.

In addition, compared with other ways of unifying dimensions, upsampling the spatial region features of each first channel number separately also keeps the network model small.
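The per-branch upsampling of step S24 can be sketched as nearest-neighbour interpolation. The interpolation mode is an assumption (the text does not specify one); the feature map and factor are toy values.

```python
def upsample_nearest(feat, factor):
    """Nearest-neighbour upsampling of a 2D feature map by an integer
    factor, repeating each value `factor` times along both axes."""
    return [[v for v in row for _ in range(factor)]
            for row in feat for _ in range(factor)]

feat = [[1, 2],
        [3, 4]]
up = upsample_nearest(feat, 2)
```

Upsampling each atrous branch to a common spatial size is what allows the later channel-wise concatenation to line up.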
Step S25 is executed: the multi-channel basic image features and the upsampled spatial region features are fused (concat), obtaining fused image features.

Image features at different scales contain different details; feature fusion combines the feature information of different scales into multi-channel fused image features, which improves the edge detection of each spatial function and hence the precision of dividing the spatial functions.

Specifically, fusing the multi-channel basic image features and the spatial region features includes: inputting them into a fusion network to obtain initial fused features; and applying upsampling and then dimension reduction to the initial fused features to obtain the fused image features, the upsampling being 2x or 4x.

Upsampling the initial fused features further extracts, on their basis, abstract feature information of the boundaries of the spatial function segmentation areas.

In this embodiment, the upsampling is 2x or 4x. These factors use a small kernel, which helps extract the boundaries of the segmentation areas better; an even factor makes sampling faster and the collected information more complete; and the factor is not excessive, which reduces memory use during processing.

Dimension reduction removes redundant information, reducing computation and improving segmentation speed, and accordingly calculation speed.

Specifically, reducing the dimensionality of the upsampled initial fused features includes convolving them with a kernel of size 1*1 having a third channel number.

In this embodiment, the third channel number is greater than or equal to 128. Keeping it from being too small removes redundant information while lowering the probability of losing the spatial region features of the first channel numbers. As an example, the third channel number is 128 or 256.

Step S26 is executed: multiple spatial function segmentation areas 100a are obtained from the fused image features.

Specifically, frame-space semantic segmentation is performed on the fused image features to obtain the semantic segmentation result of the image to be processed, yielding the multiple spatial function segmentation areas.
Continuing with Fig. 1 and referring also to Figs. 5 and 6 (Fig. 5 is a partial enlarged view of the initial image; Fig. 6 is a schematic diagram of an embodiment of the target image in step S3), step S3 is executed: image processing is performed on the initial image 100 so that, along the outline of the wall 110 (shown in Fig. 2), the walls 110 of the same spatial function segmentation area 100a are connected in sequence and the area between the inner wall line 110b (Fig. 2) and the outer wall line 110a (Fig. 2) of the wall 110 is filled, obtaining the target image 300, which includes the first connected domain 310 corresponding to the walls 110 of the same spatial function segmentation area 100a; the first connected domain 310 encloses a closed spatial functional area 320 along the outline of the wall 110.

Spatial function segmentation helps accurately determine the position and outline of each spatial function segmentation area 100a and reduces missed detection. Since the spatial function area is mainly the area enclosed by the wall 110, the walls 110 of the same segmentation area 100a are placed in the same first connected domain 310 along the wall outline; that is, for any side of the segmentation area 100a, the boundary of the first connected domain 310 containing the wall 110 is a straight line, and the first connected domain 310 encloses a closed spatial functional area 320 along the wall outline. Calculating the area of the second connected domain (not labeled) inside the spatial functional area 320 therefore yields the spatial function area precisely.

In this embodiment, the image processing of the initial image 100 includes: performing grayscale processing on the initial image to obtain a binary image in which the pixels of the inner wall line 110b and the outer wall line 110a of the wall 110 have the same attribute value.

Grayscale processing yields a binary image with only two attribute values, in which the pixels of the edge contour of each spatial function can be distinguished from those of the remaining area and the pixels of the inner wall line 110b and the outer wall line 110a share the same attribute value; this also facilitates the subsequent connected-domain processing, which places all wall pixels of the same segmentation area 100a in the same first connected domain 310.

In the binary image, the pixels of the inner wall line 110b and the outer wall line 110a of the wall 110 have the same attribute value and therefore the same color. As an example, taking the attribute value to be the gray value, the pixels of the binary image have attribute values 255 (white pixels) and 0 (black pixels). In other embodiments, the two attribute values may also be denoted "0" (black pixels) and "1" (white pixels).

Note that in this embodiment, the colors of the component contours in the figures are for illustration only; in the actual area calculation, the component contours may be set to white and the rest to black.

In this embodiment, the image processing of the initial image 100 further includes: performing connected-domain processing on the walls 110 of each spatial function segmentation area 100a in the binary image to obtain the target image 300; the connected-domain processing places all pixels of the walls 110 of the same segmentation area 100a in the same first connected domain 310. In a binary image, a connected component is a region of adjacent pixels with the same attribute value (e.g., gray value).

The pixels of the binary image take only two attribute values, and the pixels of the inner wall line 110b and the outer wall line 110a share the same value. Connected-domain processing can therefore connect the inner wall lines 110b in sequence and the outer wall lines 110a in sequence along the outline of the wall 110 and fill the area between them, preventing both that area and the areas at breaks between inner wall lines 110b or between outer wall lines 110a from being counted as part of the spatial function area, thereby improving calculation accuracy. Having only two attribute values also makes the connected-domain processing easy to implement.

In this embodiment, performing connected-domain processing on the walls 110 of each segmentation area 100a in the binary image includes: identifying the wall 110 of the drawing in the binary image to obtain its position; and, after the wall 110 is identified, performing the first connected-domain processing on each segmentation area 100a, which converts the attribute value of the pixels between the inner wall line 110b and the outer wall line 110a (the first attribute-value conversion) so that the converted pixels have the same attribute value as the pixels of the inner wall line 110b and the outer wall line 110a.
Specifically, since the wall 110 has a load-bearing-wall attribute in the drawing, its position can be obtained by obtaining the load-bearing-wall color.

As shown in Fig. 5, as an example, the first attribute-value conversion of the pixels between the inner wall line 110b and the outer wall line 110a includes dilating the region of the wall 110 with a convolution kernel whose size equals the length L and width W of the wall segment to be converted. For each wall segment 110, a kernel matched to the segment's length L and width W helps ensure that all pixels between its inner wall line 110b and outer wall line 110a are converted, improving the filling of the area between the two lines and speeding up the first attribute-value conversion.

As shown in Fig. 5, the elements of each spatial function mainly include the wall 110, the door 120, and the window 130. Both the inner wall line 110b and the outer wall line 110a of the wall 110 are broken at the positions of the door 120 and the window 130; to calculate the spatial function area precisely, the wall 110 must therefore be completed at these positions so that the walls 110 of the same segmentation area 100a are connected in sequence along the wall outline.

To this end, the spatial area calculation method further includes: after the walls of the drawing are identified in the binary image, identifying the doors and windows embedded in the walls in the binary image to obtain the positions of the door 120 and the window 130; and, after the door 120 and the window 130 are identified, performing the second and third connected-domain processing, respectively, on the spatial function segmentation area 100a.

In this embodiment, after the initial image is converted to the binary image by grayscale processing, a target detection algorithm, for example the YOLOv5 network, is used to identify the door 120 and the window 130 in the binary image.
As shown in Fig. 5, the second connected-domain processing includes: determining, from the opening direction of the door 120, the extension direction of the corresponding door line 125 as the first direction; determining the first endpoints 116, in the first direction, of the inner wall line 110b and outer wall line 110a of the wall 110 adjacent to the door 120; obtaining, from the positions of the first endpoints 116, the first rectangular area 117 enclosed by them, which connects the walls 110 on both sides of the door 120 along the first direction; and performing the second attribute-value conversion on the pixels in the first rectangular area 117 so that the converted pixels have the same attribute value as the pixels of the inner wall line 110b and the outer wall line 110a.

In architectural drawings, the door line 125 represents the reference line of the door 120 in the closed state; for a wall 110 with an embedded door 120, the door 120, the inner wall line 110b, and the outer wall line 110a all extend in the same direction, so determining the extension direction of the door line 125 makes it easy to determine the direction in which the wall 110 at the door 120 should be completed.

In architectural drawings, the inner wall line 110b and the outer wall line 110a of the same wall 110 are two parallel lines. The first endpoints 116, in the first direction, of the lines of the wall 110 adjacent to the door 120 therefore determine the first rectangular area 117, with the first endpoints 116 as its vertices; making the converted pixels share the attribute value of the inner wall line 110b and the outer wall line 110a means that converting the pixels of the first rectangular area 117 completes the wall 110 at the position of the door 120. This is equivalent to extending the wall 110 along the first direction, and after the second attribute-value conversion, the area between the inner wall line 110b and the outer wall line 110a at the door 120 is filled.

Similarly, as shown in Fig. 5, the third connected-domain processing includes: determining the extension direction of the boundary line 131 of the window 130 as the second direction; determining the second endpoints 132, in the second direction, of the inner wall line 110b and outer wall line 110a of the wall 110 adjacent to the window 130; obtaining, from the positions of the second endpoints 132, the second rectangular area 133 enclosed by them, which connects the walls 110 on both sides of the window 130 along the second direction; and performing the third attribute-value conversion on the pixels in the second rectangular area 133 so that the converted pixels have the same attribute value as the pixels of the inner wall line 110b and the outer wall line 110a.

Converting the pixels of the second rectangular area 133 completes the wall 110 at the position of the window 130, which is equivalent to extending the wall 110 along the second direction; after the third attribute-value conversion, the area between the inner wall line 110b and the outer wall line 110a at the window 130 is filled.
In this embodiment, the inner wall line 110b and the outer wall line 110a of the wall 110 are identified through the logical relationships between spatial functions.

Specifically, after spatial function segmentation yields multiple segmentation areas 100a in one-to-one correspondence with the spatial functions, for any one segmentation area 100a the remaining segmentation areas 100a are external areas. Identifying the inner wall line 110b and the outer wall line 110a of the wall 110 of any segmentation area 100 therefore includes: determining, from the external areas adjacent to the segmentation area 100 currently being identified, the outer wall line 110a of its wall 110, which is exposed to the adjacent external area; the remaining boundary lines of the wall 110 are accordingly the inner wall lines 110b.

For any segmentation area 100 being identified, the boundary line of its wall 110 facing an external area is the outer wall line 110a; that is, the outer wall line 110a corresponds to one of the external areas, so once the position of an external area is determined, the outer wall line 110a of the wall 110 exposed to it can be determined.

The logical relationships between spatial functions can also be used to identify the outer window boundary line (not labeled) and the inner window boundary line (not labeled) of the window 130. Specifically, the segmentation areas 100a include an outdoor area, i.e., the area outside the unit building; identifying the window boundary lines then includes: determining the outer wall line 110a exposed to the outdoor area; and determining, from it, the boundary line of the window 130 connected to that outer wall line 110a as the outer window boundary line, the remaining boundary line of the window 130 being accordingly the inner window boundary line.

Part of the wall 110 of a residence is exposed to the outdoors, so one area adjacent to the outer wall line 110a is the outdoor area; since the window 130 is embedded in the wall 110, the outer window boundary line is connected to the outer wall line 110a. Accordingly, once the outer wall line 110a and the outer window boundary line are identified, the inner wall line 110b and the inner window boundary line can be determined by elimination.
Continuing with Fig. 1, step S4 is executed: the area of the spatial functional area is obtained from the area of the second connected domain inside the spatial functional area 320.

Through the foregoing image processing, the walls 110 of the same segmentation area 100a all lie in the same first connected domain 310, which encloses a closed spatial functional area 320 along the outline of the wall 110; the spatial functional area 320 is then a second connected domain distinguishable from the first connected domain 310.

Specifically, connected-domain detection is performed on the target image 300; after the second connected domain is extracted inside the spatial functional area 320, its area is calculated to obtain the area of the spatial functional area 320.

An embodiment of the present application also provides a processing method for architectural drawings that uses the above spatial area calculation method to calculate the areas of the spatial functional areas in an architectural drawing. In practice, the processing of architectural drawings includes at least one of designing, reviewing, and inspecting architectural drawings.
Fig. 7 is a schematic structural diagram of an embodiment of the spatial area calculation system for architectural drawings of the present application. Referring also to Figs. 2 and 4 to 6, the system includes: a frame extraction module 10 for extracting a candidate frame area from an architectural drawing whose components include walls; a spatial segmentation module 20 for performing spatial function segmentation on the candidate frame area to detect the edges of each spatial function and obtain the initial image 100, which includes multiple spatial function segmentation areas 100a in one-to-one correspondence with the spatial functions; an image processing module 30 for performing image processing on the initial image 100 so that the walls 110 of the same segmentation area 100a are connected in sequence and the area between the inner wall line 110b and the outer wall line 110a is filled, obtaining the target image 300, which includes the first connected domain 310 corresponding to the walls 110 of the same segmentation area 100a, the first connected domain 310 enclosing a closed spatial functional area 320 along the outline of the wall 110; and a calculation module 40 for obtaining the area of the spatial functional area 320 from the area of the second connected domain inside it.

The spatial area calculation system provided by this embodiment performs spatial function segmentation first, which helps accurately determine the position and outline of each segmentation area 100a and reduces missed detection. Since the spatial function area is mainly the area enclosed by the wall 110, the walls 110 of the same segmentation area 100a are placed in the same first connected domain 310, which encloses a closed spatial functional area 320 along the wall outline; the area of the second connected domain inside the spatial functional area 320 is then calculated, which helps calculate the spatial function area more precisely.

In this embodiment, the drawing is a CAD architectural drawing, for example of a residence. The drawing has components, i.e., the elements (graphical primitives) that make up the building, such as walls, windows, doors, floors, and beams; in this embodiment the components include the wall 110 and also the door 120 and window 130 embedded in the wall 110.

In this embodiment, the drawing has frames, and the candidate frame area is the region of one frame to be cropped, i.e., the frame to be reviewed. The candidate frame area is extracted first to enable AI-based review of the drawing; for example, it allows subsequent targeted spatial function segmentation of that candidate frame area and, in turn, targeted calculation of the spatial function areas within the frame to be reviewed.

In this embodiment, the frame extraction module 10 includes: an attribute information acquisition unit for obtaining the attribute information of the candidate frame to be extracted, including one or more of text attribute information and layer attribute information, the attribute information having a mapping relationship with the position of the candidate frame in the drawing; and a candidate frame area determination unit for determining, from the attribute information and the mapping relationship, the region of the candidate frame in the drawing as the candidate frame area.

Determining the region of the candidate frame from its attribute information and the attribute-to-position mapping helps avoid missed detections caused by excessive image resolution and precisely locates the candidate frame area, improving frame recognition; it also speeds up extraction of the candidate frame area. As an example, the position of the candidate frame in the drawing is its coordinates in the drawing.

The spatial segmentation module 20 performs spatial function segmentation on the candidate frame area; this detects the edges of each spatial function (e.g., balcony, bathroom, bedroom, living room, kitchen), enabling a preliminary segmentation of the different types of spatial functions into multiple segmentation areas 100a, such as a room area or a living-room area.

In this embodiment, spatial function segmentation of the candidate frame area yields the edge contour of each component; for example, as shown in Fig. 2, the initial image 100 contains the edge contours of the wall 110, the door 120, and the window 130.

Performing spatial function segmentation first helps determine, fairly accurately and preliminarily, the position and outline of each segmentation area 100a, reducing missed detection and providing a good-quality initial image 100 for subsequent processing; because the segmentation is performed on the original drawing, the initial image 100 has an improved resolution, improving subsequent image processing. In addition, the segmentation in this embodiment is based on deep learning, which also helps improve review efficiency and construction speed.
In a specific embodiment, the spatial segmentation module 20 performs spatial-function segmentation with a semantic segmentation model. With reference to Fig. 8, Fig. 8 is a schematic structural diagram of an embodiment of the spatial segmentation module 20.
The spatial segmentation module 20 includes: an image cropping unit 21 for cropping the image of the candidate frame region from the architectural drawing as the image to be processed (not shown). Cropping the drawing to obtain the image to be processed means that only this image needs subsequent processing, reducing the amount of computation; moreover, cropping puts the image to be processed into a format on which a series of algorithmic operations can be performed.
The image to be processed is cropped from the original drawing (i.e., the drawing under inspection), which raises its resolution and in turn improves the quality of the subsequently extracted image features.
The spatial segmentation module 20 further includes: a feature extraction unit 22 for extracting multi-channel basic image features from the image to be processed. The multi-channel basic image features are low-dimensional image features, which contain fewer irrelevant and redundant features; this improves the accuracy of partitioning the spatial functions and reduces the loss of detail (e.g., thin lines), preparing for the later extraction of higher-dimensional image features and accordingly helping to partition each spatial function accurately. In addition, extracting basic image features from different channels helps the extracted features better represent the regions of different spatial functions.
Specifically, the feature extraction unit 22 feeds the image to be processed into a backbone network and takes the output results after multiple network blocks of the backbone, obtaining basic image features with different second channel numbers; the backbone network includes a plurality of serially connected network blocks, each block outputting basic image features with a specific second channel number.
In this embodiment, the backbone network includes a residual network with deformable convolution, which helps reduce the probability of overfitting during feature extraction. Moreover, the deformable convolution enlarges the network's receptive field, reducing the probability that some features are missed. Specifically, since the backbone includes a plurality of serially connected network blocks, each block contains deformable convolution.
With reference to Fig. 4, which is a schematic structural diagram of an embodiment of the semantic segmentation model, the backbone network in this embodiment is a ShuffleNetV2 network. The backbone includes a plurality of serially connected network blocks; taking ShuffleNetV2 as an example, these include a first block (Shuffle_block1), a second block (Shuffle_block2), a third block (Shuffle_block3), a fourth block (Shuffle_block4), a fifth block (Shuffle_block5), and a sixth block (Shuffle_block6).
It should be noted that after the image to be processed is fed into the backbone, the features extracted by successive blocks become progressively richer, but redundant features also accumulate, making overfitting likely — for example, mistaking a sofa for an independent spatial function. Conversely, choosing the output of a block too close to the input end of the backbone as the basic image features with different second channel numbers tends to lose some features, degrading the separation of the different types of spatial functions (e.g., failing to segment out the bedroom region). Therefore, in this embodiment, the feature extraction unit 22 takes the output of the fifth block (Shuffle_block5) of the backbone as the basic image features with different second channel numbers, obtaining the multi-channel basic image features.
The spatial segmentation module 20 further includes: a dilated convolution unit 23 for applying dilated (atrous) convolutions with different first channel numbers to the multi-channel basic image features, obtaining multi-scale spatial region features whose receptive field is larger than that of the multi-channel basic image features. Dilated convolution further enlarges the network's receptive field while limiting the loss of information and resolution, and also reduces computation; setting different first channel numbers yields spatial region features at different scales.
Increasing the first channel number improves the up-projection of the multi-channel basic image features and thus the extraction of their high-dimensional features; an excessively large first channel number, however, bloats the network model, slows the dilated convolutions, and hurts drawing-inspection speed. In this embodiment, the first channel numbers are therefore 8, 64, 128, and 256; these four values give a good up-projection effect while keeping the inspection of the architectural drawing fast.
In this embodiment, the dilated convolution for each first channel number uses a plurality of convolutional layers with different dilation rates, each rate being an even number from 8 to 16. Different dilation rates give different receptive fields, improving the extraction of multi-scale spatial region features. The dilation rate should not be too small, or missed detections become more likely; hence the range of 8 to 16. For example, the dilation rates used for each first channel number are 8, 12, and 16.
In this embodiment, among the dilated convolutions, those for any one or two of the first channel numbers use a 1*1 kernel, which helps remove redundant features and improves the accuracy of partitioning the spatial functions; restricting the 1*1 kernel to only one or two of the first channel numbers avoids excessive dimensionality reduction. As an example, the dilated convolution with first channel number 8 uses a 1*1 kernel, and the remaining dilated convolutions use 3*3 kernels.
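The claim that larger dilation rates enlarge the receptive field can be checked with the standard formula for the effective extent of a dilated kernel, k + (k − 1)(d − 1); the following is a general-purpose sketch, not patent-specific code:

```python
def effective_kernel_size(k, dilation):
    """Effective spatial extent of a k x k convolution with the given
    dilation rate: k + (k - 1) * (dilation - 1)."""
    return k + (k - 1) * (dilation - 1)

# The 3*3 kernels at the example dilation rates 8, 12, 16 cover
# 17, 25, and 33 pixels respectively, while a 1*1 kernel stays at 1
# regardless of the dilation rate (it only remixes channels).
```

This also shows why the 1*1 branches serve dimensionality reduction rather than context aggregation: their receptive field does not grow with dilation.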
The spatial segmentation module 20 further includes: an upsampling unit 24, arranged between the dilated convolution unit 23 and the feature fusion unit 25, for separately upsampling each of the multi-scale spatial region features so as to increase the high-dimensional features within them.
Upsampling increases the high-dimensional features in the multi-scale spatial region features, thereby capturing more image semantic information.
Moreover, by upsampling the spatial region features produced by the dilated convolution of each first channel number separately, a one-to-one, matching dimension ratio can be set for each of them during upsampling, so that the dimensions can be unified in the subsequent feature fusion.
In addition, compared with other ways of unifying dimensions, separately upsampling the spatial region features of each first channel number's dilated convolution also helps keep the network model small.
The spatial segmentation module 20 further includes: a feature fusion unit 25 for fusing the multi-channel basic image features with the upsampled spatial region features to obtain fused image features. Image features at different scales carry different detail; feature fusion combines the feature information across scales into multi-channel fused image features, which improves the edge detection of each spatial function and in turn the accuracy of partitioning the spatial functions.
Specifically, the feature fusion unit 25 includes: a fusion subunit for feeding the multi-channel basic image features and the spatial region features into a fusion network to obtain initial fused features; and a processing subunit for applying upsampling and then dimensionality reduction to the initial fused features to obtain the fused image features, the upsampling being 2x upsampling or 4x upsampling.
Upsampling the initial fused features further extracts, on top of them, abstract feature information of the boundaries of the spatial-function segmentation regions.
In this embodiment, the upsampling is 2x or 4x. With 2x or 4x upsampling, the convolution kernel used is small, which better extracts the boundaries of the spatial-function segmentation regions; the upsampling factor is even, which makes sampling faster and the collected information more complete; and the factor is not too large, which reduces the memory occupied during data processing.
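One concrete form of the even-factor upsampling described here is nearest-neighbour 2x upsampling; a minimal illustration follows (the patent does not specify the interpolation mode, so this is an assumption):

```python
def upsample2x(feature):
    """Nearest-neighbour 2x upsampling of a 2-D feature map given as
    nested lists: each value is repeated twice horizontally, and each
    row twice vertically."""
    out = []
    for row in feature:
        doubled = [v for v in row for _ in range(2)]
        out.append(doubled)
        out.append(list(doubled))  # independent copy of the doubled row
    return out
```

A 4x upsampling would simply apply this twice, which is one reason even factors compose conveniently.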
Dimensionality reduction helps remove redundant information, reducing the amount of computation and increasing segmentation speed, and accordingly the calculation speed.
Specifically, when performing dimensionality reduction, the processing subunit applies a convolution with a third channel number and a 1*1 kernel.
In this embodiment, the third channel number is greater than or equal to 128. Keeping the third channel number from being too small reduces, while redundant information is removed, the probability of losing the spatial region features of the first channel numbers. As an example, the third channel number is 128 or 256.
The spatial segmentation module 20 further includes: a segmentation region acquisition unit 26 for obtaining the plurality of spatial-function segmentation regions 100a from the fused image features. Specifically, the segmentation region acquisition unit 26 performs frame-space semantic segmentation on the fused image features to obtain the semantic segmentation result of the image to be processed, and hence the plurality of spatial-function segmentation regions.
The spatial-function segmentation performed by the spatial segmentation module 20 helps accurately determine the position and contour of each spatial-function segmentation region 100a, reducing the probability of missed detections. At the same time, since the spatial-function area is mainly the area of the region enclosed by the walls 110, the walls 110 of a same spatial-function segmentation region 100a are placed, along their contours, in the same first connected domain 310; that is, for any side of a spatial-function segmentation region 100a, the boundary of the first connected domain 310 where the wall 110 lies is a straight line, and the first connected domain 310 encloses a closed spatial functional area 320 along the contours of the walls 110. Consequently, once the area of the second connected domain (not labeled) inside the spatial functional area 320 is calculated, the spatial-function area can be obtained precisely.
With reference to Figs. 5 and 6, Fig. 5 is a partial enlargement of the initial image and Fig. 6 is a schematic diagram of an embodiment of the target image. In this embodiment, the image processing module 30 includes: a binarization unit for applying grayscale processing to the initial image 100 to obtain a binary map in which the pixels corresponding to the inner wall line 110b and the outer wall line 110a of a wall 110 have the same attribute value.
Grayscale processing yields a binary map with only two attribute values, in which the pixels of the edge contours of each spatial function are distinguishable from the pixels of the remaining regions while the pixels of the inner wall line 110b and the outer wall line 110a share the same attribute value; this also facilitates the subsequent connected-domain processing that places all pixels of the walls 110 of a same spatial-function segmentation region 100a in the same first connected domain 310.
Because the pixels corresponding to the inner wall line 110b and the outer wall line 110a have the same attribute value in the binary map, those pixels also have the same color.
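A minimal sketch of the grayscale-to-binary step, assuming dark strokes on a light background (the threshold value is an illustrative assumption; the patent does not specify one):

```python
def binarize(gray, threshold=128):
    """Map a grayscale image (nested lists of 0-255 intensities) to a
    binary map: contour strokes (dark pixels) become 1, background 0.
    Both inner and outer wall lines are dark, so they receive the same
    attribute value, as the embodiment requires."""
    return [[1 if px < threshold else 0 for px in row] for row in gray]
```

After this step, every stroke pixel carries the same value, which is the precondition for the connected-domain processing that follows.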
In this embodiment, the image processing module 30 further includes: a connected-domain processing unit for applying connected-domain processing to the walls 110 of each spatial-function segmentation region 100a in the binary map to obtain the target image 300; the connected-domain processing serves to place all pixels corresponding to the walls 110 of a same spatial-function segmentation region 100a in the same first connected domain 310.
The binary map has only two kinds of pixel attribute values, and the pixels of the inner wall line 110b and the outer wall line 110a share the same value. Connected-domain processing can therefore, along the contours of the walls 110, connect the inner wall lines 110b in sequence and the outer wall lines 110a in sequence, and fill the region between the inner wall line 110b and the outer wall line 110a. This prevents the area between the inner and outer wall lines, as well as the area at breaks between inner wall lines or between outer wall lines, from being counted as part of the spatial-function area, improving the accuracy of the calculation. Moreover, having only two kinds of pixel attribute values makes the connected-domain processing easy to implement.
In this embodiment, the connected-domain processing unit includes a first connected-domain processing unit.
Specifically, the first connected-domain processing unit includes: a first identification subunit for identifying the walls 110 of the architectural drawing in the binary map to obtain the positions of the walls 110; and a first connected-domain processing subunit for applying, after the walls 110 are identified, first connected-domain processing to each spatial-function segmentation region 100a. The first connected-domain processing includes: applying a first attribute value conversion to the pixels between the inner wall line 110b and the outer wall line 110a of the wall 110, so that the pixel attribute values after the first attribute value conversion are the same as the pixel attribute values of the inner wall line 110b and the outer wall line 110a.
Specifically, because a wall 110 carries a load-bearing-wall attribute in the architectural drawing, the position of the wall 110 can be obtained by retrieving the load-bearing-wall color.
As shown in Fig. 5, as an example, the first connected-domain processing subunit applies dilation to the region where the wall 110 is located, using a kernel whose size equals the length L and width W of the wall 110 to undergo the first attribute value conversion. Such a kernel helps ensure that all pixels between the inner wall line 110b and the outer wall line 110a of that part of the wall 110 undergo the first attribute value conversion, improving the filling of the region between the lines, and also speeds up the first attribute value conversion.
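Binary dilation with a rectangular structuring element, as used here to fill the band between the inner and outer wall lines, can be sketched in pure Python. The anchor position (top-left of the element) and the example sizes are illustrative assumptions, not the patent's exact convention:

```python
def dilate(binary, kh, kw):
    """Binary dilation of a 0/1 grid with a kh x kw rectangular
    structuring element anchored at its top-left corner: every set
    pixel stamps a kh x kw block of ones (clipped at the borders)."""
    rows, cols = len(binary), len(binary[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if binary[r][c]:
                for dr in range(kh):
                    for dc in range(kw):
                        rr, cc = r + dr, c + dc
                        if rr < rows and cc < cols:
                            out[rr][cc] = 1
    return out

# Two parallel one-pixel wall lines separated by a one-pixel gap;
# dilating with an element spanning the wall thickness fills the gap.
wall = [
    [1, 1, 1],
    [0, 0, 0],
    [1, 1, 1],
]
```

A structuring element matched to the wall's thickness fills the inter-line band in a single pass, consistent with the speed advantage stated above.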
It should be noted that, as shown in Fig. 5, the elements (i.e., components) of each spatial function mainly include walls 110, doors 120, and windows 130. The inner wall line 110b of a wall 110 is interrupted at the positions of the doors 120 and windows 130, and the outer wall line 110a of the wall 110 is likewise interrupted at those positions. To calculate the spatial-function area precisely, the walls 110 must therefore be completed at the positions of the doors 120 and windows 130, so that along the contours of the walls 110 the walls 110 of a same spatial-function segmentation region 100a are connected in sequence.
For this purpose, in this embodiment the connected-domain processing unit further includes a second connected-domain processing unit.
Specifically, the second connected-domain processing unit includes: a second identification subunit for identifying, after the walls of the architectural drawing have been identified in the binary map, the doors and windows embedded in the walls in the binary map so as to obtain the positions of the doors 120 and windows 130; a second connected-domain processing subunit for applying second connected-domain processing to the spatial-function segmentation region 100a after the doors 120 are identified; and a third connected-domain processing subunit for applying third connected-domain processing to the spatial-function segmentation region 100a after the windows 130 are identified.
In this embodiment, the second identification subunit identifies the doors 120 and windows 130 in the binary map with an object detection algorithm; for example, the object detection algorithm may use a YOLOv5 network.
As shown in Fig. 5, the second connected-domain processing includes: determining, according to the opening direction of the door 120, the extension direction of the corresponding door line 125 as a first direction; determining the first end points 116, in the first direction, of the inner wall line 110b and the outer wall line 110a of the walls 110 adjacent to the door 120; obtaining, from the positions of the first end points 116, a first rectangular region 117 enclosed by the first end points 116, the first rectangular region 117 joining, along the first direction, the walls 110 on either side of the door 120; and applying a second attribute value conversion to the pixels of the first rectangular region 117, so that the pixel attribute values after the second attribute value conversion are the same as the pixel attribute values of the inner wall line 110b and the outer wall line 110a.
It should be noted that in an architectural drawing the door line 125 represents the reference line of a door 120 in the closed state; for a wall 110 with an embedded door 120, the door 120, the inner wall line 110b, and the outer wall line 110a all extend in the same direction. Determining the extension direction of the corresponding door line 125 therefore makes it easy to determine the direction in which the wall 110 containing the door 120 should be completed.
In an architectural drawing, the inner wall line 110b and the outer wall line 110a of a same wall 110 are two parallel lines. Accordingly, the first end points 116 of the inner wall line 110b and the outer wall line 110a in the first direction are determined for the walls 110 adjacent to the door 120; the first rectangular region 117 is determined from the first end points 116, with the first end points 116 as its vertices; and the converted pixels are given the same attribute value as the pixels of the inner wall line 110b and the outer wall line 110a. In this way, the second attribute value conversion of the pixels of the first rectangular region 117 completes the wall 110 at the position of the door 120, which amounts to extending the wall 110 along the first direction; moreover, after the second attribute value conversion, the region between the inner wall line 110b and the outer wall line 110a at the position of the door 120 is filled.
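Filling the rectangular region spanned by the end points, as in the second attribute value conversion, reduces to setting every pixel inside the bounding box of those points; a minimal pure-Python sketch with hypothetical names:

```python
def fill_rectangle(binary, corners, value=1):
    """Set every pixel inside the axis-aligned bounding box of `corners`
    (a list of (row, col) end points, e.g. the four first end points of
    the adjacent wall lines) to `value`, in place, and return the grid."""
    rows = [r for r, _ in corners]
    cols = [c for _, c in corners]
    for r in range(min(rows), max(rows) + 1):
        for c in range(min(cols), max(cols) + 1):
            binary[r][c] = value
    return binary

grid = [[0] * 4 for _ in range(4)]
fill_rectangle(grid, [(1, 1), (2, 3)])  # fills rows 1-2, cols 1-3
```

Because the inner and outer wall lines are parallel, the four end points always span an axis-aligned rectangle in the first direction, so this single fill bridges the door gap.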
Similarly, as shown in Fig. 5, the third connected-domain processing includes: determining the extension direction of the boundary line 131 of the window 130 as a second direction; determining the second end points 132, in the second direction, of the inner wall line 110b and the outer wall line 110a of the walls 110 adjacent to the window 130; obtaining, from the positions of the second end points 132, a second rectangular region 133 enclosed by the second end points 132, the second rectangular region 133 joining, along the second direction, the walls 110 on either side of the window 130; and applying a third attribute value conversion to the pixels of the second rectangular region 133, so that the pixel attribute values after the third attribute value conversion are the same as the pixel attribute values of the inner wall line 110b and the outer wall line 110a.
The third attribute value conversion of the pixels of the second rectangular region 133 completes the wall 110 at the position of the window 130, which amounts to extending the wall 110 along the second direction; moreover, after the third attribute value conversion, the region between the inner wall line 110b and the outer wall line 110a at the position of the window 130 is filled.
In this embodiment, the second identification subunit identifies the inner wall line 110b and the outer wall line 110a of a wall 110 through the logical relationships between spatial functions. Specifically, for any spatial-function segmentation region 100a, the remaining spatial-function segmentation regions 100a are exterior regions; the second identification subunit therefore determines, from the exterior regions adjacent to the spatial-function segmentation region 100a currently being identified, the outer wall line 110a of the walls 110 of that region — the outer wall line 110a is exposed to an adjacent exterior region — and determines the remaining boundary lines of the walls 110 as inner wall lines 110b.
For any spatial-function segmentation region 100a currently being identified, the boundary line of its walls 110 facing an exterior region is the outer wall line 110a; that is, each outer wall line 110a corresponds to one of the exterior regions, so once the position of an exterior region is determined, the outer wall line 110a of the wall 110 exposed to that exterior region can be determined.
In this embodiment, the second identification subunit can also identify, through the logical relationships between spatial functions, the outer window boundary line (not labeled) and the inner window boundary line (not labeled) of a window 130.
Specifically, the spatial-function segmentation regions 100a include an outdoor region, i.e., the region outside the apartment building. The second identification subunit determines the outer wall lines 110a exposed to the outdoor region and, from these, determines the boundary lines of the windows 130 connected to them as the outer window boundary lines, with the remaining boundary lines of the windows 130 determined as the inner window boundary lines. Part of the walls 110 of a residence are exposed outdoors, so one region corresponding to an outer wall line 110a is the outdoor region; a window 130 is embedded in a wall 110, and its outer window boundary line connects to the outer wall line 110a. Accordingly, once the outer wall lines 110a and the outer window boundary lines have been identified, the inner wall lines 110b and the inner window boundary lines can be determined by elimination.
Through the image processing module 30, the walls 110 of a same spatial-function segmentation region 100a all lie in the same first connected domain 310, which encloses a closed spatial functional area 320 along the contours of the walls 110; the spatial functional area 320 then constitutes a second connected domain distinguishable from the first connected domain 310. Specifically, the calculation module 40 performs connected-domain detection on the target image 300, extracts the second connected domain inside the spatial functional area 320, and calculates its area to obtain the area of the spatial functional area 320.
An embodiment of the present application further provides a device which, by loading the above spatial area calculation method in the form of a program, implements the spatial area calculation method for architectural drawings provided by the embodiments of the present application.
With reference to Fig. 9, a schematic structural diagram of the device provided by an embodiment of the present application is shown. The device of this embodiment includes: at least one processor 01, at least one communication interface 02, at least one memory 03, and at least one communication bus 04.
In this embodiment, there is at least one each of the processor 01, the communication interface 02, the memory 03, and the communication bus 04, and the processor 01, the communication interface 02, and the memory 03 communicate with one another via the communication bus 04.
The communication interface 02 may be an interface of a communication module used for network communication, for example the interface of a GSM module.
The processor 01 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the spatial area calculation method for architectural drawings of this embodiment.
The memory 03 may contain high-speed RAM and may also include non-volatile memory, for example at least one disk memory.
The memory 03 stores one or more computer instructions, which are executed by the processor 01 to implement the spatial area calculation method for architectural drawings provided by the embodiments of the present application.
It should be noted that the implementing terminal device may also include other components (not shown) that may not be necessary to the disclosure of the embodiments of the present application; given that these other components may not be necessary for understanding the disclosure, they are not introduced one by one here.
An embodiment of the present application further provides a storage medium storing one or more computer instructions, the one or more computer instructions being used to implement the spatial area calculation method for architectural drawings provided by the embodiments of the present application.
In the spatial area calculation method for architectural drawings provided by the embodiments of the present application, after the candidate frame region is extracted from the architectural drawing, spatial-function segmentation is first applied to the candidate frame region to obtain an initial image comprising a plurality of spatial-function segmentation regions; image processing is then applied to the initial image so that, along the contours of the walls, the walls of a same spatial-function segmentation region are connected in sequence and the region between the inner wall line and the outer wall line of the walls is filled, placing the walls of a same spatial-function segmentation region in the same first connected domain; and the area of the spatial functional area is obtained according to the area of the second connected domain inside the spatial functional area. Performing spatial-function segmentation first helps accurately determine the position and contour of each spatial-function segmentation region, reducing the probability of missed detections; and since the spatial-function area is mainly the area of the region enclosed by the walls, placing the walls of a same spatial-function segmentation region in the same first connected domain, which encloses a closed spatial functional area along the contours of the walls, and then calculating the area of the second connected domain inside the spatial functional area allows the spatial-function area to be calculated more precisely.
The embodiments of the present application described above are combinations of elements and features of the present application. Unless otherwise mentioned, an element or feature may be considered optional. Each element or feature may be practiced without being combined with other elements or features. In addition, embodiments of the present application may be constructed by combining some of the elements and/or features. The order of operations described in the embodiments of the present application may be rearranged. Some constructions of any one embodiment may be included in another embodiment and may be replaced with corresponding constructions of another embodiment. It is obvious to those skilled in the art that claims that do not explicitly cite each other in the appended claims may be combined into an embodiment of the present application, or may be included as new claims in an amendment after the present application is filed.
The embodiments of the present application may be achieved by various means, for example hardware, firmware, software, or a combination thereof. In a hardware configuration, the methods according to exemplary embodiments of the present application may be achieved by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and the like.
In a firmware or software configuration, the embodiments of the present application may be implemented in the form of modules, procedures, functions, and the like. Software code may be stored in a memory unit and executed by a processor. The memory unit is located inside or outside the processor and may send data to and receive data from the processor via various known means.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present application. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present application. Therefore, the present application is not limited to the embodiments shown herein, but accords with the widest scope consistent with the principles and novel features disclosed herein.
Although the present application is disclosed as above, it is not limited thereto. Any person skilled in the art can make various changes and modifications without departing from the spirit and scope of the present application, so the scope of protection of the present application shall be subject to the scope defined by the claims.

Claims (15)

  1. A spatial area calculation method for architectural drawings, comprising:
    extracting a candidate frame region from an architectural drawing, wherein components of the architectural drawing include walls;
    performing spatial-function segmentation on the candidate frame region of the architectural drawing, so as to achieve edge detection of each spatial function and obtain an initial image, the initial image comprising a plurality of spatial-function segmentation regions in one-to-one correspondence with spatial functions;
    performing image processing on the initial image, so that along the contours of the walls the walls of a same spatial-function segmentation region are connected in sequence and the region between the inner wall line and the outer wall line of the walls is filled, to obtain a target image, the target image comprising a first connected domain corresponding to the walls of a same spatial-function segmentation region, the first connected domain enclosing a closed spatial functional area along the contours of the walls;
    obtaining the area of the spatial functional area according to the area of a second connected domain inside the spatial functional area.
  2. The spatial area calculation method according to claim 1, wherein the step of extracting a candidate frame region from an architectural drawing comprises:
    acquiring attribute information of the candidate frame to be extracted, the attribute information comprising one or more of text attribute information and layer attribute information, and the attribute information having a mapping relationship with the position of the candidate frame to be extracted in the architectural drawing;
    determining, according to the attribute information and the mapping relationship, the region of the candidate frame to be extracted in the architectural drawing as the candidate frame region.
  3. The spatial area calculation method according to claim 1 or 2, wherein the step of performing spatial-function segmentation on the candidate frame region of the architectural drawing comprises:
    cropping the candidate frame region from the architectural drawing as an image to be processed;
    extracting multi-channel basic image features from the image to be processed;
    applying dilated convolutions with different first channel numbers to the multi-channel basic image features to obtain multi-scale spatial region features, the receptive field of the spatial region features being larger than that of the multi-channel basic image features;
    fusing the multi-channel basic image features and the spatial region features to obtain fused image features;
    obtaining a plurality of spatial-function segmentation regions according to the fused image features.
  4. The spatial area calculation method according to claim 3, wherein the step of extracting multi-channel basic image features from the image to be processed comprises: feeding the image to be processed into a backbone network and taking the output results after multiple network blocks of the backbone network, obtaining basic image features with different second channel numbers; wherein the backbone network comprises a plurality of serially connected network blocks, each network block outputting basic image features with a specific second channel number.
  5. The spatial area calculation method according to claim 4, wherein, in the step of applying dilated convolutions with different first channel numbers to the multi-channel basic image features, the dilated convolution for each first channel number uses a plurality of convolutional layers with different dilation rates, each dilation rate being an even number from 8 to 16.
  6. The spatial area calculation method according to claim 4 or 5, wherein, after applying the dilated convolutions with different first channel numbers to the multi-channel basic image features and before fusing the multi-channel basic image features and the spatial region features, the step of performing spatial-function segmentation on the candidate frame region of the architectural drawing further comprises: separately upsampling each of the multi-scale spatial region features so as to increase the high-dimensional features in each spatial region feature.
  7. The spatial area calculation method according to any one of claims 4 to 6, wherein the step of fusing the multi-channel basic image features and the spatial region features comprises:
    feeding the multi-channel basic image features and the spatial region features into a fusion network to obtain initial fused features;
    applying upsampling and then dimensionality reduction to the initial fused features to obtain fused image features, the upsampling comprising 2x upsampling or 4x upsampling.
  8. The spatial area calculation method according to any one of claims 1 to 7, wherein the step of performing image processing on the initial image comprises:
    applying grayscale processing to the initial image to obtain a binary map, wherein, in the binary map, the pixels corresponding to the inner wall line and the outer wall line of the walls have the same attribute value;
    applying connected-domain processing to the walls of each spatial-function segmentation region in the binary map to obtain a target image, the connected-domain processing serving to place all pixels corresponding to the walls of a same spatial-function segmentation region in the same first connected domain.
  9. The spatial area calculation method according to claim 8, wherein the step of applying connected-domain processing to the walls of each spatial-function segmentation region in the binary map comprises:
    identifying the walls of the architectural drawing in the binary map to obtain the positions of the walls;
    after identifying the walls, applying first connected-domain processing to each of the spatial-function segmentation regions; wherein
    the first connected-domain processing comprises: applying a first attribute value conversion to the pixels between the inner wall line and the outer wall line of the walls, so that the pixel attribute values after the first attribute value conversion are the same as the pixel attribute values of the inner wall line and the outer wall line.
  10. The spatial area calculation method according to claim 9, wherein the step of applying a first attribute value conversion to the pixels between the inner wall line and the outer wall line of the walls comprises:
    applying dilation to the region where the walls are located, using a convolution kernel whose size equals the length and width of the wall to undergo the first attribute value conversion.
  11. The spatial area calculation method according to claim 9 or 10, further comprising:
    after identifying the walls of the architectural drawing in the binary map, identifying the doors and windows embedded in the walls in the binary map to obtain the positions of the doors and windows;
    after identifying the doors and windows, applying second connected-domain processing and third connected-domain processing, respectively, to the spatial-function segmentation regions; wherein
    the second connected-domain processing comprises: determining, according to the opening direction of the door, the extension direction of the corresponding door line as a first direction; determining the first end points, in the first direction, of the inner wall line and the outer wall line of the walls adjacent to the door; obtaining, from the positions of the first end points, a first rectangular region enclosed by the first end points, the first rectangular region joining, along the first direction, the walls on either side of the door; applying a second attribute value conversion to the pixels of the first rectangular region, so that the pixel attribute values after the second attribute value conversion are the same as the pixel attribute values of the inner wall line and the outer wall line;
    the third connected-domain processing comprises: determining the extension direction of the boundary line of the window as a second direction; determining the second end points, in the second direction, of the inner wall line and the outer wall line of the walls adjacent to the window; obtaining, from the positions of the second end points, a second rectangular region enclosed by the second end points, the second rectangular region joining, along the second direction, the walls on either side of the window; applying a third attribute value conversion to the pixels of the second rectangular region, so that the pixel attribute values after the third attribute value conversion are the same as the pixel attribute values of the inner wall line and the outer wall line.
  12. A method for processing architectural drawings, wherein the spatial area calculation method for architectural drawings according to any one of claims 1 to 11 is used to calculate the area of a spatial functional area in an architectural drawing.
  13. A spatial area calculation system for architectural drawings, comprising:
    a frame extraction module for extracting a candidate frame region from an architectural drawing, wherein components of the architectural drawing include walls;
    a spatial segmentation module for performing spatial-function segmentation on the candidate frame region of the architectural drawing, so as to achieve edge detection of each spatial function and obtain an initial image, the initial image comprising a plurality of spatial-function segmentation regions in one-to-one correspondence with spatial functions;
    an image processing module for performing image processing on the initial image, so that the walls of a same spatial-function segmentation region are connected in sequence and the region between the inner wall line and the outer wall line of the walls is filled, obtaining a target image, the target image comprising a first connected domain corresponding to the walls of a same spatial-function segmentation region, the first connected domain enclosing a closed spatial functional area along the contours of the walls;
    a calculation module for obtaining the area of the spatial functional area according to the area of a second connected domain inside the spatial functional area.
  14. A device comprising at least one memory and at least one processor, the memory storing one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the spatial area calculation method for architectural drawings according to any one of claims 1 to 11.
  15. A storage medium storing one or more computer instructions, the one or more computer instructions being used to implement the spatial area calculation method for architectural drawings according to any one of claims 1 to 11.
PCT/CN2023/102872 2022-08-29 2023-06-27 Spatial area calculation method and system for architectural drawings, architectural drawing processing method, device, and storage medium WO2024045826A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211047573.0 2022-08-29
CN202211047573.0A CN115393420A (zh) 2022-08-29 2022-08-29 Spatial area calculation method and system for architectural drawings

Publications (1)

Publication Number Publication Date
WO2024045826A1 true WO2024045826A1 (zh) 2024-03-07

Family

ID=84124489

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/102872 WO2024045826A1 (zh) 2022-08-29 2023-06-27 Spatial area calculation method and system for architectural drawings, architectural drawing processing method, device, and storage medium

Country Status (2)

Country Link
CN (1) CN115393420A (zh)
WO (1) WO2024045826A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117953164A (zh) * 2024-03-26 2024-04-30 北京鸿鹄云图科技股份有限公司 Method and system for improving drawing measurement quality

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115393420A (zh) * 2022-08-29 2022-11-25 上海智臻智能网络科技股份有限公司 Spatial area calculation method and system for architectural drawings

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111008597A (zh) * 2019-12-05 2020-04-14 万翼科技有限公司 Space recognition method and apparatus for CAD drawings, electronic device, and storage medium
WO2020199477A1 (zh) * 2019-04-04 2020-10-08 平安科技(深圳)有限公司 Image annotation method and apparatus based on multi-model fusion, computer device, and storage medium
CN111815602A (zh) * 2020-07-06 2020-10-23 清华大学 Apparatus and method for wall recognition in architectural PDF drawings based on deep learning and morphology
CN114550195A (zh) * 2020-11-10 2022-05-27 欧特克公司 Machine learning techniques for extracting floor-plan elements from architectural drawings
CN115393420A (zh) * 2022-08-29 2022-11-25 上海智臻智能网络科技股份有限公司 Spatial area calculation method and system for architectural drawings



Also Published As

Publication number Publication date
CN115393420A (zh) 2022-11-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23858864

Country of ref document: EP

Kind code of ref document: A1