CN112907550B - Building detection method and device, electronic equipment and storage medium


Info

Publication number
CN112907550B
CN112907550B (application CN202110228247.9A)
Authority
CN
China
Prior art keywords
dimensional
building
detected
sub
image data
Prior art date
Legal status
Active
Application number
CN202110228247.9A
Other languages
Chinese (zh)
Other versions
CN112907550A
Inventor
汤寅航
刘琪
Current Assignee
Innovation Qizhi Chengdu Technology Co ltd
Original Assignee
Innovation Qizhi Chengdu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Innovation Qizhi Chengdu Technology Co ltd
Priority to CN202110228247.9A
Publication of CN112907550A
Application granted
Publication of CN112907550B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/002Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30132Masonry; Concrete

Abstract

The application provides a building detection method, a device, electronic equipment and a storage medium. The method comprises the following steps: receiving two-dimensional image data and three-dimensional image data of a building to be detected, which are acquired by a plurality of image acquisition devices; splicing the corresponding three-dimensional images according to the plurality of three-dimensional image data to obtain a corresponding three-dimensional structure of the building to be detected; splicing the corresponding two-dimensional images according to the three-dimensional structure and the two-dimensional image data to obtain a planar structure corresponding to the building to be detected; dividing the three-dimensional structure and the plane structure according to the detection part of the building to be detected to obtain sub-three-dimensional data and sub-two-dimensional data corresponding to the detection part; and detecting the detection part according to the sub-three-dimensional data or the sub-two-dimensional data to obtain a detection result. The detection method provided by the embodiment of the application requires no manual participation and thereby improves detection efficiency.

Description

Building detection method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a building detection method, a device, an electronic apparatus, and a storage medium.
Background
The detection of building construction is an important component of quality management in building construction projects. In recent years, with the rapid development of China's economy and the continuous progress of the construction industry, the number of building structures of all kinds has kept growing.
During construction, because personnel turnover is high, problems such as incorrect building materials, insufficient structural dimensions and insufficient material quantities easily arise. After the building has been in use for some time, these defects can lead to various problems, such as parts of the structure falling off, and in serious cases even casualties.
The existing approach to construction inspection is mainly manual: inspectors enter the building, measure and check each structural element one by one, and compare the results with the design drawings to assess the state of the building. A further auxiliary means is for the inspector to carry a handheld scanning device into the building, scan it, and then analyze the scanned data. Manual inspection of this kind is inefficient.
Disclosure of Invention
An object of an embodiment of the present application is to provide a method, an apparatus, an electronic device, and a storage medium for building detection, so as to improve efficiency of building detection.
In a first aspect, an embodiment of the present application provides a building detection method, including: receiving two-dimensional image data and three-dimensional image data of a building to be detected, which are acquired by a plurality of image acquisition devices; splicing the corresponding three-dimensional images according to the plurality of three-dimensional image data to obtain a corresponding three-dimensional structure of the building to be detected; splicing the corresponding two-dimensional images according to the three-dimensional structure and the two-dimensional image data to obtain a planar structure corresponding to the building to be detected; dividing the three-dimensional structure and the planar structure according to the detection part of the building to be detected to obtain sub-three-dimensional data and sub-two-dimensional data corresponding to the detection part; and detecting the detection part according to the sub-three-dimensional data or the sub-two-dimensional data to obtain a detection result.
According to the embodiment of the application, the two-dimensional images and the three-dimensional images of the building to be detected are acquired by the plurality of image acquisition devices that have been set up, and the three-dimensional images and the two-dimensional images are spliced to obtain the whole building to be detected; the whole is then segmented according to the detection requirements to obtain the complete sub-three-dimensional data and sub-two-dimensional data corresponding to the part to be detected, so that the part to be detected can be detected without manual participation, which improves the detection efficiency.
Further, the three-dimensional image data includes a plurality of point clouds; the splicing the corresponding three-dimensional images according to the plurality of three-dimensional image data to obtain the corresponding three-dimensional structure of the building to be detected comprises the following steps: aiming at the plurality of three-dimensional image data, acquiring a preset number of point clouds corresponding to two coplanar three-dimensional image data; performing matrix transformation on the two three-dimensional image data according to the coplanar point clouds to obtain a corresponding transformation matrix; and optimizing the transformation matrix by using a three-dimensional image stitching algorithm to obtain the three-dimensional structure.
According to the embodiment of the application, the three-dimensional image data is subjected to matrix transformation by utilizing the coplanar point clouds, coarse positioning is realized, the transformation matrix is optimized by utilizing the three-dimensional image stitching algorithm, and the accurate registration of the point clouds is realized, so that the accuracy of three-dimensional image stitching is improved.
Further, the method for dividing the three-dimensional structure according to the detection part of the building to be detected includes: acquiring a three-dimensional model of the building to be detected; the three-dimensional model comprises model point coordinates; matching the model point coordinates with the point cloud of the three-dimensional structure; determining target model point coordinates corresponding to the detection part from the three-dimensional model, and determining target point clouds in the corresponding three-dimensional structure according to the target model point coordinates; and dividing the three-dimensional structure according to the target point cloud.
According to the embodiment of the application, the three-dimensional structure of the building to be detected is segmented by utilizing the three-dimensional model of the building to be detected, so that the complete part to be detected is obtained.
Further, the matching between the model point coordinates and the point cloud of the three-dimensional structure includes: acquiring the ground in the three-dimensional model and acquiring the ground in the three-dimensional structure; matching the ground in the three-dimensional model with the ground in the three-dimensional structure; matching other structures in the three-dimensional model with other structures in the three-dimensional structure according to the spatial position information of the other structures and the ground; wherein the other structure is a structure other than the ground.
According to the embodiment of the application, the three-dimensional model is matched with the three-dimensional structure based on the ground, and other parts are matched, so that the accuracy of segmentation is improved.
Further, the dividing the planar structure according to the detection part of the building to be detected includes: based on the segmentation result of the three-dimensional structure, the corresponding planar structure is segmented.
Further, the detecting the detection part according to the sub-three-dimensional data and the sub-two-dimensional data to obtain a detection result includes: obtaining size information of the detection part according to the sub-three-dimensional data; the size information comprises a length size, a width size and a height size corresponding to the detection part; determining the volume corresponding to the detection part according to the size information; and determining material information according to the volume and the material density corresponding to the detection part.
According to the embodiment of the application, the size information corresponding to the detection part can be obtained according to the sub-three-dimensional data obtained after segmentation, and the material information of the detection part can be obtained based on the material density corresponding to the detection part, so that the detection of the detection part is realized.
Further, the detecting the detection part according to the sub-three-dimensional data or the sub-two-dimensional data to obtain a detection result includes: determining whether the material corresponding to the detection part is correct according to the texture characteristics of the sub-two-dimensional data, so that whether the building material to be detected is correct can be judged rapidly.
In a second aspect, embodiments of the present application provide a building detection apparatus, including: the receiving module is used for receiving the two-dimensional image data and the three-dimensional image data of the building to be detected, which are acquired by the plurality of image acquisition devices; the first splicing module is used for splicing the corresponding three-dimensional images according to the plurality of three-dimensional image data to obtain a corresponding three-dimensional structure of the building to be detected; the second splicing module is used for splicing the two-dimensional images corresponding to the two-dimensional image data according to the three-dimensional structure to obtain a planar structure corresponding to the building to be detected; the segmentation module is used for segmenting the three-dimensional structure and the plane structure according to the detection part of the building to be detected to obtain sub-three-dimensional data and sub-two-dimensional data corresponding to the detection part; the detection module is used for detecting the detection part according to the sub-three-dimensional data or the sub-two-dimensional data to obtain a detection result.
In a third aspect, an embodiment of the present application provides an electronic device, including: the device comprises a processor, a memory and a bus, wherein the processor and the memory complete communication with each other through the bus; the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of the first aspect.
In a fourth aspect, embodiments of the present application provide a non-transitory computer readable storage medium comprising: the non-transitory computer-readable storage medium stores computer instructions that cause the computer to perform the method of the first aspect.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the embodiments of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a building detection method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of feature point matching according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a three-dimensional image stitching method according to an embodiment of the present application;
fig. 4 is a schematic diagram of a building segmentation process to be detected according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a building detection device according to an embodiment of the present application;
fig. 6 is a schematic diagram of an entity structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
There is a great deal of content to be detected in a building, including the building materials used, the dimensions of the structures inside the building, the amounts of material, and the like. Moreover, a building is generally large and comprises many parts, such as floors, walls, roofs, windows and doors; some buildings contain pillars inside, and some contain decorations and the like, so each part needs to be detected.
The embodiment of the application provides a building detection method, as shown in fig. 1. The detection method provided by the embodiment of the application can be applied to an electronic device; the electronic device may be a smart phone, a tablet computer, a personal digital assistant (PDA), etc. The method includes:
step 101: and receiving the two-dimensional image data and the three-dimensional image data of the building to be detected, which are acquired by the plurality of image acquisition devices.
The viewing angle of a single image acquisition device is limited, that is, its acquisition range is limited; in addition, its field of view may be blocked by pillars and other structures inside the building. A plurality of image acquisition devices are therefore needed to acquire images of the building to be detected. The image acquisition devices may be arranged inside the building to be detected in advance and, if the outer surface of the building also needs to be detected, outside it as well. Each image acquisition device is provided with a 2D imaging device and a 3D imaging device, which acquire images of the various parts of the building to obtain two-dimensional image data and three-dimensional image data of the corresponding parts. An image acquisition device may be communicatively connected to the electronic device so that the acquired two-dimensional image data and three-dimensional image data can be transmitted to the electronic device in real time. If an image acquisition device is not communicatively connected to the electronic device, the two-dimensional image data and three-dimensional image data it acquires can be transmitted to the electronic device through a data cable.
It can be understood that the number and positions of the image acquisition devices are not limited in the embodiment of the application, so long as the three-dimensional image data and two-dimensional image data they acquire together cover every part of the building to be detected.
Step 102: and splicing the corresponding three-dimensional images according to the plurality of three-dimensional image data to obtain the corresponding three-dimensional structure of the building to be detected.
Structures such as walls and columns often block the view of an individual image acquisition device, so the acquired three-dimensional image data are spliced to obtain the complete three-dimensional structure of the building.
Step 103: and splicing the two-dimensional images corresponding to the two-dimensional image data according to the three-dimensional structure to obtain a planar structure corresponding to the building to be detected.
Because the 2D imaging device and the 3D imaging device are integrated on the same image acquisition device and acquire images of the same part, once the three-dimensional image data have been spliced, the two-dimensional image data can be spliced correspondingly to obtain the planar structure corresponding to the building to be detected.
Step 104: and dividing the three-dimensional structure and the planar structure according to the detection part of the building to be detected to obtain sub-three-dimensional data and sub-two-dimensional data corresponding to the detection part.
After the overall three-dimensional structure and the planar structure of the building to be detected are obtained, for each detection part, the three-dimensional structure and the planar structure are required to be segmented, and corresponding sub-three-dimensional data and sub-two-dimensional data are obtained so as to detect the detection part.
Step 105: and detecting the detection part according to the sub-three-dimensional data or the sub-two-dimensional data to obtain a detection result.
In a specific implementation, after the sub-three-dimensional data and the sub-two-dimensional data of the detection part are obtained, the volume of the detection part can be calculated from the sub-three-dimensional data so as to judge the amount of material used, and the texture of the material can be obtained from the sub-two-dimensional data so as to judge whether the material is correct.
According to the embodiment of the application, the two-dimensional images and the three-dimensional images of the building to be detected are acquired by the plurality of image acquisition devices that have been set up, and the three-dimensional images and the two-dimensional images are spliced to obtain the whole building to be detected; the whole is then segmented according to the detection requirements to obtain the complete sub-three-dimensional data and sub-two-dimensional data corresponding to the part to be detected, so that the part to be detected can be detected without manual participation, which improves the detection efficiency.
On the basis of the above embodiment, the three-dimensional image data includes a plurality of point clouds; the splicing the corresponding three-dimensional images according to the plurality of three-dimensional image data to obtain the corresponding three-dimensional structure of the building to be detected comprises the following steps:
aiming at the plurality of three-dimensional image data, acquiring a preset number of point clouds corresponding to two coplanar three-dimensional image data;
performing matrix transformation on the two three-dimensional image data according to the coplanar point clouds to obtain a corresponding transformation matrix;
and optimizing the transformation matrix by using a three-dimensional image stitching algorithm to obtain the three-dimensional structure.
In a specific implementation, when the three-dimensional image data are spliced, the three-dimensional image data acquired by the plurality of image acquisition devices may contain overlapping parts, so a preset number of point clouds corresponding to two coplanar pieces of three-dimensional image data can be acquired from the plurality of three-dimensional image data. For example, for two pieces of three-dimensional image data that both contain the same mirror, point clouds on the mirror can be selected, specifically the point clouds corresponding to the four corners of the mirror. It can be understood that whether two pieces of three-dimensional image data are coplanar can be determined manually, or the objects of interest in the two three-dimensional images can be framed by object detection and their similarity calculated, for example as an intersection-over-union ratio; if the similarity is greater than a preset threshold, the two three-dimensional images contain a coplanar object, and the preset number of point clouds is selected from that object. It is understood that the three-dimensional image data include a point cloud corresponding to each pixel point in the three-dimensional image, and that point clouds need to be acquired in this way for every pair of three-dimensional image data sharing a common plane.
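For illustration only, the following is a minimal Python/NumPy sketch of such an intersection-over-union check between two detections of the same object. The function name box_iou_3d, the axis-aligned-box simplification and the 0.5 threshold are assumptions for the sketch, not details taken from the application, and the two boxes are assumed to be expressed in a common rough coordinate frame:

```python
import numpy as np

def box_iou_3d(box_a, box_b):
    """Intersection-over-union of two axis-aligned 3D boxes, each given as
    (xmin, ymin, zmin, xmax, ymax, zmax)."""
    lo = np.maximum(box_a[:3], box_b[:3])
    hi = np.minimum(box_a[3:], box_b[3:])
    inter = np.prod(np.clip(hi - lo, 0.0, None))  # overlap volume (0 if disjoint)
    vol_a = np.prod(box_a[3:] - box_a[:3])
    vol_b = np.prod(box_b[3:] - box_b[:3])
    return inter / (vol_a + vol_b - inter)

# Two detections of the same mirror seen from neighbouring scanners; an IoU above
# the (illustrative) threshold 0.5 marks them as the same coplanar object.
box_scan1 = np.array([0.0, 2.0, 1.0, 1.2, 2.1, 2.5])
box_scan2 = np.array([0.1, 2.0, 1.0, 1.3, 2.1, 2.5])
is_coplanar_object = box_iou_3d(box_scan1, box_scan2) > 0.5
```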
After the coplanar point clouds are obtained, the coordinates of corresponding points still differ, because the pieces of three-dimensional image data are acquired by image acquisition devices placed at different positions. In reality, however, the coordinates of a point in the first piece of three-dimensional image data should be the same as the coordinates of the corresponding point in the second piece. The embodiment of the application therefore performs a matrix transformation on the two pieces of three-dimensional image data according to the coplanar point clouds, which realizes the coarse positioning. The method specifically comprises the following steps:
Assume that the set of the preset number of point clouds in the first piece of three-dimensional image data is {P_1, P_2, ..., P_n1} and the set in the second piece of three-dimensional image data is {Q_1, Q_2, ..., Q_n2}. A point P_i is randomly selected from the point clouds of the first piece; a neighborhood radius is calculated from the point cloud density around P_i, and the covariance matrix of P_i is constructed from the neighboring points within that radius, for example COV(P_i) = (1/|N_i|) Σ_{p∈N_i} (p − P_i)(p − P_i)^T, where N_i is the neighborhood of P_i.
Likewise, a point Q_i is randomly selected from the point clouds of the second piece; a neighborhood radius is calculated from the point cloud density around Q_i, and the covariance matrix of Q_i is constructed from the neighboring points within that radius.
The eigenvalues and eigenvectors of the covariance matrix are solved: COV(P_i)V = EV.
The eigenvectors are used to construct the x, y and z coordinate axes of a local rotation- and translation-invariant coordinate system, establishing a local coordinate system with the point P_i as its origin.
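For illustration only, a minimal NumPy sketch of building the neighborhood covariance matrix and its eigenvectors. The function name local_frame and the fixed example radius are assumptions; in the method described above the radius is derived from the local point cloud density:

```python
import numpy as np

def local_frame(points, center, radius):
    """Covariance matrix of the neighborhood of `center` (points closer than
    `radius`) and its eigenvectors, used as the x, y, z axes of a local
    rotation/translation-invariant coordinate system with `center` as origin."""
    nbrs = points[np.linalg.norm(points - center, axis=1) < radius]
    cov = np.cov(nbrs.T)                      # 3x3 matrix COV(P_i)
    eigvals, eigvecs = np.linalg.eigh(cov)    # solves COV(P_i) V = E V
    return cov, eigvecs                       # eigenvector columns ordered by eigenvalue

# Illustrative call on random points; `radius` is a placeholder value here.
pts = np.random.rand(500, 3)
cov_p, axes_p = local_frame(pts, pts[0], radius=0.2)
```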
Feature point matching is then carried out between the point cloud set P and the point cloud set Q according to the local rotation- and translation-invariant coordinate system to obtain a preliminary set of matched points. The flow of feature point matching is shown in fig. 2 and may include:
Step 201: for any point P_i in point cloud set P, whose corresponding feature is v_i, searching the second piece of three-dimensional image data for the feature v_j nearest to v_i and the second-nearest feature v'_j;
Step 202: calculating the Euclidean distances from feature v_i to feature v_j and to feature v'_j;
Step 203: judging from these two distances whether a correct correspondence exists between feature v_i and feature v_j, for example by setting e(v_i, v_j) = 1 when the ratio D_ij / D'_ij is smaller than a preset threshold and to 0 otherwise;
where e(v_i, v_j) denotes the correspondence between feature v_i and feature v_j, D_ij is the Euclidean distance between feature v_i and feature v_j, and D'_ij is the Euclidean distance between feature v_i and feature v'_j.
If e(v_i, v_j) = 1, feature v_i and feature v_j are matched successfully; otherwise the match fails.
According to the method, the point clouds in the point cloud set P and the point cloud set Q can be matched.
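For illustration only, a minimal sketch of such nearest/second-nearest feature matching using a k-d tree. The ratio value 0.8 and the function name match_features are assumptions, not values taken from the application:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_features(feats_p, feats_q, ratio=0.8):
    """For every descriptor in feats_p, find its nearest and second-nearest
    neighbours in feats_q and keep the match only when the nearest/second-nearest
    distance ratio is small enough (i.e. e(v_i, v_j) = 1)."""
    dists, idx = cKDTree(feats_q).query(feats_p, k=2)   # nearest and second-nearest
    keep = dists[:, 0] < ratio * dists[:, 1]
    return np.flatnonzero(keep), idx[keep, 0]           # matched indices into P and Q

# Illustrative call with random descriptors standing in for the local-frame features.
p_idx, q_idx = match_features(np.random.rand(100, 3), np.random.rand(120, 3))
```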
A rotation matrix and a translation matrix are then calculated using a singular value decomposition algorithm; the rotation matrix and the translation matrix together form the transformation matrix.
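For illustration only, a minimal NumPy sketch of estimating the rotation and translation from matched point pairs by singular value decomposition; the function name rigid_transform is an assumption:

```python
import numpy as np

def rigid_transform(src, dst):
    """Rotation R and translation t that best map the matched source points onto
    the destination points (dst ≈ R @ src + t), via singular value decomposition."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # cross-covariance of centred pairs
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # avoid returning a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t                               # together they form the transformation matrix
```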
The three-dimensional image stitching algorithm may be an ICP algorithm; the specific steps, shown in fig. 3, include:
Step 301: setting a distance threshold w as the condition for ending the iteration, where w > 0; the specific value of the distance threshold w is determined according to the point cloud density d of the point cloud set P;
Step 302: randomly selecting a plurality of points in the first piece of three-dimensional image data as points to be matched;
Step 303: searching for the points corresponding to the points to be matched in the second piece of three-dimensional image data by using a back-projection method;
Step 304: adopting the point-to-plane distance as the objective function to be minimized by the ICP algorithm, and iteratively calculating the rigid transformation between the first piece and the second piece of three-dimensional image data;
Step 305: stopping the iteration when the objective function value is smaller than the distance threshold w, and taking the rigid transformation solved at that moment as the final result to complete the point cloud matching.
And after the point cloud matching is completed, splicing the three-dimensional image data according to the matched point cloud data to obtain a three-dimensional structure.
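For illustration only, a sketch of such a point-to-plane ICP refinement using the Open3D library. The use of Open3D and the function name refine_registration are assumptions; the application itself does not name a specific library:

```python
import numpy as np
import open3d as o3d

def refine_registration(src_pts, dst_pts, init_transform, w):
    """Point-to-plane ICP refinement of the coarse transform; `w` plays the role of
    the distance threshold derived from the point-cloud density, and
    `init_transform` is the 4x4 matrix obtained from the coarse positioning."""
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(src_pts)
    dst = o3d.geometry.PointCloud()
    dst.points = o3d.utility.Vector3dVector(dst_pts)
    dst.estimate_normals()  # point-to-plane distances require target normals
    result = o3d.pipelines.registration.registration_icp(
        src, dst, w, init_transform,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation  # refined 4x4 rigid transform used for splicing

# e.g. refine_registration(src_pts, dst_pts, np.eye(4), w=0.05)
```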
According to the embodiment of the application, the three-dimensional image data is subjected to matrix transformation by utilizing the coplanar point clouds, coarse positioning is realized, the transformation matrix is optimized by utilizing the three-dimensional image stitching algorithm, and the accurate registration of the point clouds is realized, so that the accuracy of three-dimensional image stitching is improved.
On the basis of the above embodiment, since the materials used for different parts of the building to be detected are different and the corresponding dimensions are also different, it is necessary to divide the three-dimensional structure of the building to be detected, and specific dividing steps are shown in fig. 4, and include:
step 401: acquiring a three-dimensional model of the building to be detected; the three-dimensional model comprises model point coordinates; the three-dimensional model can be a CAD drawing corresponding to the building to be detected, and the CAD drawing comprises model point coordinates corresponding to each part of the three-dimensional model. The model point coordinates are also three-dimensional coordinates.
Step 402: matching the model point coordinates with the point cloud of the three-dimensional structure; that is, the model point coordinates of each point in the three-dimensional model are matched with the point cloud of the three-dimensional structure of the building to be detected, so that the points in the three-dimensional model correspond to the points of the three-dimensional structure. The model point coordinates may be matched to the point cloud of the three-dimensional structure according to the following steps:
the first step: acquiring the ground in the three-dimensional model and acquiring the ground in the three-dimensional structure; because only one ground is provided for both the three-dimensional model and the three-dimensional structure, the ground in the three-dimensional model and the ground in the three-dimensional structure are respectively used as references for matching, and the matching efficiency can be greatly improved.
And a second step of: matching the ground in the three-dimensional model with the ground in the three-dimensional structure; fitting model point coordinates corresponding to the ground in the three-dimensional model to obtain a fitting plane corresponding to the ground in the three-dimensional model, and fitting point clouds corresponding to the ground in the three-dimensional structure to obtain a fitting plane corresponding to the ground in the three-dimensional structure; and then, carrying out similarity calculation on the two fitted planes after fitting, thereby realizing the matching of the ground in the three-dimensional model and the ground in the three-dimensional structure.
And a third step of: matching other structures in the three-dimensional model with other structures in the three-dimensional structure according to the spatial position information of the other structures and the ground; wherein the other structure is a structure other than the ground.
After the ground matching is completed, the point clouds of the other structures can be obtained from the three-dimensional structure; according to the spatial positional relationship between the point cloud of an other structure and the ground, the structure with the same positional relationship is found in the three-dimensional model and the two structures are matched, thereby completing the matching between each part of the three-dimensional structure and each part of the three-dimensional model.
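For illustration only, a minimal NumPy sketch of fitting the two ground planes and checking whether they match. The normal-angle criterion and the 5 degree tolerance are assumptions for the sketch, not details taken from the application:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through `points`: returns a unit normal n and offset d
    such that n . x + d ≈ 0 for points on the plane."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                              # direction of smallest variance
    return normal, -float(normal @ centroid)

def grounds_match(model_ground_pts, structure_ground_pts, angle_tol_deg=5.0):
    """Treat the model ground and the reconstructed ground as matched when their
    fitted plane normals are nearly parallel."""
    n_model, _ = fit_plane(model_ground_pts)
    n_struct, _ = fit_plane(structure_ground_pts)
    cos_angle = np.clip(abs(float(n_model @ n_struct)), 0.0, 1.0)
    return np.degrees(np.arccos(cos_angle)) < angle_tol_deg
```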
Step 403: determining target model point coordinates corresponding to the detection part from the three-dimensional model, and determining target point clouds in the corresponding three-dimensional structure according to the target model point coordinates; and obtaining target model point coordinates corresponding to the detection part from the three-dimensional model according to the requirement, wherein the detection part is a part designated by a detector. The detection personnel can extract the point coordinates of the target model through the electronic equipment, and the three-dimensional model is matched with the three-dimensional structure, so that the point cloud in the three-dimensional structure corresponding to the point coordinates of the target model is used as the target point cloud.
Step 404: and dividing the three-dimensional structure according to the target point cloud.
On the basis of the above embodiment, since the three-dimensional structure and the planar structure correspond to each other (the planar structure simply has less depth information than the three-dimensional structure), the point cloud in the three-dimensional structure and the pixel coordinates in the planar structure can also be placed in one-to-one correspondence; after the three-dimensional structure has been segmented, the planar structure can therefore be segmented based on the segmentation result.
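For illustration only, a minimal sketch of carrying the segmentation over to the planar structure, assuming the point/pixel correspondence is stored as an H x W index array; this representation and the function name planar_mask_from_3d are assumptions:

```python
import numpy as np

def planar_mask_from_3d(pixel_to_point, target_point_ids):
    """Given an H x W array mapping each pixel of the planar structure to the index
    of its corresponding point in the stitched point cloud, return the boolean mask
    of the pixels belonging to one segmented detection part."""
    return np.isin(pixel_to_point, target_point_ids)

# Illustrative use: mark the pixels whose points fall inside the segmented part.
mask = planar_mask_from_3d(np.arange(12).reshape(3, 4), np.array([1, 2, 5]))
```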
According to the embodiment of the application, the three-dimensional structure and the plane structure of the building to be detected are segmented by utilizing the three-dimensional model of the building to be detected, so that the complete part to be detected is obtained.
On the basis of the above embodiment, after the sub-three-dimensional data and sub-two-dimensional data corresponding to the detection part are obtained, the length, width and height of the detection part can be obtained from the sub-three-dimensional data, and its volume can be calculated from these dimensions. It will be appreciated that the detection part may not be a standard cuboid; in that case, when calculating the volume, the detection part can be subdivided into a plurality of standard cuboids whose volumes are calculated and summed to obtain the volume of the detection part. Of course, the detection part may also be a round body, in which case the size information corresponding to the detection part may be its radius. After the volume of the detection part is obtained, the material information can be calculated from the material density corresponding to the detection part. For example, if the detection part is a column in the building to be detected, the material density of the column can be known in advance; because each part has a corresponding construction standard, multiplying the material density by the volume of the detection part gives the corresponding material information, that is, the amount of cement used for the column.
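For illustration only, a minimal sketch of the volume and material calculation for a detection part decomposed into cuboids. The example dimensions and the 2400 kg/m^3 density are assumed values, not values taken from the application:

```python
def material_amount(cuboid_sizes_m, density_kg_per_m3):
    """Volume of a detection part decomposed into standard cuboids (each given as
    length, width, height in metres) and the resulting amount of material."""
    volume = sum(l * w * h for l, w, h in cuboid_sizes_m)
    return volume, volume * density_kg_per_m3

# Illustrative column split into a shaft and a capital; 2400 kg/m^3 is an assumed
# density for concrete.
volume_m3, mass_kg = material_amount([(0.4, 0.4, 3.0), (0.6, 0.6, 0.2)], 2400.0)
```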
In addition, the sub-two-dimensional data contain texture information, so whether the material corresponding to the detection part is correct can be determined from the texture features of the sub-two-dimensional data. Specifically, the texture features of the sub-two-dimensional data can be matched against the texture features of the material that should be used at the detection part, for example by calculating the similarity between the two sets of texture features; if the similarity is greater than a preset threshold, the material corresponding to the detection part is correct, otherwise the material corresponding to the detection part is wrong.
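For illustration only, a minimal OpenCV sketch of one possible texture comparison using gray-level histogram correlation. The application does not specify which texture feature is used; the 0.8 threshold and the function name material_is_correct are assumptions:

```python
import cv2

def material_is_correct(patch_bgr, reference_bgr, threshold=0.8):
    """Compare gray-level histograms of the detected texture patch and the reference
    material texture; the material is judged correct when the histogram correlation
    exceeds the preset threshold."""
    def hist(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        h = cv2.calcHist([gray], [0], None, [64], [0, 256])
        return cv2.normalize(h, h)
    similarity = cv2.compareHist(hist(patch_bgr), hist(reference_bgr), cv2.HISTCMP_CORREL)
    return similarity > threshold
```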
According to the embodiment of the application, the building can be effectively detected by analyzing and reconstructing on the basis of the two-dimensional data and the three-dimensional data, and the detection precision and speed are improved.
Fig. 5 is a schematic structural diagram of a building detection device according to an embodiment of the present application, where the device may be a module, a program segment, or a code on an electronic device. It should be understood that the apparatus corresponds to the embodiment of the method of fig. 1 described above, and is capable of performing the steps involved in the embodiment of the method of fig. 1, and specific functions of the apparatus may be referred to in the foregoing description, and detailed descriptions thereof are omitted herein as appropriate to avoid redundancy. The device comprises a receiving module 501, a first splicing module 502, a second splicing module 503, a dividing module 504 and a detecting module 505, wherein:
the receiving module 501 is used for receiving two-dimensional image data and three-dimensional image data of a building to be detected, which are acquired by a plurality of image acquisition devices; the first stitching module 502 is configured to stitch corresponding three-dimensional images according to the plurality of three-dimensional image data to obtain a corresponding three-dimensional structure of the building to be detected; the second stitching module 503 is configured to stitch the two-dimensional images corresponding to the two-dimensional image data according to the three-dimensional structure to obtain a planar structure corresponding to the building to be detected; the segmentation module 504 is configured to segment the three-dimensional structure and the planar structure according to a detection position of the building to be detected, so as to obtain sub-three-dimensional data and sub-two-dimensional data corresponding to the detection position; the detection module 505 is configured to detect the detection portion according to the sub-three-dimensional data or the sub-two-dimensional data, so as to obtain a detection result.
On the basis of the above embodiment, the three-dimensional image data includes a plurality of point clouds; the first splicing module 502 is specifically configured to:
aiming at the plurality of three-dimensional image data, acquiring a preset number of point clouds corresponding to two coplanar three-dimensional image data;
performing matrix transformation on the two three-dimensional image data according to the coplanar point clouds to obtain a corresponding transformation matrix;
and optimizing the transformation matrix by using a three-dimensional image stitching algorithm to obtain the three-dimensional structure.
Based on the above embodiment, the segmentation module 504 is specifically configured to:
acquiring a three-dimensional model of the building to be detected; the three-dimensional model comprises model point coordinates;
matching the model point coordinates with the point cloud of the three-dimensional structure;
determining target model point coordinates corresponding to the detection part from the three-dimensional model, and determining target point clouds in the corresponding three-dimensional structure according to the target model point coordinates;
and dividing the three-dimensional structure according to the target point cloud.
Based on the above embodiment, the segmentation module 504 is specifically configured to:
acquiring the ground in the three-dimensional model and acquiring the ground in the three-dimensional structure;
matching the ground in the three-dimensional model with the ground in the three-dimensional structure;
matching other structures in the three-dimensional model with other structures in the three-dimensional structure according to the spatial position information of the other structures and the ground; wherein the other structure is a structure other than the ground.
Based on the above embodiment, the segmentation module 504 is specifically configured to:
based on the segmentation result of the three-dimensional structure, the corresponding planar structure is segmented.
Based on the above embodiment, the detection module 505 is specifically configured to:
obtaining size information of the detection part according to the sub-three-dimensional data; the size information comprises a length size, a width size and a height size corresponding to the detection part;
determining the volume corresponding to the detection part according to the size information;
and determining material information according to the volume and the material density corresponding to the detection part.
On the basis of the above embodiment, the detection module 505 is further specifically configured to:
and determining whether the material corresponding to the detection part is correct or not according to the texture characteristics of the sub two-dimensional data.
Fig. 6 is a schematic diagram of an entity structure of an electronic device according to an embodiment of the present application, as shown in fig. 6, where the electronic device includes: a processor (processor) 601, a memory (memory) 602, and a bus 603; wherein the processor 601 and the memory 602 perform communication with each other via the bus 603.
The processor 601 is configured to invoke program instructions in the memory 602 to perform the methods provided in the above method embodiments, for example, including: receiving two-dimensional image data and three-dimensional image data of a building to be detected, which are acquired by a plurality of image acquisition devices; splicing the corresponding three-dimensional images according to the plurality of three-dimensional image data to obtain a corresponding three-dimensional structure of the building to be detected; splicing the corresponding two-dimensional images according to the three-dimensional structure and the two-dimensional image data to obtain a planar structure corresponding to the building to be detected; dividing the three-dimensional structure and the planar structure according to the detection part of the building to be detected to obtain sub-three-dimensional data and sub-two-dimensional data corresponding to the detection part; and detecting the detection part according to the sub-three-dimensional data or the sub-two-dimensional data to obtain a detection result.
The processor 601 may be an integrated circuit chip having signal processing capabilities. The processor 601 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and can implement or perform the methods, steps and logical blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The Memory 602 may include, but is not limited to, random access Memory (Random Access Memory, RAM), read Only Memory (ROM), programmable Read Only Memory (Programmable Read-Only Memory, PROM), erasable Read Only Memory (Erasable Programmable Read-Only Memory, EPROM), electrically erasable Read Only Memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), and the like.
The present embodiment discloses a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, are capable of performing the methods provided by the above-described method embodiments, for example comprising: receiving two-dimensional image data and three-dimensional image data of a building to be detected, which are acquired by a plurality of image acquisition devices; splicing the corresponding three-dimensional images according to the plurality of three-dimensional image data to obtain a corresponding three-dimensional structure of the building to be detected; splicing the corresponding two-dimensional images according to the three-dimensional structure and the two-dimensional image data to obtain a planar structure corresponding to the building to be detected; dividing the three-dimensional structure and the planar structure according to the detection part of the building to be detected to obtain sub-three-dimensional data and sub-two-dimensional data corresponding to the detection part; and detecting the detection part according to the sub-three-dimensional data or the sub-two-dimensional data to obtain a detection result.
The present embodiment provides a non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the methods provided by the above-described method embodiments, for example, including: receiving two-dimensional image data and three-dimensional image data of a building to be detected, which are acquired by a plurality of image acquisition devices; splicing the corresponding three-dimensional images according to the plurality of three-dimensional image data to obtain a corresponding three-dimensional structure of the building to be detected; splicing the corresponding two-dimensional images according to the three-dimensional structure and the two-dimensional image data to obtain a planar structure corresponding to the building to be detected; dividing the three-dimensional structure and the planar structure according to the detection part of the building to be detected to obtain sub-three-dimensional data and sub-two-dimensional data corresponding to the detection part; and detecting the detection part according to the sub-three-dimensional data or the sub-two-dimensional data to obtain a detection result.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
Further, the units described as separate units may or may not be physically separate, and units displayed as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Furthermore, functional modules in various embodiments of the present application may be integrated together to form a single portion, or each module may exist alone, or two or more modules may be integrated to form a single portion.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application, and various modifications and variations may be suggested to one skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (9)

1. A method of building inspection comprising:
receiving two-dimensional image data and three-dimensional image data of a building to be detected, which are acquired by a plurality of image acquisition devices; each image acquisition device is provided with 2D imaging equipment and 3D imaging equipment;
splicing the corresponding three-dimensional images according to the plurality of three-dimensional image data to obtain a corresponding three-dimensional structure of the building to be detected;
splicing the corresponding two-dimensional images according to the three-dimensional structure and the two-dimensional image data to obtain a planar structure corresponding to the building to be detected;
dividing the three-dimensional structure and the planar structure according to the detection part of the building to be detected to obtain sub-three-dimensional data and sub-two-dimensional data corresponding to the detection part;
detecting the detection part according to the sub-three-dimensional data or the sub-two-dimensional data to obtain a detection result;
dividing the three-dimensional structure according to the detection part of the building to be detected, including:
acquiring a three-dimensional model of the building to be detected; the three-dimensional model comprises model point coordinates;
matching the model point coordinates with the point cloud of the three-dimensional structure;
determining target model point coordinates corresponding to the detection part from the three-dimensional model, and determining target point clouds in the corresponding three-dimensional structure according to the target model point coordinates;
and dividing the three-dimensional structure according to the target point cloud.
2. The method of claim 1, wherein the three-dimensional image data comprises a plurality of point clouds; the splicing the corresponding three-dimensional images according to the plurality of three-dimensional image data to obtain the corresponding three-dimensional structure of the building to be detected comprises the following steps:
aiming at the plurality of three-dimensional image data, acquiring a preset number of point clouds corresponding to two coplanar three-dimensional image data;
performing matrix transformation on the two three-dimensional image data according to the coplanar point clouds to obtain a corresponding transformation matrix;
and optimizing the transformation matrix by using a three-dimensional image stitching algorithm to obtain the three-dimensional structure.
3. The method of claim 1, wherein said matching said model point coordinates to said three-dimensional structure point cloud comprises:
acquiring the ground in the three-dimensional model and acquiring the ground in the three-dimensional structure;
matching the ground in the three-dimensional model with the ground in the three-dimensional structure;
matching other structures in the three-dimensional model with other structures in the three-dimensional structure according to the spatial position information of the other structures and the ground; wherein the other structure is a structure other than the ground.
4. The method according to claim 1, wherein the dividing the planar structure according to the detection site of the building to be detected comprises:
based on the segmentation result of the three-dimensional structure, the corresponding planar structure is segmented.
5. The method according to claim 1, wherein detecting the detection site based on the sub-three-dimensional data and the sub-two-dimensional data to obtain a detection result includes:
obtaining size information of the detection part according to the sub-three-dimensional data; the size information comprises a length size, a width size and a height size corresponding to the detection part;
determining the volume corresponding to the detection part according to the size information;
and determining material information according to the volume and the material density corresponding to the detection part.
6. The method according to claim 1, wherein detecting the detection site based on the sub-three-dimensional data or the sub-two-dimensional data to obtain a detection result includes:
and determining whether the material corresponding to the detection part is correct or not according to the texture characteristics of the sub two-dimensional data.
7. A building inspection device, comprising:
the receiving module is used for receiving the two-dimensional image data and the three-dimensional image data of the building to be detected, which are acquired by the plurality of image acquisition devices; each image acquisition device is provided with 2D imaging equipment and 3D imaging equipment;
the first splicing module is used for splicing the corresponding three-dimensional images according to the plurality of three-dimensional image data to obtain a corresponding three-dimensional structure of the building to be detected;
the second splicing module is used for splicing the two-dimensional images corresponding to the two-dimensional image data according to the three-dimensional structure to obtain a planar structure corresponding to the building to be detected;
the segmentation module is used for segmenting the three-dimensional structure and the plane structure according to the detection part of the building to be detected to obtain sub-three-dimensional data and sub-two-dimensional data corresponding to the detection part;
the detection module is used for detecting the detection part according to the sub-three-dimensional data or the sub-two-dimensional data to obtain a detection result;
the segmentation module is specifically used for:
acquiring a three-dimensional model of the building to be detected; the three-dimensional model comprises model point coordinates;
matching the model point coordinates with the point cloud of the three-dimensional structure;
determining target model point coordinates corresponding to the detection part from the three-dimensional model, and determining target point clouds in the corresponding three-dimensional structure according to the target model point coordinates;
and dividing the three-dimensional structure according to the target point cloud.
8. An electronic device, comprising: a processor, a memory, and a bus, wherein,
the processor and the memory complete communication with each other through the bus;
the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of any of claims 1-6.
9. A non-transitory computer readable storage medium storing computer instructions which, when executed by a computer, cause the computer to perform the method of any of claims 1-6.
CN202110228247.9A 2021-03-01 2021-03-01 Building detection method and device, electronic equipment and storage medium Active CN112907550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110228247.9A CN112907550B (en) 2021-03-01 2021-03-01 Building detection method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112907550A CN112907550A (en) 2021-06-04
CN112907550B true CN112907550B (en) 2024-01-19

Family

ID=76107361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110228247.9A Active CN112907550B (en) 2021-03-01 2021-03-01 Building detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112907550B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114037814B (en) * 2021-11-11 2022-12-23 北京百度网讯科技有限公司 Data processing method, device, electronic equipment and medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013204229A1 (en) * 2013-03-12 2014-09-18 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Method and device for determining a raw material content in a building
KR20150128300A (en) * 2014-05-09 2015-11-18 한국건설기술연구원 method of making three dimension model and defect analysis using camera and laser scanning
CN106780712A (en) * 2016-10-28 2017-05-31 武汉市工程科学技术研究院 Joint laser scanning and the three-dimensional point cloud generation method of Image Matching
CN110047142A (en) * 2019-03-19 2019-07-23 中国科学院深圳先进技术研究院 No-manned plane three-dimensional map constructing method, device, computer equipment and storage medium
CN110462678A (en) * 2017-03-30 2019-11-15 富士胶片株式会社 Image processing apparatus and image processing method
WO2019242174A1 (en) * 2018-06-21 2019-12-26 华南理工大学 Method for automatically detecting building structure and generating 3d model based on laser radar
CN111009002A (en) * 2019-10-16 2020-04-14 贝壳技术有限公司 Point cloud registration detection method and device, electronic equipment and storage medium
CN112200916A (en) * 2020-12-08 2021-01-08 深圳市房多多网络科技有限公司 Method and device for generating house type graph, computing equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9805451B2 (en) * 2014-06-24 2017-10-31 Hover Inc. Building material classifications from imagery
EP3506211B1 (en) * 2017-12-28 2021-02-24 Dassault Systèmes Generating 3d models representing buildings
US10339384B2 (en) * 2018-02-07 2019-07-02 Structionsite Inc. Construction photograph integration with 3D model images
CN109242903B (en) * 2018-09-07 2020-08-07 百度在线网络技术(北京)有限公司 Three-dimensional data generation method, device, equipment and storage medium


Also Published As

Publication number Publication date
CN112907550A (en) 2021-06-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant