CN112365369A - Method for automatically monitoring construction progress based on machine vision - Google Patents

Method for automatically monitoring construction progress based on machine vision

Info

Publication number
CN112365369A
CN112365369A (application CN202011239206.1A)
Authority
CN
China
Prior art keywords
construction
area
point cloud
image
elevation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011239206.1A
Other languages
Chinese (zh)
Inventor
李成涛
陈立
王家峰
姚楠
李荣冰
马文刚
黄传峰
林敏
潘大荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Institute of Technology
Original Assignee
Nanjing Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Institute of Technology filed Critical Nanjing Institute of Technology
Priority to CN202011239206.1A
Publication of CN112365369A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/08 Construction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Marketing (AREA)
  • Tourism & Hospitality (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • Primary Health Care (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • General Health & Medical Sciences (AREA)
  • Remote Sensing (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for automatically monitoring construction progress based on machine vision. The method divides the general plan of a construction project into a construction area and a non-construction area and treats each building monomer in the dynamic (construction) area as a single identification object; rasterizes the dynamic area and the steady-state area to form a binary raster image carrying the monitoring-area range information; performs three-dimensional reconstruction of the construction-site scene from reference images of the construction site and the surrounding scene; masks the construction area in the reference photos and extracts the non-construction part of the reference photos as a registration primitive; then performs conventional monitoring measurement of the construction site to obtain the site photos of the monitoring batch and, combined with the registration primitives, reconstructs the construction scene again to generate a three-dimensional point cloud, a DSM (digital surface model) and a DOM (digital orthophoto map). The working-surface elevation of each building monomer is extracted from the monitored DSM or three-dimensional point cloud, and the construction progress is monitored automatically through elevation comparison.

Description

Method for automatically monitoring construction progress based on machine vision
Technical Field
The invention relates to the technical field of machine vision, in particular to a method for automatically monitoring construction progress based on machine vision.
Background
Construction-progress monitoring is essential for engineering management, but the manual inspection commonly used today is time-consuming and labor-intensive, and its records are error-prone. An unmanned aerial vehicle (UAV) carrying a camera is a good monitoring tool that provides an excellent monitoring viewpoint. Applications of UAVs for monitoring construction sites have already appeared, but in most of them the aerial photos are interpreted manually. Although this greatly reduces the manpower needed for site inspection, reading the photos by hand still requires distinguishing individual buildings on a densely built site and counting the constructed floors, so the process remains labor-consuming and error-prone.
In recent years, more advanced methods that automatically analyze construction-site images have appeared. They mainly use site photos to reconstruct the construction scene in three dimensions, obtain a three-dimensional point cloud or mesh model, and compare it with the design information to recognize progress. According to the literature, three-dimensional reconstruction from a group of pictures without POS information generally requires image control point information, which leads to one of the following: the physical maintenance requirement on the image control points is high (as described in CN111006646A); the measured scene point cloud of the construction site is registered with the design model manually; or the image control points are specified once to orient and position the initial reconstruction model, parts whose appearance stays consistent between monitoring batches are selected, and the subsequent reconstruction models are registered to the oriented model with the ICP (Iterative Closest Point) algorithm. With ICP, a part of consistent appearance must be selected for every monitoring batch; strictly speaking, for each subsequent batch a part consistent with the appearance of the previous batch has to be specified, otherwise registration fails. Because there is currently no method that reliably determines the appearance-consistent parts automatically, using ICP to register reconstruction models lowers the automation level of the data processing. More recently, "image-control-free" photogrammetry can avoid specifying image control points each time, but it relies on photographic equipment that can acquire high-precision GNSS coordinates (for example a UAV configured with CORS or RTK), which imposes additional requirements on the network, data link and base stations and raises the cost.
It has been reported that the modeling accuracy of consumer-grade UAVs can reach the centimeter level. If the higher accuracy of low-altitude UAV photography can be combined with the existing environmental information, a consumer-grade UAV with lower GNSS positioning accuracy can be used in an image-control-free manner to obtain stable measurement results that register well with the design information, providing effective data for construction-progress monitoring.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a method for automatically monitoring construction progress based on machine vision, which makes full use of the position and image information of the surroundings of a construction site to realize automatic monitoring of the construction progress of a fragmented construction area.
The technical scheme adopted by the invention is as follows:
a method for automatically monitoring construction progress based on machine vision comprises the following steps:
step 1, dividing a general plan of the construction project of a construction site into a construction area and a non-construction area, wherein the construction area is the dynamic area and the non-construction area is the steady-state (stable) area; using each single building body in the dynamic area as a single identification object, identifying its plane area and ±0 elevation, and obtaining the monomer plane area, the monomer ±0 elevation and the monomer floor elevations, where the floor elevation H^{design}_{j,i} denotes the design elevation of the i-th floor of the j-th building;
step 2, rasterizing the dynamic area and the steady-state area by specifying an origin position, the coordinate-axis directions, the coordinates (x0, y0) of the origin in the horizontal plane, the spacings dx and dy, and the image extent, to form a binary raster image carrying the monitoring-area range information; dx and dy are the spatial intervals represented by each row and each column of the raster image, and the origin position is a point in the real world;
step 3, performing reference measurement on the construction site to obtain reference images of the construction site and surrounding scenes, taking the reference images of the batch as reference photos, and performing three-dimensional reconstruction on the construction site scene by using the reference photos;
step 4, masking the construction area in the reference photos, extracting the non-construction part of the reference photos and using it as a registration primitive that participates in the three-dimensional reconstruction of later construction-site scenes, so that conventional monitoring no longer requires inputting or specifying image control points;
step 5, performing conventional monitoring measurement of the construction site, i.e. acquiring images of the construction site to be monitored and of its surrounding scene as the site photos of the conventional monitoring batch; combining them with the non-construction area of the reference photos from step 4, reconstructing the construction-site scene again to generate a three-dimensional point cloud, a DSM and a DOM, where the DSM is a digital surface model containing the elevation information of the construction-site scene and the DOM is a digital orthophoto map; the origin position and coordinate-axis directions of the three-dimensional point cloud in step 5 are kept consistent with those chosen in step 2, and the origin position, coordinate-axis directions, dx, dy and image extent of the DSM and DOM in step 5 are kept consistent with those chosen in step 2;
step 6, extracting, according to the monomer plane area, the working-surface elevation of the j-th building monomer from the DSM or three-dimensional point cloud monitored in step 5, comparing it with the design floor elevations H^{design}_{j,i}, and taking the floor number i with the minimum difference; the building is then known to have been constructed to the i-th floor, so the construction progress is obtained automatically;
step 7, on the basis of the DOM obtained in step 5, extracting the current working-face image of each single building body according to its plane position information, providing image data for identifying the specific construction stage.
Further, in step 4 the construction area on the reference photos is segmented automatically using the reference photos and the point cloud information, as follows:
step 4.1, extracting the mask of the plane range of the dynamic area, Mask_{planar dynamic region}, a binary image identifying the planar dynamic area whose origin position and pixel row/column spacing coincide with those of the DSM, hereinafter called the "planar dynamic-region mask image"; Mask_{planar dynamic region}(i, j) denotes the value of the i-th row and j-th column of the image;
step 4.2, performing aerial triangulation (space-three calculation) on the reference photos, computing the interior orientation elements and exterior orientation elements of each image, and then reconstructing the scene three-dimensionally to obtain the three-dimensional point cloud of the scene;
step 4.3, discretizing the horizontal coordinate components of the point cloud according to the same dx and dy as the DSM and selecting the three-dimensional points inside the specified range according to the planar dynamic-region mask image, specifically: the k-th point p_k of the point cloud, with coordinates (x_k, y_k, z_k), is remapped to the row number i_k and column number j_k of the digital orthophoto as i_k = round((x_k - x0)/dx + 1), j_k = round((y_k - y0)/dy + 1), where round(·) denotes rounding; if Mask_{planar dynamic region}(i_k, j_k) = 1, the point p_k belongs to the construction area; if Mask_{planar dynamic region}(i_k, j_k) = 0, the point p_k belongs to the non-construction area; the point cloud of the dynamic area is selected in this way;
step 4.4, projecting the construction-area three-dimensional points selected in step 4.3 onto the reference photos according to the interior and exterior orientation elements of each image and the projective-geometry principle, obtaining the point cloud projection image of the objects in the construction area;
step 4.5, performing an opening or closing operation on the point cloud projection image of the objects in the construction area to form a closed region on the image, i.e. obtaining the mask image Mask_{image} of each photo; through the mask Mask_{image} of each photo, the dynamic area and the non-dynamic area in each photo can be distinguished automatically.
Further, the opening or closing operation on the point cloud projection image is chosen as follows: if the background of the binary image is white and the point cloud is black, an opening operation is performed; if the background is black and the point cloud part is white, a closing operation is performed.
Further, the identification in step 6 comprises:
step 6.1, extracting, according to the monomer plane area, the point cloud set of the j-th building monomer from the DSM or from the three-dimensional point cloud of the construction site as {P_j(x_k, y_k, z_k)}, k = 1, 2, …, N_j, where N_j is the total number of points of the extracted point cloud of the j-th building monomer, x_k, y_k are the horizontal coordinates and z_k is the elevation coordinate; performing segment statistics on z_k, recording the number of points in each elevation segment, and taking a representative value of the segment with the most points (for example its minimum, maximum or middle value) as the identified working-surface elevation of the j-th building monomer, denoted H^{work}_j;
step 6.2, for the j-th building monomer, comparing H^{work}_j with H^{design}_{j,i} and taking the i that minimizes |H^{work}_j - H^{design}_{j,i}|, denoted î_j, which is the estimated floor number of the construction progress of the j-th building monomer in the monitoring data.
Further, the method in step 6.1 for extracting the point cloud of the j-th building monomer from the DSM or from the three-dimensional point cloud of the construction site is as follows:
if based on the DSM, the elevation values of all DSM points inside the plane area of the j-th building monomer are taken out according to the monomer plane area, and the plane positions and elevation values are combined as the point cloud of the j-th building monomer;
if based on the three-dimensional point cloud, the method of step 4.3 is applied to the three-dimensional point cloud of the construction site according to the monomer plane area, with Mask_{planar dynamic region} replaced by Mask_{building j}, the mask specifying the plane position information of the j-th building monomer.
Further, the method for dividing the construction area and the non-construction area in step 1 is: based on the general plan of the construction project and the actual situation, the construction area and the non-construction areas, such as the roads around the construction area and the existing buildings of adjacent plots, are delineated manually.
Further, the construction-project general plan in step 1 includes any deliverable that carries the positioning information of the construction site and the buildings, such as a paper general plan, a general-plan image, a satellite image, a design model or a surveying and mapping result.
The invention has the beneficial effects that:
according to the invention, the position and image information of the surrounding environment of the construction site are fully utilized to monitor the construction progress of the construction site which is built on a large scale, the problems that the satellite positioning precision of the consumption-level unmanned aerial vehicle is low and the automatic registration cannot be reliably completed by measuring results for multiple times are solved, and a feasible technical route is provided for the automatic monitoring of convenient image acquisition tools such as the consumption-level unmanned aerial vehicle and the like on the construction site. Practice proves that the method is feasible and can be used as an effective method for automatically monitoring the progress of the fragmented construction area.
The invention solves the problem of information multiplexing of images containing dynamic areas by distinguishing the dynamic areas from the stable areas, and can reduce the requirements (such as the aspects of the overlapping degree of the photos, the integrity degree of the visual angles of the photos and the like) for single image acquisition work; in addition, the application can adapt to the consumption-level unmanned aerial vehicle with lower GPS precision, the working cost is reduced, and automatic and efficient construction progress monitoring is realized.
Drawings
FIG. 1 is a flow chart of a method for automatically monitoring construction progress based on machine vision according to the present invention;
FIG. 2 is a schematic diagram of dynamic and steady state zone partitioning based on a construction project plan according to the present invention;
FIG. 3 is a schematic view of rasterizing a monolithic planar region;
FIG. 4 is a schematic diagram of scene three-dimensional reconstruction of a construction site and surrounding scenes;
FIG. 5 is a diagram of DSMs and DOM generated for this monitoring;
FIG. 6 is a schematic diagram of the elevation-mode statistics: FIG. 6a is the elevation histogram used to extract the working-surface elevation H^{work}_j of a building monomer from the DSM, and FIG. 6b is the elevation histogram used to extract H^{work}_j from the three-dimensional point cloud;
FIG. 7 is a current "work surface" image of the building unit being extracted;
FIG. 8 is a DOM diagram of a reference;
FIG. 9 is a schematic point cloud splitting;
FIG. 10 is a schematic diagram of dynamic and steady state region segmentation of an image of a reference batch by dynamic and steady state region point cloud segmentation;
FIG. 11 is a schematic diagram of the extraction of the current "work surface" image of the building unit based on the DOM and the planar position information of the building unit in step 7.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides a method for automatically monitoring construction progress based on machine vision, which comprises the following steps:
Step 1: as shown in fig. 1 and fig. 2, based on the planning and design files of the construction site (i.e. any deliverable carrying the positioning information of the construction site and the buildings, such as a construction-project general-plan image, a satellite image, a design model or surveying and mapping results), the general plan is divided into the construction area (i.e. the dynamic area) and the non-construction area (i.e. the steady-state area), the latter including the roads around the construction area and the existing buildings of adjacent plots. The dynamic and steady-state areas are divided manually, for example by drawing frames on the construction-project plan drawing or on a captured image of the plan. Each single building in the dynamic area is used as a single identification object (called a building monomer); its plane area and ±0 elevation are identified, giving the monomer plane area, the monomer ±0 elevation and the monomer floor elevations, where the floor elevation H^{design}_{j,i} denotes the design elevation of the i-th floor of the j-th building.
Step 2: as shown in fig. 3, the dynamic and steady-state areas are rasterized. The origin position, the coordinate-axis directions, the coordinates (x0, y0) of the origin in the horizontal plane of the real world, the spacings dx and dy and the image extent are specified to form a binary raster image carrying the monitoring-area range information. x0 and y0 are the coordinates of the designated origin position; for example, an intersection of road center lines, a municipal survey control point or a building corner point can be used as the origin. The coordinate-axis directions can also be specified; when tied to a geographic system, longitude and latitude or an east-north style coordinate system can be used. dx and dy are the spatial intervals represented by each row and each column of the raster image and are related to the spatial resolution of the image. For example: x0 = 100 (m), y0 = 200 (m), dx = 0.2 (m), dy = 0.2 (m), where "x0 = 100 m, y0 = 200 m" means that the image origin is 100 m and 200 m away from the origin of the site space coordinate system in the x- and y-directions. A rasterization sketch is given below.
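As an illustration of the rasterization in step 2, the following Python sketch builds a binary mask image from axis-aligned rectangular building footprints. The footprint coordinates, grid size and the rectangle simplification are assumptions made for the example, not data from the patent.

```python
import numpy as np

def rasterize_footprints(footprints, x0, y0, dx, dy, n_rows, n_cols):
    """Build a binary mask image of the dynamic (construction) area.

    footprints: list of (xmin, xmax, ymin, ymax) rectangles in site coordinates (m).
    (x0, y0): raster origin in the site coordinate system; dx, dy: cell spacing (m).
    Returns an (n_rows, n_cols) uint8 array with 1 inside a footprint and 0 elsewhere.
    """
    mask = np.zeros((n_rows, n_cols), dtype=np.uint8)
    for xmin, xmax, ymin, ymax in footprints:
        # Map site coordinates to raster rows (x direction) and columns (y direction),
        # mirroring the 1-based mapping i = round((x - x0)/dx + 1) used in the text.
        r0 = max(int(round((xmin - x0) / dx)), 0)
        r1 = min(int(round((xmax - x0) / dx)), n_rows - 1)
        c0 = max(int(round((ymin - y0) / dy)), 0)
        c1 = min(int(round((ymax - y0) / dy)), n_cols - 1)
        mask[r0:r1 + 1, c0:c1 + 1] = 1
    return mask

# Hypothetical example: two rectangular building footprints on a 0.2 m grid.
footprints = [(120.0, 150.0, 230.0, 260.0), (180.0, 210.0, 240.0, 275.0)]
mask = rasterize_footprints(footprints, x0=100.0, y0=200.0, dx=0.2, dy=0.2,
                            n_rows=1000, n_cols=1000)
print(mask.sum(), "dynamic-area cells")
```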
Step 3: as shown in fig. 4, reference measurement is carried out on the construction site to obtain reference images of the construction site and the surrounding scene; this batch of reference images serves as the reference photos, and the construction-site scene is reconstructed three-dimensionally from them. A consumer-grade UAV with low GPS accuracy can be used to collect the reference images. In this batch of reference measurements, image control points with known coordinates are specified for the spatial positioning of the three-dimensional model. The image control points are listed in the following table:
TABLE 1 Control point coordinates

Name   X        Y        Z
#CP1   679.34   201.43    9.24
#CP2   687.65   171.95   10.03
#CP3   657.26   602.76   12.75
#CP4   658.68   628.55   13.91
#CP5    72.41   420.38    8.83
#CP6    37.71   398.92    9.77
#CP7   189.26    57.07    9.35
Step 4: the construction area (dynamic area) in the reference photos is masked, the non-construction part of the reference photos (i.e. the stable part) is extracted and used as the "registration primitive", which participates in the three-dimensional reconstruction of later construction-site scenes, so that conventional monitoring no longer requires inputting or specifying image control points. As shown in fig. 8 and fig. 9, the application automatically segments the dynamic area on the reference photos using the reference photos and the point cloud information, as follows:
Step 4.1: extract the plane-range mask of the dynamic area, Mask_{planar dynamic region}, a binary image identifying the planar dynamic area whose origin position and pixel row/column spacing coincide with those of the DSM, hereinafter called the "planar dynamic-region mask image"; Mask_{planar dynamic region}(i, j) denotes the value of the i-th row and j-th column of the image;
Step 4.2: perform aerial triangulation (space-three calculation) on the reference photos, compute the interior orientation elements (camera parameters) and exterior orientation elements (position and rotation angles) of each image, and then reconstruct the scene three-dimensionally to obtain the three-dimensional point cloud of the scene;
Step 4.3: discretize the horizontal coordinate components of the point cloud according to the same dx and dy as the DSM and select the three-dimensional points inside the specified range according to the planar dynamic-region mask image, specifically:
the k-th point p_k of the point cloud, with coordinates (x_k, y_k, z_k), is remapped to the row number i_k and column number j_k of the digital orthophoto as i_k = round((x_k - x0)/dx + 1), j_k = round((y_k - y0)/dy + 1), where round(·) denotes rounding; if Mask_{planar dynamic region}(i_k, j_k) = 1, the point p_k belongs to the construction area; if Mask_{planar dynamic region}(i_k, j_k) = 0, the point p_k belongs to the non-construction area; the point cloud of the dynamic area is selected in this way (a small sketch is given below);
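A minimal Python sketch of the remapping and mask lookup of step 4.3, assuming the point cloud is an N×3 NumPy array and the mask comes from the rasterization sketch above; the 0-based array indexing replaces the 1-based i_k, j_k of the text.

```python
import numpy as np

def split_dynamic_points(points, mask, x0, y0, dx, dy):
    """Split a point cloud into construction-area and non-construction-area points.

    points: (N, 3) array of (x, y, z) site coordinates.
    mask:   binary raster of the planar dynamic region (rows along x, columns along y).
    Returns (dynamic_points, static_points).
    """
    # i_k = round((x_k - x0)/dx + 1), j_k = round((y_k - y0)/dy + 1), shifted to 0-based.
    rows = np.rint((points[:, 0] - x0) / dx).astype(int)
    cols = np.rint((points[:, 1] - y0) / dy).astype(int)

    inside = (rows >= 0) & (rows < mask.shape[0]) & (cols >= 0) & (cols < mask.shape[1])
    dynamic = np.zeros(len(points), dtype=bool)
    dynamic[inside] = mask[rows[inside], cols[inside]] == 1

    return points[dynamic], points[~dynamic]
```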
Step 4.4: project the dynamic-area three-dimensional points selected in step 4.3 onto the reference photos according to the interior orientation elements (camera parameters) and exterior orientation elements (position and angles) of each image and the projective-geometry principle, obtaining the point cloud projection image of the objects in the construction area, as shown in fig. 10; a projection sketch follows;
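The patent does not spell out the projection formulas; the sketch below uses a standard pinhole (collinearity) model that maps world points to pixel coordinates given an intrinsic matrix K and a world-to-camera rotation R and translation t, and then marks the projected points in a binary image. All variable names are illustrative assumptions.

```python
import numpy as np

def project_points(points_world, K, R, t):
    """Project Nx3 world points into pixel coordinates with a pinhole camera model.

    K: 3x3 intrinsic matrix (interior orientation, in pixels).
    R, t: world-to-camera rotation (3x3) and translation (3,), i.e. X_cam = R @ X_world + t.
    Returns an (N, 2) array of (column, row) pixel coordinates; points behind the
    camera are returned as NaN.
    """
    X_cam = points_world @ R.T + t            # camera-frame coordinates
    uvw = X_cam @ K.T                         # homogeneous pixel coordinates
    pix = np.full((len(points_world), 2), np.nan)
    in_front = X_cam[:, 2] > 0
    pix[in_front] = uvw[in_front, :2] / uvw[in_front, 2:3]
    return pix

def rasterize_projection(pix, height, width):
    """Mark the projected points in a binary image (white point cloud on black background)."""
    img = np.zeros((height, width), dtype=np.uint8)
    valid = ~np.isnan(pix).any(axis=1)
    cols = np.rint(pix[valid, 0]).astype(int)
    rows = np.rint(pix[valid, 1]).astype(int)
    keep = (rows >= 0) & (rows < height) & (cols >= 0) & (cols < width)
    img[rows[keep], cols[keep]] = 255
    return img
```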
Step 4.5: perform an opening or closing operation on the point cloud projection image of the objects in the construction area so that a closed region is formed on the image, i.e. obtain the mask image Mask_{image} of each photo; through Mask_{image} the dynamic area and the non-dynamic area in each photo can be distinguished automatically. Specifically, if the background is white and the point cloud is black, an opening operation is applied to the binary image; if the background is black and the point cloud part is white, a closing operation is applied.
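For illustration, a closing operation on the white-points-on-black projection image from the previous sketch can be done with OpenCV morphology; the kernel size is an assumed tuning parameter, not a value from the patent.

```python
import cv2
import numpy as np

def close_projection(projection_img, kernel_size=15):
    """Merge the projected points into a closed construction-area mask.

    projection_img: uint8 binary image, white point cloud (255) on black background (0).
    A closing (dilation followed by erosion) fills the gaps between nearby points;
    for a black-points-on-white image an opening would be used instead.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    return cv2.morphologyEx(projection_img, cv2.MORPH_CLOSE, kernel)
```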
Step 5: as shown in fig. 5, conventional monitoring measurement is carried out on the construction site, i.e. images of the construction site to be monitored and its surrounding scene are acquired as the site photos of the conventional monitoring batch. Combined with the non-construction area of the reference photos from step 4, the construction-site scene is reconstructed again to generate the three-dimensional point cloud, the DSM and the DOM, where the DSM is a digital surface model containing the elevation information of the scene and the DOM is a digital orthophoto map. The origin position and coordinate-axis directions of the three-dimensional point cloud in step 5 are kept consistent with those chosen in step 2, and the origin position, coordinate-axis directions, dx, dy and image extent of the DSM and DOM are kept consistent with those chosen in step 2. When the DSM is extracted, the lowest-point elevation at each horizontal position can be taken, which avoids the influence of objects above the working surface such as tower cranes; points below the working surface inside the building generally do not interfere with identifying the elevation of the floor under construction, because their area is small, shadows prevent a point cloud from being generated there, or, even if a point cloud is formed, it does not dominate the statistics.
step 6: according to the single plane area, the elevation of the single working surface of the building is extracted and identified based on DSM or construction three-dimensional point cloud, and then the construction progress is identified as shown in figure 6, and the specific process is as follows:
Step 6.1: identify the working-surface elevation of the building monomer. Taking the monitoring data of building No. 30 as an example:
(1) If based on the DSM, the working-surface elevation H^{work}_{30} of building No. 30 is extracted from the monitored DSM. The size of the elevation segment can be chosen as 0.2-0.5 m; with a segment size of 0.2 m in this example, the elevation statistics are shown in fig. 6a. The segment 69.6-69.8 m contains 3836 points, the maximum, so the elevation of the working surface under construction is estimated as (69.6 + 69.8)/2 = 69.7 m, i.e. H^{work}_{30} = 69.7 m.
(2) If based on the three-dimensional point cloud, the method of step 4.3 is applied to the three-dimensional point cloud of the construction site according to the plane area of building No. 30, with Mask_{planar dynamic region} replaced by Mask_{building j}, the mask specifying the plane position information of the j-th building monomer, and the point cloud of building No. 30 is extracted as {P_30(x_k, y_k, z_k)}, k = 1, 2, …, N_30, where N_30 is the total number of points of the extracted point cloud (414 points in this example), x_k, y_k are the horizontal coordinates and z_k is the elevation coordinate. Segment statistics are performed on z_k, the number of points in each elevation segment is recorded, and a representative value of the segment with the most points (for example its minimum, maximum or middle value) is taken. In this example the segment with the most points is 69.6-69.8 m, containing 35 points, as shown in fig. 6b. Taking the middle value as the representative value, the elevation of the working surface under construction is estimated as (69.6 + 69.8)/2 = 69.7 m, i.e. H^{work}_{30} = 69.7 m. A histogram sketch is given below.
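A minimal NumPy sketch of the segment (bin) statistics in step 6.1, assuming the elevations of the extracted building point cloud are already in a 1-D array; the bin width and the choice of the bin center as the representative value follow the 0.2 m example above.

```python
import numpy as np

def work_surface_elevation(z, bin_width=0.2):
    """Estimate the working-surface elevation as the center of the most populated elevation bin.

    z: 1-D array of elevation coordinates of one building monomer's point cloud (m).
    """
    edges = np.arange(z.min(), z.max() + 2 * bin_width, bin_width)
    counts, edges = np.histogram(z, bins=edges)
    k = int(np.argmax(counts))                 # most populated elevation segment
    return 0.5 * (edges[k] + edges[k + 1])     # middle value as representative elevation

# Hypothetical usage with the building point cloud selected earlier:
# h_work = work_surface_elevation(building_points[:, 2])
```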
Step 6.2: whether the working-surface elevation of building No. 30 is obtained from the DSM or from the three-dimensional point cloud, H^{work}_{30} is compared with the design floor elevations H^{design}_{30,i}, and the i that minimizes |H^{work}_{30} - H^{design}_{30,i}| is taken as î_{30}. The elevations extracted by way (1) and way (2) in step 6.1 are both H^{work}_{30} = 69.7 m. The design floor elevations H^{design}_{30,i} of building No. 30 (j = 30) are listed in Table 2; by comparison, the difference from the design elevation 70.65 m of floor 21 is the smallest, so the floor number of the construction progress is estimated as î_{30} = 21 (floors). A matching sketch is given after Table 2.
TABLE 2 Floor elevations of building No. 30

Floor  Elevation (m)   Floor  Elevation (m)   Floor  Elevation (m)
 1     10.65           10     37.65           19     64.65
 2     13.65           11     40.65           20     67.65
 3     16.65           12     43.65           21     70.65
 4     19.65           13     46.65           22     73.65
 5     22.65           14     49.65           23     76.65
 6     25.65           15     52.65           24     79.65
 7     28.65           16     55.65           25     82.65
 8     31.65           17     58.65           26     85.65
 9     34.65           18     61.65
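The matching in step 6.2 is a nearest-elevation search; the sketch below is a minimal illustration that uses the Table 2 design elevations and the 69.7 m estimate from step 6.1, with the function name chosen for the example.

```python
import numpy as np

# Design floor elevations of building No. 30 from Table 2 (floors 1..26, in meters).
design_elevations = np.array([
    10.65, 13.65, 16.65, 19.65, 22.65, 25.65, 28.65, 31.65, 34.65,
    37.65, 40.65, 43.65, 46.65, 49.65, 52.65, 55.65, 58.65, 61.65,
    64.65, 67.65, 70.65, 73.65, 76.65, 79.65, 82.65, 85.65,
])

def progress_floor(h_work, design_elevations):
    """Return the floor number whose design elevation is closest to the measured
    working-surface elevation (step 6.2)."""
    return int(np.argmin(np.abs(design_elevations - h_work))) + 1  # floors are 1-based

print(progress_floor(69.7, design_elevations))  # -> 21, matching the example
```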
Step 7: on the basis of the DOM obtained in step 5, the current "working-face" image of each building monomer is extracted according to its plane position information, providing image data for identifying the specific construction stage, as shown in fig. 7 and fig. 11. A cropping sketch is given below.
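As an illustration of step 7, the sketch below crops the working-face image of one building out of the DOM using the same raster origin and spacing as step 2; the rectangular footprint is an assumed simplification of the monomer plane area.

```python
import numpy as np

def crop_work_face(dom, footprint, x0, y0, dx, dy):
    """Cut the current working-face image of one building monomer out of the DOM.

    dom: orthophoto as an array indexed (row, column[, channel]), rows along x, columns along y.
    footprint: (xmin, xmax, ymin, ymax) plane extent of the building in site coordinates (m).
    """
    xmin, xmax, ymin, ymax = footprint
    r0 = max(int(round((xmin - x0) / dx)), 0)
    r1 = min(int(round((xmax - x0) / dx)), dom.shape[0] - 1)
    c0 = max(int(round((ymin - y0) / dy)), 0)
    c1 = min(int(round((ymax - y0) / dy)), dom.shape[1] - 1)
    return dom[r0:r1 + 1, c0:c1 + 1]
```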
The above embodiments are only used for illustrating the design idea and features of the present invention, and the purpose of the present invention is to enable those skilled in the art to understand the content of the present invention and implement the present invention accordingly, and the protection scope of the present invention is not limited to the above embodiments. Therefore, all equivalent changes and modifications made in accordance with the principles and concepts disclosed herein are intended to be included within the scope of the present invention.

Claims (7)

1. A method for automatically monitoring construction progress based on machine vision is characterized by comprising the following steps:
step 1, dividing a general plan of the construction project of a construction site into a construction area and a non-construction area, wherein the construction area is the dynamic area and the non-construction area is the steady-state area; using each single building body in the dynamic area as a single identification object, identifying its plane area and ±0 elevation, and obtaining the monomer plane area, the monomer ±0 elevation and the monomer floor elevations, where the floor elevation H^{design}_{j,i} denotes the design elevation of the i-th floor of the j-th building;
step 2, rasterizing the dynamic area and the steady-state area by specifying an origin position, the coordinate-axis directions, the coordinates (x0, y0) of the origin in the horizontal plane, the spacings dx and dy, and the image extent, to form a binary raster image carrying the monitoring-area range information; dx and dy are the spatial intervals represented by each row and each column of the raster image, and the origin position is a point in the real world;
step 3, performing reference measurement on the construction site to obtain reference images of the construction site and surrounding scenes, taking the reference images of the batch as reference photos, and performing three-dimensional reconstruction on the construction site scene by using the reference photos;
step 4, masking the construction area in the reference photos, extracting the non-construction part of the reference photos and using it as a registration primitive that participates in the three-dimensional reconstruction of subsequent construction-site scenes, so that the monitoring of the other, non-reference batches no longer requires inputting or specifying image control points;
step 5, performing conventional monitoring measurement of the construction site, i.e. acquiring images of the construction site to be monitored and of its surrounding scene as the site photos of the conventional monitoring batch; combining them with the non-construction area of the reference photos from step 4, reconstructing the construction-site scene again to generate a three-dimensional point cloud, a DSM and a DOM, where the DSM is a digital surface model containing the elevation information of the construction-site scene and the DOM is a digital orthophoto map; the origin position and coordinate-axis directions of the three-dimensional point cloud in step 5 are kept consistent with those chosen in step 2, and the origin position, coordinate-axis directions, dx, dy and image extent of the DSM and DOM in step 5 are kept consistent with those chosen in step 2;
step 6, extracting, according to the monomer plane area, the working-surface elevation of the j-th building monomer from the conventionally monitored DSM or three-dimensional point cloud, comparing it with the design floor elevations H^{design}_{j,i}, and taking the floor number i with the minimum difference; the building is then known to have been constructed to the i-th floor, so the construction progress is obtained automatically;
step 7, on the basis of the DOM obtained in step 5, extracting the current working-face image of each building unit according to its plane position information, providing image data for identifying the specific construction stage.
2. The method for automatically monitoring construction progress based on machine vision as claimed in claim 1, wherein in step 4 the construction area on the reference photos is segmented automatically using the reference photos and the point cloud information, as follows:
step 4.1, extracting the mask of the plane range of the dynamic area, Mask_{planar dynamic region}, a binary image identifying the planar dynamic area whose origin position and pixel row/column spacing coincide with those of the DSM, hereinafter called the "planar dynamic-region mask image"; Mask_{planar dynamic region}(i, j) denotes the value of the i-th row and j-th column of the image;
step 4.2, performing aerial triangulation (space-three calculation) on the reference photos, computing the interior orientation elements and exterior orientation elements of each image, and then reconstructing the scene three-dimensionally to obtain the three-dimensional point cloud of the scene;
step 4.3, discretizing the horizontal coordinate components of the point cloud according to the same dx and dy as the DSM and selecting the three-dimensional points inside the specified range according to the planar dynamic-region mask image, specifically:
the k-th point p_k of the point cloud, with coordinates (x_k, y_k, z_k), is remapped to the row number i_k and column number j_k of the digital orthophoto as i_k = round((x_k - x0)/dx + 1), j_k = round((y_k - y0)/dy + 1), where round(·) denotes rounding; if Mask_{planar dynamic region}(i_k, j_k) = 1, the point p_k belongs to the construction area; if Mask_{planar dynamic region}(i_k, j_k) = 0, the point p_k belongs to the non-construction area; the point cloud of the dynamic area is selected in this way;
step 4.4, projecting the construction-area three-dimensional points selected in step 4.3 onto the reference photos according to the interior and exterior orientation elements of each image and the projective-geometry principle, obtaining the point cloud projection image of the objects in the construction area;
step 4.5, performing an opening or closing operation on the point cloud projection image of the objects in the construction area to form a closed region on the image, i.e. obtaining the mask image Mask_{image} of each photo; through the mask Mask_{image} of each photo, the dynamic area and the non-dynamic area in each photo can be distinguished automatically.
3. The method for automatically monitoring the construction progress based on the machine vision is characterized in that the method for performing the opening operation or the closing operation on the projection graph of the point cloud is as follows: if the background is white and the point cloud is black, opening operation is carried out according to the binary image; and if the background is black and the point cloud part is white, performing closing operation.
4. The method for automatically monitoring construction progress based on machine vision according to claim 1, 2 or 3, wherein the identification in step 6 comprises:
step 6.1, extracting, according to the monomer plane area, the point cloud set of the j-th building monomer from the DSM or from the three-dimensional point cloud of the construction site as {P_j(x_k, y_k, z_k)}, k = 1, 2, …, N_j, where N_j is the total number of points of the extracted point cloud of the j-th building monomer, x_k, y_k are the horizontal coordinates and z_k is the elevation coordinate; performing segment statistics on z_k, recording the number of points in each elevation segment, and taking a representative value of the segment with the most points (for example its minimum, maximum or middle value) as the identified working-surface elevation of the j-th building monomer, denoted H^{work}_j;
step 6.2, for the j-th building monomer, comparing H^{work}_j with H^{design}_{j,i} and taking the i that minimizes |H^{work}_j - H^{design}_{j,i}|, denoted î_j, which is the estimation of the floor number of the construction progress of the j-th building monomer in the monitoring data.
5. The method for automatically monitoring construction progress based on machine vision according to claim 4, wherein the method for extracting the point cloud of the j-th building monomer from the DSM or from the three-dimensional point cloud of the construction site in step 6.1 is as follows:
if based on the DSM, the elevation values of all DSM points inside the plane area of the j-th building monomer are taken out according to the monomer plane area, and the plane positions and elevation values are combined as the point cloud of the j-th building monomer;
if based on the three-dimensional point cloud, the method of step 4.3 is applied to the three-dimensional point cloud of the construction site according to the monomer plane area, with Mask_{planar dynamic region} replaced by Mask_{building j}, the mask specifying the plane position information of the j-th building monomer.
6. The method for automatically monitoring construction progress based on machine vision according to claim 5, wherein the method for dividing the construction area and the non-construction area in step 1 is: based on the general plan of the construction project and the actual situation, the construction area and the non-construction areas, such as the roads around the construction area and the existing buildings of adjacent plots, are delineated manually.
7. The method for automatically monitoring construction progress based on machine vision according to claim 6, wherein the construction-project general plan in step 1 includes any deliverable carrying the positioning information of the construction site and the buildings, such as a paper general plan, a general-plan image, a satellite image, a design model or a surveying and mapping result.
CN202011239206.1A 2020-11-09 2020-11-09 Method for automatically monitoring construction progress based on machine vision Pending CN112365369A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011239206.1A CN112365369A (en) 2020-11-09 2020-11-09 Method for automatically monitoring construction progress based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011239206.1A CN112365369A (en) 2020-11-09 2020-11-09 Method for automatically monitoring construction progress based on machine vision

Publications (1)

Publication Number Publication Date
CN112365369A true CN112365369A (en) 2021-02-12

Family

ID=74508708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011239206.1A Pending CN112365369A (en) 2020-11-09 2020-11-09 Method for automatically monitoring construction progress based on machine vision

Country Status (1)

Country Link
CN (1) CN112365369A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379899A (en) * 2021-06-22 2021-09-10 南京工程学院 Automatic extraction method for regional images of construction engineering working face
CN113379899B (en) * 2021-06-22 2023-09-19 南京工程学院 Automatic extraction method for building engineering working face area image

Similar Documents

Publication Publication Date Title
CN111322994B (en) Large-scale cadastral survey method for intensive house area based on unmanned aerial vehicle oblique photography
US7509241B2 (en) Method and apparatus for automatically generating a site model
US8116530B2 (en) Map change detection device, map change detection method, and program
CN107092877A (en) Remote sensing image roof contour extracting method based on basement bottom of the building vector
CN106600680A (en) Batch type refined three-dimensional modeling method of building frame model
CN107564046A (en) It is a kind of based on a cloud and the secondary accurate extracting method of registering contour of building of UAV images
Abbate et al. Prospective upon multi-source urban scale data for 3d documentation and monitoring of urban legacies
CN113032977A (en) Method for measuring and calculating earth and rock volume based on unmanned aerial vehicle inverse modeling technology
AGUILAR et al. 3D coastal monitoring from very dense UAV-Based photogrammetric point clouds
WO2022104251A1 (en) Image analysis for aerial images
CN117115243B (en) Building group outer facade window positioning method and device based on street view picture
CN112365369A (en) Method for automatically monitoring construction progress based on machine vision
Elugachev et al. Development of the technical vision algorithm
JP2014126537A (en) Coordinate correction device, coordinate correction program, and coordinate correction method
CN116106904B (en) Facility deformation monitoring method and facility deformation monitoring equipment for object MT-InSAR
WO2023223284A1 (en) System and method for triggering data transfer using progress tracking
CN113587834B (en) Slope deformation monitoring method based on uncontrolled photogrammetry
de Oliveira et al. Height-gradient-based method for occlusion detection in true orthophoto generation
Sani et al. 3D reconstruction of building model using UAV point clouds
CN115164769A (en) Three-dimensional real estate measuring and calculating method based on oblique photography technology
CN113670266A (en) Technology for measuring real estate title by utilizing unmanned aerial vehicle oblique photography
CN112884890A (en) Multi-format basic geographic information data fusion display method
CN115183746B (en) Space-earth integrated image acquisition method applied to distribution network low-voltage line panoramic transparent user newspaper
CN113532283B (en) Method for monitoring foundation pit displacement trend based on consumption-level unmanned aerial vehicle and GPS (global positioning system)
KR102538157B1 (en) Method for producing 3 dimension reality model using unmanned aerial vehicle

Legal Events

Code  Description
PB01  Publication
SE01  Entry into force of request for substantive examination
WD01  Invention patent application deemed withdrawn after publication (application publication date: 20210212)