CN116993303B - Progress monitoring management method and system for construction operation


Info

Publication number
CN116993303B
Authority
CN
China
Prior art keywords
image
construction
progress
difference
target
Prior art date
Legal status
Active
Application number
CN202311247461.4A
Other languages
Chinese (zh)
Other versions
CN116993303A (en)
Inventor
Ding Xueliang (丁雪亮)
Current Assignee
Shenzhen Meizhi Xiangshu Technology Co ltd
Original Assignee
Shenzhen Meizhi Xiangshu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Meizhi Xiangshu Technology Co ltd
Priority to CN202311247461.4A
Publication of CN116993303A
Application granted
Publication of CN116993303B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/10 - Office automation; Time management
    • G06Q10/103 - Workflow collaboration or project management
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/08 - Construction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/17 - Terrestrial scenes taken from planes or by drones

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Multimedia (AREA)
  • Marketing (AREA)
  • General Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Quality & Reliability (AREA)
  • Remote Sensing (AREA)
  • Data Mining & Analysis (AREA)
  • Operations Research (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of intelligent monitoring and discloses a progress monitoring management method and system for construction operations, which are used to improve the accuracy of progress monitoring management for construction operations. The method comprises the following steps: based on the area to be monitored, planning flight paths for a plurality of coded unmanned aerial vehicles to obtain a flight path corresponding to each coded unmanned aerial vehicle; controlling the plurality of coded unmanned aerial vehicles to acquire multispectral images to obtain a multispectral image set; analyzing the characteristic region of each multispectral image to obtain a target characteristic region for each multispectral image, and segmenting each multispectral image to obtain a segmented image set; performing process node matching on the segmented image set to obtain a process node set, and calculating the construction progress from the process node set to obtain a target construction progress; performing standard image matching on the segmented image set to obtain a standard image set, and comparing the two sets to obtain a difference analysis result; and generating a management strategy for the target construction progress to obtain a target management strategy.

Description

Progress monitoring management method and system for construction operation
Technical Field
The invention relates to the technical field of intelligent monitoring, in particular to a progress monitoring management method and system for construction operation.
Background
With the increasing complexity of construction and engineering projects, traditional construction management methods are increasingly showing limitations. The need to monitor engineering progress, quality and safety has become more urgent, and the rapid development of digitization technology has also provided new possibilities to address these challenges. Therefore, comprehensive application based on unmanned aerial vehicles, image processing, data analysis and intelligent algorithms becomes a research hotspot in the field of construction management.
In the prior art, the data of each link is often isolated, and seamless integration is difficult to realize. The difficulty of data exchange and sharing among multiple systems affects the accuracy of real-time monitoring and decision making. Intelligent path planning for multiple unmanned aerial vehicles must solve complex problems such as path conflicts, path optimization and collaborative flight, and an efficient and practical path planning algorithm is currently lacking. Acquisition and processing of multispectral images involve large amounts of image data and complex analysis algorithms, and accurately identifying the target feature region and effectively segmenting the image remains a challenging problem. Feature region analysis of the multispectral image set requires highly accurate algorithms to ensure that critical information is identified, and existing data analysis methods may be unstable in complex scenes, resulting in inaccurate analysis results.
Disclosure of Invention
The invention provides a method and a system for monitoring and managing the progress of construction operation, which are used for improving the accuracy of the progress monitoring and management of the construction operation.
The first aspect of the present invention provides a method for monitoring and managing progress of a construction job, the method comprising:
acquiring preset equipment states of a plurality of unmanned aerial vehicles to obtain an equipment state set, and carrying out equipment coding on the unmanned aerial vehicles through the equipment state set to obtain a plurality of coded unmanned aerial vehicles;
performing flight path planning on a plurality of coded unmanned aerial vehicles based on a preset area to be monitored to obtain a flight path corresponding to each coded unmanned aerial vehicle;
controlling a plurality of encoding unmanned aerial vehicles to acquire multispectral images based on the flight paths corresponding to the encoding unmanned aerial vehicles to obtain multispectral image sets;
carrying out characteristic region analysis on each multispectral image in the multispectral image set to obtain a target characteristic region corresponding to each multispectral image, and respectively carrying out image segmentation on each multispectral image based on the target characteristic region corresponding to each multispectral image to obtain a segmented image set;
performing process node matching on the segmented image set to obtain a process node set corresponding to the segmented image set, and performing construction progress calculation on the process node set to obtain a target construction progress;
performing standard image matching on the segmented image set through a preset expected construction progress to obtain a standard image set, and performing difference comparison on the segmented image set and the standard image set to obtain a difference analysis result;
and based on the difference analysis result, performing management strategy generation on the target construction progress to obtain a target management strategy, and transmitting the target management strategy to a preset data processing terminal.
With reference to the first aspect, in a first implementation manner of the first aspect of the present invention, the performing, based on a preset area to be monitored, flight path planning on the plurality of encoding unmanned aerial vehicles to obtain a flight path corresponding to each encoding unmanned aerial vehicle includes:
performing region segmentation on the region to be monitored to obtain a plurality of sub regions to be monitored;
carrying out priority calculation on a plurality of subareas to be monitored to obtain priority indexes of each subarea to be monitored;
performing terminal calibration on a plurality of coded unmanned aerial vehicles based on the priority index of each sub-region to be monitored to obtain terminal coordinates of each coded unmanned aerial vehicle;
and planning flight paths of the plurality of coding unmanned aerial vehicles through the terminal coordinates of each coding unmanned aerial vehicle to obtain the corresponding flight path of each coding unmanned aerial vehicle.
With reference to the first aspect, in a second implementation manner of the first aspect of the present invention, the performing feature region analysis on each multispectral image in the multispectral image set to obtain a target feature region corresponding to each multispectral image, and performing image segmentation on each multispectral image based on the target feature region corresponding to each multispectral image to obtain a segmented image set, where the method includes:
performing color calibration processing on each multispectral image to obtain a plurality of calibration images;
calibrating the monitoring target of the area to be monitored to obtain a plurality of monitoring targets;
performing edge calibration on the plurality of calibration images through the plurality of monitoring targets to obtain edge information corresponding to each calibration image;
performing initial region segmentation on each multispectral image through the edge information corresponding to each calibration image to obtain an initial characteristic region corresponding to each multispectral image;
carrying out discontinuous region extraction on the initial characteristic region corresponding to each multispectral image respectively to obtain discontinuous regions of the initial characteristic region corresponding to each multispectral image;
performing region smoothing on the initial characteristic region corresponding to each multispectral image based on the discontinuous region of the initial characteristic region corresponding to each multispectral image to obtain a target characteristic region corresponding to each multispectral image;
and respectively carrying out image segmentation on each multispectral image based on the target characteristic region corresponding to each multispectral image to obtain a segmented image set.
With reference to the first aspect, in a third implementation manner of the first aspect of the present invention, the performing process node matching on the segmented image set to obtain a process node set corresponding to the segmented image set, and performing construction progress calculation on the process node set to obtain a target construction progress, where the performing includes:
carrying out pixel distribution analysis on each divided image in the divided image set to obtain pixel distribution data of each divided image;
performing density estimation on pixel distribution data of each segmented image to obtain density estimation data of each segmented image;
performing process node matching through the density estimation data of each segmented image based on a preset standard process node database to obtain a process node set corresponding to the segmented image set;
performing process completion degree analysis on each process node in the process node set to obtain the process completion degree of each process node;
and calculating the construction progress of the process node set according to the process completion degree of each process node to obtain the target construction progress.
With reference to the first aspect, in a fourth implementation manner of the first aspect of the present invention, the performing standard image matching on the segmented image set through a preset expected construction progress to obtain a standard image set, and performing difference comparison on the segmented image set and the standard image set to obtain a difference analysis result, where the performing step includes:
extracting expected process nodes from the expected construction progress to obtain a plurality of expected process nodes;
calculating expected completion time of a plurality of expected process nodes to obtain target expected completion time;
performing standard image matching on the segmented image set through a plurality of expected process nodes based on the target expected completion time to obtain the standard image set;
and performing difference comparison on the segmented image set and the standard image set to obtain a difference analysis result.
With reference to the fourth implementation manner of the first aspect, in a fifth implementation manner of the first aspect of the present invention, the performing a difference comparison on the segmented image set and the standard image set to obtain a difference analysis result includes:
performing image alignment processing on the segmented image set and the standard image set to obtain a plurality of groups of comparison images;
performing brightness-based difference detection on each group of comparison images to obtain a difference detection result of each group of comparison images;
performing differential point positioning on each group of comparison images based on the differential detection result of each group of comparison images to obtain differential point positions corresponding to each group of comparison images;
and performing difference comparison on the segmented image set and the standard image set through difference point positions corresponding to each group of comparison images to obtain a difference analysis result.
With reference to the first aspect, in a sixth implementation manner of the first aspect of the present invention, the generating, based on the difference analysis result, a management policy for the target construction progress to obtain a target management policy, and transmitting the target management policy to a preset data processing terminal includes:
performing construction operation content difference extraction on the difference analysis result to obtain target difference construction operation content;
performing task quantity calculation on the target difference construction job content to obtain a target task quantity;
performing operation type analysis on the target difference construction operation content to obtain a target operation type;
and generating a management strategy for the target construction progress through the target task quantity and the target job type to obtain a target management strategy, and transmitting the target management strategy to a preset data processing terminal.
The second aspect of the present invention provides a progress monitoring management system of construction work, the progress monitoring management system of construction work comprising:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring preset device states of a plurality of unmanned aerial vehicles to obtain a device state set, and performing device coding on the plurality of unmanned aerial vehicles through the device state set to obtain a plurality of coded unmanned aerial vehicles;
the planning module is used for planning flight paths of the plurality of coded unmanned aerial vehicles based on a preset area to be monitored to obtain a flight path corresponding to each coded unmanned aerial vehicle;
the acquisition module is used for controlling a plurality of the encoding unmanned aerial vehicles to acquire multispectral images based on the flight paths corresponding to the encoding unmanned aerial vehicles, so as to obtain a multispectral image set;
the analysis module is used for carrying out characteristic region analysis on each multispectral image in the multispectral image set to obtain a target characteristic region corresponding to each multispectral image, and carrying out image segmentation on each multispectral image based on the target characteristic region corresponding to each multispectral image to obtain a segmented image set;
the calculation module is used for carrying out process node matching on the segmented image set to obtain a process node set corresponding to the segmented image set, and carrying out construction progress calculation on the process node set to obtain a target construction progress;
the matching module is used for carrying out standard image matching on the segmented image set through a preset expected construction progress to obtain a standard image set, and carrying out difference comparison on the segmented image set and the standard image set to obtain a difference analysis result;
and the generation module is used for generating the management strategy for the target construction progress based on the difference analysis result, obtaining a target management strategy and transmitting the target management strategy to a preset data processing terminal.
A third aspect of the present invention provides a progress monitoring and managing apparatus for construction work, comprising: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the progress monitoring management apparatus of the construction job to execute the progress monitoring management method of the construction job described above.
A fourth aspect of the present invention provides a computer-readable storage medium having instructions stored therein that, when executed on a computer, cause the computer to perform the above-described progress monitoring management method of construction work.
According to the technical scheme provided by the invention, through equipment state acquisition of multiple unmanned aerial vehicles, information of a construction site can be acquired in real time, so that a construction management team can quickly know the condition of the construction site, and the construction progress and management engineering can be monitored better. Based on a preset area to be monitored, a plurality of coding unmanned aerial vehicles conduct intelligent flight path planning, so that path conflict is effectively avoided, acquisition efficiency is optimized, and data acquisition quality and efficiency are improved. Through the collaborative operation of a plurality of coding unmanned aerial vehicles, multispectral images can be collected simultaneously, a larger range of areas can be covered, and a richer data source is provided for image analysis. And based on the multispectral images, target characteristic region analysis is carried out on each image, so that the accurate identification of the key region related to engineering construction is realized, and the subjectivity of manual analysis is reduced. The process node matching is automatically carried out on the segmented image set, so that the automatic identification and analysis of the process node are realized, the manual operation is reduced, and the accuracy and efficiency are improved. The difference between the actual construction progress and the expected progress can be found in real time by comparing the difference between the segmentation image set and the standard image set, so that a management strategy of the target construction progress is generated, and a management team is helped to adjust the engineering plan in time. And the target management strategy is transmitted to the data processing terminal, so that the execution and monitoring of the strategy are realized, and the consistency of the actual operation and the management strategy is ensured. The scheme integrates various technical means, reduces the dependence of manual operation, improves the working efficiency, reduces human errors and reduces the risk in the construction process. And generating a target construction progress management strategy through analysis and processing of the multi-source data.
Drawings
FIG. 1 is a schematic diagram of an embodiment of a method for monitoring and managing progress of a construction job according to an embodiment of the present invention;
FIG. 2 is a flow chart of feature region analysis for each multispectral image in a multispectral image set in accordance with an embodiment of the invention;
FIG. 3 is a flow chart of process node matching of a segmented image set in an embodiment of the invention;
FIG. 4 is a flowchart of performing standard image matching on a segmented image set according to a preset expected construction schedule in an embodiment of the present invention;
FIG. 5 is a schematic diagram of an embodiment of a system for monitoring and managing progress of a construction job according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an embodiment of a progress monitoring and managing apparatus for construction jobs in an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a method and a system for monitoring and managing the progress of construction operation, which are used for improving the accuracy of the progress monitoring and management of the construction operation.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For easy understanding, the following describes a specific flow of an embodiment of the present invention, referring to fig. 1, and an embodiment of a method for monitoring and managing progress of a construction job in an embodiment of the present invention includes:
s101, acquiring preset equipment states of a plurality of unmanned aerial vehicles, obtaining an equipment state set, and carrying out equipment coding on the plurality of unmanned aerial vehicles through the equipment state set to obtain a plurality of coded unmanned aerial vehicles;
It can be understood that the execution body of the present invention may be a progress monitoring management system of a construction job, or a terminal or a server, which is not limited herein. The embodiments of the invention are described below by taking a server as the execution body as an example.
Specifically, the server presets a plurality of unmanned aerial vehicles at the construction site and is equipped with a sensor to acquire their equipment status information. These sensors include GPS positioning sensors, attitude sensors, battery level sensors, and the like. The sensors can monitor information such as the position, the posture, the battery state and the like of the unmanned aerial vehicle in real time and transmit data to the data processing system. The data processing system integrates the data acquired from the various unmanned aerial vehicle sensors together to form a set of device states. This set includes information of the location, status, power, etc. of each drone, ready for subsequent device encoding. And carrying out equipment coding on each unmanned aerial vehicle by utilizing the information in the equipment state set. The code may be a number, a combination of letters, or a unique identifier. The purpose of the coding is to distinguish between different drones so that they can be uniquely identified and managed. For example, assuming an ongoing large building site, a plurality of drones are preset for data acquisition in order to monitor construction progress and safety. These unmanned aerial vehicles are equipped with GPS positioning, altitude sensors, battery status sensors, and the like, respectively. All the drones are activated and started before the construction at the worksite begins. The sensors begin to monitor their position, height, inclination, battery level, etc. in real time. These data are transmitted to a central data processing system. The data processing system collects all data transmitted by the sensors to form a set of device states. This set includes real-time information for each drone, such as: unmanned aerial vehicle 1: position (longitude, latitude), altitude, inclination, battery level; unmanned aerial vehicle 2: location (longitude, latitude), altitude, inclination, battery level. From this information, the data processing system generates a unique code for each drone: unmanned aerial vehicle 1: UAV001, unmanned aerial vehicle 2: UAV002. These codes are used to identify and track each drone, ensuring that each drone can be accurately managed and controlled. By the method for acquiring and coding the equipment states, the construction site management team can monitor the states of all unmanned aerial vehicles in real time and make management and adjustment in time, so that the construction safety and efficiency are improved.
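To make the device state acquisition and coding of step S101 concrete, the sketch below collects telemetry from several drones into a device state set and assigns each one a unique code such as UAV001 or UAV002. It is a minimal Python sketch under assumed field names (longitude, battery level, and so on); the patent does not prescribe a particular data structure or coding scheme.

```python
from dataclasses import dataclass

@dataclass
class DroneState:
    """Telemetry snapshot reported by one drone (hypothetical field names)."""
    longitude: float
    latitude: float
    altitude_m: float
    tilt_deg: float
    battery_pct: float

def encode_drones(states: list[DroneState], prefix: str = "UAV") -> dict[str, DroneState]:
    """Assign a unique code such as UAV001, UAV002 to each reported state."""
    return {f"{prefix}{i + 1:03d}": state for i, state in enumerate(states)}

# Example: two drones reporting their status before take-off.
fleet = encode_drones([
    DroneState(114.06, 22.54, 35.0, 1.2, 96.0),
    DroneState(114.07, 22.55, 40.0, 0.8, 88.0),
])
print(list(fleet))  # ['UAV001', 'UAV002']
```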
S102, carrying out flight path planning on a plurality of coded unmanned aerial vehicles based on a preset area to be monitored to obtain a flight path corresponding to each coded unmanned aerial vehicle;
specifically, the large area to be monitored is finely divided into a plurality of small sub-areas. This region segmentation may be based on Geographic Information System (GIS) data or other spatial data, ensuring that each sub-region is accurately identified and defined. For example, in urban building site monitoring, the entire site may be divided into sub-areas of different floors, different building units, etc. And calculating the priority of each sub-area to be monitored. This calculation may be based on a number of factors, such as construction progress, personnel safety, material preparation, etc. For example, in building site monitoring, the progress of the construction of a floor may be an important factor, while the safety of a personnel-intensive area may also require high priority monitoring. And setting an end point coordinate for each coded unmanned aerial vehicle based on the result of the priority calculation. The sub-areas with high priority are endowed with the end points of the unmanned aerial vehicle, so that the unmanned aerial vehicle is ensured to monitor the key areas in a centralized manner. For example, in a construction site, the progress of construction of high floors may be more urgent, and unmanned aerial vehicles will be assigned to these floors to ensure timely monitoring. And planning a flight path of each coded unmanned aerial vehicle by utilizing the end point coordinates. This involves using a flight path planning algorithm to ensure that the drone can fly efficiently to a specified destination while avoiding obstacles and dangerous areas. For example, the path planning algorithm may calculate an optimal flight path based on information such as terrain, building distribution, etc. For example, consider an ongoing urban high-rise building site, where the monitoring management is performed using a coded drone in order to monitor construction progress and safety. The worksite is subdivided into different floors and building units, each of which is considered a sub-area. The construction progress varies from floor to floor, and the higher the floor, the more urgent the progress is, and therefore the priority of the floors will vary. According to the priority calculation, floors with high priority will be allocated to the unmanned aerial vehicle in order to monitor the construction progress and the safety situation in real time. Each drone is planned with an optimal flight path using a flight path planning algorithm to ensure coverage of the designated floor and avoid collisions with the building. In the embodiment, the construction site management team can monitor the construction progress, the safety condition and the like of each floor in real time, and make adjustment and decision in time so as to ensure the project to be propelled according to the plan. The flight path planning of the coding unmanned aerial vehicle can be adjusted according to the priorities of different floors, so that efficient and accurate monitoring coverage is ensured, important data support is provided for management teams, and therefore construction progress and management efficiency are optimized.
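The following sketch illustrates one possible way to combine the priority calculation with endpoint calibration: each sub-area receives a weighted priority index, and the highest-priority sub-areas are assigned drone endpoints first. The weighting factors, sub-area names and coordinates are assumptions for illustration only and are not specified by the patent.

```python
def subarea_priority(progress_urgency: float, safety_risk: float,
                     resource_demand: float,
                     weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted priority index for one sub-area; the weights are illustrative."""
    w1, w2, w3 = weights
    return w1 * progress_urgency + w2 * safety_risk + w3 * resource_demand

# Sub-areas of a high-rise site with (urgency, risk, demand) scores in [0, 1].
subareas = {
    "floor_28": (0.9, 0.7, 0.6),
    "floor_05": (0.3, 0.4, 0.5),
    "tower_crane_zone": (0.6, 0.9, 0.4),
}
ranked = sorted(subareas, key=lambda k: subarea_priority(*subareas[k]), reverse=True)

# Endpoint coordinates (lon, lat, alt) of each sub-area's key monitoring point.
endpoints = {
    "floor_28": (114.061, 22.541, 95.0),
    "floor_05": (114.061, 22.541, 18.0),
    "tower_crane_zone": (114.062, 22.542, 60.0),
}

# Highest-priority sub-areas are assigned to coded drones first.
assignment = {f"UAV{i + 1:03d}": endpoints[name] for i, name in enumerate(ranked)}
print(assignment)
```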
S103, controlling a plurality of encoding unmanned aerial vehicles to acquire multispectral images based on the flight paths corresponding to the encoding unmanned aerial vehicles to obtain a multispectral image set;
specifically, an actual flight mission plan is formulated according to each coded unmanned aerial vehicle flight path planned before. This includes the start and end points of the flight and the key points that need to be traversed. For example, considering a city building site monitoring scenario, each drone may be assigned to a different construction area and the flight path may cover a particular floor, building unit, etc. Each encoding drone is set to autonomous navigational mode using an automatic flight control system. The unmanned aerial vehicle automatic take-off, flight, hovering, landing and other operations of the unmanned aerial vehicle can be realized according to the preset flight path. Through a Global Positioning System (GPS) and an inertial sensor, the unmanned aerial vehicle can accurately fly along a predetermined path. During flight, the encoding drone is equipped with multispectral sensors that are capable of capturing spectral information in different bands. The unmanned aerial vehicle performs multispectral image acquisition while flying according to a preset flight path. These images may provide rich information such as construction status of the construction site, personnel distribution, equipment conditions, etc. The acquired multispectral image is then transmitted to a ground station or cloud platform via a communication system. At this stage, image processing and data integration are performed on the image. This may involve steps of image correction, stitching, classification, etc. to obtain more accurate information. For example, in building site monitoring, analysis of the progress of the construction of a site may be achieved through image processing, thereby supporting engineering management decisions. For example, assume that there is a large urban building site where it is necessary to monitor the progress of construction on different floors in real time. Using coded drones, each drone is assigned to a different floor for monitoring. They acquire multispectral images during flight according to a flight path plan. After the images are transmitted to the ground station, the construction progress reports of different floors can be formed through image processing and data integration, and timely data support is provided for engineering management teams.
S104, carrying out characteristic region analysis on each multispectral image in the multispectral image set to obtain a target characteristic region corresponding to each multispectral image, and respectively carrying out image segmentation on each multispectral image based on the target characteristic region corresponding to each multispectral image to obtain a segmented image set;
specifically, for each multispectral image, color calibration processing is performed to ensure that color information in the image can accurately reflect the actual scene. This facilitates subsequent image processing and analysis steps. And secondly, according to the actual requirements of the building engineering project, a target area to be monitored is definitely required. For example, consider a large building site where it is desirable to monitor construction progress and material transport on different floors. And (3) carrying out edge calibration on the calibration image by using a preset monitoring target. And extracting edge information of the target object to prepare for subsequent region segmentation. For example, for building site monitoring, edge calibration may be performed according to the contour of the building. Based on the edge information, an initial region segmentation is performed on the multispectral image. This divides the image into different areas, including individual monitoring targets, and the initial area division may separate areas of different floors, building units, etc. And carrying out discontinuous region extraction on the initial characteristic region, so as to more accurately divide the target characteristic region. For example, for a construction site, discontinuous region extraction may help accurately locate the construction conditions of different floors. To obtain a more accurate target feature region, region smoothing processing is performed. Noise and irregularity can be eliminated, making the target feature area clearer. In building site monitoring, region smoothing may help reduce clutter in the image. And based on the target characteristic region subjected to the region smoothing processing, carrying out image segmentation on each multispectral image, and segmenting different targets in the image. In a construction project, this means that the progress and status of the construction of different floors, building units can be accurately identified. For example, assuming a server with a high-rise building, multispectral images are acquired daily using a drone to monitor construction progress. And determining that the monitoring target is the construction state among floors by performing color calibration processing on the multispectral image. And (5) carrying out edge calibration according to the calibration image, and extracting building contour information. Then, the image is subjected to initial region segmentation to separate the building and the background. And the construction conditions of different floors are accurately positioned through discontinuous area extraction. And carrying out region smoothing treatment to eliminate noise and obtain accurate building contours. Image segmentation is carried out based on the building outline, so that monitoring of construction states of different floors is realized.
S105, performing process node matching on the segmented image set to obtain a process node set corresponding to the segmented image set, and performing construction progress calculation on the process node set to obtain a target construction progress;
specifically, pixel distribution analysis is performed for each divided image. By counting and analyzing the distribution of pixel values in the images, the characteristic data of each divided image can be obtained to reflect the states and changes of different areas. The pixel distribution data of each divided image is subjected to density estimation. Density estimation is a data analysis method for estimating the frequency and probability distribution of the occurrence of different pixel values. Through density estimation, pixel distribution characteristics of each region in an image can be more accurately described, and basic data is provided for subsequent process node matching. In the process node matching stage, a standard process node database is prepared in advance. And comparing the density estimation data of each segmented image with the data in the standard process node database to find the most matched process node. In this way, different areas in the segmented image set can be mapped to specific engineering progress nodes to form a process node set. And carrying out process completion analysis on each process node in the process node set. The degree of completion of each process node is calculated by comparing the density estimate data of the segmented image with the standard process node data. This analysis can quantify the progress of each process node, reflecting the actual construction progress of the project. And calculating the construction progress of the process node set according to the process completion degree of each process node. And comprehensively considering the completion degree of each process node to obtain the overall construction progress. This calculation provides accurate progress information to project manager based on actual data, helping optimize construction plans and resource allocation. For example, consider a high-rise building project in which a set of segmented images is acquired using an unmanned aerial vehicle. And carrying out pixel distribution analysis on each divided image to obtain pixel distribution data. Then, a finer data description is obtained by density estimation. And matching the density estimation data with the process nodes based on a preset standard process node database to obtain a process node set. And carrying out completion degree analysis on each node in the process node set, and calculating the actual construction progress. And comprehensively considering the completion degree of the process nodes, and calculating the target construction progress of the whole building engineering project.
S106, carrying out standard image matching on the segmented image set through a preset expected construction progress to obtain a standard image set, and carrying out difference comparison on the segmented image set and the standard image set to obtain a difference analysis result;
and extracting the expected process node from the expected construction progress. These nodes represent key tasks and targets for different stages of the engineering project, such as foundation construction completion, body structure construction completion, etc. By explicitly anticipating process nodes, a frame of reference for project progress can be established, providing a basis for subsequent comparison analysis. Next, a calculation of an expected completion time is performed for a plurality of expected process nodes. This calculation needs to comprehensively consider factors such as engineering complexity, resource allocation and the like, so as to accurately determine the expected completion time of each node. This step provides key data for subsequent standard image matching and disparity alignment. The set of segmented images is then subjected to standard image matching by a plurality of desired process nodes based on the target desired completion time. And comparing the actually acquired image with the standard image of the expected stage to judge whether the actual progress accords with the expected progress. And comparing the difference between the divided image set and the standard image set. By comparing differences in the characteristics, structures and the like of the images, the difference condition between the actual progress and the expected progress can be obtained. These discrepancy analysis results will provide visual information to the project management team, helping them to better understand the actual situation of the project. For example, consider a large building project in which the desired process nodes include foundation construction completion, body structure erection completion, and the like. By calculation, the expected completion time for each node is determined. In actual construction, the segmented image is acquired and matched and compared with the standard image. The difference analysis results show that the actual progress lags the expected progress, and project management teams can take corresponding adjustment measures to ensure that projects can be completed on time.
In order to realize the difference comparison between the segmented image set and the standard image set, an image alignment process is required. Each image in the segmented image set is aligned with its corresponding standard image to ensure that they are compared in the same coordinate system. Image alignment eliminates inconsistencies caused by factors such as shooting angle and position, ensuring the accuracy of the comparison. Brightness-based difference detection is then performed on each group of aligned comparison images. Brightness is one of the most intuitive features in an image, and differences typically manifest as changes in brightness. By comparing the brightness values at corresponding positions in the segmented image set and the standard image set, a difference detection result is obtained for each group of comparison images; this step effectively captures the gap between the actual progress and the expected progress. Based on the difference detection results, difference point positioning is further performed on each group of comparison images. A difference point is a position where a significant change occurs in the comparison images and represents a difference between the actual progress and the expected progress. By precisely locating these difference points, the position information of the difference points corresponding to each group of comparison images can be obtained, so that the specific nature of the difference can be understood in more detail. For example, consider a large building project where a segmented image set and a standard image set are acquired at different stages. After image alignment, brightness-based difference detection is performed on each group of comparison images, and the results show that the brightness of certain areas has changed significantly. After further positioning of the difference points, it is found that these points correspond to problems in actual construction, such as positional displacement or missing members. The project management team can then take corresponding measures to adjust the construction plan according to the difference analysis results, so as to ensure that the project is completed on time.
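A compact illustration of the brightness-based difference detection and difference-point positioning described above is given below, assuming the image pair has already been aligned (for instance with OpenCV's ECC alignment). The brightness threshold and minimum blob area are illustrative values, not parameters fixed by the patent.

```python
import cv2
import numpy as np

def compare_with_standard(actual: np.ndarray, standard: np.ndarray,
                          brightness_thresh: int = 40, min_area: int = 200):
    """Brightness-based difference detection with difference-point localisation.

    Both inputs are single-channel 8-bit images assumed to be already aligned;
    thresholds are illustrative.
    """
    diff = cv2.absdiff(actual, standard)                 # per-pixel brightness difference
    _, mask = cv2.threshold(diff, brightness_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Centre of each sufficiently large difference blob is reported as a difference point.
    points = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            x, y, w, h = cv2.boundingRect(c)
            points.append((x + w // 2, y + h // 2))
    return mask, points
```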
S107, based on the difference analysis result, performing management strategy generation on the target construction progress to obtain a target management strategy, and transmitting the target management strategy to a preset data processing terminal.
Based on the result of the difference analysis, it is necessary to extract the difference in the construction work content. The difference analysis results reflect the difference between the actual construction progress and the expected progress, and thus specific construction content changes need to be extracted therefrom. This includes the work content of adding, deleting or changing in the actual construction found in the process of the variance analysis. And calculating the task amount of the extracted target difference construction job content. By calculating the workload of each difference job content, the amount of tasks actually required to be completed can be obtained. This can provide an accurate data basis for subsequent construction schedule and resource allocation. Meanwhile, job type analysis is required to be performed on the target differential construction job content. Different construction operations may involve different process, resource and time requirements, so that a detailed analysis of each of the different operation contents is required to determine the type of operation to which it belongs, such as earth works, concrete pouring, steel structure installation, etc. Based on the target task volume and job type, a targeted management policy may be generated. These strategies include adjusting the allocation of construction resources, optimizing the workflow, adjusting the construction sequence, etc., to ensure that the actual construction progress will be consistent with the intended progress as soon as possible. For example, considering a large building project, during the construction work, it is found by the difference analysis that the construction progress of a certain floor is different from the expected progress mainly due to the delay of the installation work of certain members. Based on the difference analysis result, extracting the construction content and calculating the task amount, and further analyzing to obtain that the construction content belongs to the steel structure installation operation. The management team can generate management strategies according to the information, mobilize more construction manpower and equipment, optimize the workflow, so as to catch up with the expected progress as soon as possible.
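The sketch below shows one way the extracted difference content, task quantity and job type could be turned into a management strategy before it is pushed to the data processing terminal. The DifferenceItem structure and the resource rates are assumptions made for illustration; the patent only specifies that the strategy is generated from these inputs.

```python
from dataclasses import dataclass

@dataclass
class DifferenceItem:
    """One lagging work item extracted from the difference analysis result."""
    content: str          # e.g. "steel structure installation, floor 12"
    job_type: str         # e.g. "steel_structure"
    task_quantity: float  # remaining work in the unit used for that job type

# Illustrative resource rates: crew-days needed per unit of task quantity.
RATES = {"steel_structure": 0.8, "concrete": 0.5, "earthwork": 0.3}

def generate_strategy(items: list[DifferenceItem]) -> list[str]:
    """Turn difference items into simple, human-readable management actions."""
    actions = []
    for item in items:
        crew_days = item.task_quantity * RATES.get(item.job_type, 1.0)
        actions.append(
            f"{item.content}: allocate about {crew_days:.1f} extra crew-days "
            f"of {item.job_type} resources and re-sequence dependent tasks."
        )
    return actions

plan = generate_strategy([DifferenceItem("steel structure installation, floor 12",
                                         "steel_structure", 24)])
print(plan[0])  # the resulting strategy would then be sent to the terminal
```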
According to the embodiment of the invention, the information of the construction site can be acquired in real time through the equipment state acquisition of the multiple unmanned aerial vehicles, so that a construction management team can quickly know the condition of the construction site, and the construction progress and management engineering can be monitored better. Based on a preset area to be monitored, a plurality of coding unmanned aerial vehicles conduct intelligent flight path planning, so that path conflict is effectively avoided, acquisition efficiency is optimized, and data acquisition quality and efficiency are improved. Through the collaborative operation of a plurality of coding unmanned aerial vehicles, multispectral images can be collected simultaneously, a larger range of areas can be covered, and a richer data source is provided for image analysis. And based on the multispectral images, target characteristic region analysis is carried out on each image, so that the accurate identification of the key region related to engineering construction is realized, and the subjectivity of manual analysis is reduced. The process node matching is automatically carried out on the segmented image set, so that the automatic identification and analysis of the process node are realized, the manual operation is reduced, and the accuracy and efficiency are improved. The difference between the actual construction progress and the expected progress can be found in real time by comparing the difference between the segmentation image set and the standard image set, so that a management strategy of the target construction progress is generated, and a management team is helped to adjust the engineering plan in time. And the target management strategy is transmitted to the data processing terminal, so that the execution and monitoring of the strategy are realized, and the consistency of the actual operation and the management strategy is ensured. The scheme integrates various technical means, reduces the dependence of manual operation, improves the working efficiency, reduces human errors and reduces the risk in the construction process. And generating a target construction progress management strategy through analysis and processing of the multi-source data.
In a specific embodiment, the process of executing step S102 may specifically include the following steps:
(1) Performing region segmentation on the region to be monitored to obtain a plurality of sub-regions to be monitored;
(2) Carrying out priority calculation on the plurality of subareas to be monitored to obtain a priority index of each subarea to be monitored;
(3) Performing end point calibration on a plurality of coded unmanned aerial vehicles based on the priority index of each sub-region to be monitored to obtain the end point coordinates of each coded unmanned aerial vehicle;
(4) And planning flight paths of the plurality of coding unmanned aerial vehicles through the terminal coordinates of each coding unmanned aerial vehicle to obtain the corresponding flight path of each coding unmanned aerial vehicle.
Specifically, for the area to be monitored, reasonable area segmentation is carried out on the area to be monitored, and the whole construction area is divided into a plurality of smaller subareas. Such segmentation facilitates better management and monitoring of construction progress, while also facilitating flight control and data acquisition of the unmanned aerial vehicle. For example, consider a large building site that may be divided into multiple sub-areas, such as building body areas, peripheral road areas, tower crane work areas, etc., to provide targeted monitoring and work measures for the different areas. And in each sub-area to be monitored obtained through segmentation, carrying out priority calculation to determine the monitoring importance degree of each sub-area. This process may involve a number of factors such as construction stage, security risk, resource requirements, etc. Taking a construction site as an example, areas of high-rise building construction may be given higher priority because of overall engineering progress and safety. While some temporary road construction areas may be of lower priority because they have less impact on the overall engineering. Based on the priority index of each sub-region to be monitored, end point calibration can be performed on a plurality of coded unmanned aerial vehicles. This means that in each sub-area the best endpoint coordinates of the drone need to be determined in order to achieve comprehensive monitoring and data acquisition. For example, for a high priority building construction area, an unmanned aerial vehicle endpoint may be calibrated at a critical location in the area to obtain detailed construction progress and quality data. And planning a flight path through the terminal coordinates of each coded unmanned aerial vehicle. This process requires consideration of flight safety, avoidance of obstacles, shortest paths, etc., to ensure that the drone can fly efficiently and accurately to cover the monitored area. Taking a building construction area as an example, the unmanned aerial vehicle needs to bypass the outer wall of a building and finish data acquisition tasks one by one in the air according to a set flight path. For example, assume that there is one highway construction project, which covers a plurality of construction areas including roadbed filling, road paving, bridge construction, and the like. In order to monitor and manage the construction progress, unmanned aerial vehicle is adopted for intelligent monitoring. The whole engineering area is divided into subareas such as a roadbed construction area, a pavement construction area, a bridge construction area and the like. Each sub-area represents a different construction process and has a different construction importance. Next, a priority calculation is performed for each sub-region. Taking a road construction area as an example, the road construction area is given higher priority in consideration of the influence of road quality on driving safety. While the subgrade construction area may be of lower priority because it has a relatively small impact on the overall engineering progress. And calibrating the end point of the plurality of coded unmanned aerial vehicles based on the priority index of each sub-area. In road construction areas, the drone end point may be calibrated at a road segment where special attention is required, such as at a grade change or near an intersection. 
In bridge construction areas, unmanned aerial vehicle endpoints may be calibrated at key nodes of the bridge in order to monitor the quality of the structure construction. And planning the flight path of the encoded unmanned aerial vehicle through the end point coordinates. And the unmanned aerial vehicle covers each sub-area according to the set flight path, so that relevant construction progress and quality data are acquired. For example, in a road construction area, the drone may fly along the centerline of the roadway, capturing the actual condition of the pavement to compare with the expected progress.
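As a simple illustration of turning endpoint coordinates into a flight path, the sketch below generates a lawn-mower sweep over one sub-area's bounding box so that the drone covers the area and ends at its calibrated endpoint. Obstacle avoidance, no-fly zones and battery constraints are deliberately omitted, and the coordinates are placeholders; the patent does not prescribe a specific path-planning algorithm.

```python
def sweep_path(lon_min: float, lon_max: float, lat_min: float, lat_max: float,
               altitude: float, lanes: int = 4):
    """Boustrophedon (lawn-mower) sweep over one sub-area's bounding box."""
    step = (lat_max - lat_min) / (lanes - 1)
    waypoints = []
    for i in range(lanes):
        lat = lat_min + i * step
        lane = [(lon_min, lat, altitude), (lon_max, lat, altitude)]
        # Alternate direction on each lane so consecutive waypoints stay close.
        waypoints.extend(lane if i % 2 == 0 else reversed(lane))
    return waypoints

# Sweep the pavement sub-area at 60 m; the last waypoint is the drone's endpoint.
path = sweep_path(114.060, 114.065, 22.540, 22.542, altitude=60.0, lanes=4)
print(len(path), "waypoints, endpoint:", path[-1])
```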
In a specific embodiment, as shown in fig. 2, the process of executing step S104 may specifically include the following steps:
s201, performing color calibration processing on each multispectral image to obtain a plurality of calibration images;
s202, calibrating monitoring targets of an area to be monitored to obtain a plurality of monitoring targets;
s203, performing edge calibration on a plurality of calibration images through a plurality of monitoring targets to obtain edge information corresponding to each calibration image;
s204, carrying out initial region segmentation on each multispectral image through the edge information corresponding to each calibration image to obtain an initial characteristic region corresponding to each multispectral image;
s205, respectively extracting discontinuous areas of the initial characteristic areas corresponding to each multispectral image to obtain the discontinuous areas of the initial characteristic areas corresponding to each multispectral image;
s206, performing region smoothing on the initial characteristic region corresponding to each multispectral image based on the discontinuous region of the initial characteristic region corresponding to each multispectral image to obtain a target characteristic region corresponding to each multispectral image;
s207, respectively carrying out image segmentation on each multispectral image based on the target feature region corresponding to each multispectral image to obtain a segmented image set.
It should be noted that, for each multispectral image, color calibration processing is performed to ensure consistency of color information between different images, thereby providing a reliable basis for subsequent processing. After calibration, a plurality of calibration images are obtained, which are more consistent in color appearance. And calibrating a monitoring target in the area to be monitored. In a construction scene, the monitoring target may be various construction elements such as a building, mechanical equipment, a worker, and the like. By calibrating the target, the area and the position to be monitored can be definitely determined. And carrying out edge calibration on a plurality of calibration images through a plurality of monitoring targets, so that edge information corresponding to each calibration image can be obtained. The edge information reflects the outline and boundary of the object and provides important information for subsequent feature extraction and segmentation. And carrying out initial region segmentation on each multispectral image by utilizing the edge information corresponding to the calibration image. The image may be segmented into different regions, each representing a different object or feature. For each initial feature region, a discontinuous region extraction is performed. This step can separate the continuous features inside the region, thereby obtaining more accurate and fine feature information. And performing region smoothing processing based on the discontinuous regions of the initial characteristic region. This may optimize the shape and structure of the region by eliminating noise, smoothing boundaries, etc., making the features clearer and more definite. And carrying out image segmentation on each image based on the target characteristic region corresponding to each multispectral image. This step may divide the image into a number of sub-images, each representing a particular object or region, thereby enabling the division of the image content. For example, assume that there is a construction project for a high-rise building, and the monitoring is performed using multispectral unmanned aerial vehicle images. By performing color calibration on the multispectral images, the color consistency of each image is ensured. Targets such as building exterior walls and roofs are then calibrated in the area to be monitored. And matching the plurality of calibration images with the edge information of the target through the calibration of the monitoring target. The initial region segmentation is performed based on the edge information, dividing each image into different regions, e.g. different parts of the building and the surrounding. Discontinuous region extraction is performed on the initial feature region to separate out different portions of the window, wall, etc. Then, the discontinuous regions are subjected to region smoothing processing, region boundaries are optimized, and feature continuity is ensured. Image segmentation is performed based on the target feature region of each multispectral image. For example, the exterior wall of a building is divided into different sections, such as doors, windows, exterior walls, etc. Thus, a set of segmented images of different parts of the building can be obtained, providing data support for further analysis and decision making. By the method, the monitoring team can acquire the construction progress and quality information in real time, and powerful support is provided for project management.
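The discontinuous-region extraction and region smoothing of steps S205 and S206 can be pictured with the short sketch below: morphological closing fills the gaps in an initial feature region, the filled pixels are reported as the discontinuous region, and opening removes residual noise. The kernel size is a tunable placeholder, and this is only one plausible way to realise the two steps.

```python
import cv2
import numpy as np

def smooth_feature_region(region_mask: np.ndarray, kernel_size: int = 7):
    """Locate discontinuities in an initial feature region and smooth them out.

    `region_mask` is a binary mask (255 = inside the initial feature region).
    The gap mask is the set of pixels added by morphological closing, i.e. the
    discontinuous parts of the region; sizes here are placeholders.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    closed = cv2.morphologyEx(region_mask, cv2.MORPH_CLOSE, kernel)
    gaps = cv2.subtract(closed, region_mask)                      # discontinuous regions
    smoothed = cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)   # remove noise and spurs
    return smoothed, gaps
```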
In a specific embodiment, as shown in fig. 3, the process of executing step S105 may specifically include the following steps:
s301, carrying out pixel distribution analysis on each divided image in the divided image set to obtain pixel distribution data of each divided image;
s302, performing density estimation on pixel distribution data of each segmented image to obtain density estimation data of each segmented image;
s303, performing process node matching through the density estimation data of each segmented image based on a preset standard process node database to obtain a process node set corresponding to the segmented image set;
s304, carrying out process completion degree analysis on each process node in the process node set to obtain the process completion degree of each process node;
s305, performing construction progress calculation on the process node set according to the process completion degree of each process node to obtain a target construction progress.
Pixel distribution analysis is performed on each segmented image in the set to learn how the different pixel values are distributed, which captures the basic characteristics of the image: by analyzing the pixel distribution, the brightness, color and similar information of its different areas can be obtained. Density estimation is then applied to the pixel distribution data of each segmented image. Density estimation is a statistical method for inferring the distribution of data; here the server uses it to obtain the probability distribution of the different pixel values in each segmented image and thereby better understand the distribution characteristics of the image content. Process node matching is then carried out with the density estimation data of each segmented image against a preset standard process node database. During construction, different process nodes represent different stages or tasks, each with its own unique characteristics, so matching the density estimation data against the standard process nodes determines the specific process node of each segmented image. Process completion analysis is carried out on each node in the resulting process node set to obtain the completion of each node; the process completion reflects the difference between the actual construction progress and the expected progress, so analyzing it shows how far each stage has advanced. Finally, the construction progress is computed from the completion of every process node, combining the completion of each stage with the difference between actual and expected progress to obtain the target construction progress of the project as a whole. For example, consider a high-rise construction project in which the server collects a set of segmented images from different construction stages using a drone. Pixel distribution analysis reveals the brightness and color distribution of the different areas in each image, and density estimation converts the pixel distribution data into probability distributions over pixel values. Matching the density estimation data of each image against the preset standard process node database then determines the construction stage corresponding to each image. During construction the server analyzes the completion of each process node; in the concrete placement phase, for instance, it obtains the completion by comparing the actual placement with the planned placement. The construction progress computed from the completion of every node gives the target construction progress of the whole project: if the placement stage is largely complete while other stages lag, adjustments can be made on the basis of these data to balance the overall project progress.
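The following minimal Python sketch shows one way steps S301 to S305 could be realized, with a normalized histogram serving as the density estimate, nearest-profile matching against a node database, and a weighted progress figure. The node names, reference densities, completion values and weights are invented placeholders, not data from the embodiment.

```python
import numpy as np

def pixel_density(gray_image: np.ndarray, bins: int = 64) -> np.ndarray:
    """S301/S302: pixel distribution plus density estimate (normalized histogram)."""
    hist, _ = np.histogram(gray_image, bins=bins, range=(0, 255), density=True)
    return hist / (hist.sum() + 1e-12)

def match_process_node(density: np.ndarray, node_db: dict[str, np.ndarray]) -> str:
    """S303: pick the standard node whose reference density is closest (L1 distance)."""
    return min(node_db, key=lambda name: np.abs(density - node_db[name]).sum())

def overall_progress(node_completion: dict[str, float],
                     node_weights: dict[str, float]) -> float:
    """S304/S305: weight each node's completion into one target progress figure."""
    total = sum(node_weights.values())
    return sum(node_weights[n] * node_completion.get(n, 0.0) for n in node_weights) / total

# Illustrative usage with invented data (not from the patent):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    segmented = [rng.integers(0, 256, (128, 128)) for _ in range(3)]
    node_db = {"foundation": pixel_density(rng.integers(0, 128, (128, 128))),
               "main_structure": pixel_density(rng.integers(64, 256, (128, 128))),
               "finishing": pixel_density(rng.integers(128, 256, (128, 128)))}
    nodes = [match_process_node(pixel_density(img), node_db) for img in segmented]
    progress = overall_progress({"foundation": 1.0, "main_structure": 0.6, "finishing": 0.1},
                                {"foundation": 0.3, "main_structure": 0.5, "finishing": 0.2})
    print(nodes, round(progress, 2))
```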
In a specific embodiment, as shown in fig. 4, the process of executing step S106 may specifically include the following steps:
s401, extracting expected process nodes of expected construction progress to obtain a plurality of expected process nodes;
s402, calculating expected completion time of a plurality of expected process nodes to obtain target expected completion time;
s403, carrying out standard image matching on the segmented image set through a plurality of expected process nodes based on target expected completion time to obtain a standard image set;
s404, performing difference comparison on the split image set and the standard image set to obtain a difference analysis result.
Specifically, expected construction progress nodes, i.e. expected process nodes, are determined from the project plan or schedule. These nodes represent different construction stages or important task points; in high-rise construction, for example, they include completion of foundation construction, topping out of the main structure and exterior wall finishing. They are important markers of project progress and the basis for the subsequent analysis. For each expected process node an expected completion time must be calculated, i.e. the time required to complete that node under ideal conditions; it can be estimated from the construction plan, historical data and expertise. Foundation construction may take two months, for instance, while topping out the main structure may take six; such estimates draw on past project experience and technical characteristics. Based on the expected completion times, the images in the segmented image set are matched to standard images. A standard image is the image corresponding to an expected process node at its completion and serves as the ideal-state reference, so matching the standard images against the segmented images determines the construction stage corresponding to each segmented image. Comparing the segmented images with the standard images then yields the difference analysis result, which covers differences in the color, texture, structure and similar aspects of the images; these differences reflect the gap between the actual and the expected construction progress. If work that should have been completed at a certain stage does not appear in the actual image, the construction at that stage is lagging. For example, suppose a business center is being built, involving foundation construction, main structure construction and interior finishing. Expected process nodes such as foundation completion, topping out of the main structure and interior finishing are determined, and an expected completion time is calculated for each; interior finishing may take four months. Standard images are selected according to the expected completion times, for instance an image showing completed interior finishing as the reference. Matching the segmented images against these standard images and comparing the differences shows whether the finishing progress in the actual images falls behind expectation; if so, the construction plan may need adjusting to keep the project on schedule.
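As a hedged illustration of steps S401 to S403, the sketch below keeps a small table of expected process nodes and their expected completion dates and selects, for a given capture date, the standard image of the latest node that should already be complete. The node names, dates and image paths are hypothetical assumptions, not values from the embodiment.

```python
from datetime import date

EXPECTED_NODES = {                       # S401/S402: expected completion times
    "foundation_complete":   date(2024, 3, 1),   # nodes listed in chronological order
    "structure_topped_out":  date(2024, 9, 1),
    "interior_finishing":    date(2025, 1, 1),
}
STANDARD_IMAGES = {                      # hypothetical reference image paths
    "foundation_complete":  "std/foundation.png",
    "structure_topped_out": "std/structure.png",
    "interior_finishing":   "std/finishing.png",
}

def expected_node_for(capture_date: date) -> str:
    """Latest node whose expected completion time is not after the capture date;
    falls back to the first node if construction has only just started."""
    done = [n for n, d in EXPECTED_NODES.items() if d <= capture_date]
    return done[-1] if done else next(iter(EXPECTED_NODES))

def standard_image_for(capture_date: date) -> str:
    """S403: standard image used as the ideal-state reference for comparison."""
    return STANDARD_IMAGES[expected_node_for(capture_date)]

print(standard_image_for(date(2024, 10, 15)))   # -> std/structure.png
```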
In a specific embodiment, the process of executing step S404 may specifically include the following steps:
(1) Performing image alignment processing on the segmented image set and the standard image set to obtain a plurality of groups of comparison images;
(2) Performing brightness-based difference detection on each group of comparison images to obtain a difference detection result of each group of comparison images;
(3) Performing differential point positioning on each group of comparison images based on the differential detection result of each group of comparison images to obtain differential point positions corresponding to each group of comparison images;
(4) And performing difference comparison on the divided image set and the standard image set through the difference point positions corresponding to each group of comparison images to obtain a difference analysis result.
Specifically, each image in the segmented image set is aligned with its corresponding standard image. Image alignment adjusts the two sets of images to the same size, angle and position for the subsequent comparison, and may use image processing algorithms such as feature point matching or image registration techniques. After alignment, luminance-based difference detection is performed on each paired segmented and standard image: the pixel values of the two images are compared to find the regions with significant brightness changes. The detection is realized by computing difference values between pixels, commonly with the mean squared error (MSE) or the structural similarity index (SSIM). Once the regions containing differences have been determined by the luminance-based detection, the difference points are localized, i.e. the specific pixel positions of the differences in the segmented image and the standard image are identified; this helps monitoring personnel understand the exact location and extent of each discrepancy. Using the localized difference points, the differences between the segmented image set and the standard image set are compared and analyzed, and visualizing the difference regions further helps monitoring personnel grasp the gap between the actual and the expected construction progress. By analyzing the nature and extent of the discrepancies, appropriate measures can be taken to adjust the construction plan and keep the project advancing as planned. For example, suppose a bridge is under construction: the segmented image set contains an image for each construction stage, while the standard image set contains the expected-state image of each stage. Each group of images is aligned so that their sizes and positions are consistent, and luminance-based difference detection, computing the difference values between pixels, finds the areas of the bridge whose brightness has changed. Difference point localization pinpoints the pixel locations of these changed areas; a particular beam segment of the bridge body, say, may be behind schedule and therefore inconsistent with the expected state. By comparing and analyzing the difference regions, the construction monitoring team can determine the gap between the actual and the expected construction progress; if it is large, the team can take corresponding measures, such as adding workers or optimizing the construction process, so that the bridge construction schedule is not unduly affected.
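The following Python sketch illustrates one possible realization of this comparison chain: ORB-based feature point matching for image alignment, an SSIM map for luminance-based difference detection, and contour bounding boxes for difference point localization. It assumes 8-bit grayscale inputs; the SSIM threshold, match count and minimum contour area are illustrative assumptions, and a production system would also need to handle image pairs with too few feature matches.

```python
import cv2
import numpy as np
from skimage.metrics import structural_similarity

def align(moving_gray: np.ndarray, fixed_gray: np.ndarray) -> np.ndarray:
    """Align the segmented image to the standard image with ORB features + homography."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(moving_gray, None)
    k2, d2 = orb.detectAndCompute(fixed_gray, None)
    matches = sorted(cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2),
                     key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(moving_gray, H, fixed_gray.shape[::-1])

def difference_points(segmented_gray: np.ndarray, standard_gray: np.ndarray):
    """Return (ssim_score, list of (x, y, w, h) boxes around changed regions)."""
    aligned = align(segmented_gray, standard_gray)
    score, ssim_map = structural_similarity(aligned, standard_gray, full=True)
    changed = (ssim_map < 0.6).astype(np.uint8) * 255        # assumed threshold
    contours, _ = cv2.findContours(changed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]
    return score, boxes
```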
In a specific embodiment, the process of executing step S107 may specifically include the following steps:
(1) Performing construction operation content difference extraction on the difference analysis result to obtain target difference construction operation content;
(2) Performing task quantity calculation on the target difference construction operation content to obtain a target task quantity;
(3) Performing operation type analysis on the target difference construction operation content to obtain a target operation type;
(4) And generating a management strategy for the target construction progress through the target task quantity and the target job type to obtain a target management strategy, and transmitting the target management strategy to a preset data processing terminal.
Specifically, the differing construction job content between the actual construction images and the expected images is extracted from the difference analysis result. The differences may involve specific engineering components, construction areas, material usage and so on, and matching the difference regions against the related project plan yields a detailed description of the differing construction work. The task quantity of this differing job content, i.e. the workload that actually remains to be completed, is then calculated; it can be estimated from factors such as the geometry of the differing work, the quantity of material used and the number of items involved. The task-quantity calculation helps project managers gauge the scale of the differing work and provides a basis for subsequent construction arrangements and resource allocation. The nature and characteristics of the differing job content are analyzed to determine its job type; different jobs require different construction methods, equipment, personnel and other resources, so identifying the job type, for example earthworks, concrete casting or steel structure installation, is critical for subsequent management decisions. Combining the task quantity, job type and related information, a target management strategy is generated, which includes making a detailed construction plan, scheduling the allocation of resources, personnel and equipment, and determining the construction procedures and time nodes. The target management strategy should guide the smooth progress of the work during actual construction so that the project advances according to the expected progress. For example, suppose a high-rise building is under construction and the difference analysis shows that the steel structure installation on a certain floor lags the expected progress. In the difference-extraction stage, the steel structure installation on that floor is identified as the differing job, and image analysis counts the number and size of the structural members still to be installed. In the task-quantity stage, the remaining installation workload is calculated, including the number, length and weight of the members. Further analysis of the differing job content confirms the job type as steel structure installation. Based on the task quantity and job type, a target management strategy is generated: a detailed construction plan, the allocation of hoisting equipment and operators, and the time node of each construction procedure. These measures help the steel structure installation of the high-rise catch up so that the overall construction schedule is not unduly affected. By processing the difference analysis result in this way, project managers can formulate suitable management strategies for specific differing jobs and keep the construction progress advancing steadily.
The method thus provides scientific methods and data support for the effective monitoring and adjustment of construction work.
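As an illustration only, the short sketch below turns a list of differing construction job items into a task quantity, a dominant job type and a management strategy record of the kind that could be transmitted to the data processing terminal. The field names, unit workloads and example figures are assumptions, not values from the embodiment.

```python
from dataclasses import dataclass

@dataclass
class DifferenceItem:
    job_type: str          # e.g. "steel_structure_installation" (hypothetical label)
    component_count: int   # components still missing versus the standard image
    unit_workload: float   # assumed person-days per component

def build_strategy(items: list[DifferenceItem], target_progress: float) -> dict:
    # Task quantity: total remaining workload implied by the differing job content.
    total_workload = sum(i.component_count * i.unit_workload for i in items)
    # Job type: the category contributing the largest share of the remaining workload.
    dominant_type = max(items, key=lambda i: i.component_count * i.unit_workload).job_type
    return {
        "target_progress": target_progress,
        "target_task_quantity": total_workload,        # person-days to recover
        "target_job_type": dominant_type,
        "actions": [f"schedule {i.component_count} x {i.job_type}" for i in items],
    }

strategy = build_strategy(
    [DifferenceItem("steel_structure_installation", 24, 1.5),
     DifferenceItem("curtain_wall", 10, 0.8)],
    target_progress=0.62,
)
print(strategy)   # record that would be transmitted to the preset data processing terminal
```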
The method for monitoring and managing the progress of the construction job in the embodiment of the present invention is described above, and the system for monitoring and managing the progress of the construction job in the embodiment of the present invention is described below, referring to fig. 5, where one embodiment of the system for monitoring and managing the progress of the construction job in the embodiment of the present invention includes:
the acquiring module 501 is configured to acquire preset device states of a plurality of unmanned aerial vehicles, obtain a device state set, and perform device encoding on the plurality of unmanned aerial vehicles through the device state set to obtain a plurality of encoded unmanned aerial vehicles;
the planning module 502 is configured to plan a flight path for a plurality of encoding unmanned aerial vehicles based on a preset area to be monitored, so as to obtain a flight path corresponding to each encoding unmanned aerial vehicle;
the acquisition module 503 is configured to control, based on a flight path corresponding to each of the encoding unmanned aerial vehicles, the plurality of encoding unmanned aerial vehicles to perform multispectral image acquisition, so as to obtain a multispectral image set;
the analysis module 504 is configured to perform feature region analysis on each multispectral image in the multispectral image set to obtain a target feature region corresponding to each multispectral image, and perform image segmentation on each multispectral image based on the target feature region corresponding to each multispectral image to obtain a segmented image set;
The computing module 505 is configured to perform process node matching on the segmented image set to obtain a process node set corresponding to the segmented image set, and perform construction progress computation on the process node set to obtain a target construction progress;
the matching module 506 is configured to perform standard image matching on the segmented image set according to a preset expected construction progress to obtain a standard image set, and perform difference comparison on the segmented image set and the standard image set to obtain a difference analysis result;
and the generating module 507 is configured to generate a management policy for the target construction progress based on the difference analysis result, obtain a target management policy, and transmit the target management policy to a preset data processing terminal.
Through the cooperation of these components, information from the construction site can be acquired in real time via the device-state acquisition of multiple unmanned aerial vehicles, so that the construction management team can quickly grasp site conditions and better monitor the construction progress and manage the project. Based on the preset area to be monitored, the plurality of encoded unmanned aerial vehicles perform intelligent flight path planning, effectively avoiding path conflicts, optimizing acquisition efficiency and improving both the quality and the efficiency of data collection. Through the collaborative operation of multiple encoded unmanned aerial vehicles, multispectral images can be collected simultaneously over a larger area, providing a richer data source for image analysis. Target feature region analysis on each multispectral image enables accurate identification of the key regions relevant to the engineering construction and reduces the subjectivity of manual analysis. Process node matching is carried out automatically on the segmented image set, realizing automatic identification and analysis of process nodes, reducing manual work and improving accuracy and efficiency. By comparing the segmented image set with the standard image set, differences between the actual and the expected construction progress can be found in real time, from which a management strategy for the target construction progress is generated, helping the management team adjust the engineering plan in time. Transmitting the target management strategy to the data processing terminal enables execution and monitoring of the strategy and ensures consistency between actual operation and the management strategy. The scheme integrates multiple technical means, reduces reliance on manual operation, improves working efficiency, lowers human error and reduces risk in the construction process, generating a target construction progress management strategy through the analysis and processing of multi-source data.
The system for monitoring and managing the progress of a construction job in the embodiment of the present invention is described in detail above with reference to fig. 5 from the perspective of modularized functional entities; the apparatus for monitoring and managing the progress of a construction job in the embodiment of the present invention is described below from the perspective of hardware processing.
Fig. 6 is a schematic structural diagram of a construction job progress monitoring management apparatus according to an embodiment of the present invention. The construction job progress monitoring management apparatus 600 may vary considerably in configuration or performance and includes one or more processors (central processing units, CPU) 610 (e.g., one or more processors), a memory 620, and one or more storage media 630 (e.g., one or more mass storage devices) storing application programs 633 or data 632. The memory 620 and the storage medium 630 may be transitory or persistent storage. The program stored in the storage medium 630 includes one or more modules (not shown), each of which includes a series of instruction operations for the progress monitoring management apparatus 600. Still further, the processor 610 may be configured to communicate with the storage medium 630 and execute the series of instruction operations in the storage medium 630 on the progress monitoring management apparatus 600.
The construction job progress monitoring management apparatus 600 also includes one or more power supplies 640, one or more wired or wireless network interfaces 650, one or more input/output interfaces 660, and/or one or more operating systems 631, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. It will be appreciated by those skilled in the art that the structure shown in fig. 6 does not constitute a limitation of the construction job progress monitoring management apparatus, which may include more or fewer components than illustrated, combine some components, or arrange the components differently.
The invention also provides a construction job progress monitoring and managing device, which comprises a memory and a processor, wherein the memory stores computer readable instructions, and the computer readable instructions, when executed by the processor, cause the processor to execute the steps of the construction job progress monitoring and managing method in the above embodiments.
The present invention also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium, or may be a volatile computer readable storage medium, in which instructions are stored, which when executed on a computer, cause the computer to perform the steps of the progress monitoring management method of construction work.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (6)

1. The progress monitoring and managing method for the construction operation is characterized by comprising the following steps of:
acquiring preset equipment states of a plurality of unmanned aerial vehicles to obtain an equipment state set, and carrying out equipment coding on the unmanned aerial vehicles through the equipment state set to obtain a plurality of coded unmanned aerial vehicles;
performing flight path planning on a plurality of coded unmanned aerial vehicles based on a preset area to be monitored to obtain a flight path corresponding to each coded unmanned aerial vehicle; the method specifically comprises the following steps: performing region segmentation on the region to be monitored to obtain a plurality of sub regions to be monitored; carrying out priority calculation on a plurality of subareas to be monitored to obtain priority indexes of each subarea to be monitored; performing terminal calibration on a plurality of coded unmanned aerial vehicles based on the priority index of each sub-region to be monitored to obtain terminal coordinates of each coded unmanned aerial vehicle; carrying out flight path planning on a plurality of coded unmanned aerial vehicles through the terminal coordinates of each coded unmanned aerial vehicle to obtain a flight path corresponding to each coded unmanned aerial vehicle;
Controlling a plurality of encoding unmanned aerial vehicles to acquire multispectral images based on the flight paths corresponding to the encoding unmanned aerial vehicles to obtain multispectral image sets;
carrying out characteristic region analysis on each multispectral image in the multispectral image set to obtain a target characteristic region corresponding to each multispectral image, and respectively carrying out image segmentation on each multispectral image based on the target characteristic region corresponding to each multispectral image to obtain a segmented image set;
performing process node matching on the segmented image set to obtain a process node set corresponding to the segmented image set, and performing construction progress calculation on the process node set to obtain a target construction progress; the method specifically comprises the following steps: for each divided image in the divided image set, carrying out pixel distribution analysis to know the distribution condition of different pixel values in the image, thereby obtaining the basic characteristics of the image, and obtaining the brightness and color information of different areas in the image by analyzing the pixel distribution; the method comprises the steps that density estimation is carried out on pixel distribution data of each divided image, the density estimation is a statistical method and is used for deducing the distribution situation of the data, a server obtains probability distribution of different pixel values in each divided image through the density estimation, so that the distribution characteristics of image content are better understood, and process node matching is carried out through the density estimation data of each divided image based on a preset standard process node database; in the construction process, different process nodes represent different construction stages or tasks, each node has unique characteristics, and the specific process node corresponding to each segmented image is determined by matching the density estimation data with the standard process node; carrying out process completion degree analysis on each process node in the process node set to obtain process completion degree of each process node, wherein the process completion degree reflects the difference between the actual construction progress and the expected progress, knowing the construction progress condition of each stage by analyzing the process completion degree, carrying out construction progress calculation through the process completion degree of each process node to obtain a target construction progress, comprehensively considering the completion condition of each stage based on the difference between the actual progress and the expected progress, and thus obtaining the overall progress of the project;
Performing standard image matching on the segmented image set through a preset expected construction progress to obtain a standard image set, and performing difference comparison on the segmented image set and the standard image set to obtain a difference analysis result; the method specifically comprises the following steps: determining expected construction progress nodes, namely expected process nodes, which represent different construction stages or important task points, according to project plans or scheduling; for each intended process node, an intended completion time, which is the time required to complete the process node under ideal conditions, needs to be calculated; matching images in the segmented image set with standard images based on expected completion time, wherein the standard images are images corresponding to the expected process nodes when the process nodes are completed, and determining construction stages corresponding to each segmented image by matching the standard images with the images in the segmented image set, wherein the standard images are used as references of ideal states; comparing the images in the segmented image set with the standard images to obtain a difference analysis result, wherein the difference analysis comprises differences in terms of colors, textures and structures of the images, and the differences reflect differences between actual construction progress and expected construction progress; the method comprises the steps of carrying out image alignment processing on each image in a segmented image set and a corresponding standard image, wherein the image alignment aims at adjusting two groups of images to the same size, angle and position, and the image alignment uses an image processing algorithm comprising feature point matching or image registration technology; after the images are aligned, performing brightness-based difference detection on each group of paired divided images and standard images, which means that pixel value differences in the two images are compared, a region with significant change in brightness is found out, the difference detection is realized by calculating difference values among pixels, and a method used for the difference detection comprises mean square error or structural similarity index; after the areas with the differences are determined through the difference detection based on the brightness, the difference points are positioned, and specific pixel positions in the segmented image and the standard image are identified through the difference point positioning; comparing and analyzing the difference between the segmented image set and the standard image set by utilizing the positioning information of the difference points, and displaying a difference area in a visual mode to obtain a difference analysis result;
And based on the difference analysis result, performing management strategy generation on the target construction progress to obtain a target management strategy, and transmitting the target management strategy to a preset data processing terminal.
2. The method for monitoring and managing progress of construction work according to claim 1, wherein the performing feature region analysis on each multispectral image in the multispectral image set to obtain a target feature region corresponding to each multispectral image, and performing image segmentation on each multispectral image based on the target feature region corresponding to each multispectral image to obtain a segmented image set, respectively, includes:
performing color calibration processing on each multispectral image to obtain a plurality of calibration images;
calibrating the monitoring target of the area to be monitored to obtain a plurality of monitoring targets;
performing edge calibration on the plurality of calibration images through the plurality of monitoring targets to obtain edge information corresponding to each calibration image;
performing initial region segmentation on each multispectral image through the edge information corresponding to each calibration image to obtain an initial characteristic region corresponding to each multispectral image;
Carrying out discontinuous region extraction on the initial characteristic region corresponding to each multispectral image respectively to obtain discontinuous regions of the initial characteristic region corresponding to each multispectral image;
performing region smoothing on the initial characteristic region corresponding to each multispectral image based on the discontinuous region of the initial characteristic region corresponding to each multispectral image to obtain a target characteristic region corresponding to each multispectral image;
and respectively carrying out image segmentation on each multispectral image based on the target characteristic region corresponding to each multispectral image to obtain a segmented image set.
3. The method for monitoring and managing progress of construction work according to claim 1, wherein the step of generating a management policy for the target construction progress based on the difference analysis result, obtaining a target management policy, and transmitting the target management policy to a preset data processing terminal, comprises:
performing construction operation content difference extraction on the difference analysis result to obtain target difference construction operation content;
performing task quantity calculation on the target difference construction job content to obtain a target task quantity;
Performing operation type analysis on the target difference construction operation content to obtain a target operation type;
and generating a management strategy for the target construction progress through the target task quantity and the target job type to obtain a target management strategy, and transmitting the target management strategy to a preset data processing terminal.
4. A progress monitoring and management system for construction work, characterized in that the progress monitoring and management system for construction work comprises:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring preset device states of a plurality of unmanned aerial vehicles to obtain a device state set, and performing device coding on the plurality of unmanned aerial vehicles through the device state set to obtain a plurality of coded unmanned aerial vehicles;
the planning module is used for planning flight paths of the plurality of coded unmanned aerial vehicles based on a preset area to be monitored to obtain a flight path corresponding to each coded unmanned aerial vehicle; the method specifically comprises the following steps: performing region segmentation on the region to be monitored to obtain a plurality of sub regions to be monitored; carrying out priority calculation on a plurality of subareas to be monitored to obtain priority indexes of each subarea to be monitored; performing terminal calibration on a plurality of coded unmanned aerial vehicles based on the priority index of each sub-region to be monitored to obtain terminal coordinates of each coded unmanned aerial vehicle; carrying out flight path planning on a plurality of coded unmanned aerial vehicles through the terminal coordinates of each coded unmanned aerial vehicle to obtain a flight path corresponding to each coded unmanned aerial vehicle;
The acquisition module is used for controlling a plurality of the encoding unmanned aerial vehicles to acquire multispectral images based on the flight paths corresponding to the encoding unmanned aerial vehicles, so as to obtain a multispectral image set;
the analysis module is used for carrying out characteristic region analysis on each multispectral image in the multispectral image set to obtain a target characteristic region corresponding to each multispectral image, and carrying out image segmentation on each multispectral image based on the target characteristic region corresponding to each multispectral image to obtain a segmented image set;
the calculation module is used for carrying out process node matching on the segmented image set to obtain a process node set corresponding to the segmented image set, and carrying out construction progress calculation on the process node set to obtain a target construction progress; the method specifically comprises the following steps: for each divided image in the divided image set, carrying out pixel distribution analysis to know the distribution condition of different pixel values in the image, thereby obtaining the basic characteristics of the image, and obtaining the brightness and color information of different areas in the image by analyzing the pixel distribution; the method comprises the steps that density estimation is carried out on pixel distribution data of each divided image, the density estimation is a statistical method and is used for deducing the distribution situation of the data, a server obtains probability distribution of different pixel values in each divided image through the density estimation, so that the distribution characteristics of image content are better understood, and process node matching is carried out through the density estimation data of each divided image based on a preset standard process node database; in the construction process, different process nodes represent different construction stages or tasks, each node has unique characteristics, and the specific process node corresponding to each segmented image is determined by matching the density estimation data with the standard process node; carrying out process completion degree analysis on each process node in the process node set to obtain process completion degree of each process node, wherein the process completion degree reflects the difference between the actual construction progress and the expected progress, knowing the construction progress condition of each stage by analyzing the process completion degree, carrying out construction progress calculation through the process completion degree of each process node to obtain a target construction progress, comprehensively considering the completion condition of each stage based on the difference between the actual progress and the expected progress, and thus obtaining the overall progress of the project;
The matching module is used for carrying out standard image matching on the segmented image set through a preset expected construction progress to obtain a standard image set, and carrying out difference comparison on the segmented image set and the standard image set to obtain a difference analysis result; the method specifically comprises the following steps: determining expected construction progress nodes, namely expected process nodes, which represent different construction stages or important task points, according to project plans or scheduling; for each intended process node, an intended completion time, which is the time required to complete the process node under ideal conditions, needs to be calculated; matching images in the segmented image set with standard images based on expected completion time, wherein the standard images are images corresponding to the expected process nodes when the process nodes are completed, and determining construction stages corresponding to each segmented image by matching the standard images with the images in the segmented image set, wherein the standard images are used as references of ideal states; comparing the images in the segmented image set with the standard images to obtain a difference analysis result, wherein the difference analysis comprises differences in terms of colors, textures and structures of the images, and the differences reflect differences between actual construction progress and expected construction progress; the method comprises the steps of carrying out image alignment processing on each image in a segmented image set and a corresponding standard image, wherein the image alignment aims at adjusting two groups of images to the same size, angle and position, and the image alignment uses an image processing algorithm comprising feature point matching or image registration technology; after the images are aligned, performing brightness-based difference detection on each group of paired divided images and standard images, which means that pixel value differences in the two images are compared, a region with significant change in brightness is found out, the difference detection is realized by calculating difference values among pixels, and a method used for the difference detection comprises mean square error or structural similarity index; after the areas with the differences are determined through the difference detection based on the brightness, the difference points are positioned, and specific pixel positions in the segmented image and the standard image are identified through the difference point positioning; comparing and analyzing the difference between the segmented image set and the standard image set by utilizing the positioning information of the difference points, and displaying a difference area in a visual mode to obtain a difference analysis result;
And the generation module is used for generating the management strategy for the target construction progress based on the difference analysis result, obtaining a target management strategy and transmitting the target management strategy to a preset data processing terminal.
5. A progress monitoring and managing apparatus of a construction job, characterized in that the progress monitoring and managing apparatus of a construction job includes: a memory and at least one processor, the memory having instructions stored therein;
the at least one processor invokes the instructions in the memory to cause the progress monitoring management apparatus of the construction job to execute the progress monitoring management method of the construction job according to any one of claims 1 to 3.
6. A computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed by a processor, implement the progress monitoring management method of a construction job according to any one of claims 1 to 3.
CN202311247461.4A 2023-09-26 2023-09-26 Progress monitoring management method and system for construction operation Active CN116993303B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311247461.4A CN116993303B (en) 2023-09-26 2023-09-26 Progress monitoring management method and system for construction operation

Publications (2)

Publication Number Publication Date
CN116993303A CN116993303A (en) 2023-11-03
CN116993303B true CN116993303B (en) 2024-03-29

Family

ID=88530513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311247461.4A Active CN116993303B (en) 2023-09-26 2023-09-26 Progress monitoring management method and system for construction operation

Country Status (1)

Country Link
CN (1) CN116993303B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117634864B (en) * 2024-01-24 2024-04-05 华仁建设集团有限公司 Intelligent construction task optimization method and system based on image analysis
CN117893781B (en) * 2024-03-15 2024-05-07 深圳麦风科技有限公司 Method, device and storage medium for extracting image difference region

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287440A (en) * 2020-10-26 2021-01-29 中国水利水电第一工程局有限公司 Subway construction progress management system based on BIM and small-size unmanned vehicles
CN112651675A (en) * 2021-01-22 2021-04-13 中国水利水电第一工程局有限公司 Subway construction progress management system and method based on BIM small unmanned aerial vehicle
CN114187536A (en) * 2021-12-13 2022-03-15 苏州方兴信息技术有限公司 Method and device for determining construction progress, electronic equipment and readable medium
CN115035004A (en) * 2022-04-15 2022-09-09 腾讯科技(深圳)有限公司 Image processing method, apparatus, device, readable storage medium and program product
CN115474027A (en) * 2022-11-14 2022-12-13 深圳市睿拓新科技有限公司 Method and device for processing project progress state data
CN116048115A (en) * 2022-12-29 2023-05-02 中联智慧农业股份有限公司 Control method for unmanned aerial vehicle, group cooperation system and processor
CN116757632A (en) * 2023-06-16 2023-09-15 中咨海外咨询有限公司 Project progress information monitoring method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230290122A1 (en) * 2022-03-08 2023-09-14 Inventus Holdings, Llc Unmanned aerial vehicle based system to track solar panel system construction and commissioning

Also Published As

Publication number Publication date
CN116993303A (en) 2023-11-03

Similar Documents

Publication Publication Date Title
CN116993303B (en) Progress monitoring management method and system for construction operation
US11593344B2 (en) Updating high definition maps based on age of maps
CN110285792B (en) Fine grid earthwork metering method for unmanned aerial vehicle oblique photography
US11988518B2 (en) Updating high definition maps based on lane closure and lane opening
Siebert et al. Mobile 3D mapping for surveying earthwork projects using an Unmanned Aerial Vehicle (UAV) system
US11783248B2 (en) United states construction management system and method
WO2021088311A1 (en) Multi-unmanned aerial vehicle collaborative operation-based automatic inspection method and system for bridges
Jiang et al. UAV-based 3D reconstruction for hoist site mapping and layout planning in petrochemical construction
CN111006646B (en) Method for monitoring construction progress based on unmanned aerial vehicle oblique photography measurement technology
US11514682B2 (en) Determining weights of points of a point cloud based on geometric features
CN111199066B (en) Construction site virtual construction restoration method based on BIM+GIS
IT201800003849A1 (en) System and method for managing unmanned aerial systems (UAS) that perform an adaptive mission
Tuttas et al. Evaluation of acquisition strategies for image-based construction site monitoring
CN111721279A (en) Tail end path navigation method suitable for power transmission inspection work
CN111062644A (en) Airport ground service vehicle management and control system and method based on high-precision navigation positioning
US20230121226A1 (en) Determining weights of points of a point cloud based on geometric features
CN109063638A (en) Method, system and medium based on oblique photograph prediction waste yield
Heras et al. Urban heritage monitoring, using image processing techniques and data collection with terrestrial laser scanner (TLS), case study Cuenca-Ecuador
Gruen et al. News from CyberCity-modeler
CN112285734B (en) Port unmanned set card high-precision alignment method and system based on spike
CN115373416A (en) Intelligent inspection method for railway power through line
Merckx The utilisation of aerial photography and laser scanning in BIM modelling
Siejek et al. Methodology of Spatial Data Acquisition and Development of High-Definition Map for Autonomous Vehicles–Case Study from Wrocław, Poland
Villalobos et al. Earthwork Surface Creation and Volume Computation Using UAV 3D Mapping
McIntosh et al. Utilization of Lidar Technology—When to Use It and Why

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant