CN116993665A - Intelligent detection method for construction progress of construction engineering working face based on computer vision - Google Patents

Intelligent detection method for construction progress of construction engineering working face based on computer vision

Info

Publication number
CN116993665A
Authority
CN
China
Prior art keywords
construction
working surface
progress
concrete
working
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310704735.1A
Other languages
Chinese (zh)
Inventor
卢昱杰
刘博
张智平
魏伟
张自然
李东永
李易航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chuang Le Shanghai Information Technology Co ltd
Original Assignee
Chuang Le Shanghai Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chuang Le Shanghai Information Technology Co ltd filed Critical Chuang Le Shanghai Information Technology Co ltd
Priority to CN202310704735.1A priority Critical patent/CN116993665A/en
Publication of CN116993665A publication Critical patent/CN116993665A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/098Distributed learning, e.g. federated learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06311Scheduling, planning or task assignment for a person or group
    • G06Q10/063116Schedule adjustment for a person or group
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/08Construction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Business, Economics & Management (AREA)
  • Quality & Reliability (AREA)
  • Marketing (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Medical Informatics (AREA)
  • Operations Research (AREA)
  • Databases & Information Systems (AREA)
  • Educational Administration (AREA)
  • Primary Health Care (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an intelligent, computer-vision-based method for detecting the construction progress of a construction engineering working face, belonging to the fields of computer vision and engineering management. The method comprises the following steps: capturing the construction process of a working face on a construction site with a tower-crane-mounted high-definition camera and extracting video frames; segmenting the working-face region to separate the working face from the background; performing instance segmentation of construction-stage markers with a SOLOv2-based algorithm; counting the pixels of the three construction processes within the working-face region and judging the start and stop times of each process; comparing the actual construction progress with the planned progress and analyzing the difference; and building a construction-period prediction model from past construction activity data to propose construction resource adjustments. The invention detects the construction progress of the working face with a computer vision algorithm and analyzes construction-period differences. It helps managers grasp the construction progress, improves the real-time performance, authenticity, and accuracy of progress monitoring, and facilitates rapid establishment of a construction progress visualization model.

Description

Intelligent detection method for construction progress of construction engineering working face based on computer vision
Technical Field
The invention relates to the technical field of intelligent management and monitoring in the building industry, in particular to an intelligent identification method and system for construction progress of a working face based on computer vision.
Background
As computer vision and artificial intelligence technologies have matured, research on their application in building construction has increased in recent years. In construction monitoring, computer vision has attracted wide attention for its low labor cost, high efficiency, high precision, and high degree of informatization. The related background is as follows:
The invention "Intelligent construction progress identification method and system based on image identification", disclosed in 2022 (application publication number: CN 114708216A), judges construction progress from a building construction information model established from construction plan information and monitoring equipment information, combined with images shot by multiple camera platforms around the construction site during construction. However, it judges the number of constructed floors by identifying the distance from the building's top outline to the foundation, and cannot accurately determine which construction process is under way on a given floor's working face.
The invention "Remote sensing identification method and system for construction progress of urban buildings", disclosed by Guilin University of Technology in 2020 (application publication number: CN 110929607A), provides a method and system for identifying building construction progress based on a deep convolutional neural network (DCNN) and cloud computing. It applies deep convolutional processing to high-resolution satellite and airborne remote-sensing images to identify different construction stages accurately and rapidly, and establishes cloud image databases for different stages of urban construction, thereby supporting progress identification in the era of urban remote-sensing big data.
The invention "Operation progress management system based on AI image recognition", disclosed by the Guangdong Power Supply Bureau of Guangzhou Power Grid Co., Ltd. in 2023 (application publication number: CN 115620339A), provides an operation progress management system based on AI image recognition. The system comprises a behavior acquisition subsystem, a data transmission module, a data storage and processing subsystem, and a client subsystem; the storage and processing subsystem identifies the position-area information and whole-image information of operators and displays the result on the client subsystem to manage their operation progress. However, judging construction progress from workers' position and track information is an indirect rather than direct method and may carry a certain error.
For the problems in the related art, no effective solution has yet been proposed.
Disclosure of Invention
The invention provides a method for intelligently detecting the construction progress of a standard-floor working face, which identifies the construction stages of the working face, obtains the start and stop times of the different stages, and improves the efficiency and precision of progress monitoring.
For this purpose, the invention adopts the following specific technical scheme:
a method for intelligently detecting construction progress of a standard layer working surface comprises the following steps:
and dividing the whole working surface area in the shot picture by using a solov2 segmentation algorithm, identifying the range of the working surface, and separating the working surface from the background.
The characteristic articles in the working surface are segmented by utilizing a segmentation algorithm based on the working surface image examples.
The construction stage of the current working face is identified through pixel analysis by utilizing the working face pictures after the instance-based segmentation.
And judging the start and stop time of each construction stage by using the time sequence result of the construction stage identification of the working face.
And analyzing the time data by using the collected time data of different construction stages and using a data regression model to predict the construction period of the next fitting.
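The overall steps can be sketched as a minimal pipeline skeleton; the function names, data shapes, and orchestration below are illustrative placeholders rather than the patent's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ProgressReport:
    stage_times: Dict[str, tuple]   # stage name -> (first frame index, last frame index)
    predicted_duration: float       # fitted duration for the next working face

def run_pipeline(frames: List,
                 segment_face: Callable,     # step 1: isolate the working-face region
                 segment_markers: Callable,  # step 2: instance-segment stage markers
                 classify_stage: Callable,   # step 3: pixel analysis -> stage label
                 fit_duration: Callable      # step 5: regression over stage durations
                 ) -> ProgressReport:
    stages = []
    for f in frames:
        face = segment_face(f)
        masks = segment_markers(face)
        stages.append(classify_stage(masks))
    # step 4: derive start/stop frame indices from the stage time series
    times: Dict[str, tuple] = {}
    for i, s in enumerate(stages):
        times[s] = (times[s][0], i) if s in times else (i, i)
    return ProgressReport(times, fit_duration(times))
```

Each callable stands in for one stage of the method; in a real deployment the segmentation callables would wrap the trained SOLOv2 model.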
Preferably, the steps of segmenting the whole construction working face and the characteristic objects within it with a SOLOv2-based segmentation algorithm are as follows:
Construction working-face image acquisition: acquire working-face construction video in real time through the tower-crane dome camera, and save a working-face picture at each polling interval (in minutes).
Image preprocessing: apply preprocessing such as noise removal and low-light compensation to the construction working-face images.
Dataset labeling: determine the objects or regions to be labeled and select a suitable labeling mode (polygon or rectangular box) to label the acquired image data. After labeling, verify and correct the labels to ensure accuracy and consistency, and split the labeled dataset into a training set, a validation set, and a test set.
Building the neural network: construct the SOLOv2 neural network; the network structure can be modified from an existing pre-trained model or designed from scratch. SOLOv2 uses a ResNet backbone and a Feature Pyramid Network (FPN) to assign objects of different sizes to feature maps at different levels. The rough segmentation process is as follows: first locate the bounding box of each instance through object detection, then perform semantic segmentation inside the bounding box to obtain the mask of each instance. Instance segmentation is further divided into two subtasks: category prediction and instance mask generation. The input image is divided into a uniform S×S grid; if the center of an object falls into a grid cell, that cell is responsible for predicting the object's semantic category and instance mask. Specifically, mask learning is split into two branches, a convolution kernel branch and a feature branch, so that SOLOv2 can efficiently predict high-resolution object masks by learning mask kernels and mask features separately.
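The decoupled kernel-branch / feature-branch design can be illustrated with a toy NumPy sketch of SOLOv2-style dynamic convolution (the 1×1 case); all shapes and values here are illustrative assumptions, not the model's real dimensions:

```python
import numpy as np

def dynamic_mask_head(kernels, features):
    """SOLOv2-style dynamic convolution, 1x1 case.

    kernels:  (S*S, E)    one predicted kernel per grid cell (kernel branch)
    features: (E, H, W)   shared mask feature map (feature branch)
    returns:  (S*S, H, W) one soft mask per grid cell
    """
    E, H, W = features.shape
    # a 1x1 dynamic convolution is a matrix product over the channel dim
    masks = kernels @ features.reshape(E, H * W)
    return masks.reshape(-1, H, W)

S, E, H, W = 4, 8, 16, 16          # toy sizes: 4x4 grid, 8 channels, 16x16 map
rng = np.random.default_rng(0)
kernels = rng.normal(size=(S * S, E))
features = rng.normal(size=(E, H, W))
masks = dynamic_mask_head(kernels, features)
```

Each of the S×S grid cells thus obtains its own H×W mask from a kernel it predicted, which is the mechanism that lets SOLOv2 produce high-resolution masks without per-box cropping.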
Training the network: train the SOLOv2 network with the training set; optimization algorithms such as SGD or Adam can be used, and hyperparameters such as the loss function and learning rate must also be set.
Network validation: validate the trained network with the validation set, check the model's performance and effect, and tune the model.
Network testing: test the trained network with the test set and evaluate the model's performance and effect.
It should be noted that, when training the SOLOv2 network, GPU resources should be fully utilized to accelerate training; overfitting and underfitting should be watched for, and the hyperparameters adjusted appropriately to improve the model's performance and generalization ability.
Preferably, the steps of identifying the construction stage of the current working face through pixel analysis of the instance-segmented working-face pictures are as follows:
Dividing the construction activity types: the construction activities of the working face are divided into three types: formwork erection, rebar binding, and concrete pouring.
Setting the sequence of construction stages: rebar binding starts first, beginning with the rebar of the columns and walls; formwork erection follows; the floor-slab rebar is bound after the formwork is fully erected; and concrete is poured after both rebar binding and formwork erection are complete.
Pixel ratio analysis: perform pixel analysis on the extracted working-face pictures to judge the presence and proportion of rebar pixels and concrete pixels within the floor area.
Judging the start and stop times of the different construction stages from the image pixel analysis:
When rebar pixels first appear in the working-face region, this is judged as the start time of the rebar binding process; when rebar pixels are distributed over the whole working-face region, this is judged as its end time.
When formwork pixels first appear in the working-face region, this is judged as the start time of the formwork erection process; when the formwork pixels in the region no longer increase, this is judged as its end time.
When concrete pixels first appear in the working-face region, this is judged as the start time of the concrete pouring process; when the concrete pixels cover 90% or more of the whole working-face region and no longer increase, this is judged as its end time.
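These start/stop rules can be expressed as a small helper over a per-frame pixel-ratio series; the function name and the 0.99 "full coverage" threshold for rebar are illustrative assumptions, the 90% concrete threshold follows the text:

```python
def stage_events(ratios, kind):
    """Derive (start, end) frame indices from a pixel-ratio time series.

    ratios: list of floats, fraction of working-face pixels of one class
            (rebar / formwork / concrete) in each sampled frame.
    kind:   'rebar' | 'formwork' | 'concrete'
    Start = first frame in which any such pixels appear.
    End   = rebar covering the face (approximated as >= 0.99 here),
            formwork no longer increasing, or concrete >= 90% and
            no longer increasing.
    """
    start = next((i for i, r in enumerate(ratios) if r > 0), None)
    if start is None:
        return None, None
    end = None
    for i in range(start + 1, len(ratios)):
        grown = ratios[i] > ratios[i - 1]
        if kind == 'rebar' and ratios[i] >= 0.99:
            end = i; break
        if kind == 'formwork' and not grown:
            end = i; break
        if kind == 'concrete' and ratios[i] >= 0.9 and not grown:
            end = i; break
    return start, end
```

Applied to the concrete series of a face, for example, the start index is the first nonzero frame and the end index is the first plateau at or above 90% coverage.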
Preferably, the steps of analyzing the collected time data of the different construction stages with a data regression model to fit and predict the construction period are as follows:
Collecting duration data: from the start and stop times judged in the preceding steps, obtain the duration of each construction stage as construction duration = end time − start time, and aggregate the durations of the different stages across multiple working faces as the raw data.
Collecting influence-factor data: record factors such as construction stage type, working-face area, number of construction workers, weather conditions, construction start time, and material supply conditions.
Establishing, with a neural network model, the nonlinear relation between the working hours required in each construction stage and the relevant construction factors: train the model with construction stage type, working-face area, number of workers, weather conditions, construction start time, and material supply conditions as inputs and the required construction duration as output, to obtain the corresponding relation.
Predicting the construction period: input the relevant data of the current construction into the neural network model to obtain the construction period required for the floor's working face, thereby predicting the construction progress.
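As a simplified stand-in for the neural-network regressor described above, the sketch below fits a linear least-squares model on a few made-up historical records; the feature set, values, and function names are all illustrative assumptions:

```python
import numpy as np

# Historical records: [face area m^2, worker count, weather (0 clear / 1 rain)]
# -> observed stage duration in hours. Values are invented for illustration.
X = np.array([[400, 10, 0],
              [500, 12, 0],
              [450,  8, 1],
              [600, 15, 0],
              [550, 10, 1]], dtype=float)
y = np.array([30.0, 33.0, 40.0, 38.0, 42.0])

# Fit weights and bias by linear least squares -- a simplified stand-in for
# the nonlinear neural-network model described in the text.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_hours(area, workers, weather):
    """Predict the stage duration (hours) for one working face."""
    return float(np.array([area, workers, weather, 1.0]) @ coef)
```

In the patent's scheme the same input/output contract would be served by a trained neural network, with stage type, start time, and material supply added as further inputs.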
The beneficial effects of the invention are as follows: the invention helps managers grasp the construction progress, improves the real-time performance, authenticity, and accuracy of construction progress monitoring, and facilitates the rapid establishment of a construction progress visualization model.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for intelligently detecting the construction progress of a standard layer working surface according to a first embodiment of the invention;
FIG. 2 is a schematic diagram of a dynamic header portion of a solov2 network model according to a first embodiment of the present invention;
fig. 3 is a schematic diagram of a labeling manner of a reinforcing steel bar binding construction area in the first embodiment of the present invention;
FIG. 4 is a formwork erection stage identification diagram in the first embodiment of the present invention;
fig. 5 is a diagram showing a reinforcement bar binding stage identification according to a first embodiment of the present invention;
FIG. 6 is a diagram showing a concrete placement stage identification in accordance with the first embodiment of the present invention;
Detailed Description
The present invention will be described in further detail with reference to the following examples in order to make its objects, technical solutions, and advantages more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. In addition, the technical features of the embodiments described below may be combined with each other as long as they do not conflict.
Example 1
A method for intelligently detecting the construction progress of a standard-floor working face, as shown in FIG. 1, comprises the following steps:
s1, capturing the operation condition of a construction site operation surface by using a high-definition camera hung on a tower crane, and capturing a video frame;
s2, dividing a working surface area in a video frame by using a method based on a solov2 algorithm, and separating the working surface from a background part;
s3, performing instance segmentation on the construction stage marker by using a solov2 algorithm-based method;
s4, carrying out pixel analysis on the segmented picture in a working surface area, and judging different construction activities and start and stop time of the different construction activities;
s5, comparing the actual construction progress with the planned progress, judging whether the construction is normally performed, presenting a judging result at the front end, and feeding back to construction manager and owners;
and S6, predicting the future construction period according to the time length data required by the existing construction activities.
Before the system works, some basic deployment is performed at the back end, including the following important steps:
Modify the parameter information in the algorithm code according to the conditions of the construction site, including:
Parameter name   Meaning                     Initial value
device_id        device ID                   1
site_id          video stream address        19
save_path        storage path                62bea7dbfbdt
seg_conf         segmentation confidence     0.2
status_conf      state confidence            0.4
cls_concrete     concrete state confidence   1
cls_raber        rebar state confidence      [2,3]
shape            picture resolution          1920×1080
Mainly the device ID, video stream address, storage path, and picture resolution are adjusted.
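The parameters above might be gathered into a single configuration object; the helper below is an illustrative sketch with the table's names and initial values, not the patent's actual code:

```python
# Deployment parameters from the table, with their initial values.
DEFAULT_CONFIG = {
    "device_id": 1,               # device ID
    "site_id": 19,                # video stream address / site identifier
    "save_path": "62bea7dbfbdt",  # storage path
    "seg_conf": 0.2,              # segmentation confidence threshold
    "status_conf": 0.4,           # state confidence threshold
    "cls_concrete": 1,            # class label treated as concrete
    "cls_raber": [2, 3],          # class labels treated as rebar
    "shape": (1920, 1080),        # picture resolution
}

def make_config(**overrides):
    """Return a config with site-specific overrides applied;
    reject parameter names not in the table."""
    unknown = set(overrides) - set(DEFAULT_CONFIG)
    if unknown:
        raise KeyError(f"unknown parameters: {sorted(unknown)}")
    return {**DEFAULT_CONFIG, **overrides}
```

A site deployment would then call, e.g., `make_config(device_id=2, site_id=20)` and pass the result to the segmentation code.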
Store the modified segmentation algorithm code at a user-defined location, and add the video streams of the working face shot by the different cameras to the code.
Run the code as follows:
Create new folders named img and log in the folder containing the algorithm code, used respectively to store pictures with identification results and log information.
Set timed tasks: periodically executed instructions are set with the crontab command, which reads instructions from standard input and stores them in a crontab file for later reading and execution.
The timing setting is exemplified by the following crontab entry, which runs the detection script every 30 minutes between 8:00 and 19:00 and redirects its output to a log file:
*/30 8-19 * * * /home/dongyong/SOLO/demo/hzj-10.10/hzj.sh > /home/dongyong/SOLO/demo/hzj.log 2>&1
Here, "30" indicates a polling interval of 30 minutes, and "8-19" indicates that detection runs from 8:00 to 19:00 each day.
In implementation, multiple groups of image acquisition equipment are installed at key positions on the construction site; key areas include but are not limited to tower crane jibs and tower crane columns, and the image acquisition equipment includes but is not limited to high-definition stabilized surveillance cameras and industrial cameras. The specific installation sites and equipment types are adjusted according to the site environment and the construction state of the working face.
First, video of the working-face construction process on the construction site is acquired; each video frame is saved, and frames are selected at preset intervals for manual labeling to form a working-face construction activity database, which is used to train the improved SOLO algorithm.
The label categories are scaffold, formwork, rebar, and concrete, annotated with the CVAT tool. The labeling method is polygon annotation: the picture is enlarged to approach the target, and the outline is enclosed by drawing points one by one.
Labels follow the naming form "stage + number": for example, the first concrete pouring area in a picture is labeled with number 01. Multiple targets of the same kind in the same picture are distinguished by different numbers, and numbering restarts from 01 in each new picture.
During image annotation, note in particular: large occluding objects should be bypassed in the annotation as far as possible, while long thin strip-shaped obstacles can be ignored; the integrity of the working face should be preserved as much as possible, and all target objects in the image should be labeled.
The system's core algorithm, SOLOv2, runs on the MMDetection framework; the environment is Torch 1.8.0 + CUDA 11.4 + mmdet 2.25.1 + mmcv-full 1.4.2 + numpy 1.21.6, and the neural network parameters are set to batch size = 2, epochs = 100, image scale = 1024×1024.
The components of the SOLOv2 model and their hyperparameters are: detector (SOLOv2), backbone (ResNet-101), neck (FPN), optimizer (SGD), learning rate (2.5×10⁻³), etc. After full training, the average segmentation precision of the model reaches about 90%, achieving a good instance segmentation effect.
The accuracy threshold of the system's SOLOv2 model defaults to 0.8; the threshold can be adjusted dynamically according to the degree of model optimization, the amount of training data, and field test results of the project.
The image acquisition equipment should work continuously during the construction stage, and the acquired video images should be transmitted in real time to the device on which the method runs.
Example two
The acquisition module is used for acquiring video images of the construction working face and adjusting camera parameters according to the positional relation between the camera and the working face, so that clear and complete working-face pictures can be captured. The specific steps are as follows:
step1: and a construction site manager lays cameras at proper positions on the tower crane arms or the tower crane upright posts according to actual condition requirements.
step2: the camera is connected with the video recorder, the server is connected with the local network or the external network, and constructors can transmit videos shot by the camera to the server in two modes. Firstly, using a local network, reading a video stream to a server through a url website by an rtsp protocol; in the second mode, an external network is used, and the server can acquire the video stream of the camera through the push platform by means of rtmp protocol (flv).
step3: and the manager on the construction site adjusts the code rate, resolution and frame rate of the video stream according to the limit of the bandwidth on the construction site. The relation among bandwidth, code rate, resolution and frame rate is: bandwidth ∈code rate=resolution×24×frame rate×transmission line number×compression rate, where 24 means that one pixel occupies 24 bits of memory. In order to ensure the normal use of the system, the lower limits of the code rate, the frame rate and the resolution are respectively 2048kb/s, 25fps and 1920 x 1080, and because the system adopts a non-real-time algorithm and adopts an asynchronous streaming method, the bandwidth only considers the frame rate and the resolution which can be adopted when one path of video is accommodated. In the system, the video stream adopts an encoding mode of H.265, and the compression rate is 0.2. For example, the construction site bandwidth is 200Mb/s, that is, 200×1024= 204800kb/s, the camera initial parameters are set to the recommended lower limit value of each parameter, and then the code rate=1920×1080×24×25/1024×0.2=24300 kb/s < bandwidth, so that the constructor can improve the resolution and the frame rate according to the requirement.
The progress identification module is used for automatically recording the start and stop times of each construction stage on different floors, identifying the construction stage currently under way on each floor's working face, and displaying both on the visualization platform. The specific steps are as follows:
step1: The manager enters the floor number of the initial working surface into the system. By default the system assumes the construction activities on each floor's working surface are "erecting formwork, binding reinforcing bars and pouring concrete"; this can be customized if there are special needs.
step2: Once set up and started, the system automatically begins recording and archives every construction stage of every recorded floor. The archived content includes the floor number of the working surface, the type of construction stage, the start and end time, the total time consumed, a screenshot taken at completion, and so on; managers can review this content on the system's visualization platform.
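A minimal sketch of the archived stage record described in step2; the field names and record layout are illustrative assumptions, not the system's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class StageRecord:
    """One archived entry per construction stage (step2); fields are illustrative."""
    floor: int
    stage: str            # e.g. "formwork", "rebar", "concrete"
    start: datetime
    end: datetime
    screenshot_path: str  # screenshot captured at completion

    @property
    def total_hours(self) -> float:
        """Total time consumed by the stage, in hours."""
        return (self.end - self.start).total_seconds() / 3600.0

rec = StageRecord(floor=12, stage="concrete",
                  start=datetime(2023, 6, 1, 8, 0),
                  end=datetime(2023, 6, 1, 14, 30),
                  screenshot_path="floor12_concrete.png")
```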
The transparent reporting module compares the actual progress with the planned progress and monitors whether the schedule is on track. The specific steps are as follows:
step1: The construction manager enters the planned schedule for each floor's working surface into the system, including the floor number, planned start time, planned end time, and so on.
step2: The system records the actual construction period of each floor and then automatically compares it with the planned period. If a floor's progress is normal, the floor is shown in gray in the building model on the visualization platform; if the floor is still unfinished more than 7 days past plan, it is shown in blue; more than 10 days, yellow; more than 15 days, red.
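The color-coding rule in step2 can be captured in a small function; the name and signature are illustrative assumptions.

```python
def floor_color(days_overdue: int) -> str:
    """Map a floor's schedule overrun (days past plan while unfinished)
    to the display color used on the visualization platform (step2)."""
    if days_overdue > 15:
        return "red"
    if days_overdue > 10:
        return "yellow"
    if days_overdue > 7:
        return "blue"
    return "gray"
```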
The construction period prediction module analyzes historical construction time data to help managers predict future construction periods. The specific steps are as follows:
step1: The system automatically records construction data for every construction stage of every past floor working surface, including working surface area, construction duration, number of workers, weather conditions, material supply conditions, and so on, and feeds this data into a neural network to train a regression model.
step2: The construction manager enters the number of workers, the expected weather conditions and the material supply conditions for a floor into the system, and the system automatically predicts the duration of each construction stage of that floor from the historical data.
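As a stand-in for the neural-network regressor, the sketch below fits an ordinary-least-squares model on fabricated historical records (working-surface area, worker count, weather index, material-supply index → stage duration in days); all numbers and names are illustrative assumptions.

```python
import numpy as np

# Fabricated historical stage records: one row per past stage.
X = np.array([
    [800.0, 20, 1.0, 1.0],   # area m^2, workers, weather idx, supply idx
    [800.0, 15, 0.7, 1.0],
    [1200.0, 25, 1.0, 0.8],
    [1200.0, 18, 0.6, 0.9],
    [1000.0, 22, 0.9, 1.0],
])
y = np.array([4.0, 6.0, 5.0, 8.0, 4.5])  # observed durations (days)

# Fit y ~ X @ coef + intercept by ordinary least squares.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_days(area, workers, weather, supply):
    """Predicted stage duration for one floor's inputs (step2)."""
    return float(np.array([area, workers, weather, supply, 1.0]) @ coef)

est = predict_days(1000.0, 20, 0.8, 1.0)
```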
Example III
The invention provides a method for intelligently detecting the construction progress of a standard-layer working surface. During implementation, external factors such as tower-crane lifting and the working-surface floor rising can change the camera's shooting picture so that the target picture is lost. The system therefore automatically recognizes picture changes and reports them to the manager, who manually readjusts and resets the shot. The specific steps are as follows:
step1: After the video stream shot by the camera reaches the backend, video frames are automatically captured at a fixed time interval, and the system selects the frame captured each hour.
step2: Using the OpenCV-based Harris corner detection method, a sliding window is moved over the whole image to determine which parts show a large intensity change; those parts are the corners.
step3: For each identified window, a score value R is calculated.
step4: A threshold is applied to the score, and the corner points are marked.
step5: The system judges whether the corner positions of frames captured one hour apart have changed significantly.
step6: If the corner positions have not changed significantly, the system continues working normally; if they have shifted noticeably, the system notifies a manager to reset the camera by manually operating the gimbal.
Example IV
The invention can be applied on real construction sites; the following conditions may cause abnormal recognition by the system.
The two construction activities of rebar binding and formwork erection are performed alternately: for example, the rebar of the shear walls and columns is bound first, formwork is erected after that binding is finished, and the floor-slab rebar is bound after the formwork is erected. The method focuses on the construction of the floor slab, so the process logic remains formwork erection, rebar binding and concrete pouring, and the construction of columns and walls does not interfere with the discrimination of the overall process.
On large working surfaces, the two construction activities of rebar binding and formwork erection proceed simultaneously, which accords with construction logic and practice. The system allows and can identify such special construction processes: when different construction activities overlap, two working states are allowed to run at the same time.
Concrete pixels are difficult for the system to recognize promptly after pouring starts: concrete pouring begins at the wall columns, but due to angle deviation, occlusion and other reasons, the frames captured by the camera cannot clearly show the concrete pixels inside the wall-column formwork, so the exact start time of concrete pouring cannot be identified.
The system uses the formwork pixels and the concrete pixels in the working-surface area as the start criteria for the two construction activities of formwork erection and concrete pouring respectively. Although the recognized start time deviates slightly from the actual start time, the error is within the allowable range and does not affect recognition of the overall construction progress.
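The end-of-pour rule used alongside these start criteria (the concrete pixel ratio exceeds 90% of the working-surface area and has stopped increasing, as stated in claim 6) can be sketched as follows; the function name and input format are illustrative assumptions.

```python
def pour_finished(ratio_history, threshold=0.9):
    """Decide whether concrete pouring has ended, per the claim 6 rule:
    the concrete pixel ratio exceeds 90% of the working-surface area and
    no longer increases. `ratio_history` is the per-frame sequence of
    concrete pixel ratios (0..1) in the working-surface region."""
    if len(ratio_history) < 2:
        return False
    latest, previous = ratio_history[-1], ratio_history[-2]
    return latest > threshold and latest <= previous

history = [0.1, 0.4, 0.7, 0.92, 0.95, 0.95]  # rising, then plateaued above 90%
```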
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for intelligently detecting the construction progress of a standard-layer working surface, characterized by comprising the following steps:
a) Segmentation of the whole working-surface area of the main structure. A high-definition camera shoots construction video of the working surface, video frames are captured, and the whole working-surface region in the video frame is segmented.
b) Construction-area segmentation. The regions of the construction stages are segmented according to the different construction-stage markers in the video frame (such as formwork, reinforcing bars, concrete, scaffolding and concrete pump pipes), so as to judge the construction stage of the working surface.
c) Process discrimination. The different construction stages on the working surface (formwork erection, rebar binding and concrete pouring) and their start and end times are judged by identifying key features in the working-surface region of the picture, so that the duration of each construction stage is calculated.
d) Construction-period prediction. Based on the judgment and analysis of previous construction processes, the time that related future processes will consume is derived from a regression model, realizing prediction of the future construction period.
It should be noted that in steps a and b the camera may produce an abnormal shot due to an unpredictable viewing-angle deviation, so that the corresponding construction picture cannot be captured and segmented. The method handles such anomalies with a change-monitoring technique based on detected corner points.
In step c, special construction process flows and complex site scenes can interfere with the method's progress discrimination and cause abnormal recognition. The method handles such anomalies by setting fixed process logic.
2. The method for intelligently detecting the construction progress of a standard-layer working surface according to claim 1, wherein in step a, a camera first shoots the construction condition of the working surface, a picture of the working surface is captured once every set time interval (in minutes), and the captured picture is segmented with the SOLOv2 algorithm to identify the extent of the whole working surface.
3. The method for intelligently detecting the construction progress of a standard-layer working surface according to claim 1, wherein in step b, the regions containing the features of the various construction stages are identified and segmented in the captured pictures, the regions of the different construction stages are annotated in the pictures, and these annotated pictures are used as a dataset to train a SOLOv2 instance segmentation model.
4. The method for intelligently detecting the construction progress of a standard-layer working surface according to claim 1, wherein in step c, the construction stages within one floor's working surface are mainly divided into three processes: formwork erection, rebar binding and concrete pouring.
5. The method for intelligently detecting the construction progress of a standard-layer working surface according to claim 1, wherein in step c, pixel analysis is performed on the working-surface region of the captured pictures of the construction working surface; when pixels including but not limited to "formwork/rebar/concrete" appear in the working-surface region, the start time of the corresponding process, including but not limited to "formwork erection/rebar binding/concrete pouring", is determined.
6. The method for intelligently detecting the construction progress of a standard-layer working surface according to claim 1, wherein in step c, the concrete pixel ratio increases continuously during concrete pouring, and the end time of the concrete pouring process is judged when the concrete pixel ratio in the working-surface region exceeds 90% and no longer increases.
7. The method for intelligently detecting the construction progress of a standard-layer working surface according to claim 1, wherein in step c, the construction stages of the working surface are divided into formwork erection, rebar binding and concrete pouring; rebar binding starts first, with the rebar of the columns and walls bound first, formwork is erected after that binding is complete, the floor-slab rebar is bound after the formwork is fully erected, and concrete is poured after rebar binding and formwork erection are complete. The system judges by this logical sequence among the construction stages of the main-structure working surface and is therefore not affected by abnormal conditions.
8. The method for intelligently detecting the construction progress of a standard-layer working surface according to claim 1, wherein in step c, recognition of the processes in a construction stage starts from the upper layer; each pass through the three processes of formwork erection, rebar binding and concrete pouring constitutes one cycle, and the working surface rises by one floor per cycle.
9. The method for intelligently detecting the construction progress of a standard-layer working surface according to claim 1, wherein in step d, raw data and the construction periods corresponding to them are collected, the raw data comprising construction-scale parameters and basic characteristic parameters, and a neural network suitable for working-surface progress prediction is trained with the raw data.
10. The method for intelligently detecting the construction progress of a standard-layer working surface according to claim 1, wherein in step d, after a mature neural-network prediction model is obtained, current-stage data are input into the prediction model to obtain the construction period to be predicted.
CN202310704735.1A 2023-06-14 2023-06-14 Intelligent detection method for construction progress of construction engineering working face based on computer vision Pending CN116993665A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310704735.1A CN116993665A (en) 2023-06-14 2023-06-14 Intelligent detection method for construction progress of construction engineering working face based on computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310704735.1A CN116993665A (en) 2023-06-14 2023-06-14 Intelligent detection method for construction progress of construction engineering working face based on computer vision

Publications (1)

Publication Number Publication Date
CN116993665A true CN116993665A (en) 2023-11-03

Family

ID=88527361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310704735.1A Pending CN116993665A (en) 2023-06-14 2023-06-14 Intelligent detection method for construction progress of construction engineering working face based on computer vision

Country Status (1)

Country Link
CN (1) CN116993665A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117579790A (en) * 2024-01-16 2024-02-20 金钱猫科技股份有限公司 Construction site monitoring method and terminal
CN117579790B (en) * 2024-01-16 2024-03-22 金钱猫科技股份有限公司 Construction site monitoring method and terminal
CN118038279A (en) * 2024-04-12 2024-05-14 品茗科技股份有限公司 Building construction progress monitoring method, system and camera

Similar Documents

Publication Publication Date Title
CN116993665A (en) Intelligent detection method for construction progress of construction engineering working face based on computer vision
CN109858367B (en) Visual automatic detection method and system for worker through supporting unsafe behaviors
CN111047818A (en) Forest fire early warning system based on video image
CN106991668B (en) Evaluation method for pictures shot by skynet camera
CN113903081A (en) Visual identification artificial intelligence alarm method and device for images of hydraulic power plant
CN109711348A (en) Intelligent monitoring method and system based on the long-term real-time architecture against regulations in hollow panel
CN111446920A (en) Photovoltaic power station monitoring method, device and system
CN114548912A (en) Whole-process tracking method and system for building engineering project management
Ahmadian Fard Fini et al. Using existing site surveillance cameras to automatically measure the installation speed in prefabricated timber construction
CN117114619B (en) Project security management system based on big data analysis
CN115766501B (en) Tunnel construction data management system and method based on big data
CN113240249A (en) Urban engineering quality intelligent evaluation method and system based on unmanned aerial vehicle augmented reality
CN110059076A (en) A kind of Mishap Database semi-automation method for building up of power transmission and transformation line equipment
CN114529537A (en) Abnormal target detection method, system, equipment and medium for photovoltaic panel
CN114387564A (en) Head-knocking engine-off pumping-stopping detection method based on YOLOv5
CN113971781A (en) Building structure construction progress identification method and device, client and storage medium
CN112004063B (en) Method for monitoring connection correctness of oil discharge pipe based on multi-camera linkage
CN113469938A (en) Pipe gallery video analysis method and system based on embedded front-end processing server
CN115937684B (en) Building construction progress identification method and electronic equipment
CN111860202A (en) Beam yard pedestal state identification method and system combining image identification and intelligent equipment
CN116052075A (en) Hoisting operation behavior detection and evaluation method and equipment based on computer vision
CN115689206A (en) Intelligent monitoring method for transformer substation infrastructure progress based on deep learning
CN110135274A (en) A kind of people flow rate statistical method based on recognition of face
CN112487925B (en) Bridge load damage identification method and system
CN112861681B (en) Pipe gallery video intelligent analysis method and system based on cloud processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination