CN117994737A - Monitoring alarm system and method for intelligent building site management and control platform - Google Patents


Info

Publication number
CN117994737A
Authority
CN
China
Prior art keywords
camera
distance
cameras
subsystem
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410406401.0A
Other languages
Chinese (zh)
Other versions
CN117994737B (en)
Inventor
卢赞行
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Yuntonghui Intelligent Technology Co ltd
Original Assignee
Liaoning Yuntonghui Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning Yuntonghui Intelligent Technology Co ltd filed Critical Liaoning Yuntonghui Intelligent Technology Co ltd
Priority to CN202410406401.0A priority Critical patent/CN117994737B/en
Publication of CN117994737A publication Critical patent/CN117994737A/en
Application granted granted Critical
Publication of CN117994737B publication Critical patent/CN117994737B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/24 - Aligning, centring, orientation detection or correction of the image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/176 - Urban or other man-made structures
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Alarm Systems (AREA)

Abstract

The invention relates to the technical field of intelligent management and provides a monitoring alarm system and method for an intelligent building site management and control platform. The system comprises an intelligent building site management and control center, an image sensor subsystem, a camera subsystem, a semantic segmentation subsystem, an evaluation and early warning subsystem, and a target tracking subsystem. The semantic segmentation subsystem is connected with the evaluation and early warning subsystem to form a construction site monitoring alarm system, and the intelligent construction site management and control center is respectively connected with the image sensor subsystem, the camera subsystem, the construction site monitoring alarm system, and the target tracking subsystem. Through these subsystems, the invention realizes building monitoring and personnel monitoring of the construction site in an intelligent manner, requires no manual operation, and improves the safety of the construction site area.

Description

Monitoring alarm system and method for intelligent building site management and control platform
Technical Field
The invention relates to the technical field of intelligent management, in particular to a monitoring alarm system and a monitoring alarm method for an intelligent building site management and control platform.
Background
In existing monitoring alarm systems for construction site management and control platforms, monitoring and alarming are carried out mainly by hand: information about the construction site area captured by camera equipment is analyzed by human operators, and an alarm is raised if the analysis indicates danger. However, subjective human factors affect this manual analysis, so the final results contain errors and are inconsistent across operators, which lowers the safety of the construction site area.
Disclosure of Invention
Aiming at the above technical problems in the prior art, the invention provides a monitoring alarm system and method for an intelligent building site management and control platform, which carry out building monitoring and personnel monitoring of a building site in an intelligent manner, require no manual operation, and improve the safety of the building site area.
The technical scheme for solving the above technical problems is as follows: a monitoring alarm system of an intelligent building site management and control platform comprises an intelligent building site management and control center, an image sensor subsystem, a camera subsystem, a semantic segmentation subsystem, an evaluation and early warning subsystem, and a target tracking subsystem; the semantic segmentation subsystem is connected with the evaluation and early warning subsystem to form a building site monitoring alarm system, and the intelligent building site management and control center is respectively connected with the image sensor subsystem, the camera subsystem, the building site monitoring alarm system, and the target tracking subsystem to store and manage their data;
The camera subsystem is used for: acquiring the horizontal distances of a target object in a preset construction area at the angles of a plurality of cameras, the shooting duration corresponding to each camera, and the extrinsic parameters of each camera; the horizontal distance is the distance between the target object and a camera, the horizontal distance corresponding to each camera is determined based on the images shot by that camera, and the shooting duration is the duration for which a camera continuously tracks and shoots the target object;
the image sensor subsystem is configured to: acquiring spatial image information of a preset building area at a plurality of moments;
the target tracking subsystem is configured to: perform a weighted calculation based on the horizontal distances, the shooting durations, and the extrinsic parameters to obtain the spatial position of the target object;
the semantic segmentation subsystem is used for: carrying out semantic segmentation on the space image information at each moment to obtain a plurality of semantic building structure information corresponding to each moment;
The evaluation and early warning subsystem is used for: carrying out safety early warning on the building evaluation result obtained after safety evaluation of the building structure at the same position, based on the plurality of semantic building structure information corresponding to a plurality of adjacent moments.
The invention also provides a monitoring alarm method of the intelligent building site management and control platform, which comprises the following steps:
acquiring the horizontal distances of a target object in a preset building area at the angles of a plurality of cameras, the shooting duration corresponding to each camera, the extrinsic parameters of each camera, and spatial image information of the preset building area at a plurality of moments; the horizontal distance is the distance between the target object and a camera, the horizontal distance corresponding to each camera is determined based on the images shot by that camera, and the shooting duration is the duration for which a camera continuously tracks and shoots the target object;
Weighting calculation is carried out based on the horizontal distances, the shooting time lengths and the external parameters, so that the spatial position of the target object is obtained;
Carrying out semantic segmentation on the space image information at each moment to obtain a plurality of semantic building structure information corresponding to each moment;
and carrying out safety early warning on a building evaluation result obtained after the building structure safety evaluation of the same position based on a plurality of semantic building structure information corresponding to a plurality of adjacent moments.
According to the monitoring alarm method of the intelligent building site management and control platform provided by the invention, the safety precaution is carried out on the building evaluation result obtained after the building structure safety evaluation of the same position based on a plurality of semantic building structure information corresponding to a plurality of adjacent moments, and the monitoring alarm method comprises the following steps:
And under the condition that the similarity calculated based on the plurality of semantic building structure information corresponding to the plurality of adjacent moments is larger than or equal to a similarity threshold, based on the plurality of semantic building structure information corresponding to the plurality of adjacent moments, carrying out safety evaluation of the discontinuous structure on the building structure at the same position to obtain a discontinuous structure evaluation result, and carrying out safety early warning according to the discontinuous structure evaluation result.
According to the monitoring alarm method of the intelligent building site management and control platform provided by the invention, based on a plurality of semantic building structure information corresponding to a plurality of adjacent moments, the safety evaluation of the discontinuous structure is carried out on the building structure at the same position, a discontinuous structure evaluation result is obtained, and safety early warning is carried out according to the discontinuous structure evaluation result, and the monitoring alarm method comprises the following steps:
extracting a plurality of discontinuous structural features of semantic building structure information corresponding to each moment, and determining feature attributes corresponding to the discontinuous structural features;
classifying risk levels based on the feature attributes corresponding to the discontinuous structural features; wherein the risk levels include a safety level, a low risk level, and a high risk level.
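As an illustration of the risk-level classification step, a minimal sketch follows; the attribute names (crack width, displacement) and the numeric thresholds are assumptions for the example and are not specified by the method:

```python
def classify_risk(feature_attrs):
    """Classify a discontinuous structural feature into a risk level.

    feature_attrs: dict of feature attributes. The keys used here
    ('crack_width_mm', 'displacement_mm') and the thresholds are
    illustrative assumptions only.
    Returns one of: 'safety', 'low risk', 'high risk'.
    """
    width = feature_attrs.get("crack_width_mm", 0.0)
    disp = feature_attrs.get("displacement_mm", 0.0)
    if width >= 5.0 or disp >= 20.0:   # severe discontinuity
        return "high risk"
    if width >= 1.0 or disp >= 5.0:    # noticeable but tolerable
        return "low risk"
    return "safety"                    # no significant discontinuity
```

A real deployment would derive the attributes and thresholds from the segmented building-structure information rather than hard-coding them.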
According to the monitoring alarm method of the intelligent building site management and control platform provided by the invention, weighting calculation is carried out based on each horizontal distance, each shooting time length and each external parameter to obtain the spatial position of the target object, and the method comprises the following steps:
Respectively determining the predicted positions corresponding to the cameras based on the position prediction step;
Weighting calculation is carried out based on each predicted position and each shooting time length, so that the spatial position of the target object is obtained;
the position prediction step includes:
And determining a predicted position corresponding to the camera based on the horizontal distance of the camera, the external parameter and the included angle, wherein the included angle is an included angle between the target object and the camera.
According to the monitoring alarm method of the intelligent building site management and control platform provided by the invention, when the number of cameras is two, the predicted positions corresponding to the cameras are calculated based on the following formulas:

P_1 = (x_1 + d_1·cos(θ_1 + φ_1), y_1 + d_1·sin(θ_1 + φ_1))

P_2 = (x_2 + d_2·cos(θ_2 + φ_2), y_2 + d_2·sin(θ_2 + φ_2))

wherein P_1 represents the predicted position corresponding to the first camera, (x_1, y_1) represents the coordinates of the mounting position of the first camera, d_1 represents the horizontal distance of the first camera, θ_1 represents the included angle of the first camera, and φ_1 represents the horizontal rotation angle of the first camera; P_2 represents the predicted position corresponding to the second camera, (x_2, y_2) represents the coordinates of the mounting position of the second camera, d_2 represents the horizontal distance of the second camera, θ_2 represents the included angle of the second camera, and φ_2 represents the horizontal rotation angle of the second camera.
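The single-camera predicted-position computation can be sketched as follows; the target's bearing is assumed to be the sum of the included angle and the horizontal rotation angle, which is one plausible reading of the formula, and all angles are in radians:

```python
import math

def predicted_position(mount_xy, horizontal_dist, included_angle, pan_angle):
    """Predict the target's planar position from one camera.

    mount_xy:        (x, y) mounting coordinates of the camera
    horizontal_dist: horizontal distance between camera and target
    included_angle:  angle between the target and the camera's optical axis
    pan_angle:       horizontal rotation (pan) angle of the camera

    Assumption: the absolute bearing from camera to target is
    included_angle + pan_angle.
    """
    x, y = mount_xy
    bearing = included_angle + pan_angle
    return (x + horizontal_dist * math.cos(bearing),
            y + horizontal_dist * math.sin(bearing))

# Example: a camera at the origin, panned 90 degrees, target dead ahead
# at 10 m, lands the prediction on the positive y-axis.
p = predicted_position((0.0, 0.0), 10.0, 0.0, math.pi / 2)
```

With two cameras, the same function is called once per camera with that camera's mounting coordinates, distance, and angles.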
According to the monitoring alarm method of the intelligent building site management and control platform provided by the invention, before the horizontal distance of a target object in a preset building site area under the angles of a plurality of cameras, the shooting time length corresponding to each camera and the external parameter of each camera are obtained, the monitoring alarm method further comprises the following steps:
comparing and matching the images acquired by the cameras, and marking target objects matched to the same target with the same identification information;
determining a linear distance between the target object and each camera based on the images acquired by each camera;
a horizontal distance of each camera is determined based on each of the straight line distances, respectively.
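The conversion from straight-line distance to horizontal distance is not spelled out in the steps above; a common geometric approach, assuming the camera's mounting height is known and the line-of-sight distance and vertical offset form a right triangle with the horizontal distance, is:

```python
import math

def horizontal_distance(line_of_sight_dist, camera_height, target_height=0.0):
    """Convert a camera-to-target straight-line distance into a
    horizontal distance.

    Assumption (not specified by the method): the vertical offset is the
    difference between the camera's mounting height and the target's
    height, so  horizontal = sqrt(line_of_sight**2 - offset**2).
    """
    dz = camera_height - target_height
    if line_of_sight_dist < abs(dz):
        raise ValueError("straight-line distance is smaller than the vertical offset")
    return math.sqrt(line_of_sight_dist ** 2 - dz ** 2)
```

For example, a camera mounted 3 m high that measures a 5 m straight-line distance to a ground-level target yields a 4 m horizontal distance.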
According to the monitoring alarm method of the intelligent building site management and control platform provided by the invention, the linear distance between the target object and each camera is determined based on the images acquired by each camera, and the monitoring alarm method comprises the following steps:
respectively inputting images acquired by each camera into a distance measurement prediction model to obtain a predicted distance correspondingly output by the distance measurement prediction model;
Correcting each predicted distance based on the distance estimation deviation value corresponding to each camera to obtain each linear distance;
Accordingly, the distance estimation bias value is determined based on the following steps:
Aligning the test images acquired by each camera with the test images acquired by the standard cameras respectively;
respectively inputting the aligned test images into the distance measurement prediction model to obtain a predicted test distance corresponding to the output of the distance measurement prediction model;
Inputting the test image acquired by the standard camera into the distance measurement prediction model to obtain a standard prediction distance output by the distance measurement prediction model;
and determining a distance estimation deviation value corresponding to each camera based on each predicted test distance and the standard predicted distance.
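The bias determination and correction steps above can be sketched as follows; a simple additive bias (the difference between each camera's predicted test distance and the standard camera's prediction) is assumed, since the method does not fix the exact operator:

```python
def estimation_bias(predicted_test_dists, standard_pred_dist):
    """Per-camera distance-estimation bias values: how far each camera's
    predicted test distance deviates from the standard camera's
    prediction on the aligned test images."""
    return [d - standard_pred_dist for d in predicted_test_dists]

def correct_distance(predicted_dist, bias):
    """Correct a camera's predicted distance by removing its bias."""
    return predicted_dist - bias
```

In use, the bias is computed once from the test images and then applied to every prediction the corresponding camera produces at runtime.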
The present invention also provides an electronic device including: a memory for storing a computer software program; and the processor is used for reading and executing the computer software program so as to realize the monitoring alarm method of the intelligent building site management and control platform.
The invention also provides a non-transitory computer readable storage medium, which is characterized in that the storage medium stores a computer software program, and the computer software program realizes the monitoring alarm method of any intelligent building site management and control platform when being executed by a processor.
The invention also provides a computer program product, which comprises a computer program, wherein the computer program realizes the monitoring alarm method of the intelligent building site management and control platform when being executed by a processor.
The beneficial effects of the invention are as follows: through the image sensor subsystem, the camera subsystem, the semantic segmentation subsystem, the evaluation and early warning subsystem, and the target tracking subsystem, building monitoring and personnel monitoring of the construction site are realized in an intelligent manner without manual operation, and the safety of the construction site area is improved.
Drawings
FIG. 1 is a schematic diagram of a monitoring alarm system of an intelligent building site management and control platform provided by the invention;
FIG. 2 is a schematic flow chart of a monitoring alarm method of the intelligent building site management and control platform provided by the invention;
FIG. 3 is a schematic view of the horizontal distance of a target object under a single camera provided by the present invention;
FIG. 4 is a schematic diagram of multi-view joint positioning provided by the present invention;
FIG. 5 is a schematic diagram of a distance metric prediction model according to the present invention;
Fig. 6 is a schematic diagram of an embodiment of an electronic device according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of an embodiment of a computer readable storage medium according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the description of the present invention, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present invention, the term "for example" is used to mean "serving as an example, instance, or illustration." Any embodiment described as "for example" in this disclosure is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the invention. In the following description, details are set forth for purposes of explanation. It will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and processes have not been described in detail so as not to obscure the description of the invention with unnecessary detail. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
The following describes a monitoring alarm system of an intelligent building site management and control platform according to the invention with reference to fig. 1, wherein the monitoring alarm system comprises an intelligent building site management and control center, an image sensor subsystem, a camera subsystem, a semantic segmentation subsystem, an evaluation and early warning subsystem and a target tracking subsystem.
In an alternative embodiment, the semantic segmentation subsystem is connected with the evaluation and early warning subsystem to form a building site monitoring and alarming system, and the intelligent building site management and control center is respectively connected with the image sensor subsystem, the camera subsystem, the building site monitoring and alarming system and the target tracking subsystem to store and manage data of the building site monitoring and alarming system.
In an alternative embodiment, the camera subsystem may be understood as a photographing system formed by a plurality of cameras. The camera subsystem may thus obtain the horizontal distances of a target object in a preset worksite area at the angles of the plurality of cameras, the shooting duration corresponding to each camera, and the extrinsic parameters of each camera, where the preset worksite area is selected by the user according to actual needs, the horizontal distance is the distance between the target object and a camera, the horizontal distance corresponding to each camera is determined based on the images shot by that camera, and the shooting duration is the duration for which a camera continuously tracks and shoots the target object. Meanwhile, the camera subsystem transmits the acquired horizontal distances, the shooting duration of each camera, and the extrinsic parameters of each camera to the intelligent building site management and control center station.
In an alternative embodiment, the image sensor subsystem may be understood as a system for spatial image information acquisition consisting of a plurality of differently oriented image sensors, whereby the image sensor subsystem may acquire spatial image information of a predetermined worksite area at a plurality of moments. Meanwhile, the image sensor subsystem transmits the acquired spatial image information of the preset building site area at a plurality of moments to the intelligent building site management and control center station.
In an alternative embodiment, the intelligent building site management and control center station transmits the horizontal distances of the target object at the angles of the cameras, the shooting duration corresponding to each camera, and the extrinsic parameters of each camera to the target tracking subsystem, so that the target tracking subsystem can perform a weighted calculation over the horizontal distances, shooting durations, and extrinsic parameters to obtain the spatial position of the target object. The spatial position of the target object is then transmitted back to the intelligent building site management and control center station.
In an alternative embodiment, the intelligent building site management and control center station transmits the spatial image information of each moment to the semantic segmentation subsystem, so that the semantic segmentation subsystem can perform semantic segmentation on the spatial image information of each moment to obtain a plurality of semantic building structure information corresponding to each moment. And simultaneously, the semantic segmentation subsystem transmits a plurality of semantic building structure information corresponding to each moment to the evaluation and early warning subsystem.
In an optional embodiment, the evaluation and early-warning subsystem receives a plurality of semantic building structure information corresponding to each moment sent by the semantic segmentation subsystem, so that the evaluation and early-warning subsystem can perform safety early warning on a building evaluation result obtained after the building structure safety evaluation of the same position according to the plurality of semantic building structure information corresponding to a plurality of adjacent moments.
According to the embodiment of the invention, through the image sensor subsystem, the camera subsystem, the semantic segmentation subsystem, the evaluation and early warning subsystem, and the target tracking subsystem, building monitoring and personnel monitoring of the construction site are realized in an intelligent manner without manual operation, and the safety of the construction site area is improved.
The following describes a monitoring alarm method of an intelligent building site management and control platform with reference to fig. 2:
Step 101, obtaining horizontal distances of a target object in a preset construction area under angles of a plurality of cameras, shooting time lengths corresponding to the cameras, external parameters of the cameras and spatial image information of the preset construction area at a plurality of moments.
Specifically, the horizontal distances of a target object in a preset construction area at the angles of a plurality of cameras, the shooting duration corresponding to each camera, the extrinsic parameters of each camera, and spatial image information of the preset construction area at a plurality of moments are obtained, where the preset construction area is selected by the user according to actual needs, the target object may include but is not limited to pedestrians and vehicles, the horizontal distance is the distance between the target object and a camera, the horizontal distance corresponding to each camera is determined based on the images shot by that camera, and the shooting duration is the duration for which a camera continuously tracks and shoots the target object.
Fig. 3 is a schematic view of the horizontal distance of the target object under a single camera. As shown in fig. 3, d_1 represents the horizontal distance between the target object and camera 1; as the target object moves from left to right, the horizontal distance gradually increases. It should be understood that the horizontal distance is the horizontal distance between the target object and the camera and is computationally determined from the image captured by the camera; the plurality of acquired horizontal distances are the horizontal distances of the target object at the angles of the plurality of cameras at the same moment.
Here, the extrinsic parameters of a camera are parameters describing the camera's position, orientation, and viewing angle, used to convert world coordinates into camera coordinates; they include, for example, the angle of the camera.
And 102, carrying out weighted calculation based on each horizontal distance, each shooting time length and each external parameter to obtain the spatial position of the target object.
The mounting position of each camera is fixed, and the position of the target object relative to a camera can be determined based on the horizontal distance and the extrinsic parameters, so the position of the target object can be determined. It can be understood that the longer a camera's shooting duration, the higher its reliability, and the more accurate its calculated predicted position can be considered; therefore, the longer a camera's shooting duration, the greater the weight given to its predicted position. In this way, on the one hand, when a single camera's view is limited, for example by occlusion, tracking and positioning can still be performed based on the other cameras, which improves the robustness of positioning the target object; on the other hand, the target object is positioned based on its positions in a plurality of cameras, which improves the accuracy of monitoring and positioning. In addition, weighting by each camera's shooting duration when calculating the spatial position of the target object further improves positioning accuracy.
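The weighted fusion described in this step can be sketched as follows, with each camera's weight proportional to its continuous shooting duration; the proportional weighting scheme is an assumption consistent with, but not mandated by, the text:

```python
def weighted_spatial_position(predicted_positions, shoot_durations):
    """Fuse per-camera predicted (x, y) positions into one spatial
    position, weighting each camera by its continuous-tracking duration
    (longer tracking -> higher reliability -> larger weight).

    predicted_positions: list of (x, y) tuples, one per camera
    shoot_durations:     list of durations, one per camera
    """
    total = sum(shoot_durations)
    if total <= 0:
        raise ValueError("at least one camera needs a positive shooting duration")
    weights = [t / total for t in shoot_durations]
    x = sum(w * p[0] for w, p in zip(weights, predicted_positions))
    y = sum(w * p[1] for w, p in zip(weights, predicted_positions))
    return (x, y)
```

For example, with cameras predicting (0, 0) and (10, 10) after tracking for 30 s and 10 s respectively, the fused position is pulled three quarters of the way toward the longer-tracking camera's estimate, i.e. (2.5, 2.5).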
And 103, carrying out semantic segmentation on the spatial image information at each moment to obtain a plurality of semantic building structure information corresponding to each moment.
For the spatial image information of the preset building site area acquired in real time at a plurality of moments, scene semantic instance segmentation is carried out based on machine-learning neural networks including but not limited to PointNet and KPConv, and relevant semantic building structure information including but not limited to walls, columns, beams, floors, stairs, and bearing walls is extracted. The same location has a corresponding plurality of semantic building structure information at each moment. Optionally, the plurality of semantic building structure information corresponding to each moment may be stored in respective databases.
And 104, carrying out safety early warning on a building evaluation result obtained after the building structure safety evaluation of the same position based on a plurality of semantic building structure information corresponding to a plurality of adjacent moments.
Specifically, similarity is calculated based on a plurality of semantic building structure information corresponding to a plurality of adjacent moments, and whether the building wall state is abnormal or not is determined based on a similarity evaluation result. Further, analysis of the discontinuous structure is performed based on a plurality of semantic building structure information corresponding to a plurality of adjacent moments, and whether the discontinuous structure state is abnormal is determined based on an analysis result.
Further, safety precaution is carried out on a building evaluation result obtained after the building structure safety evaluation of the same position based on a plurality of semantic building structure information corresponding to a plurality of adjacent moments, and the safety precaution method comprises the following steps:
and under the condition that the similarity calculated based on the plurality of semantic building structure information corresponding to the plurality of adjacent moments is larger than or equal to a similarity threshold, based on the plurality of semantic building structure information corresponding to the plurality of adjacent moments, carrying out safety evaluation of the discontinuous structure on the building structure at the same position to obtain a discontinuous structure evaluation result, and carrying out safety early warning according to the discontinuous structure evaluation result.
According to the invention, through the information of the target object in the preset construction site area under the angles of the cameras and the spatial image information of the preset construction site area at a plurality of moments, the construction monitoring and personnel monitoring are comprehensively carried out on the construction site, the manual operation is not needed, and the safety of the construction site area is improved.
Optionally, the similarity is obtained based on the following steps:
The similarity is obtained by calculating, through a digital registration algorithm, from the plurality of semantic building structure information at the current moment and the plurality of semantic building structure information of the same position at a preceding moment; the digital registration algorithm is used for realizing time-series similarity evaluation of the building structure at the same position.
Specifically, for the extracted semantic building structure information, digital construction result data with a similar historical time sequence and prior information of the corresponding historical semantic building structures are combined, and processing and analysis are carried out through a digital registration algorithm. The digital registration algorithm includes, but is not limited to, the iterative closest point (ICP) algorithm, the normal distributions transform (NDT) algorithm, feature-registration graph optimization algorithms and topological structure comparison methods; time-series similarity evaluation of the building structure at the same position is realized based on the digital registration algorithm.
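A heavily simplified sketch of the time-series similarity idea, using a nearest-neighbour residual instead of a full ICP/NDT registration (a real registration would also estimate and apply a rigid transform between the two clouds); the exponential mapping of residual to a similarity in (0, 1] is an assumption:

```python
import numpy as np

def registration_similarity(prev_pts, curr_pts, scale=1.0):
    """Crude stand-in for ICP/NDT-based scoring: for each current point,
    take the distance to the nearest previous point, then map the mean
    residual into a similarity in (0, 1]."""
    prev = np.asarray(prev_pts, dtype=float)
    curr = np.asarray(curr_pts, dtype=float)
    # pairwise distance matrix of shape (n_curr, n_prev)
    dists = np.linalg.norm(curr[:, None, :] - prev[None, :, :], axis=2)
    mean_residual = dists.min(axis=1).mean()
    return float(np.exp(-mean_residual / scale))
```

Identical clouds score 1.0; the score decays toward 0 as the structure physically changes between the two moments.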
Further, when the similarity value is smaller than the similarity threshold, the physical change degree of the building structure in the corresponding area is considered large, situations such as falling ledges or roof collapse may occur, and abnormal early warning of the safety state is carried out.
Further, when the similarity value is greater than or equal to the similarity threshold, safety evaluation of the discontinuous structure is carried out to obtain a discontinuous structure evaluation result, and safety early warning is carried out according to that result. Thus, after the building wall state is analysed, discontinuous structure analysis is further performed, realizing real-time monitoring and early warning of the safety state in the working space.
Based on the above embodiment, performing safety precaution on a building evaluation result obtained after the building structure safety evaluation of the same position based on a plurality of semantic building structure information corresponding to a plurality of adjacent moments includes:
Extracting discontinuous structural features of a plurality of semantic building structure information corresponding to each moment, and determining feature attributes corresponding to the discontinuous structural features;
Classifying risk grades based on feature attributes corresponding to the discontinuous structural features; the risk levels include a security level, a low risk level, and a high risk level.
Specifically, the semantic building structure information is continuously processed: extraction of discontinuous structure features including but not limited to cracks, joints and faults is completed according to texture, colour and spatial depth information, combining edge recognition algorithms, plane fitting algorithms, machine learning algorithms and physical models of discontinuous structures, and corresponding parameters including but not limited to normal vectors and curvatures are calculated from the physical models. Further, the extracted discontinuous structure features are analysed, segmented, classified and clustered, and the relevant feature attributes are measured, including but not limited to type, length, width, depth and harmfulness; the safety level, low risk level and high risk level are then assigned according to these feature attribute indexes. Thus, after the building wall state is analysed, discontinuous structure analysis is further performed, realizing real-time monitoring and early warning of the safety state in the working space.
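The risk-grade classification over measured feature attributes can be sketched as a small rule table; the attribute names (depth_mm, width_mm) and thresholds below are illustrative assumptions, not values from this disclosure:

```python
def classify_risk(feature):
    """Map a discontinuous-structure feature's measured attributes to one
    of the three grades named above: safe, low risk, high risk.
    Thresholds are placeholders; real limits come from engineering codes."""
    depth = feature.get("depth_mm", 0)
    width = feature.get("width_mm", 0)
    if depth >= 10 or width >= 5:
        return "high_risk"
    if depth >= 2 or width >= 1:
        return "low_risk"
    return "safe"
```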
Based on the above embodiments, the monitoring and alarming system of the intelligent building site management and control platform provided in this embodiment further includes a building radar, where the building radar is configured to detect the building area corresponding to a discontinuous structural feature determined to be at a high risk level. Therefore, after classifying the risk level based on the feature attributes corresponding to the discontinuous structural features, the method further includes:
and determining a high-risk building area corresponding to the discontinuous structural features with high risk level, and detecting the high-risk building area through the building radar to obtain a target detection result.
And carrying out building safety early warning under the condition that the target detection result indicates that the building safety hidden danger exists in the high-risk building area.
And continuously monitoring the discontinuous structural characteristics of the high-risk building area under the condition that the target detection result indicates that the high-risk building area has no building safety hidden trouble.
Specifically, for a high-risk building area corresponding to discontinuous structural features with potential building safety hazards, detection is performed through the building radar, and building analysis is performed by professionals to confirm the real risk coefficient of the building area. If a potential building safety hazard is confirmed, abnormal early warning of the building safety state is carried out; if it is confirmed that no building safety hazard exists, the identified discontinuous structural features are treated as low risk level, and dynamic change monitoring of the discontinuous structure is carried out. Meanwhile, the analysed low-risk discontinuous structure features are continuously stored in a discontinuous structure time-series database to support dynamic monitoring of the discontinuous structure. Therefore, this embodiment detects and confirms, through the building radar, the real risk coefficient of the high-risk building area corresponding to discontinuous structural features with potential building safety hazards, ensuring the reliability of monitoring.
Based on the above embodiment, after classifying the risk level based on the feature attribute corresponding to the discontinuous structural feature, the method further includes:
And determining a low-risk building area corresponding to the discontinuous structural features with the low risk level, and storing the discontinuous structural features of the low-risk building area into a discontinuous structural time sequence database so as to continuously monitor the discontinuous structural features of the low-risk building area.
Specifically, for discontinuous structural features of low risk level, continuous key monitoring and analysis are carried out; when a discontinuous structural feature undergoes risk upgrading and is judged to pose building safety hazards such as roof fall or collapse, detection confirmation by the building radar, abnormal early warning of the corresponding safety state and visual report output are carried out in time. Therefore, this embodiment performs continuous key monitoring and analysis of discontinuous structural features of low risk level, ensuring the reliability of monitoring.
In an alternative embodiment, the weighting calculation is performed based on each horizontal distance, each shooting duration and each external parameter to obtain the spatial position of the target object, including:
Respectively determining the predicted positions corresponding to the cameras based on the position prediction step;
weighting calculation is carried out based on each predicted position and each shooting time length, so that the spatial position of the target object is obtained;
The position prediction step includes:
and determining a predicted position corresponding to the camera based on the horizontal distance of the camera, the external parameter and the included angle, wherein the included angle is an included angle between the target object and the camera.
Specifically, according to the horizontal distance of a camera, the external parameter and the included angle between the target object and the camera, the predicted position of the target object under the visual angle of the camera can be determined; and carrying out weighted calculation on the predicted positions of the cameras according to shooting time length, and finally determining the spatial position of the target object.
Here, the predicted position is the position information of the target object calculated from the image shot by a single camera, while the spatial position is the finally determined position information of the target object; both are usually represented as coordinates.
According to the embodiment of the invention, the spatial position of the same target object under different camera angles is combined with the residence time of the target object and the distance from each camera to carry out weighted combined judgment, the three-dimensional spatial coordinates of the target object are calculated and corrected, and the jitter of the spatial position coordinates under the multi-view condition of the cross cameras is reduced.
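The duration-weighted fusion of per-camera predicted positions described above can be sketched as follows; the data layout (a list of position/duration pairs) is an assumption:

```python
def fuse_positions(predictions):
    """predictions: list of ((x, y), shooting_duration) pairs, one per
    camera. Each camera's weight is its share of the total shooting
    duration, matching the ratio rule used for the two-camera case."""
    total = sum(duration for _, duration in predictions)
    x = sum(px * duration for (px, _), duration in predictions) / total
    y = sum(py * duration for (_, py), duration in predictions) / total
    return (x, y)
```

A camera that has tracked the target for longer contributes proportionally more to the fused spatial position, which damps coordinate jitter when the target crosses between camera views.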
Further, when the number of cameras is two, the predicted position corresponding to each camera is calculated based on the following formula:
wherein P1 represents the predicted position corresponding to the first camera, (x1, y1) represents the coordinates of the mounting position of the first camera, d1 represents the horizontal distance of the first camera, α1 represents the angle of the first camera, and β1 represents the horizontal rotation angle of the first camera; P2 represents the predicted position corresponding to the second camera, (x2, y2) represents the coordinates of the mounting position of the second camera, d2 represents the horizontal distance of the second camera, α2 represents the angle of the second camera, and β2 represents the horizontal rotation angle of the second camera.
It will be appreciated that FIG. 4 is a schematic diagram of multi-view joint positioning according to the present invention. As shown in FIG. 4, the external parameters of camera 1 and camera 2 are as follows: (x1, y1) are the mounting position coordinates of camera 1 and (x2, y2) the mounting position coordinates of camera 2; β1 is the horizontal rotation angle of camera 1, namely the included angle between the centre line of camera 1 and the horizontal line, and β2 is the horizontal rotation angle of camera 2; φ1 is the vertical pitch angle of camera 1 and φ2 the vertical pitch angle of camera 2; h1 is the mounting height of camera 1 and h2 the mounting height of camera 2.
Δx1 represents the difference between the abscissa of the target object and that of camera 1; camera 1 is located to the left of the target object, and with the upper-left corner as the coordinate origin, the abscissa of the target object is the sum of the abscissa of camera 1 and Δx1. Δy1 represents the difference between the ordinate of the target object and that of camera 1; camera 1 is located below the target object, and with the upper-left corner as the coordinate origin, the ordinate of the target object is the ordinate of camera 1 minus Δy1.
Δx2 represents the difference between the abscissa of the target object and that of camera 2; camera 2 is located to the right of the target object, and with the upper-left corner as the coordinate origin, the abscissa of the target object is the abscissa of camera 2 minus Δx2. Δy2 represents the difference between the ordinate of the target object and that of camera 2; camera 2 is located below the target object, and with the upper-left corner as the coordinate origin, the ordinate of the target object is the ordinate of camera 2 minus Δy2.
Alternatively, the angle of the camera 1 is calculated based on the following formula:
wherein α1 represents the angle of camera 1, u1 represents the pixel-level abscissa of the target object in camera 1, W1 represents the value of the longest-side pixel resolution of camera 1, and θ1 represents the horizontal view angle of camera 1.
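Assuming a pinhole-style linear mapping from the target's pixel abscissa to an angle off the camera centre line (the centre pixel maps to 0, the edge pixels to plus or minus half the horizontal view angle), the camera angle could be computed as below; this is a sketch consistent with the listed quantities, not the disclosure's exact formula:

```python
def camera_angle_deg(u, width_px, hfov_deg):
    """Map the target's pixel abscissa u, relative to the longest-side
    resolution width_px, to an angle off the camera centre line spanning
    the horizontal view angle hfov_deg. Linear mapping is an assumption."""
    return (u / width_px - 0.5) * hfov_deg
```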
In an alternative embodiment, the weights of the predicted positions corresponding to the cameras are determined based on the following steps:
Determining the weight of a predicted position corresponding to the camera based on the ratio of the shooting time length and the total shooting time length of the camera; wherein, the total shooting duration is the sum of the shooting durations.
Taking the above two cameras as an example, the weight of the predicted position of camera 1 is w1 = t1/(t1 + t2) and the weight of the predicted position of camera 2 is w2 = t2/(t1 + t2), where t1 and t2 are respectively the shooting durations of camera 1 and camera 2;
substituting into the calculation, the final spatial position of the target object is obtained as P = w1·P1 + w2·P2.
According to the embodiment of the invention, the three-dimensional space coordinate calculation and correction of the target object are performed by combining the space position of the camera, the gesture orientation, the residence time and the relative position of the target object and the camera, so that the jitter of the space position coordinate under the condition of multiple viewing angles of the cross camera is reduced.
To further improve the accuracy of target positioning, in an alternative embodiment, the spatial position of the target object is calculated by the following formula:
wherein P1(t) represents the predicted position of camera 1 at moment t, P2(t) represents the predicted position of camera 2 at moment t, Pc(t) and Pc(t−1) represent the predicted positions, at moment t and moment t−1 respectively, of the camera closest to the target object, and P(t) represents the final spatial position of the target object determined at moment t.
Here, moment t and moment t−1 may be separated by a predetermined time interval, such as 1 ms, or by a predetermined number of image frames, such as 2 frames, which is not limited here.
It will be appreciated that the above embodiment combines the predicted positions at moment t and moment t−1. In practical application, the spatial position of the target object can be determined by combining the predicted positions at two moments, or by combining the predicted positions at more moments; for example, with two cameras, the predicted positions at moment t are combined with the predicted positions at the n preceding moments, and the spatial position of the final target object is calculated based on the following formula:
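One simple way to combine predicted positions over several moments is a moving average of the fused positions; the disclosure's exact multi-moment weighting is not reproduced here, so this is only an illustrative smoother:

```python
def smooth_position(history, window=3):
    """history: chronological list of fused (x, y) positions, one per
    moment. Averages the last `window` moments to damp coordinate
    jitter; equal weighting is an assumption."""
    recent = history[-window:]
    n = len(recent)
    return (sum(p[0] for p in recent) / n, sum(p[1] for p in recent) / n)
```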
In an optional embodiment, before acquiring the horizontal distances of the target objects in the preset worksite area under the angles of the plurality of cameras, the shooting time lengths corresponding to the cameras, and the external parameters of the cameras, the method further includes:
comparing and matching the images acquired by the cameras, and identifying the target objects matched to the same target by the same identification information;
determining a linear distance between a target object and each camera based on the images acquired by each camera;
the horizontal distance of each camera is determined based on each linear distance, respectively.
Here, the target objects matched to the same target are identified by the same ID, and the ID of the same target object should be kept consistent within a certain preset period of time, for subsequent target tracking and spatial positioning.
As shown in FIG. 3, the linear distance between the target object and camera 1 is D1; this straight-line distance is determined by detection on the image acquired by the camera.
Optionally, the horizontal distance is determined based on the following formula: d = D·cos φ, where d represents the horizontal distance, D represents the straight-line distance, and φ represents the camera's vertical pitch angle.
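Assuming the standard projection of the line-of-sight distance onto the ground plane, the horizontal distance follows directly from the straight-line distance and the vertical pitch angle:

```python
import math

def horizontal_distance(line_dist, pitch_deg):
    """Project the straight-line camera-to-target distance onto the
    horizontal plane using the camera's vertical pitch angle (degrees)."""
    return line_dist * math.cos(math.radians(pitch_deg))
```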
Further, determining a linear distance between the target object and each camera based on the images acquired by each camera includes:
Respectively inputting the images acquired by each camera into a distance measurement prediction model to obtain a predicted distance correspondingly output by the distance measurement prediction model;
And correcting each predicted distance based on the distance estimation deviation value corresponding to each camera to obtain each linear distance.
Here, the distance metric prediction model is a model for estimating a straight line distance between the target object and the camera.
Optionally, the distance metric prediction model is a model based on fusion of a depth map and a target size to estimate the linear distance.
Specifically, FIG. 5 is a schematic structural diagram of the distance measurement prediction model provided by the present invention. As shown in FIG. 5, the input is an image processed by alignment mapping or equal-proportion scaling to a uniform size W×H×C; if the aspect ratio does not meet the input requirement, pixels of value 255 are padded along the short side until it does. The outputs include: a predicted depth image of the same size as the input image; a category-region segmented image; the pixel-level centre-point coordinates of the N target objects; the pixel-level length, width and height of each object; and the horizontal deflection orientation angle of each object. To facilitate annotation of training data, the depth-image branch is supervised by prediction error only on regions where a target object exists; the category-region segmented image is output only in the training stage and is not output in the inference/test stage.
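The short-side padding with pixel value 255 described above can be sketched with NumPy; the rounding and concatenation details are assumptions:

```python
import numpy as np

def pad_to_aspect(img, target_w, target_h, fill=255):
    """Append constant-valued rows or columns (value 255, as stated) on
    the short side of an H x W x C image until it matches the target
    aspect ratio; scaling to the final input size would follow."""
    h, w = img.shape[:2]
    if w * target_h >= h * target_w:              # too wide: pad height
        new_h = int(round(w * target_h / target_w))
        pad = np.full((new_h - h, w, img.shape[2]), fill, dtype=img.dtype)
        return np.concatenate([img, pad], axis=0)
    new_w = int(round(h * target_w / target_h))   # too tall: pad width
    pad = np.full((h, new_w - w, img.shape[2]), fill, dtype=img.dtype)
    return np.concatenate([img, pad], axis=1)
```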
Optionally, the distance metric prediction model is trained based on the following steps:
collecting and annotating images output by cameras of the same type across multiple scenes and multiple time periods, where the annotation information includes, but is not limited to, the object category c, the object size (l, w, h), the object centre-point coordinates (u, v), the object horizontal deflection orientation angle, and the actual straight-line distance between the object and the camera;
wherein c ranges over the number of identifiable categories; l, w and h are respectively the pixel-level length, width and height of the target object; u and v are respectively the pixel-level abscissa and ordinate of the target object; and the target object distance value is normalized, with the normalization constant set to 15;
And training and optimizing the distance measurement prediction model by using the acquired and marked data, so that the optimized model is adapted to the image output by the parameter model camera, and the positioning accuracy of the target object is improved.
Performing straight-line distance prediction on each target object appearing in the image with the trained and optimized distance measurement prediction model, and taking the predicted value at the centre point of the target object as that object's distance estimate, yields the predicted distance output by the model. The predicted distance is multiplied by the corresponding distance coefficient to obtain a distance estimation value. Because images shot by different types of cameras deviate to some degree, if the current camera is not the standard camera, the distance estimation value is corrected based on the distance estimation deviation value of the current camera relative to the standard camera to finally obtain the linear distance; if the current camera is the standard camera, the linear distance corrected with the corresponding deviation value equals the distance estimation value, that is, the distance estimation value is the final linear distance and no correction is needed.
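The correction chain (model prediction, distance coefficient, per-camera deviation relative to the standard camera) can be sketched as below; treating the deviation as an additive offset is an assumption, since the text only says the estimate is "corrected":

```python
def corrected_line_distance(pred, dist_coeff, bias, is_standard):
    """Model prediction -> multiply by the distance coefficient to get the
    distance estimation value -> for a non-standard camera, subtract its
    deviation value relative to the standard camera."""
    estimate = pred * dist_coeff
    return estimate if is_standard else estimate - bias
```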
The embodiment of the invention carries out the distance correction on the cameras with different styles, so that the invention is not limited to the camera with a specific style, and is applicable to the cameras with various styles; based on the distance estimation deviation value, the predicted distance is corrected, the cameras of different models can use the same distance measurement prediction model to perform distance estimation, model training is not required to be performed on the cameras of different models respectively, and model training cost is reduced.
In an alternative embodiment, the distance estimation bias value is determined based on the following steps:
step a, aligning test images acquired by each camera with test images acquired by a standard camera respectively;
B, respectively inputting the aligned test images into a distance measurement prediction model to obtain a predicted test distance corresponding to the output of the distance measurement prediction model;
step c, inputting the test image acquired by the standard camera into a distance measurement prediction model to obtain a standard prediction distance output by the distance measurement prediction model;
And d, determining a distance estimation deviation value corresponding to each camera based on each predicted test distance and the standard predicted distance.
In step a, the test images acquired by cameras with different intrinsic parameters are mapped to align with the test images of the standard camera. It should be noted that the execution order of step b and step c is not limited: step b may be executed before step c, step c before step b, or the two may be executed simultaneously.
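Steps a to d leave the aggregation of the two prediction sets unspecified; one plausible choice, shown here as an assumption, is the mean difference between each camera's predicted test distances and the standard camera's predictions on the aligned images:

```python
def estimate_bias(test_preds, standard_preds):
    """Per-camera distance estimation deviation value, computed as the
    mean difference between that camera's predicted test distances and
    the standard camera's predictions (assumed aggregation for step d)."""
    diffs = [p - s for p, s in zip(test_preds, standard_preds)]
    return sum(diffs) / len(diffs)
```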
In an alternative embodiment, the respective distance measurement prediction models are directly trained separately for the camera images of different internal references, and each distance measurement prediction model is matched with a camera of a corresponding model for use in a real-time use process.
Referring to fig. 6, fig. 6 is a schematic diagram of an embodiment of an electronic device according to an embodiment of the invention. As shown in fig. 6, an embodiment of the present invention provides an electronic device 600, including a memory 610, a processor 620, and a computer program 611 stored in the memory 610 and executable on the processor 620, wherein the processor 620 executes the computer program 611 to implement the following steps:
acquiring horizontal distances of target objects of a preset building area under the angles of a plurality of cameras, shooting time lengths corresponding to the cameras, external parameters of the cameras and spatial image information of the preset building area at a plurality of moments; the horizontal distance is the distance between the target object and the cameras, the horizontal distance corresponding to each camera is determined based on the images shot by each camera, and the shooting duration is the duration of continuous tracking shooting of the target object by the cameras;
Weighting calculation is carried out based on the horizontal distances, the shooting time lengths and the external parameters, so that the spatial position of the target object is obtained;
Carrying out semantic segmentation on the space image information at each moment to obtain a plurality of semantic building structure information corresponding to each moment;
and carrying out safety early warning on a building evaluation result obtained after the building structure safety evaluation of the same position based on a plurality of semantic building structure information corresponding to a plurality of adjacent moments.
Referring to fig. 7, fig. 7 is a schematic diagram of an embodiment of a computer readable storage medium according to an embodiment of the invention. As shown in fig. 7, the present embodiment provides a computer-readable storage medium 700 having stored thereon a computer program 711, which computer program 711, when executed by a processor, performs the steps of:
acquiring horizontal distances of target objects of a preset building area under the angles of a plurality of cameras, shooting time lengths corresponding to the cameras, external parameters of the cameras and spatial image information of the preset building area at a plurality of moments; the horizontal distance is the distance between the target object and the cameras, the horizontal distance corresponding to each camera is determined based on the images shot by each camera, and the shooting duration is the duration of continuous tracking shooting of the target object by the cameras;
Weighting calculation is carried out based on the horizontal distances, the shooting time lengths and the external parameters, so that the spatial position of the target object is obtained;
Carrying out semantic segmentation on the space image information at each moment to obtain a plurality of semantic building structure information corresponding to each moment;
and carrying out safety early warning on a building evaluation result obtained after the building structure safety evaluation of the same position based on a plurality of semantic building structure information corresponding to a plurality of adjacent moments.
In the foregoing embodiments, the descriptions of the embodiments are focused on, and for those portions of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (8)

1. The monitoring alarm system of the intelligent building site management and control platform is characterized by comprising an intelligent building site management and control center, an image sensor subsystem, a camera subsystem, a semantic segmentation subsystem, an evaluation early warning subsystem and a target tracking subsystem; the intelligent building site management and control center platform is respectively connected with the image sensor subsystem, the camera subsystem, the semantic segmentation subsystem, the evaluation early warning subsystem and the target tracking subsystem, so as to store and manage the data of the intelligent building site management and control center platform;
The camera subsystem is used for: acquiring horizontal distances of a target object in a preset construction area under the angles of a plurality of cameras, shooting time lengths corresponding to the cameras and external parameter of each camera; the horizontal distance is the distance between the target object and the cameras, the horizontal distance corresponding to each camera is determined based on the images shot by each camera, and the shooting duration is the duration of continuous tracking shooting of the target object by the cameras;
the image sensor subsystem is configured to: acquiring spatial image information of a preset building area at a plurality of moments;
the target tracking subsystem is configured to: weighting calculation is carried out based on the horizontal distances, the shooting time lengths and the external parameters, so that the spatial position of the target object is obtained;
the semantic segmentation subsystem is used for: carrying out semantic segmentation on the space image information at each moment to obtain a plurality of semantic building structure information corresponding to each moment;
The evaluation early warning subsystem is used for: carrying out safety early warning on a building evaluation result obtained after the building structure safety evaluation of the same position based on a plurality of semantic building structure information corresponding to a plurality of adjacent moments;
The step of obtaining the spatial position of the target object by performing weighted calculation based on each horizontal distance, each shooting time length and each external parameter includes:
Respectively determining the predicted positions corresponding to the cameras based on the position prediction step;
Weighting calculation is carried out based on each predicted position and each shooting time length, so that the spatial position of the target object is obtained;
the position prediction step includes:
Determining a predicted position corresponding to the camera based on the horizontal distance of the camera, the external parameter and the included angle, wherein the included angle is an included angle between the target object and the camera;
when the number of cameras is two, the predicted positions corresponding to the two cameras are calculated based on the following formulas:
P₁ = (x₁ + d₁·cos(β₁ + α₁), y₁ + d₁·sin(β₁ + α₁)); P₂ = (x₂ + d₂·cos(β₂ + α₂), y₂ + d₂·sin(β₂ + α₂));
wherein P₁ represents the predicted position corresponding to the first camera, (x₁, y₁) represents the coordinates of the mounting position of the first camera, d₁ represents the horizontal distance of the first camera, α₁ represents the included angle of the first camera, and β₁ represents the horizontal rotation angle of the first camera; P₂ represents the predicted position corresponding to the second camera, (x₂, y₂) represents the coordinates of the mounting position of the second camera, d₂ represents the horizontal distance of the second camera, α₂ represents the included angle of the second camera, and β₂ represents the horizontal rotation angle of the second camera.
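The position prediction and duration-weighted fusion described in claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names, the use of radians, and the assumption that a longer continuous-tracking duration linearly increases a camera's weight are all assumptions, since the claim does not fix the weighting scheme.

```python
import math

def predict_position(mount, d, alpha, beta):
    """Project the horizontal distance d from the camera mounting point
    along the sum of the horizontal rotation angle beta and the included
    angle alpha between target and camera (angles in radians)."""
    x0, y0 = mount
    return (x0 + d * math.cos(beta + alpha),
            y0 + d * math.sin(beta + alpha))

def fuse_positions(predictions, durations):
    """Weight each camera's predicted position by its continuous
    tracking duration (assumed: longer tracking -> higher confidence)."""
    total = sum(durations)
    x = sum(p[0] * t for p, t in zip(predictions, durations)) / total
    y = sum(p[1] * t for p, t in zip(predictions, durations)) / total
    return (x, y)

# Two cameras observing the same target (illustrative numbers)
p1 = predict_position((0.0, 0.0), 10.0, 0.0, math.pi / 2)   # approx. (0, 10)
p2 = predict_position((20.0, 0.0), 10.0, 0.0, math.pi / 2)  # approx. (20, 10)
pos = fuse_positions([p1, p2], [30.0, 10.0])  # first camera tracked 3x longer
```

The fused x-coordinate lands closer to the first camera's prediction because that camera has tracked the target three times as long.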
2. A monitoring alarm method for an intelligent building site management and control platform, characterized by comprising the following steps:
acquiring the horizontal distances of a target object in a preset construction area at the angles of a plurality of cameras, the shooting durations corresponding to the cameras, the extrinsic parameters of each camera, and spatial image information of the preset construction area at a plurality of moments; wherein each horizontal distance is the distance between the target object and the corresponding camera and is determined based on the images shot by that camera, and each shooting duration is the duration for which the camera continuously tracks and shoots the target object;
performing a weighted calculation based on the horizontal distances, the shooting durations and the extrinsic parameters to obtain the spatial position of the target object;
performing semantic segmentation on the spatial image information at each moment to obtain a plurality of pieces of semantic building structure information corresponding to each moment;
performing building structure safety evaluation on the same position based on the plural pieces of semantic building structure information corresponding to a plurality of adjacent moments, and issuing a safety early warning according to the resulting building evaluation result;
wherein performing a weighted calculation based on each horizontal distance, each shooting duration and each extrinsic parameter to obtain the spatial position of the target object comprises:
determining, through a position prediction step, the predicted position corresponding to each camera respectively;
performing a weighted calculation based on each predicted position and each shooting duration to obtain the spatial position of the target object;
the position prediction step comprises:
determining the predicted position corresponding to a camera based on the horizontal distance of the camera, its extrinsic parameters and an included angle, wherein the included angle is the angle between the target object and the camera;
when the number of cameras is two, the predicted positions corresponding to the two cameras are calculated based on the following formulas:
P₁ = (x₁ + d₁·cos(β₁ + α₁), y₁ + d₁·sin(β₁ + α₁)); P₂ = (x₂ + d₂·cos(β₂ + α₂), y₂ + d₂·sin(β₂ + α₂));
wherein P₁ represents the predicted position corresponding to the first camera, (x₁, y₁) represents the coordinates of the mounting position of the first camera, d₁ represents the horizontal distance of the first camera, α₁ represents the included angle of the first camera, and β₁ represents the horizontal rotation angle of the first camera; P₂ represents the predicted position corresponding to the second camera, (x₂, y₂) represents the coordinates of the mounting position of the second camera, d₂ represents the horizontal distance of the second camera, α₂ represents the included angle of the second camera, and β₂ represents the horizontal rotation angle of the second camera.
3. The monitoring alarm method of the intelligent building site management and control platform according to claim 2, wherein performing building structure safety evaluation on the same position based on the plural pieces of semantic building structure information corresponding to a plurality of adjacent moments and issuing a safety early warning comprises:
in a case where the similarity calculated based on the plural pieces of semantic building structure information corresponding to the plurality of adjacent moments is greater than or equal to a similarity threshold, performing a discontinuous-structure safety evaluation on the building structure at the same position based on those pieces of semantic building structure information to obtain a discontinuous-structure evaluation result, and issuing a safety early warning according to the discontinuous-structure evaluation result.
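The similarity gate in claim 3 can be sketched as below. The cosine-similarity measure, the feature-vector representation of the semantic building structure information, and the threshold value 0.9 are all assumptions for illustration; the claim only requires that some similarity value meet or exceed a threshold before the discontinuous-structure evaluation runs.

```python
import math

SIMILARITY_THRESHOLD = 0.9  # assumed value; the claim does not fix it

def cosine_similarity(a, b):
    """Cosine similarity of two feature vectors (assumed representation
    of semantic building structure information)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def should_evaluate(frames):
    """Run the discontinuous-structure safety evaluation only when the
    semantic information at every pair of adjacent moments stays
    consistent (similarity >= threshold); large changes at the same
    position would indicate an unreliable comparison instead."""
    return all(
        cosine_similarity(frames[i], frames[i + 1]) >= SIMILARITY_THRESHOLD
        for i in range(len(frames) - 1)
    )
```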
4. The monitoring alarm method of the intelligent building site management and control platform according to claim 3, wherein performing the discontinuous-structure safety evaluation on the building structure at the same position based on the plural pieces of semantic building structure information corresponding to the plurality of adjacent moments to obtain the discontinuous-structure evaluation result, and issuing the safety early warning according to the discontinuous-structure evaluation result, comprises:
extracting a plurality of discontinuous structural features from the semantic building structure information corresponding to each moment, and determining the feature attribute corresponding to each discontinuous structural feature;
classifying risk grades based on the feature attributes corresponding to the discontinuous structural features; wherein the risk grades include a safe grade, a low-risk grade and a high-risk grade.
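A minimal sketch of the three-grade classification named in claim 4. The choice of crack width as the feature attribute and the numeric cut-off values are purely illustrative assumptions; the claim specifies only the three grades, not which attributes or thresholds drive them.

```python
from enum import Enum

class RiskGrade(Enum):
    SAFE = "safe grade"
    LOW_RISK = "low-risk grade"
    HIGH_RISK = "high-risk grade"

def classify(crack_width_mm: float) -> RiskGrade:
    # Hypothetical thresholds on a single discontinuous-structure
    # feature attribute (crack width, in mm); a real system would
    # combine several attributes per the claim.
    if crack_width_mm < 0.2:
        return RiskGrade.SAFE
    if crack_width_mm < 1.0:
        return RiskGrade.LOW_RISK
    return RiskGrade.HIGH_RISK
```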
5. The monitoring alarm method of the intelligent building site management and control platform according to claim 2, characterized in that, before acquiring the horizontal distances of the target object in the preset construction area at the angles of the plurality of cameras, the shooting durations corresponding to the cameras, and the extrinsic parameters of each camera, the method further comprises:
comparing and matching the images acquired by the cameras, and marking the target objects matched to the same target with the same identification information;
determining the straight-line distance between the target object and each camera based on the images acquired by that camera;
determining the horizontal distance of each camera based on the corresponding straight-line distance.
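One common way to derive a horizontal distance from a line-of-sight distance is to drop the vertical component using the camera's mounting height, assuming the target stands on the ground plane. This height-based projection is an assumption on my part; claim 5 states only that the horizontal distance is determined from the straight-line distance.

```python
import math

def horizontal_distance(line_distance: float, camera_height: float) -> float:
    """Ground-plane distance from the camera's footprint to the target,
    given the camera-to-target line-of-sight distance and the camera's
    mounting height above the target's plane (same length units)."""
    if line_distance < camera_height:
        raise ValueError("line-of-sight distance shorter than mounting height")
    # Pythagorean projection: horizontal^2 + height^2 = line^2
    return math.sqrt(line_distance ** 2 - camera_height ** 2)
```

For example, a camera mounted 3 m up that measures a 5 m line-of-sight distance yields a 4 m horizontal distance.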
6. The monitoring alarm method of the intelligent building site management and control platform according to claim 5, wherein determining the straight-line distance between the target object and each camera based on the images acquired by each camera comprises:
inputting the images acquired by each camera into a distance measurement prediction model respectively to obtain the predicted distance output by the distance measurement prediction model;
correcting each predicted distance based on the distance estimation deviation value corresponding to the camera to obtain each straight-line distance;
wherein the distance estimation deviation value is determined based on the following steps:
aligning the test images acquired by each camera with the test images acquired by a standard camera respectively;
inputting the aligned test images into the distance measurement prediction model respectively to obtain the predicted test distance output by the distance measurement prediction model;
inputting the test images acquired by the standard camera into the distance measurement prediction model to obtain the standard predicted distance output by the distance measurement prediction model;
determining the distance estimation deviation value corresponding to each camera based on each predicted test distance and the standard predicted distance.
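The calibration loop of claim 6 reduces to a per-camera offset. The subtractive form below (deviation = camera's test prediction minus the standard camera's prediction, then subtracted from live predictions) is an assumed concrete choice; the claim states only that the deviation value is determined from the two predictions and used to correct live distances.

```python
def distance_bias(predicted_test: float, standard_predicted: float) -> float:
    """Per-camera deviation of the distance measurement prediction model,
    estimated on aligned test images against the standard camera."""
    return predicted_test - standard_predicted

def correct_distance(predicted: float, bias: float) -> float:
    """Remove the camera's estimation bias from a live model prediction
    to obtain the straight-line distance."""
    return predicted - bias

# Calibration: this camera's model overestimates by 0.5 m on test images
bias = distance_bias(10.5, 10.0)
# Live correction of a raw 8.5 m prediction
corrected = correct_distance(8.5, bias)
```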
7. An electronic device, comprising:
a memory for storing a computer software program;
and a processor for reading and executing the computer software program, so as to implement the monitoring alarm method of the intelligent building site management and control platform according to any one of claims 2 to 6.
8. A non-transitory computer-readable storage medium, characterized in that the storage medium stores a computer software program which, when executed by a processor, implements the monitoring alarm method of the intelligent building site management and control platform according to any one of claims 2 to 6.
CN202410406401.0A 2024-04-07 2024-04-07 Monitoring alarm system and method for intelligent building site management and control platform Active CN117994737B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410406401.0A CN117994737B (en) 2024-04-07 2024-04-07 Monitoring alarm system and method for intelligent building site management and control platform


Publications (2)

Publication Number Publication Date
CN117994737A true CN117994737A (en) 2024-05-07
CN117994737B CN117994737B (en) 2024-06-14

Family

ID=90901462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410406401.0A Active CN117994737B (en) 2024-04-07 2024-04-07 Monitoring alarm system and method for intelligent building site management and control platform

Country Status (1)

Country Link
CN (1) CN117994737B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203518954U (en) * 2013-08-12 2014-04-02 中国长江三峡集团公司 IoT (Internet of things) based real-time monitoring system for total stability of high dam
CN111563669A (en) * 2020-04-27 2020-08-21 郭琼 Wisdom building site steel piles up high early warning system based on block chain
CN111563433A (en) * 2020-04-27 2020-08-21 郭琼 Wisdom building site is monitored system of overflowing water based on block chain
CN115294377A (en) * 2022-07-31 2022-11-04 北京物资学院 System and method for identifying road cracks
CN116203559A (en) * 2022-12-22 2023-06-02 北京科技大学 Intelligent recognition and early warning system and method for underground rock and soil disease body
CN116761049A (en) * 2023-08-10 2023-09-15 箭牌智能科技(张家港)有限公司 Household intelligent security monitoring method and system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FENG Xiating; ZHOU Hui; LI Shaojun; SHENG Qian; JIANG Quan: "Intelligent analysis, evaluation and spatio-temporal prediction system for rock engineering safety under complex conditions", Chinese Journal of Rock Mechanics and Engineering, no. 09, 15 September 2008 (2008-09-15), pages 1741-1755 *
CAO Fu; LIU Hua; SUN Tao; YI Qing: "Visualized intelligent waterway construction site based on multi-system collaborative management", China Water Transport, no. 007, 31 December 2020 (2020-12-31), pages 66-69 *

Also Published As

Publication number Publication date
CN117994737B (en) 2024-06-14

Similar Documents

Publication Publication Date Title
JP5180733B2 (en) Moving object tracking device
CN108229475B (en) Vehicle tracking method, system, computer device and readable storage medium
CN110674680B (en) Living body identification method, living body identification device and storage medium
JP5027758B2 (en) Image monitoring device
KR101469099B1 (en) Auto-Camera Calibration Method Based on Human Object Tracking
CN110796032A (en) Video fence based on human body posture assessment and early warning method
CN112541938A (en) Pedestrian speed measuring method, system, medium and computing device
JP2010002976A (en) Image monitoring device
CN112053397A (en) Image processing method, image processing device, electronic equipment and storage medium
CN117115784A (en) Vehicle detection method and device for target data fusion
CN113920254B (en) Monocular RGB (Red Green blue) -based indoor three-dimensional reconstruction method and system thereof
CN112802112B (en) Visual positioning method, device, server and storage medium
CN107767366B (en) A kind of transmission line of electricity approximating method and device
KR102457425B1 (en) Quantity measurement method of Construction materials using drone
CN117994737B (en) Monitoring alarm system and method for intelligent building site management and control platform
US11748876B2 (en) Joint surface safety evaluation apparatus
CN113989335A (en) Method for automatically positioning workers in factory building
CN109919999B (en) Target position detection method and device
JP3810755B2 (en) POSITION JUDGING DEVICE, MOVING ROUTE CALCULATION DEVICE, POSITION JUDGING METHOD AND PROGRAM
CN111586299B (en) Image processing method and related equipment
CN116343125B (en) Container bottom lock head detection method based on computer vision
CN115908758B (en) AR technology-based operation method and AR technology-based operation system for panoramic display of intelligent agricultural greenhouse
JP3820995B2 (en) Obstacle detection device and obstacle detection method
US20220366570A1 (en) Object tracking device and object tracking method
Jakovčević et al. A stereo approach to wildfire smoke detection: the improvement of the existing methods by adding a new dimension

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant