CN114863489B - Virtual reality-based movable intelligent auxiliary inspection method and system for construction site - Google Patents


Info

Publication number
CN114863489B
CN114863489B (application CN202210780559.5A)
Authority
CN
China
Prior art keywords: inspection, module, target, images, violation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210780559.5A
Other languages
Chinese (zh)
Other versions
CN114863489A (en)
Inventor
李家乐
王雪菲
刘涛
袁成龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei University of Technology
Original Assignee
Hebei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei University of Technology
Priority to CN202210780559.5A
Publication of CN114863489A
Application granted
Publication of CN114863489B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V 10/235 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on user input or interaction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person

Abstract

The invention relates to a virtual reality-based movable intelligent auxiliary inspection method and system for construction sites. The auxiliary inspection system used by the method comprises a binocular camera, VR glasses, a Raspberry Pi, a GPS positioning module, and an alarm module, with the binocular camera, Raspberry Pi, GPS positioning module, and alarm module all integrated on the VR glasses. The virtual reality-based method realizes wearable, movable auxiliary inspection of a smart construction site; an intelligent algorithm handles data recording and statistics; VR inspection greatly improves inspection efficiency; and the movable inspection mode improves inspection flexibility.

Description

Virtual reality-based movable intelligent auxiliary inspection method and system for construction site
Technical Field
The invention relates to a virtual reality-based movable intelligent auxiliary inspection method and system for a construction site.
Background
With the rapid development of national infrastructure and housing construction, construction-site safety has drawn wide public attention. Whether workers' clothing and labor-protection articles are used correctly, whether production behaviors meet requirements, whether the construction environment is safe, and whether the quality and quantity of production materials meet requirements all bear directly on whether construction and production activities can proceed safely. To ensure that production and construction tasks on a construction site proceed stably and safely, and to safeguard workers' lives and the engineering quality of construction projects, dedicated safety personnel are required to inspect personnel behavior, the construction environment, construction materials, and so on during the construction process.
At present, construction-site inspection mainly relies on random spot checks by safety personnel and manually filled-in site management records. This approach has several disadvantages. Inspection results are strongly influenced by the inspector's subjective judgment, and missed or false inspections can occur when an inspector is unfamiliar with the inspection range and content. Without a corresponding supervision mechanism, the fairness of inspection is easily disturbed by external factors. Recording results as manually written text descriptions lacks intuitive presentation and comprehensive coverage. In addition, large construction projects demand many safety inspectors; inspection is time-consuming and labor-intensive, the data recording and statistics workload is huge, and inspection efficiency is low. Some construction sites install monitoring equipment to monitor the construction process, but fixed-point monitoring facilities have a limited range and lack flexibility, and they cannot assist in judging irregular behaviors during construction.
Existing virtual reality inspection methods acquire images through VR glasses and capture video of the scene in front of the eyes, then either project a pre-built three-dimensional model into the glasses for display, or use a camera to scan a pre-made two-dimensional code containing the equipment name, position information, and the like. They do not use artificial intelligence algorithms to identify and track targets, have no auxiliary judgment function, and rely on manual judgment to classify targets and scenes.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a virtual reality-based movable intelligent auxiliary inspection method and system for construction sites. Wearable, movable auxiliary inspection of a smart construction site is realized with a virtual reality-based method; an intelligent algorithm handles data recording and statistics; VR inspection greatly improves inspection efficiency; and the movable inspection mode improves inspection flexibility.
The technical solution adopted by the invention to solve the above technical problems is as follows:
the utility model provides a movable wisdom building site assists patrolling and examining method based on virtual reality uses binocular camera, VR glasses, raspberry group, GPS orientation module, alarm module, and binocular camera, raspberry group, GPS orientation module, alarm module all integrate on VR glasses, and this method includes following content:
A, constructing an inspection data set:
the method comprises the following steps of constructing a patrol data set containing images with different non-standard and dangerous behaviors, wherein the patrol data set respectively collects images in the normal day and images at night under the light source searchlighting, and the contents of the patrol data set comprise the following three categories: constructors, construction environment and construction quality are marked with violation risk degrees; the image data which needs to be collected by the constructor are facial images of the constructor, images of a worker wearing a safety helmet correctly and images of a worker not wearing the safety helmet correctly, images of a worker wearing a work garment correctly and images of a worker not wearing the work garment correctly, images of a safety rope used and images of a safety rope not used during high-altitude operation, and images of smoking and non-smoking; the data needing to be collected in the construction environment are images with and without open fire, images with and without fire fighting equipment in a fire fighting facility area, and warning signs, fences and images without warning signs and fences in a dangerous construction area; the construction quality comprises the steps of collecting cracks, pits and falling images of the wall surface and labeling the images of various construction materials with material types.
B, building an inspection model:
The inspection model comprises an improved YOLOv5 model and an adaptive DeepSORT algorithm. The improved YOLOv5 model is used for target detection and has 23 layers in total, divided into two parts: layers 0 to 7 form the Backbone and layers 8 to 22 form the Head. In the Backbone, layer 0 is a Focus slicing layer; layers 1 to 5 and layer 7 are built from ShuffleNetV2_Block; and a channel attention mechanism is inserted at layer 6 to form an ELU-CA Block. The ELU-CA Block performs average pooling in the horizontal and vertical directions, then applies a transform operation to encode spatial information, and finally fuses the spatial information by weighting on the channels;
The adaptive DeepSORT algorithm is used for target tracking and statistics. The max age parameter of the DeepSORT network is set as a floating parameter, and the dynamic selection rule for different targets is: the ratio of the width w of the occluding object's frame to the normal walking speed v of an adult male, multiplied by the camera frame rate FPS. The calculation formula is:
max age = (w / v) × FPS
The normal walking speed v of an adult male is a constant; during multi-target tracking, each target has its own max age parameter.
The floating parameter is calculated by estimating the occlusion time from the normal adult-male walking speed and the width of the occluding target frame, and then converting that time into a max age value using the lens frame rate.
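The floating max age rule above can be sketched in a few lines (a minimal illustration; the function name, the default walking speed of 1.4 m/s, and the choice of units for the occluder width are assumptions, not specified in this form by the patent):

```python
def adaptive_max_age(occluder_width, fps, walking_speed=1.4):
    """Per-target DeepSORT max age: occlusion time (occluder width divided
    by a constant adult walking speed) converted to a frame count."""
    if occluder_width <= 0 or fps <= 0:
        raise ValueError("occluder width and fps must be positive")
    occlusion_seconds = occluder_width / walking_speed  # w / v
    return max(1, round(occlusion_seconds * fps))       # (w / v) * FPS
```

Each tracked target gets its own value, so a track hidden behind a wide obstacle is kept alive longer before its ID is dropped.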
C, model training
The inspection model is trained with the inspection data set constructed in step A to obtain a trained inspection model, which can identify and track violation events and construction materials and compile statistics on violation events and construction-material types;
D, judging whether alcohol has been consumed
Movement trajectories of different people, both sober and after drinking, are collected to form a drinking data set. Each trajectory is represented by three features: time t, the X coordinate in the image coordinate system, and the Y coordinate in the image coordinate system. An LSTM network is trained on the drinking data set, and the trained network judges whether a worker has been drinking. When the trained inspection model detects a result belonging to the construction-personnel category, the worker is tracked continuously for 3 s with the adaptive DeepSORT algorithm, the movement trajectory of the target-frame center point is collected, and the trajectory's time t, X coordinate, and Y coordinate features are input into the LSTM network to determine whether the worker has been drinking.
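The trajectory-feature step can be sketched as follows (a hypothetical helper: the function name and the (x1, y1, x2, y2) box format are assumptions, and the LSTM classifier itself is omitted):

```python
def trajectory_features(boxes, fps=30.0):
    """Turn a per-frame sequence of target boxes (x1, y1, x2, y2) into
    (t, cx, cy) triples: time plus the box-centre image coordinates,
    i.e. the three features fed to the LSTM."""
    feats = []
    for i, (x1, y1, x2, y2) in enumerate(boxes):
        t = i / fps              # time since tracking started
        cx = (x1 + x2) / 2.0     # X coordinate of the box centre
        cy = (y1 + y2) / 2.0     # Y coordinate of the box centre
        feats.append((t, cx, cy))
    return feats
```

At 30 FPS, the 3 s of tracking described above yields a 90-step sequence of such triples.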
E, positioning the inspection result
The trained inspection model outputs the inspection result, and the target frame of the result is positioned to obtain the absolute position of the target. The target's absolute position T_ap is determined by:
T_ap = I_ap + T_rp
where T_ap is the target's absolute position; I_ap is the inspector's absolute position, determined by the GPS positioning module; and T_rp is the position of the target relative to the inspector, determined by binocular camera positioning.
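In code, the positioning formula is a plain vector sum (a sketch; it assumes both positions are already expressed in the same local metric frame, e.g. east/north/up, which in practice requires rotating the camera-frame offset by the inspector's heading):

```python
def target_absolute_position(inspector_pos, relative_pos):
    """T_ap = I_ap + T_rp: add the GPS-derived inspector position and the
    binocular-camera-derived relative offset, component by component."""
    return tuple(a + b for a, b in zip(inspector_pos, relative_pos))
```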
F, result display
Inspection information is displayed in the VR glasses, and inspection results are automatically saved in the storage module. Each result comprises the violation image, an information description (violation category), position information, time information, and inspector information. According to the danger degree of the violation, inspection results are transmitted to the cloud in order of priority, with dangerous and urgent situations transmitted first; the danger degree is determined partly by the annotations in the inspection data set and partly by the inspector, who makes the manual judgment through the user buttons. Meanwhile, the alarm module emits a prompt tone to the inspector according to the danger degree of the violation. The storage module periodically deletes its stored content, while the cloud stores inspection results long-term; the generated inspection report is sent to the responsible person, who is notified to rectify the violations;
The situation description comprises the violation category; if the violation was committed by a worker, the description also states whether the worker had been drinking;
When the next inspection reaches a position where violation content was recorded last time, the new inspection result is automatically saved and matched against the previously recorded violation, and the two detection results are compared to determine whether rectification is complete. If it is, the rectification effect is recorded and saved; if not, the responsible person is notified to complete the rectification.
The user buttons comprise an operation-object selection button, a confirmation button, a power key, and a light-switch key. When selecting an operation object, the selection button starts from the target whose frame center point is nearest the upper-left corner of the image, cycles through the targets from left to right and top to bottom, and after reaching the target whose frame center point is nearest the lower-right corner, wraps back to the upper-left target. The confirmation button confirms or cancels a selection: one click confirms the current operation, two consecutive clicks cancel it. After the target to be operated on is confirmed, the selection button chooses the subsequent operation to perform on it, such as saving the image, with the confirmation button used to confirm or cancel; the confirmation button also pops up a corresponding menu for subsequent selection. The power key performs power-on and power-off; the light-switch key switches the visible light source on and off.
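The left-to-right, top-to-bottom cycling order can be sketched as a sort over box centres (hypothetical names; the row tolerance used to group boxes into rows is an assumption, since the patent only states the scanning order):

```python
def selection_order(boxes, row_tolerance=40):
    """Order target boxes as the selection button cycles through them:
    roughly top-to-bottom rows, left-to-right within each row."""
    def key(box):
        x1, y1, x2, y2 = box
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        # quantise cy so boxes at similar heights count as one row
        return (round(cy / row_tolerance), cx)
    return sorted(boxes, key=key)
```

Cycling then walks this list and wraps from the last (bottom-right) target back to the first (top-left).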
The adaptive DeepSORT algorithm automatically saves tracked violation images as follows: among all frames of the same tracked ID in the left lens of the binocular camera, the frame whose target-frame center is closest to the center point of the left-eye image is saved, so that the saved image presents the violation content nearest the center.
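The frame-selection rule can be sketched as follows (hypothetical names and data layout):

```python
def most_centred_frame(track_frames, image_size):
    """Among all frames of one tracked ID, pick the one whose target-box
    centre is closest to the centre of the left-eye image.

    track_frames: list of (frame_id, (x1, y1, x2, y2)) for the same ID.
    image_size: (width, height) of the left-eye image.
    """
    icx, icy = image_size[0] / 2.0, image_size[1] / 2.0
    def dist_sq(item):
        _, (x1, y1, x2, y2) = item
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        return (cx - icx) ** 2 + (cy - icy) ** 2
    return min(track_frames, key=dist_sq)
```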
The invention also discloses a virtual reality-based movable intelligent auxiliary inspection system for construction sites, comprising VR glasses, a storage module, a GPS positioning module, a data transmission module, an alarm module, a visible light source, a construction-material counting module, and a rectification supervision module. The VR glasses serve as the assembly frame of the head-mounted auxiliary inspection equipment; the centers of the two lenses of the binocular camera are placed in the same plane as the eyes, the camera parameters are calibrated, and the two lens centers always lie on the same straight line. The VR glasses, storage module, GPS positioning module, data transmission module, alarm module, and visible light source are connected to a Raspberry Pi, with the storage module, GPS positioning module, data transmission module, battery, and alarm module each connected to the corresponding Raspberry Pi interface;
The user buttons are the medium of interaction between the user and the inspection system; through them the inspector selects his or her own information, selects and saves images, switches the light source, and powers the device on and off. The lens capture and the inspection model's identification and tracking results are displayed in the VR glasses. The image used for positioning is the left-lens image: through coordinate transformation, the two-dimensional scene in front of the eyes forms a three-dimensional coordinate system with the left lens as the origin, every point in the image has a unique three-dimensional coordinate, and obtaining the three-dimensional coordinate of the target-frame center point in the left image determines the spatial position of the target relative to the inspector;
The Raspberry Pi is loaded with the inspection model, a violation-image selection program, a storage program, the construction-material counting module, the rectification supervision module, a worker-behavior anomaly analysis module, and an information input module.
The inspection model identifies and tracks violation events and construction materials and compiles statistics on violation events and construction-material types; the violation-image selection program selects and confirms the operation object; and the storage program saves manually selected violation images.
The alarm module sounds alarms at different volumes according to the violation risk degree, so that the inspector can judge the situation accurately and handle dangerous situations in time.
The storage module stores detection results and transmits them to the cloud according to the violation risk degree; the cloud stores them long-term, while the storage module keeps them short-term and deletes them periodically. The stored information comprises the violation image, situation description, position information, time information, and inspector information. The inspector can also manually save images according to the actual situation on site and select the violation danger degree; a saved image presents the violation content nearest the center point of the left-eye image. The situation description is the violation category annotated during data labeling.
The GPS positioning module acquires the inspector's absolute position and, combined with the binocular camera, the absolute position of a target.
The construction-material counting module performs preliminary statistics on construction materials of the same category and stores the material type and quantity; the inspector can manually count and save results for materials that need counting according to the actual situation.
The rectification supervision module saves the inspection result again at the position of the last violation record during a second inspection, compares the two results, and analyzes whether the previous violation has been rectified; if it has, the rectification effect is recorded and saved, and if not, the responsible person is notified to complete the rectification.
The worker-behavior anomaly analysis module judges whether a worker has been drinking.
The information input module records inspector information and sets the inspection range.
Before the system is used, the information of all inspectors is entered into the auxiliary inspection system. An inspector first selects his or her personal information through the user buttons and wears the equipment on the head; the camera then captures the construction site in real time, and the collected site images and the inspection model's results are displayed in the VR glasses.
Compared with the prior art, the invention has the beneficial effects that:
according to the method, a network structure in a Backbone of YOLOv5 is replaced by a multilayer ShuffleNetV2_ Block, the parameter scale is reduced, the running speed is increased, meanwhile, an attention mechanism is added, the perception degree of a network to a specific target is increased, the identification precision and accuracy of a model are increased, the intelligent routing inspection of constructors, construction environments and construction quality in a construction site is realized by combining with a self-adaptive DeepsORT algorithm, and target tracking, irregular behavior detection and simple statistics of the target are realized. Hardware equipment such as VR glasses, raspberry group combine into the wearable through artificial intelligence algorithm to the supplementary system of patrolling and examining of wisdom building site of discerning and judging the image that obtains simultaneously, realize the portable intelligent assistance to patrolling and examining of the violation and unqualified phenomenon to the job site, guarantee objectivity, comprehensive, fairness and the flexibility of patrolling and examining, reduce patrolling and examining personnel's work load and complexity. And carry out information recording and early warning to the result of patrolling and examining, make the more comprehensive and abundant of information storage of patrolling and examining, can in time carry out the early warning to dangerous incident simultaneously, guarantee project construction and production safety.
The invention inspects the construction site based on multi-target tracking and can intelligently identify and assist decisions. First, application scenarios suitable for an auxiliary inspection system are determined from current inspectors' inspection content; construction personnel, construction scenes, and construction quality are identified separately; and an inspection data set suitable for all construction sites and construction times is constructed. Second, the hardware is assembled: all hardware is integrated into a wearable auxiliary inspection system with the VR glasses as the frame, and the integrated hardware system can complete inspection tasks independently. Finally, the improved YOLOv5 model and the adaptive DeepSORT algorithm achieve a lightweight network, accelerate multi-target tracking, realize violation judgment and preliminary construction-material statistics on site, and improve the speed and precision of identification and tracking, while positioning with the GPS and binocular modules realizes remote multi-target positioning.
Drawings
FIG. 1 is a schematic diagram of the network structure of the improved YOLOv5 model;
FIG. 2 is a schematic flow chart of the inspection method;
FIG. 3(a) is a schematic front view of the auxiliary inspection apparatus;
FIG. 3(b) is a schematic diagram of the relative positions of the components in the integrated module;
FIG. 3(c) is a schematic side view of the auxiliary inspection apparatus;
FIG. 4 is a schematic diagram of the structure of the ELU-CA Block.
In the figures: visible light source 1, right-eye camera 2, left-eye camera 3, battery 4, Raspberry Pi 5, GPS positioning module 6, alarm module 7, data transmission module 8, storage module 9, VR glasses 10, user buttons 11, display screen 12, integrated module 13, camera 14.
Detailed Description
The present invention is described in detail below with reference to examples and the accompanying drawings. The specific embodiments merely illustrate and explain the invention in further detail and do not limit its scope.
Throughout this text, a target refers to the people and objects to be recognized; a target frame is the rectangular box with which the inspection model surrounds a recognized, tracked target; and during positioning, the center point of the target frame represents the target's position by default.
The invention discloses a virtual reality-based movable intelligent auxiliary inspection method for construction sites. Video data of the site is acquired with hardware such as the camera 14, the VR glasses 10, and the Raspberry Pi 5; non-standard and dangerous behaviors are identified and tracked by the inspection model; and early warning, positioning, recording, storage, and transmission are performed according to the model's identification results. The method comprises the following:
The VR glasses serve as the equipment frame of the head-mounted auxiliary inspection device. The camera comprises a right-eye camera 2 and a left-eye camera 3; the lens centers of the two cameras are placed in the same plane as the eyes, and camera parameter calibration is completed. The camera position can be adjusted to suit different users: each user manually adjusts the plane height and inclination according to his or her wearing habits so that the scene in the glasses fits naked-eye viewing as closely as possible, but the lens centers of the two cameras always lie on the same straight line. The VR glasses 10, storage module 9, GPS positioning module 6, data transmission module 8, alarm module 7, and visible light source 1 are connected to the Raspberry Pi. The storage module, GPS positioning module, data transmission module, battery 4, and alarm module are each connected to the corresponding interface of the Raspberry Pi 5, and together with the Raspberry Pi they form an integrated module 13, which is placed in front of the display screen 12 of the VR glasses and behind the camera, independently packaged and attached to the display screen. The Raspberry Pi is equivalent to a microcomputer with a built-in CPU and provides several USB interfaces, an SD card interface, a video interface, a network interface, and a power interface, allowing effective connection with the modules.
A user button 11 is arranged on the side face of the VR glasses and is the medium for interaction between the user and the auxiliary inspection system. Through the user button, an inspector can select his or her personal information, select and save images, turn the light source on and off, and power the device off, which facilitates human-machine interaction and highlights the user-friendly, assistive character of the system. The visible light source is switched on at night to provide illumination. The centers of the two cameras lie in the same plane and form a binocular vision module; the parameters of the two cameras are calibrated for later binocular positioning. The images acquired by the lenses and the identification and tracking results of the inspection model are displayed in the VR glasses. The image operated on for positioning is the left-lens image. The principle of binocular positioning is that, through coordinate transformation, the two-dimensional scene in front of the eyes is mapped into a three-dimensional coordinate system whose origin is the left lens; every point in the image has a unique three-dimensional coordinate in this space. To position the center point of a target frame, it suffices to obtain the three-dimensional coordinate of that center point in the left image, which gives the spatial position of the target relative to the inspector.
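The binocular positioning principle above can be sketched as standard stereo triangulation under a rectified pinhole model, with the left lens center as origin. This is a minimal illustration, not the patent's calibration procedure; the focal length, baseline and principal point values are illustrative assumptions.

```python
# Stereo triangulation sketch: recover the 3-D coordinate of a point in the
# left-camera frame (left lens center = origin), assuming a rectified pinhole
# stereo pair. All numeric values below are illustrative only.

def triangulate(u_left, v_left, u_right, f, cx, cy, baseline):
    """Return (X, Y, Z) of a point seen at pixel (u_left, v_left) in the left
    image and at column u_right in the right image (same row after
    rectification). f in pixels, baseline in meters."""
    disparity = u_left - u_right          # pixels; must be > 0
    if disparity <= 0:
        raise ValueError("point at infinity or mismatched correspondence")
    Z = f * baseline / disparity          # depth along the optical axis
    X = (u_left - cx) * Z / f             # lateral offset from the left lens
    Y = (v_left - cy) * Z / f             # vertical offset from the left lens
    return X, Y, Z

# Example: f = 700 px, 6 cm baseline, principal point (320, 240)
X, Y, Z = triangulate(380, 240, 345, f=700.0, cx=320.0, cy=240.0, baseline=0.06)
```

With the target-frame center as (u_left, v_left), the returned (X, Y, Z) is exactly the relative position T_rp used later for positioning.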
(1) Setting the inspection range, collecting data, annotating images, and constructing an inspection dataset
Image data of constructors: facial image data of construction project participants; images of constructors correctly wearing and not wearing safety helmets; images of constructors correctly wearing and not wearing work clothes; images of safety ropes correctly used and not used during work at height; and images of constructors smoking and not smoking.
Construction environment image data: environment images with and without open fire, fire-fighting area images with and without fire-fighting equipment, and dangerous construction area images with and without warning signs and fences.
Construction quality image data: images of cracks, pits and spalling on wall surfaces; images of various construction materials, labeled with their material categories.
All image data comprise construction site images acquired in the daytime and at night under the light source, with the corresponding categories labeled. An image annotation tool is used to label each image with its category and violation risk degree, an inspection dataset is constructed, and a YOLOv5 model is trained on this dataset. The trained inspection model, a violation image selection program (for selecting and confirming an operation object) and a manual storage program for manually selected violation images are loaded onto the Raspberry Pi. The inspection model is the recognition and tracking model; the auxiliary inspection system comprises the inspection model, post-processing programs and hardware equipment. The inspection model can identify and track violation events and construction materials and compile statistics on the categories of violation events and construction materials.
(2) Building the inspection model
Because the auxiliary inspection system is worn by the inspector during inspection, the head and eyes move constantly. The reaction time of an ordinary person facing an emergency is about 0.3 seconds, so to ensure that the inspector reacts quickly to a sudden dangerous situation, the system must raise an early warning within a very short time. The processing speed must therefore be increased as far as possible while maintaining the recognition and tracking accuracy of the system. The invention combines an improved YOLOv5 model with an adaptive DeepSORT algorithm to build the inspection model, realizing detection and tracking of violations and dangerous behaviors on the construction site. Each tracked target is assigned a unique ID.
The inspection model comprises the improved YOLOv5 model and the adaptive DeepSORT algorithm. The YOLOv5 model performs target detection. The improved YOLOv5 model has 23 layers in total and is divided into two parts: layers 0-7 are called the Backbone and layers 8-22 are called the Head.
A slicing operation is carried out in the Focus layer at layer 0 of the Backbone. Layers 1-5 and layer 7 are built from ShuffleNetV2_Block, which balances speed and accuracy: its parameter scale is relatively small, its computation cost is low, its real-time inference latency is small, its precision is good and its training time is relatively short, so the model stays lightweight. A channel attention mechanism (ELU-CA Block) is inserted at layer 6. Layer 6 contains more high-level semantic information, so larger weights can be assigned to the channels that play an important role in detection; only one channel attention mechanism is added in the whole network, improving recognition speed and accuracy while keeping model complexity as low as possible. The ELU-CA Block performs average pooling in the horizontal and vertical directions, then applies a transform operation to encode the spatial information, and finally fuses the spatial information by weighting the channels.
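The pooling-transform-reweight sequence of the ELU-CA Block can be sketched as a coordinate-attention-style forward pass. This is a minimal NumPy sketch under stated assumptions: random matrices stand in for the learned 1x1 convolutions, a single feature map (no batch dimension) is used, and the exact layer sizes are illustrative, not the patent's trained weights.

```python
import numpy as np

def elu(x, alpha=1.0):
    # ELU(x) = x for x > 0, alpha * (exp(x) - 1) for x <= 0
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elu_ca_block(x, w1, wh, ww):
    """Coordinate-attention-style block with ELU.
    x: (C, H, W) feature map; w1: (Cm, C) shared 1x1 conv;
    wh, ww: (C, Cm) per-direction 1x1 convs."""
    C, H, W = x.shape
    pool_h = x.mean(axis=2)                          # (C, H): average over width
    pool_w = x.mean(axis=1)                          # (C, W): average over height
    z = np.concatenate([pool_h, pool_w], axis=1)     # (C, H + W)
    y = elu(w1 @ z)                                  # shared transform + ELU
    y_h, y_w = y[:, :H], y[:, H:]                    # split back per direction
    a_h = sigmoid(wh @ y_h)                          # (C, H) attention weights
    a_w = sigmoid(ww @ y_w)                          # (C, W) attention weights
    # Fuse by reweighting the input channels along both directions
    return x * a_h[:, :, None] * a_w[:, None, :]

rng = np.random.default_rng(0)
C, H, W, Cm = 8, 4, 6, 2
x = rng.standard_normal((C, H, W))
out = elu_ca_block(x,
                   rng.standard_normal((Cm, C)) * 0.1,
                   rng.standard_normal((C, Cm)) * 0.1,
                   rng.standard_normal((C, Cm)) * 0.1)
```

The output has the same shape as the input, so the block can be inserted at layer 6 without changing the surrounding network's tensor shapes.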
To improve model performance while keeping the parameter scale in check, the introduced ELU-CA Block adds an ELU activation function on top of the CA attention mechanism. The mathematical expression of the ELU activation function is:
ELU(x) = x, for x > 0; ELU(x) = α(e^x − 1), for x ≤ 0, where α > 0 is a hyperparameter.
When the mean of a network activation function's output is not 0, the weights are updated only in the positive or only in the negative direction during back-propagation, which has a binding effect and slows convergence. The main benefit of the ELU activation function is that the mean of its output is close to 0, which accelerates model convergence and reduces training time. At the same time, a CA attention mechanism using the ELU activation function does not suffer from the dead-ReLU problem, i.e. the situation in which some neurons can never be activated and their parameters are therefore never updated. The ELU-CA Block thus allows target features to be learned fully and improves the comprehensiveness of the attention mechanism's feature attention.
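The near-zero-mean property claimed above can be checked numerically. The sketch below compares the mean output of ELU and ReLU on a symmetric zero-mean input range; the sampling interval is an illustrative choice, not from the disclosure.

```python
import math

def elu(x, alpha=1.0):
    # ELU(x) = x for x > 0, alpha * (e^x - 1) for x <= 0
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

def relu(x):
    return max(0.0, x)

# On a symmetric, zero-mean input range, the mean ELU output sits much
# closer to 0 than the mean ReLU output, and ELU has nonzero gradient
# everywhere (no dead neurons).
xs = [i / 100.0 for i in range(-300, 301)]   # uniform samples on [-3, 3]
mean_elu = sum(elu(x) for x in xs) / len(xs)
mean_relu = sum(relu(x) for x in xs) / len(xs)
```

Here mean_elu is roughly half of mean_relu, which is the mechanism behind the faster convergence described above.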
The Head is composed of convolutional layers, upsampling layers, connection layers, C3 layers and a detection layer. Layers 8, 12, 16 and 19 are convolutional layers; layers 9 and 13 are upsampling layers; the four connection layers 10, 14, 17 and 20 are connected to layers 4, 2, 12 and 8 respectively; and the detection layer finally outputs the detection result.
The adaptive DeepSORT algorithm performs target tracking. It can keep tracking a target that is occluded for a long time and effectively reduces the frequency of ID switches. The max_age parameter is set as a floating parameter, i.e. a DeepSORT algorithm with a dynamic max_age parameter: max_age is the number of frames to keep tracking forward when a long-occluded target reappears, in order to judge whether it is a new target or the same target as before the occlusion. The dynamic selection rule for the max_age parameter of each target is: the ratio of the width w of the occluder's target frame to the normal walking speed v of an adult male, multiplied by the frame rate FPS of the camera. The calculation formula is:
max_age = (w / v) × FPS
The normal walking speed v of an adult male is a constant. The invention improves the max_age parameter so that each target has its own max_age during multi-target tracking, which minimizes the tracking time, increases the inspection speed and reduces delay.
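The dynamic max_age rule above can be sketched as a one-line computation. The patent's formula divides a frame width w by a walking speed v, so this sketch assumes w has already been converted to meters (e.g. via binocular positioning); the value v = 1.4 m/s and the rounding-up choice are illustrative assumptions.

```python
import math

def dynamic_max_age(w, fps, v=1.4):
    """max_age = (w / v) * FPS, rounded up to whole frames.
    w:   occluder target-frame width (assumed already in meters);
    v:   normal adult-male walking speed, a constant (1.4 m/s assumed);
    fps: camera frame rate."""
    return max(1, math.ceil(w / v * fps))

# An occluder 2.0 m wide at 30 FPS keeps the track alive for ~43 frames,
# long enough for a pedestrian to walk past it.
frames = dynamic_max_age(2.0, fps=30)
```

In a DeepSORT implementation this per-track value would replace the single global max_age normally passed to the tracker, so wide occluders buy a track more survival frames than narrow ones.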
(3) Training the inspection model
After the network structure of the inspection model is determined, it is trained with the constructed dataset to obtain the trained inspection model, which, after its performance has been fully tested, is loaded onto the Raspberry Pi.
(4) Application of the method
Before the system is used, the information of all inspectors is entered into the auxiliary inspection system; each inspector selects his or her own information through the user buttons (inspectors enter personal information the first time they use the system), which makes the system convenient for different inspectors. The equipment is worn on the head, the cameras collect the conditions of the construction site in real time, the inspection model makes auxiliary judgments and the results are presented in the VR glasses. A judgment result consists of the category of the identified and tracked target, i.e. the current image, the target frame identified by the algorithm and the target category information. The alarm module alarms at different volumes according to the severity of the violation event, so that the inspector can judge the situation accurately and handle dangerous situations in time.
Information storage: all detection results are automatically saved in the storage module and are simultaneously transmitted to the cloud according to the danger degree of the violation content, where they are stored long-term; the information in the storage module is stored short-term and deleted regularly. The stored information comprises: violation images, situation description, position information, time information and inspector information. The inspector can also save data manually according to the actual situation on site. The automatically saved violation image is, among all frames of the same ID tracked by the DeepSORT algorithm in the left lens, the frame in which the center of the target frame is closest to the center of the image. The saved image thus presents the violation content at the position closest to the center, which avoids, to the greatest extent, errors caused by image distortion and by the correction errors of binocular positioning. Situation description: the content of the situation description is the violation category labeled during data annotation.
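The frame-selection rule for the automatically saved violation image can be sketched directly: among all frames of one tracked ID, keep the frame whose target-frame center is nearest the left-image center. The function name and box format are illustrative.

```python
import math

def best_violation_frame(track_frames, img_w, img_h):
    """Among all frames of one tracked ID, return the index of the frame
    whose target-frame center is closest to the image center.
    track_frames: list of (frame_index, (x1, y1, x2, y2)) bounding boxes."""
    cx_img, cy_img = img_w / 2.0, img_h / 2.0

    def center_distance(entry):
        _, (x1, y1, x2, y2) = entry
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        return math.hypot(cx - cx_img, cy - cy_img)

    return min(track_frames, key=center_distance)[0]

frames = [(10, (0, 0, 100, 100)),       # box center (50, 50), far from center
          (11, (280, 200, 360, 280)),   # box center (320, 240) == image center
          (12, (500, 300, 640, 480))]   # box center near the corner
best = best_violation_frame(frames, 640, 480)
```

Choosing the most-centered frame also keeps the saved evidence in the region where lens distortion is smallest, matching the rationale given above.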
Position information: the position information is produced by joint positioning of the GPS module and the binocular module. The GPS module acquires the absolute position of the inspector; the binocular module assigns three-dimensional coordinates, with the left lens center as origin, to all points in the selected left- and right-eye images, so the position of the target-frame center relative to the inspector can be determined (the coordinate of the target frame's center point in the left-eye image, expressed in the three-dimensional coordinate system, is the position of the target relative to the inspector). Combining the absolute position of the inspector with the relative position of the target center (i.e. combining the GPS and the binocular module) positions the identified target, so the inspector can determine the unique position of a target merely by finding it in the field of view, without approaching it. Time information and inspector information are stored according to the actual situation. The target absolute position T_ap is determined by the mathematical expression:
T_ap = I_ap + T_rp
wherein T_ap is the target absolute position; I_ap is the absolute position of the inspector, determined by the GPS positioning module; and T_rp is the position of the target relative to the inspector, determined by the binocular module.
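The addition T_ap = I_ap + T_rp can be sketched as a vector sum in a local metric frame. The patent adds the two positions directly; this sketch additionally assumes a known inspector heading to rotate the camera-frame offset into the local east-north frame, and that the GPS fix has been projected to metric (east, north) coordinates. Both assumptions go beyond the formula as stated.

```python
import math

def target_absolute_position(i_ap, t_rp_cam, heading_rad):
    """T_ap = I_ap + T_rp, with both positions in a local east-north frame.
    i_ap:        inspector position (east, north) in meters, from GPS;
    t_rp_cam:    target offset (right, forward) in the camera frame, from
                 binocular positioning;
    heading_rad: inspector heading (0 = facing north), an assumed extra
                 input used only to align the two frames."""
    right, forward = t_rp_cam
    east = right * math.cos(heading_rad) + forward * math.sin(heading_rad)
    north = -right * math.sin(heading_rad) + forward * math.cos(heading_rad)
    return (i_ap[0] + east, i_ap[1] + north)

# Inspector at (100 m E, 50 m N), facing north, target 3 m straight ahead:
pos = target_absolute_position((100.0, 50.0), (0.0, 3.0), heading_rad=0.0)
```

This is why the inspector never needs to approach the target: the GPS fix supplies I_ap and the binocular module supplies T_rp, and their sum pins the target to a unique position.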
Analysis of abnormal constructor behavior: the movement tracks of the target-frame center points of normal constructors and of constructors who have been drinking are collected separately and decomposed into three features: time t, the X coordinate in the image coordinate system and the Y coordinate in the image coordinate system (in image coordinates, the pixel at the upper-left corner of the image is the origin, the horizontal direction is X and the vertical direction is Y). An LSTM network is used for learning and classification, distinguishing whether a constructor is on duty after drinking. The movement tracks are analyzed with the LSTM network; a drinking dataset is built through tracking, and after training the system supervises abnormal behavior. If a constructor is judged to be on duty after drinking, the inspector is alerted immediately, the inspection result is saved, and the responsible person is notified to take corrective action and give the drinker safety education.
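The feature decomposition feeding the LSTM can be sketched as a simple reshaping of the tracked center points. This shows only the (t, X, Y) preprocessing step; the LSTM classifier itself is omitted, and the frame rate and coordinates below are illustrative.

```python
def decompose_track(centers, fps):
    """Decompose a tracked target-frame center trajectory into the three
    features fed to the LSTM: time t, X coordinate and Y coordinate in the
    image coordinate system (origin at the top-left pixel, X horizontal,
    Y vertical).
    centers: list of (x, y) box centers, one per frame."""
    return [(i / fps, x, y) for i, (x, y) in enumerate(centers)]

# Tracking a constructor for 3 s at 30 FPS yields a 90-step sequence of
# (t, X, Y) triples, which a sequence model such as an LSTM can classify
# as normal vs. on duty after drinking.
track = [(320.0 + i, 240.0) for i in range(90)]   # a steady rightward walk
features = decompose_track(track, fps=30)
```

Each sequence is one sample for the drinking dataset; a swaying or erratic center-point track would produce a very different (t, X, Y) pattern than this steady one.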
Construction material statistics: preliminary data statistics are compiled for construction materials of the same category according to the material categories labeled in the inspection dataset; the material count is simply the number of identified target frames of the same category. The auxiliary inspection system stores the material categories and quantities, realizing the material counting function. The inspector can also manually compile statistics and save results for materials that need counting, according to the actual situation.
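The counting step can be sketched with a simple tally over tracked detections. Because the tracker assigns each target a unique ID, counting unique (ID, class) pairs ensures an object seen across many frames is counted once; the class names below are illustrative.

```python
from collections import Counter

def count_materials(tracked_detections):
    """Count construction materials per labeled category by counting the
    unique track IDs of each class, so one object detected in many frames
    contributes one to the tally.
    tracked_detections: iterable of (track_id, class_name) pairs."""
    unique = {(tid, cls) for tid, cls in tracked_detections}
    return Counter(cls for _, cls in unique)

# Track 1 and track 2 are two distinct rebar bundles (track 1 appears in
# two frames); track 3 is one brick pallet seen in two frames.
detections = [(1, "rebar"), (1, "rebar"), (2, "rebar"),
              (3, "brick_pallet"), (3, "brick_pallet")]
counts = count_materials(detections)
```

The resulting per-category counts are what the system stores as the material quantities.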
Rectification supervision: when the inspector performs a second inspection and reaches the position where a violation event was recorded last time, the inspection result is saved again, the two inspection results are compared, and whether the previous violation content has been rectified is analyzed. If rectification is complete, the rectification effect is recorded and saved; if not, the responsible person is notified to complete the rectification.
Matters not described in detail in this specification belong to the prior art.

Claims (7)

1. A movable virtual-reality-based intelligent construction site auxiliary inspection method, characterized in that the auxiliary inspection system used by the inspection method comprises a binocular camera, VR glasses, a Raspberry Pi, a GPS positioning module and an alarm module, wherein the binocular camera, the Raspberry Pi, the GPS positioning module and the alarm module are all integrated on the VR glasses, and the method comprises the following contents:
A. constructing an inspection dataset:
constructing an inspection dataset containing images of different non-standard and dangerous behaviors, the inspection dataset collecting both images in normal daytime and images at night under the light source, its contents comprising three categories: constructors, construction environment and construction quality, with the violation risk degree of each violation event labeled; the construction quality category comprises collected images of cracks, pits and spalling on wall surfaces, and images of various construction materials labeled with their material categories;
B. building the inspection model:
the inspection model comprises an improved YOLOv5 model and an adaptive DeepSORT algorithm; the improved YOLOv5 model is used for target detection and has 23 layers in total, divided into two parts, wherein layers 0-7 are called the Backbone and layers 8-22 are called the Head; the Backbone performs a slicing operation in the Focus layer at layer 0, layers 1-5 and layer 7 are formed from ShuffleNetV2_Block, and a channel attention mechanism, the ELU-CA Block, is inserted at layer 6; the ELU-CA Block performs average pooling in the horizontal and vertical directions, then applies a transform operation to encode the spatial information, and finally fuses the spatial information by weighting the channels;
the adaptive DeepSORT algorithm is used for target tracking and statistics; the max_age parameter in the DeepSORT network is set as a floating parameter, and the dynamic selection rule for the max_age parameter of each target is: the ratio of the width w of the occluder's target frame to the normal walking speed v of an adult male, multiplied by the frame rate FPS of the camera, with the calculation formula:
max_age = (w / v) × FPS
the normal walking speed v of an adult male is a constant, and each target has its own max_age parameter during multi-target tracking;
C. model training:
training the inspection model with the inspection dataset constructed in step A to obtain the trained inspection model, which can identify and track violation events and construction materials and compile statistics on the categories of violation events and construction materials;
D. judging whether a constructor has been drinking:
collecting the movement tracks of different normal persons and of persons after drinking to form a drinking dataset, representing each track by three features: time t, the X coordinate in the image coordinate system and the Y coordinate in the image coordinate system; training an LSTM network with the drinking dataset, and using the trained LSTM network to judge whether a constructor has been drinking; when the trained inspection model detects a target belonging to the constructor class, the adaptive DeepSORT algorithm tracks the constructor continuously for 3 seconds, obtains the movement track of the target-frame center point, and inputs the track's three features, time t, the X coordinate and the Y coordinate in the image coordinate system, into the LSTM network to determine whether the constructor has been drinking;
E. positioning the inspection result:
outputting an inspection result with the trained inspection model, and positioning the target frame of the inspection result to obtain the absolute position of the target; the target absolute position T_ap is determined by the mathematical expression:
T_ap = I_ap + T_rp
wherein T_ap is the target absolute position; I_ap is the absolute position of the inspector, determined by the GPS positioning module; and T_rp is the position of the target relative to the inspector, determined by binocular camera positioning;
F. result display:
displaying the inspection information in the VR glasses and automatically saving the inspection result in a storage module, the inspection result comprising violation images, situation description, position information, time information and inspector information; the inspection results are transmitted to the cloud in order of the danger degree of the violation content, with dangerous and urgent situations transmitted first, and at the same time the alarm module sounds a prompt tone to the inspector according to the danger degree of the violation content; the storage module deletes stored content regularly, while the cloud stores the inspection results long-term; after an inspection result is stored in the cloud, the generated inspection report is sent to the responsible person, who is notified to rectify the violation content;
the situation description comprises the violation category, and if the violation is committed by a constructor, the description also states whether the constructor has been drinking;
and when the next inspection reaches the position where violation content was recorded last time, the new inspection result is automatically saved and matched against the previously recorded violation content, the detection results are compared to determine whether rectification is complete; if rectification is complete, the rectification effect is recorded and saved, and if not, the responsible person is notified to complete the rectification.
2. The virtual-reality-based movable intelligent construction site auxiliary inspection method according to claim 1, wherein the auxiliary inspection system takes the VR glasses as the assembly frame of the head-mounted auxiliary inspection equipment; the lens centers of the two cameras of the binocular camera are placed in the same plane as the eyes, and camera parameter calibration is completed; the centers of the two cameras always lie on the same straight line; the VR glasses, a storage module, the GPS positioning module, a data transmission module, the alarm module and a visible light source are connected to the Raspberry Pi, with the storage module, GPS positioning module, data transmission module, battery and alarm module each connected to the corresponding interface of the Raspberry Pi; the storage module, GPS positioning module, data transmission module, battery, alarm module and Raspberry Pi are integrated together to form an integrated module, which is arranged in front of the display screen of the VR glasses and behind the cameras; the integrated module is independently packaged and attached to the display screen of the VR glasses, and a user button is arranged on the side face of the VR glasses.
3. The virtual-reality-based movable intelligent construction site auxiliary inspection method according to claim 2, wherein the user buttons comprise an operation object selection button, a confirmation button, a power key and a light-source switch key; when selecting an operation object, the operation object selection button starts from the target whose frame center point is nearest the upper-left corner of the image and selects targets in turn from left to right and top to bottom until the target whose frame center point is nearest the lower-right corner has been reached, then starts again from the target nearest the upper-left corner; the confirmation button confirms or cancels a selection: one click confirms the current operation and two consecutive clicks cancel it; after the target to be operated on is confirmed, the operation object selection button is used to choose the subsequent operation to perform on the target, confirmed or canceled with the confirmation button, and a corresponding menu pops up after confirmation for subsequent selection; the power key performs power-on and power-off operations; the light-source switch key switches the visible light source on and off.
4. The virtual-reality-based movable intelligent construction site auxiliary inspection method according to claim 2, wherein, in constructing the inspection dataset, the image data to be collected for constructors are facial images of the constructors, images of workers correctly wearing and not wearing safety helmets, images of workers correctly wearing and not wearing work clothes, images of safety ropes used and not used during work at height, and images of smoking and non-smoking; the data to be collected for the construction environment are images with and without open fire, images of fire-fighting facility areas with and without fire-fighting equipment, and images of dangerous construction areas with and without warning signs and fences.
5. The virtual-reality-based movable intelligent construction site auxiliary inspection method according to claim 2, wherein the danger degree of a violation event is, on the one hand, labeled when the inspection dataset is constructed and, on the other hand, judged by the inspector and selected manually through the user buttons.
6. The virtual-reality-based movable intelligent construction site auxiliary inspection method according to claim 1, wherein the automatic saving of violation images tracked by the adaptive DeepSORT algorithm comprises: among all frames of the same ID tracked by the adaptive DeepSORT algorithm in the left lens of the binocular camera, saving the frame in which the center of the target frame is closest to the center point of the left-eye image; the saved image presents the violation content at the position closest to the center.
7. A movable virtual-reality-based intelligent construction site auxiliary inspection system, characterized by comprising VR glasses, a storage module, a GPS positioning module, a data transmission module, an alarm module, a visible light source, a construction material statistics module and a rectification supervision module; the VR glasses serve as the assembly frame of the head-mounted auxiliary inspection equipment; the lens centers of the two cameras of a binocular camera are placed in the same plane as the eyes, and camera parameter calibration is completed; the centers of the two cameras always lie on the same straight line; the VR glasses, storage module, GPS positioning module, data transmission module, alarm module and visible light source are connected to a Raspberry Pi, with the storage module, GPS positioning module, data transmission module, battery and alarm module each connected to the corresponding interface of the Raspberry Pi;
a user button is the medium for interaction between the user and the inspection system; through the user button an inspector selects inspector information, selects and saves images, turns the light source on and off and powers the device off; the images acquired by the lenses and the identification and tracking results of the inspection model are displayed in the VR glasses; the image operated on for positioning is the left-lens image: through coordinate transformation, the two-dimensional scene in front of the eyes forms a three-dimensional coordinate system with the left lens as origin, every point in the image has a unique three-dimensional coordinate in this space, and obtaining the coordinate of the target frame's center point in the left image determines the spatial position of the target relative to the inspector;
the Raspberry Pi is loaded with an inspection model, a violation image selection program, a storage program, the construction material statistics module, the rectification supervision module, a constructor behavior abnormality analysis module and an information entry module;
the inspection model is used to identify and track violation events and construction materials and to compile statistics on the categories of violation events and construction materials; the violation image selection program is used to select and confirm an operation object, and the storage program is used to manually save manually selected violation images;
the inspection model comprises an improved YOLOv5 model and an adaptive DeepSORT algorithm; the improved YOLOv5 model is used for target detection and has 23 layers in total, divided into two parts, wherein layers 0-7 are called the Backbone and layers 8-22 are called the Head; the Backbone performs a slicing operation in the Focus layer at layer 0, layers 1-5 and layer 7 are formed from ShuffleNetV2_Block, and a channel attention mechanism, the ELU-CA Block, is inserted at layer 6; the ELU-CA Block performs average pooling in the horizontal and vertical directions, then applies a transform operation to encode the spatial information, and finally fuses the spatial information by weighting the channels;
the alarm module alarms at different volumes according to the violation risk degree of the violation event, so that the inspector can judge the situation accurately and handle dangerous situations in time;
the storage module stores the detection results, transmits them to the cloud according to the violation risk degree of the violation event for long-term cloud storage, and stores its own information short-term with regular deletion; the stored information comprises: violation images, situation description, position information, time information and inspector information; the inspector can save data manually according to the actual situation on site and select the violation danger degree; the saved image presents the violation content at the position closest to the center point of the left-eye image; the content of the situation description is the violation category labeled during data annotation;
the GPS positioning module is used to acquire the absolute position of the inspector and, combined with the binocular camera, to obtain the absolute position of the target;
the construction material statistics module compiles preliminary statistics on construction materials of the same category and stores the material categories and quantities; the inspector can also manually compile statistics and save results for materials that need counting, according to the actual situation;
the rectification supervision module saves the inspection result again when the inspector performs a second inspection at the position of the last violation event record, compares the two inspection results, and analyzes whether the previous violation content has been rectified; if rectification is complete, the rectification effect is recorded and saved, and if not, the responsible person is notified to complete the rectification;
the constructor behavior abnormality analysis module is used to judge whether a constructor has been drinking;
the information entry module is used to enter inspector information and set the inspection range;
before the inspection system is used, the information of all inspectors is entered into the auxiliary inspection system, and each inspector selects his or her own information through the user buttons (inspectors enter personal information the first time they use the system); the equipment is worn on the head, the cameras collect the conditions of the construction site in real time, and the collected site images and the inspection model's inspection results are displayed in the VR glasses.
CN202210780559.5A 2022-07-05 2022-07-05 Virtual reality-based movable intelligent auxiliary inspection method and system for construction site Active CN114863489B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210780559.5A CN114863489B (en) 2022-07-05 2022-07-05 Virtual reality-based movable intelligent auxiliary inspection method and system for construction site


Publications (2)

Publication Number Publication Date
CN114863489A CN114863489A (en) 2022-08-05
CN114863489B true CN114863489B (en) 2022-09-20

Family

ID=82625967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210780559.5A Active CN114863489B (en) 2022-07-05 2022-07-05 Virtual reality-based movable intelligent auxiliary inspection method and system for construction site

Country Status (1)

Country Link
CN (1) CN114863489B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115759996B (en) * 2022-11-23 2023-08-08 国网四川省电力公司达州供电公司 Machine room standardized inspection method, equipment and medium
CN116739438B (en) * 2023-08-10 2023-11-17 南通四建集团有限公司 Safety education learning result testing method and system for virtual reality

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101683277B1 (en) * 2016-04-07 2016-12-06 주식회사 코엔텍 System of virtual simulation guiding and performing management for industrial labor in water quality management facilities
CN112272356A (en) * 2020-10-28 2021-01-26 国网上海市电力公司 Automatic transformer substation inspection system and inspection method
CN112907389A (en) * 2021-04-09 2021-06-04 北京中安瑞力科技有限公司 Land, air and space integrated intelligent construction site system and management method
KR20210131581A (en) * 2020-04-24 2021-11-03 (주)오픈웍스 Workplace Safety Management Apparatus Based on Virtual Reality and Driving Method Thereof
CN113934212A (en) * 2021-10-14 2022-01-14 北京科创安铨科技有限公司 Intelligent building site safety inspection robot capable of being positioned
CN114399279A (en) * 2022-01-14 2022-04-26 中铁十五局集团有限公司 Intelligent building site safety management system and auxiliary device based on BIM

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112102515A (en) * 2020-09-14 2020-12-18 深圳优地科技有限公司 Robot inspection method, device, equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant