CN112200043A - Intelligent danger source identification system and method for outdoor construction site - Google Patents

Intelligent danger source identification system and method for outdoor construction site

Info

Publication number
CN112200043A
CN112200043A (application CN202011058805.3A)
Authority
CN
China
Prior art keywords
module
construction site
video frame
construction
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011058805.3A
Other languages
Chinese (zh)
Other versions
CN112200043B (en)
Inventor
司方来
李睿
崔昊
蔡杰
王玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Utone Construction Consulting Co ltd
Original Assignee
China Utone Construction Consulting Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Utone Construction Consulting Co ltd filed Critical China Utone Construction Consulting Co ltd
Priority to CN202011058805.3A
Publication of CN112200043A
Application granted
Publication of CN112200043B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/166 - Detection; Localisation; Normalisation using acquisition arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/047 - Probabilistic or stochastic networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V20/47 - Detecting features for summarising video content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 - Alarms for ensuring the safety of persons
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Human Computer Interaction (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Image Analysis (AREA)
  • Emergency Alarm Devices (AREA)

Abstract

The invention discloses an intelligent danger source identification system for an outdoor construction site, comprising a video monitoring module, a comprehensive danger source identification module, an anti-intrusion detection module, a safety helmet identification module and an alarm module. The video monitoring module monitors the site in real time through cameras; the comprehensive danger source identification module identifies general danger sources on the construction site; the anti-intrusion detection module detects whether a person approaches a high-risk area; the safety helmet identification module identifies whether the detected worker is wearing a safety helmet; and the alarm module, connected to the safety helmet identification module, the anti-intrusion detection module and the comprehensive danger source identification module respectively, issues sound and/or light warnings. The invention combines deep learning with computer vision to monitor the construction site intelligently and to analyze and handle potential risk factors, ensuring timely and comprehensive feedback of construction-site safety information.

Description

Intelligent danger source identification system and method for outdoor construction site
Technical Field
The invention relates to the technical field of danger source early warning, and in particular to an intelligent danger source identification system and method for outdoor construction sites.
Background
Danger sources in outdoor construction are diverse, pervasive, easily concealed and prone to occur, and outdoor work that is not carried out according to regulations is likely to cause construction safety accidents. Because outdoor construction scenes are complex and particular and workers' safety awareness is still weak, construction safety accidents occur frequently. At present there is no system that can intelligently identify and raise alarms for the common danger sources of an outdoor construction site and thereby reduce the incidence of safety accidents at the source.
Existing construction-site monitoring systems based on artificial intelligence and computer vision apply only object detection to recognize a single class of target; they cannot support comprehensive detection of potential danger sources and cannot satisfy complex scenarios such as outdoor construction. For these reasons, conventional monitoring systems cannot simultaneously cover the general danger sources of a construction site and the danger sources specific to outdoor work. Moreover, with object recognition alone, the computer understands only simple, isolated semantic information in the construction scene and cannot understand its complex, rich semantics.
The intelligent danger source identification system for construction sites described here uses deep learning and computer vision to support all-round danger source identification in outdoor construction scenes and behavior analysis of workers, eliminating potential accident hazards at the source.
Disclosure of Invention
With the rapid development of the digital economy, artificial intelligence, as part of the information and convergence infrastructure of new infrastructure construction, plays a major role in the digital governance of smart cities. At the same time, the epidemic has accelerated the development of the emergency-response industry and increased social demand for safety-control products, emergency-mechanism systems, emergency-management systems and other related safety products. How to use digital technologies such as artificial intelligence and computer vision to empower the emergency-response industry, and to make safety management controllable and predictable through digital, intelligent management, is a question worth considering in social governance.
To achieve the above purpose, the technical solution of the invention is as follows:
An intelligent danger source identification system for an outdoor construction site comprises: a video monitoring module, a comprehensive danger source identification module, an anti-intrusion detection module, a safety helmet identification module and an alarm module, the video monitoring module being connected to the comprehensive danger source identification module, the anti-intrusion detection module and the safety helmet identification module respectively, wherein
the video monitoring module is used for real-time monitoring through cameras;
the comprehensive danger source identification module is used for identifying general danger sources on the construction site;
the anti-intrusion detection module is used for detecting whether a person approaches a high-risk area;
the safety helmet identification module is used for identifying whether the target person whose face is detected is wearing a safety helmet;
and the alarm module, connected to the safety helmet identification module, the anti-intrusion detection module and the comprehensive danger source identification module respectively, is used for issuing sound and/or light warnings according to their analysis results.
Preferably, the system further comprises a face recognition module and a mask recognition module; the video monitoring module is connected to the face recognition module, the face recognition module is connected to the mask recognition module, and the alarm module is connected to the face recognition module and the mask recognition module respectively, wherein
the face recognition module is used for authenticating the identity of the target face image and judging whether the person is qualified for construction work; if not, a sound and/or light warning is issued; if so, the target face image of the qualified person is sent to the mask recognition module;
and the mask recognition module is used for identifying whether a mask is worn in the target face image of the qualified person.
Preferably, the system further comprises a weather adaptation module for judging the real-time weather conditions and adjusting images accordingly for image enhancement.
Preferably, the comprehensive danger source identification module comprises a conventional detection module, a person-ladder operation detection module and a depth analysis module, wherein
the conventional detection module is used for detecting whether, on the construction site, workers are not wearing reflective vests, warning signs have not been placed, or construction tools have not been insulated;
the person-ladder operation detection module is used for detecting whether person-ladder operations exist on the construction site and, if so, sending the detected video frames to the depth analysis module;
and the depth analysis module is used for receiving the video frames and judging whether they show single-person ladder operation, and/or more than one worker on the ladder, and/or non-standard actions by the ladder workers.
Preferably, the system further comprises the alarm module, which is connected to and cooperates with the face recognition module, the mask recognition module, the safety helmet identification module, the anti-intrusion detection module, the conventional detection module and the depth analysis module respectively.
An intelligent danger source identification method for an outdoor construction site comprises the following steps:
extracting video frames from the acquired outdoor construction site video stream;
analyzing the video frames with a model: identifying general construction-site danger sources in the video frames, detecting whether a person approaches a high-risk area, and identifying whether the target face in the video frames is wearing a safety helmet;
and if a danger source is identified in the video, and/or a person approaches a high-risk area, and/or a safety helmet is not worn, issuing a sound and/or light warning.
Preferably, after extracting the video frames, the method further comprises the following step: judging the real-time weather conditions and adjusting the video frames accordingly for image enhancement.
Preferably, after extracting the video frames, the method further comprises the following steps:
extracting the target face image, authenticating identity from it, comparing the authenticated person against a database to judge whether the person is qualified for construction work, and issuing a sound and/or light warning if the person is not qualified;
and identifying whether a mask is worn in the target face image, and issuing a sound and/or light warning if it is not.
Preferably, after identifying potential construction-site danger sources in the video frames, the method further comprises: detecting whether person-ladder operations exist on the construction site; if so, identifying the workers within the ladder's spatial region in the video frame, cropping the video frame with the identified bounding boxes, performing pose detection on the cropped frame with a pose detection algorithm, performing behavior recognition on the cropped frame, and judging whether there is single-person ladder operation, and/or more than one worker on the ladder, and/or non-standard actions by the ladder workers; if so, issuing a sound and/or light warning.
Preferably, analyzing the video frames with the model comprises the following steps:
using the extracted video frames as a training data set;
extracting feature values from the video frames with a feature extraction network that incorporates a gradient-splitting technique;
fusing the feature values of different stages with an adaptive feature fusion network to obtain fused feature values;
dividing the anchor boxes into positive and negative samples with an adaptive training-sample sampling algorithm to obtain target values;
substituting the fused feature values and the target values into an attention loss function, optimizing the attention loss function with an optimizer, and continuously training the model to obtain a prediction model;
and performing perception analysis on the real-time construction-site video with the prediction model.
Preferably, before using the extracted video frames as the training data set, the method further comprises the following step: judging the real-time weather conditions and adjusting the images accordingly for image enhancement.
Based on the above technical solution, the invention has the following beneficial effects:
1) the comprehensive danger source identification module can collect 2 million high-quality pictures per day, and the model is iterated and updated with the data set collected each day, so its mean average precision keeps improving; model iteration and optimization follow an agile-development approach, new danger sources are continuously mined, and the diversity and comprehensiveness of the comprehensive danger source identification module are improved continuously;
2) the frequency of potential danger sources appearing on the outdoor construction site is significantly reduced;
3) the compliance rate of person-ladder operation scenes is significantly improved;
4) the qualification rate of workers entering the site is significantly improved;
5) the safety helmet wearing rate on the construction site is significantly improved;
6) the mask wearing rate is significantly improved;
7) even if a danger source that could cause a safety accident appears on the construction site, the system can promptly remind workers to deal with it through the intelligent identification modules and the alarm module.
Drawings
The following describes embodiments of the present invention in further detail with reference to the accompanying drawings.
FIG. 1 is a flow chart of the intelligent danger source identification method for an outdoor construction site according to the invention;
FIG. 2 is a model architecture diagram used in the intelligent danger source identification method for an outdoor construction site according to the invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
Example one
Supported by research and development funding from the artificial intelligence R&D center of China Utone Construction Consulting Co., Ltd., the application of artificial intelligence to construction-site safety control was studied intensively using deep learning and computer vision. High-performance image processing is used to monitor the construction site intelligently and to analyze and handle potential danger factors, ensuring timely and comprehensive feedback of construction-site safety information.
As shown in FIGS. 1 and 2, the intelligent danger source identification system for an outdoor construction site comprises: a video monitoring module, a face recognition module, a mask recognition module, a comprehensive danger source identification module, an anti-intrusion detection module, a safety helmet identification module and an alarm module, the alarm module being connected to the face recognition module, the mask recognition module, the comprehensive danger source identification module, the anti-intrusion detection module and the safety helmet identification module respectively.
1. Construction-site face recognition module. A camera acquires the face image, which is input into the face recognition module.
Specific process: face recognition is used to authenticate the identity of personnel entering the construction site and to judge whether they are qualified for site work, for example whether special-operation workers hold construction qualification certificates. The detection model is divided into two stages, face detection and face recognition; the two-stage model simplifies data-set collection and reduces project cost. The face detection model captures and stores the face region and passes the face image to the face recognition module, which authenticates the worker's identity using metric learning.
2. Mask recognition module. Since the outbreak of COVID-19, whether a person wears a mask has become a new source of danger. A cascaded system of face detection and mask recognition was therefore developed to identify whether workers are wearing masks.
Specific process: the face region is first cropped, the face-region image is input into the mask recognition system, and the system predicts whether the worker is wearing a mask. Layer-by-layer (depthwise) convolution is used to compress the neural network parameters, reducing the computation of the mask recognition algorithm and improving the overall operating efficiency of the system.
Model training: head images of people wearing and not wearing masks are collected as the data set; the prediction model is a binary classifier whose output is either "mask worn" or "mask not worn". Face alignment is first applied to the data set, which is then fed to the prediction model for end-to-end training.
3. Comprehensive danger source identification module, comprising a conventional detection module, a person-ladder operation detection module and a depth analysis module. The conventional detection module detects whether, on the construction site, workers are not wearing reflective vests, warning signs have not been placed, or construction tools have not been insulated; the person-ladder operation detection module detects whether person-ladder operations exist on the site and, if so, sends the detected video frames to the depth analysis module; the depth analysis module receives the video frames and judges whether they show single-person ladder operation, and/or more than one worker on the ladder, and/or non-standard actions by the ladder workers. The system comprehensively supports recognition of common danger sources in outdoor construction scenes, such as whether warning signs are placed as required, whether workers wear reflective vests, whether construction tools are insulated, and whether person-ladder operations are taking place on site.
Specific process: a high-definition camera on the construction site first captures the real-time scene, and each frame is passed to the comprehensive danger source identification module. The site image is then preprocessed and predicted, and object detection is used to identify whether a potential danger source exists. If the system detects a danger source, for example a worker not wearing a reflective vest as required, the detection result is passed to the alarm module, which warns the workers.
If person-ladder operation is identified on site, the frames are passed to the person-ladder operation scene depth analysis module based on a spatio-temporal convolutional network, which analyzes whether the workers meet the safety requirements for ladder work.
Model training: photos of on-site danger sources are collected as the data set, screened and manually annotated, and data augmentation is used to improve the system's generalization. To improve the accuracy of small-target recognition, a gradient-splitting algorithm supporting adaptive feature fusion learning is proposed. At the same time, the attention loss function is used to solve the imbalance between positive and negative samples during training and to improve recognition accuracy.
4. Person-ladder operation scene depth analysis module. The real-time work scene captured by the high-definition camera is the input of this module, which performs deep semantic analysis of ladder-work scenes. The module first judges whether person-ladder operation is present and, if so, performs work-behavior detection. If it detects single-person ladder operation, or multi-person ladder operation in which the ladder is not steadied by a worker as required, the scene is judged to be a danger source.
Specific process: in the first stage, the comprehensive danger source identification module identifies whether a person-ladder operation scene exists on the construction site. In the second stage, if such a scene exists, an original compliance-recognition algorithm for ladder-work scenes judges whether at least two workers are present in the ladder's vertical spatial region. If a single-person ladder scene is found, the system directly judges it to be a danger source; otherwise it enters the third stage. In the third stage, pose detection and spatio-temporal behavior recognition are performed on the personnel in the ladder's spatial region, and high-level semantics of the frames are used to judge whether the workers operate according to the standard. If non-standard operation exists, the alarm module raises a real-time alarm.
Model training: construction-site pictures are collected, workers in the pictures are identified with an object detection model pre-trained on the COCO data set, and the detected workers are cropped and saved to form a worker data set. Pose keypoints of the worker data set are manually annotated to form a worker pose data set. The worker pose data set and the MPII data set are then used as training sets and input into a single-person pose detection model for end-to-end training.
5. Construction-site safety helmet identification module.
Specific process: a safety helmet identification system based on a convolutional neural network is proposed; an object detection algorithm identifies whether workers are wearing safety helmets, and workers without helmets are framed with a red warning box. When someone on the site is found without a helmet, the module passes this information to the alarm module, which issues a voice warning through its built-in voice system. Through this intelligent identification, the helmet wearing rate on the construction site is improved by 50%.
Model training: construction scenes with and without safety helmets are collected as the data set and manually annotated to form a safety helmet data set. A multi-task detection model is built, the safety helmet data set is augmented, and the data are input into the detection model for training.
6. Anti-intrusion detection module. Real-time images of high-risk areas are captured by on-site high-definition cameras and input into the anti-intrusion detection module for analysis.
Specific process: multi-target dynamic tracking is used to monitor high-risk areas of the construction site that must not be entered. The real-time work scene collected by the high-definition camera is the input of this module, which performs anti-intrusion detection for high-risk construction areas. When an object enters the camera's field of view, dynamic tracking follows the personnel within it; when a person is found approaching the boundary of a high-risk area, the information is passed to the alarm module, which automatically issues a voice warning.
7. Weather adaptation module. After workers enter the construction site, the camera collects the working environment in real time and the weather adaptation module performs image enhancement. When the system judges that the current environment is under-exposed or dark, it automatically performs tone mapping, adjusting image contrast and tone with gamma correction, histogram equalization and similar techniques. When the system judges that the current weather is foggy, it automatically calls a dark-channel defogging algorithm, uses the defogging model to process the video stream, and adds guided filtering to improve the defogging quality. This weather-adaptive image enhancement minimizes the influence of weather on system performance when the intelligent emergency system works outdoors and improves operating robustness. With the image enhancement module added, the overall operating efficiency of the system is improved by 20%, the recognition accuracy of each module is improved by 10.3% on average, the recall is relatively improved by 5%, and the recognition precision is relatively improved by 7%.
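A minimal Python/OpenCV sketch of the low-light branch of such a weather adaptation step, assuming BGR input frames; the darkness threshold, gamma value and function names are illustrative rather than the patented implementation:

    import cv2
    import numpy as np

    def needs_enhancement(frame_bgr, threshold=60):
        """Illustrative darkness check: mean luminance below a threshold."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        return gray.mean() < threshold

    def enhance_low_light(frame_bgr, gamma=1.8):
        """Brighten an under-exposed frame via gamma correction and
        histogram equalization on the luminance channel."""
        # Gamma correction through a lookup table
        inv = 1.0 / gamma
        lut = np.array([((i / 255.0) ** inv) * 255 for i in range(256)],
                       dtype=np.uint8)
        corrected = cv2.LUT(frame_bgr, lut)

        # Equalize only the luminance channel to avoid color shifts
        y, cr, cb = cv2.split(cv2.cvtColor(corrected, cv2.COLOR_BGR2YCrCb))
        y = cv2.equalizeHist(y)
        return cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)

A foggy-weather branch would, in the same spirit, hand the frame to a dark-channel-prior defogging routine before detection.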
The intelligent danger source identification method for an outdoor construction site comprises the following steps:
First, work-site images are acquired with the construction site's real-time sensing cameras, and the collected images are used as the training data set. Feature values of the video frames are then extracted with a feature extraction network that incorporates a gradient-splitting technique. Feature values of different stages are fused with the adaptive feature fusion network to obtain fused feature values. Next, the anchor boxes are divided into positive and negative samples with an adaptive training-sample sampling algorithm to obtain target values. Finally, the fused feature values and target values are substituted into the attention loss function, the loss function is optimized with an optimizer, and the model is trained continuously. After training is complete, a prediction model is obtained and used for perception analysis of the real-time construction-site video. The details are as follows:
A completely new feature extractor is employed. The danger source identification feature extractor uses 3 × 3 and 1 × 1 convolution kernels: the 1 × 1 kernels reduce dimensionality along the channel axis to lower the model's computation, and the 3 × 3 kernels then extract local-region features. In addition, drawing on residual networks, long-skip connections are introduced in the feature extraction stage to speed up model convergence. In total the feature extractor contains 50 convolutional layers; the model architecture is shown in FIG. 2.
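A PyTorch sketch of the kind of bottleneck block this paragraph describes (1 × 1 channel reduction, 3 × 3 local feature extraction, long-skip residual connection); the channel sizes, normalization and activation choices are assumptions, not the disclosed 50-layer extractor:

    import torch
    import torch.nn as nn

    class BottleneckBlock(nn.Module):
        """1x1 conv reduces channels, 3x3 conv extracts local features,
        and a residual (long-skip) connection speeds up convergence."""
        def __init__(self, channels, reduced):
            super().__init__()
            self.reduce = nn.Conv2d(channels, reduced, kernel_size=1, bias=False)
            self.conv3 = nn.Conv2d(reduced, channels, kernel_size=3,
                                   padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(reduced)
            self.bn2 = nn.BatchNorm2d(channels)
            self.act = nn.LeakyReLU(0.1, inplace=True)

        def forward(self, x):
            out = self.act(self.bn1(self.reduce(x)))
            out = self.bn2(self.conv3(out))
            return self.act(out + x)   # long-skip residual connection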
A gradient-splitting technique is used. Convolutional neural networks are widely used in computer vision because of their representational capability, but their performance depends on expensive computational resources, and reducing computational cost has become a major issue in object recognition. In the danger source identification module a new gradient-splitting model is proposed: gradient splitting removes redundant gradient information so that idle neurons in the model are used efficiently. The gradient-splitting algorithm divides the input features into two parts; one part participates in the computation of the local network, while the other crosses the local network directly and is concatenated, along the channel dimension, with the local network's output. The algorithm not only reduces the network's computational cost and memory footprint but also improves the accuracy and speed of the danger source identification module.
X_n = matmul(W_n, [X_0, X_1, ..., X_(n-1)]), n = 1, 2, ..., k
X_out = [X_shortcut, X_stage], where X_stage = X_k

The DenseNet-style splitting algorithm achieves gradient splitting through the cross-region connection described by the formulas above, where matmul denotes matrix multiplication and [X_0, X_1, ..., X_k] denotes concatenating k + 1 matrices along the channel dimension; W_n (n = 1, 2, 3, ..., k) is the weight of each layer and X_n (n = 1, 2, 3, ..., k) the input data of each layer; X_shortcut is the cross-region feature that does not take part in the computation, and X_stage is the output value of the regional (local) network.
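A simplified PyTorch sketch of the cross-stage split described by the formula, in which part of the channels bypasses the local network and is rejoined by channel concatenation; the 1:1 split ratio and the transition convolution are assumptions:

    import torch
    import torch.nn as nn

    class GradientSplitStage(nn.Module):
        """Split input features along the channel dimension: one part goes
        through the local (dense) network, the other crosses it directly and
        is concatenated with the result, limiting duplicated gradient flow."""
        def __init__(self, channels, inner_block):
            super().__init__()
            assert channels % 2 == 0
            self.half = channels // 2
            self.inner = inner_block          # e.g. a stack of bottleneck blocks
                                              # mapping half -> half channels
            self.transition = nn.Conv2d(channels, channels, kernel_size=1,
                                        bias=False)

        def forward(self, x):
            x_shortcut, x_dense = torch.split(x, self.half, dim=1)
            x_stage = self.inner(x_dense)     # local network output
            return self.transition(torch.cat([x_shortcut, x_stage], dim=1))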
An adaptive feature fusion network is adopted. In traditional object detection algorithms, features from different feature levels are simply fused along the channel dimension using upsampling and convolutional layers. However, features from different levels contribute differently to recognition performance, so a feature fusion network with learnable parameters is proposed. In the upsampling path, a 1 × 1 convolution kernel first unifies the channels and the features are then upsampled; in the downsampling path, a 3 × 3 convolution kernel adjusts the channels and resolution simultaneously. A learnable parameter w_ij is then introduced for each feature layer, and the learnable parameters are normalized with softmax. Finally, the feature layers are weighted and summed. Through this adaptive fusion network, the relationships between features at different levels are expressed directly and explicitly, greatly improving the model's representational capability.
y_ij^l = Σ_n w_ij^(n→l) · x_ij^(n→l)

The feature fusion algorithm is described by the formula above, where w_ij^(n→l) is the learnable weight (normalized across layers by softmax) of the layer-n feature at point (i, j) for fusion into layer l, x_ij^(n→l) is the layer-n feature value transformed to layer l through convolution and pooling operations, and y_ij^l is the fused layer-l feature value at point (i, j).
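A PyTorch sketch of softmax-normalized, learnable per-level fusion weights in the spirit of the formula above; it assumes the input features have already been resampled to a common resolution and channel count:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AdaptiveFusion(nn.Module):
        """Fuse features from several pyramid levels with learnable,
        softmax-normalized spatial weights (one weight map per level)."""
        def __init__(self, channels, num_levels=3):
            super().__init__()
            # 1x1 convs predict a single-channel weight map for each level
            self.weight_convs = nn.ModuleList(
                nn.Conv2d(channels, 1, kernel_size=1) for _ in range(num_levels))

        def forward(self, features):
            # features: list of tensors already resampled to the same
            # resolution and channel count (via 1x1 conv + up/down-sampling)
            logits = torch.cat([conv(f) for conv, f in
                                zip(self.weight_convs, features)], dim=1)  # (N, L, H, W)
            weights = F.softmax(logits, dim=1)        # normalize across levels
            return sum(weights[:, i:i + 1] * f for i, f in enumerate(features))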
An attention loss function is used. In object detection, the imbalance between positive and negative samples during training can cause the positive samples to be swamped by the negative ones: when the danger source identification algorithm back-propagates, negative samples account for a far larger share of the parameter gradient than positive samples, which can bias the optimization of the model. The mainstream approach is to filter and proportionally sample positive and negative samples with two-stage cascades and biased sampling, but balancing the samples this way increases model complexity. An attention loss function is therefore proposed in which a sample weight factor is added to the cross-entropy loss. The weight factor is adjusted dynamically and adaptively according to how hard a sample is to classify: samples the model finds hard to classify receive a larger factor, and samples it classifies easily receive a smaller one. Through the attention loss function, optimization focuses on the hard samples while the large number of easy, clean samples is suppressed. Adopting the attention loss function reduces the complexity of the danger source identification model and improves its convergence speed.
Loss = -α_t · (1 - p_t)^γ · log(p_t)
p_t = p if y = 1, otherwise p_t = 1 - p
α_t = α if y = 1, otherwise α_t = 1 - α

The attention loss function (focal loss) is described by the formulas above, where α_t and γ are harmonic coefficients and p_t reflects how hard the target region is to predict; p is the model's logistic-regression output, y is the sample label, and α is a constant between 0 and 1.
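A minimal PyTorch sketch of the attention (focal) loss in its binary form, following the formulas above; the default values of α and γ are common choices and are assumptions here:

    import torch

    def focal_loss(pred_logits, targets, alpha=0.25, gamma=2.0):
        """Binary focal loss: down-weights easy samples so that hard,
        misclassified samples dominate the gradient."""
        p = torch.sigmoid(pred_logits)
        p_t = torch.where(targets == 1, p, 1 - p)
        alpha_t = torch.where(targets == 1,
                              torch.full_like(p, alpha),
                              torch.full_like(p, 1 - alpha))
        loss = -alpha_t * (1 - p_t) ** gamma * torch.log(p_t.clamp(min=1e-8))
        return loss.mean()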
An adaptive training-sample sampling strategy is used. Unlike traditional sampling strategies, an adaptive sampling method is proposed. Traditional sampling uses a hyper-parameter as the classification threshold between positive and negative samples, which adds hyper-parameters to the model; different thresholds produce different classification results and increase the complexity of optimization. In the danger source identification algorithm an adaptive sampling strategy is adopted: for each ground-truth box, the 9 anchor boxes whose centers are closest to the ground-truth center are taken from each feature layer of the feature pyramid. The IoU between each selected anchor and the ground-truth box is computed, along with the mean and standard deviation of all these IoUs. Finally, the sum of the mean and the standard deviation is used as the adaptive classification threshold.
center_distance = sqrt((x_gt - x_pred)² + (y_gt - y_pred)²)
iou_mean = Mean(iou_i), i ∈ S_i
iou_std = Std(iou_i), i ∈ S_i
iou_thres = iou_mean + iou_std

The adaptive training-sample sampling strategy is described by the formulas above, where center_distance is the distance from the center of the predicted (anchor) box to the center of the ground-truth box; (x_gt, y_gt) are the coordinates of the ground-truth box center and (x_pred, y_pred) the coordinates of the predicted box center; iou_i is the similarity between the predicted box and the ground-truth box; S_i is the set of the 9 predicted boxes closest to the ground-truth box; iou_mean and iou_std are the mean and standard deviation of these 9 similarities; and iou_thres is the threshold separating positive and negative samples.
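A NumPy sketch of the adaptive threshold computation described by the formulas (top-9 center-closest anchors per pyramid level, threshold = mean IoU + standard deviation); the helper names and (x1, y1, x2, y2) box format are illustrative:

    import numpy as np

    def iou(box_a, box_b):
        """IoU of two boxes given as (x1, y1, x2, y2)."""
        x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter + 1e-8)

    def adaptive_iou_threshold(gt_box, anchors_per_level, top_k=9):
        """For one ground-truth box, pick the top_k center-closest anchors on
        each pyramid level, then use mean(IoU) + std(IoU) as the threshold."""
        gt_cx = (gt_box[0] + gt_box[2]) / 2.0
        gt_cy = (gt_box[1] + gt_box[3]) / 2.0
        candidates = []
        for anchors in anchors_per_level:        # anchors: (M, 4) array
            cx = (anchors[:, 0] + anchors[:, 2]) / 2.0
            cy = (anchors[:, 1] + anchors[:, 3]) / 2.0
            dist = np.sqrt((cx - gt_cx) ** 2 + (cy - gt_cy) ** 2)
            candidates.extend(anchors[np.argsort(dist)[:top_k]])
        ious = np.array([iou(a, gt_box) for a in candidates])
        return ious.mean() + ious.std()          # iou_thres = iou_mean + iou_std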
Further, in high-risk-area anti-intrusion detection, the video frame is first passed through the target recognition model to detect the positions of personnel. Targets in the picture are then tracked dynamically using Kalman filtering and metric-learning-based matching of targets between frames. When an object is found entering the restricted area, the system automatically raises an alarm.
x̂_k⁻ = A·x̂_(k-1) + B·u_(k-1)
P_k⁻ = A·P_(k-1)·Aᵀ + Q
K_k = P_k⁻·Hᵀ·(H·P_k⁻·Hᵀ + R)⁻¹
x̂_k = x̂_k⁻ + K_k·(Z_k - H·x̂_k⁻)
P_k = (I - K_k·H)·P_k⁻

The Kalman filtering algorithm is described by the formulas above, where x̂_(k-1) and x̂_k are the posterior estimates for frames k-1 and k of the video, and x̂_k⁻ is the prior value predicted from the previous frame's posterior estimate x̂_(k-1); A is the state transition matrix and Aᵀ its transpose; B is the matrix converting the external input into the state; u_(k-1) is the external action on the system, which is set to 0 in this module's target tracking algorithm; P_(k-1) and P_k are the covariances of x̂_(k-1) and x̂_k respectively, and P_k⁻ is the covariance of the predicted value; H is the transformation matrix from the state coordinate system to the measurement coordinate system and Hᵀ its transpose; Z_k is the observation of the k-th frame, namely the output of the target detection algorithm; K_k is the Kalman gain; and Q and R are the noise covariances. The position of a continuously moving target in the next frame is predicted by the Kalman filter, the target's bounding box is obtained, and the feature value of that region is extracted by ROI pooling. ROI pooling is also applied to the actual detection result of the frame to obtain that region's feature value. Finally, the Mahalanobis distance between the two feature values is computed to judge whether they belong to the same worker, so that workers appearing in the video frames can be tracked.
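A NumPy sketch of the predict/update cycle described above, using an assumed constant-velocity state (x, y, vx, vy) with the detected box center as the measurement; the concrete state layout and noise covariances are illustrative:

    import numpy as np

    class SimpleKalmanTracker:
        """Constant-velocity Kalman filter over a (x, y, vx, vy) state;
        the measurement Z_k is the detected box center (x, y)."""
        def __init__(self, dt=1.0):
            self.A = np.array([[1, 0, dt, 0],
                               [0, 1, 0, dt],
                               [0, 0, 1, 0],
                               [0, 0, 0, 1]], dtype=float)   # state transition
            self.H = np.array([[1, 0, 0, 0],
                               [0, 1, 0, 0]], dtype=float)    # state -> measurement
            self.Q = np.eye(4) * 1e-2                          # process noise
            self.R = np.eye(2) * 1.0                           # measurement noise
            self.x = np.zeros((4, 1))                          # posterior state
            self.P = np.eye(4)                                 # posterior covariance

        def predict(self):
            self.x = self.A @ self.x                           # prior estimate (u = 0)
            self.P = self.A @ self.P @ self.A.T + self.Q
            return self.x[:2].ravel()

        def update(self, z_xy):
            z = np.asarray(z_xy, dtype=float).reshape(2, 1)    # detector output Z_k
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
            self.x = self.x + K @ (z - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P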
Further, after the video frames are extracted, the real-time weather conditions are judged and the video frames are adjusted accordingly for image enhancement. The system supports automatic focus, automatic exposure and automatic white balance so that it can adapt to changing weather, improving the accuracy and flexibility of the danger source identification module and the safety helmet identification module when predicting targets outdoors. When the system judges that the current environment is under-exposed or dark, it automatically performs tone mapping, adjusting image contrast and tone with gamma correction, histogram equalization and similar techniques. When the system judges that the current weather is foggy, it automatically calls a dark-channel defogging algorithm, uses the defogging model to process the video stream, and adds guided filtering to improve the defogging quality. This weather-adaptive image enhancement minimizes the influence of weather on system performance when the intelligent emergency system works outdoors and improves operating robustness. With the image enhancement module added, the overall operating efficiency of the system is improved by 20%, the recognition accuracy of each module is improved by 10.3% on average, the recall is relatively improved by 5%, and the recognition precision is relatively improved by 7%.
Further, after extracting the video frames, the method also comprises the following steps: extracting the target face image, authenticating identity from it, comparing the authenticated person against a database to judge whether the person is qualified for construction work, and issuing a sound and/or light warning if the person is not qualified;
and identifying whether a mask is worn in the target face image, and issuing a sound and/or light warning if it is not. Specifically:
For target face recognition and for checking the work qualifications of the workers detected in the video frames, a cascaded neural network algorithm is proposed for detecting and recognizing the faces of on-site workers; according to the recognition result, workers who do not meet the work qualifications are forbidden from working on the construction site. The face detection stage uses three cascaded neural networks: a region proposal network, a region refinement network and an output network. First, the region proposal network, with a 12 × 12 receptive field, screens positions in the video that may contain faces. The regions preliminarily judged to contain faces are then cropped, scaled and used as the input of the refinement network, whose overall receptive field is 24 × 24 and which corrects the regions where faces may exist. The regions predicted to contain faces by these two stages are cropped and scaled as the input of the output network, whose receptive field is 48 × 48 and which finally outputs the position of the face region and the coordinates of the facial key points. For the face recognition stage, a loss function with an added angular margin is proposed; the algorithm is easy to implement and very efficient. By adding an angular margin to the last prediction layer, the intra-class distance is decreased and the inter-class distance is increased during classification. This angular-margin loss effectively strengthens the feature embedding layer's ability to distinguish faces.
L_face = -(1/N) · Σ_(i=1..N) log( e^(s·cos(θ_yi + m)) / ( e^(s·cos(θ_yi + m)) + Σ_(j≠yi) e^(s·cos θ_j) ) )

The face recognition loss function with the added margin is shown above, where L_face is the loss value; m is the angular margin, a constant; N is the batch size during training; θ_yi is the angle between the input vector and the weight vector of row y_i of the final classification layer; θ_j is the angle between the input vector and the non-y_i row weight vectors of the final classification layer; and s is the product of the normalized input vector norm and the normalized weight vector norm.
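A PyTorch sketch of an additive angular-margin classification head matching the loss above (normalized features and weights, margin m added to the target angle, scaled by s); the values of s and m are assumptions:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AngularMarginHead(nn.Module):
        """Final classification layer with an additive angular margin:
        logits are s*cos(theta + m) for the true class, s*cos(theta) otherwise."""
        def __init__(self, embed_dim, num_ids, s=64.0, m=0.5):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(num_ids, embed_dim))
            self.s, self.m = s, m

        def forward(self, embeddings, labels):
            # Cosine similarity between normalized features and class weights
            cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
            theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
            target = F.one_hot(labels, cosine.size(1)).bool()
            logits = torch.where(target, torch.cos(theta + self.m), cosine)
            return F.cross_entropy(self.s * logits, labels)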
For face-mask recognition, the same multi-task cascaded neural network as in the face recognition module is used to recognize faces in the video. Each recognized face is cropped, scaled to a uniform 224 × 224 format and finally input into a classification model for binary detection of whether a mask is worn. The classification model uses an inverted residual block, which differs from the original residual network. In the original residual network, a residual block first compresses the number of channels of the input data with a 1 × 1 convolution kernel, then extracts local features with a 3 × 3 convolution kernel, and finally adjusts the channels with a 1 × 1 convolution kernel. In this module, the number of feature-data channels is first enlarged and then reduced: the inverted residual technique first expands the number of input channels with a 1 × 1 convolution kernel, then applies a 3 × 3 layer-wise (depthwise) convolution, and finally outputs the data with a 1 × 1 convolution kernel. Moreover, after the preceding convolutional layers have extracted image features, the extracted information is already high-order feature information, and compressing it with a squashing activation function loses part of it; therefore, in the inverted residual technique, the activation function is removed from the output, i.e. the output features are not compressed by an activation function. Using artificial intelligence, the system automatically identifies whether on-site workers are wearing masks, improving epidemic prevention and control efficiency and reducing labor costs. With the integrated network model, the inverted residual technique and the depthwise convolution technique, the recognition accuracy of the mask recognition module reaches 98.7%, approaching the Bayes error level.
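A PyTorch sketch of the inverted residual block described here (1 × 1 expansion, 3 × 3 depthwise convolution, 1 × 1 linear projection with no activation on the output); the expansion factor of 6 is an assumption:

    import torch
    import torch.nn as nn

    class InvertedResidual(nn.Module):
        """Expand channels with a 1x1 conv, filter with a 3x3 depthwise conv,
        then project back with a 1x1 conv that has no activation (linear output)."""
        def __init__(self, channels, expansion=6):
            super().__init__()
            hidden = channels * expansion
            self.block = nn.Sequential(
                nn.Conv2d(channels, hidden, 1, bias=False),
                nn.BatchNorm2d(hidden),
                nn.ReLU6(inplace=True),
                nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),
                nn.BatchNorm2d(hidden),
                nn.ReLU6(inplace=True),
                nn.Conv2d(hidden, channels, 1, bias=False),
                nn.BatchNorm2d(channels),    # no activation on the output
            )

        def forward(self, x):
            return x + self.block(x)         # residual connection (same shape)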
Further, after identifying potential construction-site danger sources in the video frames, the method further comprises: detecting whether person-ladder operations exist on the construction site; if so, identifying the workers within the ladder's spatial region in the video frame, cropping the video frame with the identified bounding boxes, performing pose detection on the cropped frame with a pose detection algorithm, performing behavior recognition on the cropped frame, and judging whether there is single-person ladder operation, and/or more than one worker on the ladder, and/or non-standard actions by the ladder workers; if so, issuing a sound and/or light warning.
For person-ladder operation scenes, the computer must further understand the relationship between people and ladders in the image and recognize the workers' behavior. A semantic-understanding algorithm combining pose detection and behavior recognition is proposed; considering the system's real-time efficiency, behavior recognition in this module uses a heuristic algorithm combining hand-designed rules with deep learning. With this algorithm, danger sources such as single-person ladder operation and non-standard ladder actions during ladder work can be stopped effectively in real time, safeguarding ladder operations. The pose detection algorithm is a top-down single-person pose detector: it first detects the region where the human body is located and then detects the body's pose points with the single-person pose detection algorithm.
heatmap_gt(x, y) = exp(-((x - x_i)² + (y - y_i)²) / (2σ²))
Loss = MSE(heatmap_pred, heatmap_gt)

The single-person pose detection algorithm is described by the formulas above, where heatmap_gt is the ground-truth heatmap of a sample generated from a Gaussian distribution, MSE is the mean squared error loss function, heatmap_pred is the pose heatmap predicted by the model, σ is the standard deviation of the Gaussian distribution, (x_i, y_i) are the coordinates of the i-th pose point, and (x, y) are the coordinates of points on the heatmap.
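A NumPy sketch of generating the Gaussian ground-truth heatmap for one pose point and computing the MSE against a predicted heatmap, following the formulas above; the heatmap size and σ are illustrative:

    import numpy as np

    def gaussian_heatmap(x_i, y_i, height, width, sigma=2.0):
        """Ground-truth heatmap: a 2D Gaussian centered on keypoint (x_i, y_i)."""
        xs = np.arange(width)[None, :]
        ys = np.arange(height)[:, None]
        return np.exp(-((xs - x_i) ** 2 + (ys - y_i) ** 2) / (2 * sigma ** 2))

    def heatmap_mse(pred, x_i, y_i, sigma=2.0):
        """Mean squared error between a predicted heatmap and the Gaussian target."""
        gt = gaussian_heatmap(x_i, y_i, pred.shape[0], pred.shape[1], sigma)
        return float(np.mean((pred - gt) ** 2))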
The above description is only a preferred embodiment of the intelligent identification system for a hazard source on an outdoor construction site, which is disclosed by the present invention, and is not intended to limit the protection scope of the embodiments of the present specification. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the embodiments of the present disclosure should be included in the protection scope of the embodiments of the present disclosure.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The embodiments in the present specification are all described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.

Claims (10)

1. An intelligent danger source identification system for an outdoor construction site, characterized by comprising: a video monitoring module, a comprehensive danger source identification module, an anti-intrusion detection module, a safety helmet identification module and an alarm module, the video monitoring module being connected to the comprehensive danger source identification module, the anti-intrusion detection module and the safety helmet identification module respectively, wherein
the video monitoring module is used for real-time monitoring through cameras;
the comprehensive danger source identification module is used for identifying general danger sources on the construction site;
the anti-intrusion detection module is used for detecting whether a person approaches a high-risk area;
the safety helmet identification module is used for identifying whether the target person whose face is detected is wearing a safety helmet;
and the alarm module, connected to the safety helmet identification module, the anti-intrusion detection module and the comprehensive danger source identification module respectively, is used for issuing sound and/or light warnings according to their analysis results.
2. The intelligent danger source identification system for an outdoor construction site according to claim 1, further comprising a face recognition module and a mask recognition module, wherein the video monitoring module is connected to the face recognition module, the face recognition module is connected to the mask recognition module, and the alarm module is connected to the face recognition module and the mask recognition module respectively, wherein
the face recognition module is used for authenticating the identity of the target face image and judging whether the person is qualified for construction work; if not, a sound and/or light warning is issued; if so, the target face image of the qualified person is sent to the mask recognition module;
and the mask recognition module is used for identifying whether a mask is worn in the target face image of the qualified person.
3. The intelligent danger source identification system for an outdoor construction site according to claim 1, further comprising a weather adaptation module for judging the real-time weather condition and adjusting the image accordingly for image enhancement.
4. The intelligent danger source identification system for an outdoor construction site according to claim 1, wherein the comprehensive danger source identification module comprises a conventional detection module, a ladder operation detection module and a depth analysis module, wherein:
the conventional detection module is used for detecting whether any constructor on the construction site is not wearing a reflective vest, whether warning signs have been placed, and whether construction tools have been insulated;
the ladder operation detection module is used for detecting whether ladder operation exists on the construction site and, if so, sending the detected video frame to the depth analysis module;
and the depth analysis module is used for receiving the video frame and judging whether single-person ladder operation exists in the video frame, and/or whether more than one constructor is on the ladder, and/or whether the actions of the personnel on the ladder are non-standard.
5. An intelligent danger source identification method for an outdoor construction site, characterized by comprising the following steps:
extracting video frames from the obtained outdoor construction site video stream;
analyzing the video frame by using a model: identifying comprehensive danger sources of the construction site in the video frame, detecting whether a person approaches a high-risk area in the video frame, and identifying whether the target person in the video frame is wearing a safety helmet;
and issuing a sound and/or light warning if a potential danger source is identified in the video frame, and/or a person approaches a high-risk area, and/or a safety helmet is not worn.
6. The intelligent danger source identification method for an outdoor construction site as claimed in claim 5, wherein, after the video frame is extracted, the method further comprises: judging the real-time weather condition and adjusting the video frame accordingly for image enhancement.
7. The intelligent danger source identification method for an outdoor construction site as claimed in claim 5, wherein, after the video frame is extracted, the method further comprises the following steps:
extracting a target face image, performing identity authentication on the target face image, checking the authenticated person against a database to judge whether that person is qualified for construction, and issuing a sound and/or light warning if not;
and identifying whether a mask is worn in the target face image, and issuing a sound and/or light warning if no mask is worn.
8. The intelligent danger source identification method for an outdoor construction site as claimed in claim 5, wherein, after the comprehensive danger sources of the construction site in the video frame are identified, the method further comprises the following steps:
detecting whether ladder operation exists on the construction site; if so, identifying the constructors within the spatial region of the ladder in the video frame, cropping the video frame using the identified bounding boxes, performing posture recognition on the cropped video frame with a posture detection algorithm, recognizing the behavior in the cropped video frame, and judging whether single-person ladder operation exists, and/or whether more than one constructor is on the ladder, and/or whether the actions of the personnel on the ladder are non-standard; if any of these conditions exists, issuing a sound and/or light warning.
9. The intelligent danger source identification method for an outdoor construction site as claimed in claim 5, wherein analyzing the video frame by using the model comprises the following steps:
taking the extracted video frames as a training data set;
extracting feature values of the video frames by using a feature extraction network that incorporates a gradient splitting technique;
fusing the feature values from different stages by using an adaptive feature fusion network to obtain fused feature values;
dividing the anchor boxes into positive and negative samples by using an adaptive training sample selection algorithm to obtain target values;
substituting the fused feature values and the target values into an attention loss function, optimizing the attention loss function with an optimizer, and training the model iteratively to obtain a prediction model;
and performing perception analysis on the real-time construction site video by using the prediction model.
10. The intelligent danger source identification method for an outdoor construction site as claimed in claim 9, wherein, before the extracted video frames are taken as the training data set, the method further comprises the following step:
judging the real-time weather condition and adjusting the image accordingly for image enhancement.
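The weather-adaptive image enhancement referred to in claims 3, 6 and 10 can be illustrated with a minimal sketch. The claims do not specify the enhancement operators, so the brightness-based weather judgement, the CLAHE step and the gamma correction below are illustrative assumptions only, written against OpenCV and NumPy.

```python
import cv2
import numpy as np

def weather_condition(frame) -> str:
    """Hypothetical stand-in for the real-time weather judgement step."""
    brightness = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean()
    return "low_light" if brightness < 60 else "normal"

def enhance_frame(frame):
    """Adjust the frame according to the (assumed) weather condition."""
    if weather_condition(frame) == "low_light":
        # Contrast-limited adaptive histogram equalisation on the luminance channel.
        lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        frame = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
        # Mild gamma lift for dark, overcast or rainy scenes (illustrative value).
        gamma = 1.4
        table = (((np.arange(256) / 255.0) ** (1.0 / gamma)) * 255).astype("uint8")
        frame = cv2.LUT(frame, table)
    return frame
```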
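The per-frame analysis loop of claims 1 and 5 could be organised roughly as follows. The detector and alarm objects are hypothetical placeholders for the trained models and the sound/light warning hardware, the frame-sampling stride is an assumed parameter, and enhance_frame refers to the sketch above.

```python
import cv2

def monitor(stream_url, detector, alarm, frame_stride=10):
    """Sample frames from the site video stream, analyse them and trigger warnings."""
    cap = cv2.VideoCapture(stream_url)
    idx = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        if idx % frame_stride:              # only analyse every Nth frame
            continue
        frame = enhance_frame(frame)        # weather-adaptive enhancement (claim 6)
        result = detector(frame)            # hazard / intrusion / helmet analysis (claim 5)
        if result.get("hazard") or result.get("intrusion") or not result.get("helmet", True):
            alarm.trigger(sound=True, light=True, evidence=frame)
    cap.release()
```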
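For the qualification and mask checks of claims 2 and 7, one possible shape is sketched below. embed_face, mask_classifier, the database layout and the matching threshold are all assumptions; the claims do not name the face model or the comparison rule.

```python
import numpy as np

def find_qualified_worker(face_image, embed_face, qualified_db, threshold=0.6):
    """Match a face embedding against workers registered as qualified for construction."""
    query = embed_face(face_image)
    for worker_id, reference in qualified_db.items():
        sim = float(np.dot(query, reference) /
                    (np.linalg.norm(query) * np.linalg.norm(reference) + 1e-8))
        if sim >= threshold:
            return worker_id
    return None

def check_worker(face_image, embed_face, qualified_db, mask_classifier, alarm):
    worker_id = find_qualified_worker(face_image, embed_face, qualified_db)
    if worker_id is None:
        alarm.trigger(sound=True, light=True, reason="no construction qualification")
        return
    if not mask_classifier(face_image):      # mask check only for qualified workers (claim 2)
        alarm.trigger(sound=True, light=True, reason="mask not worn")
```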
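The ladder-operation analysis of claims 4 and 8 can be sketched as below. person_boxes, pose_model and posture_rule stand in for the person detector, the posture detection algorithm and the rule set for non-standard actions; the (x1, y1, x2, y2) box format and the counting thresholds are an illustrative reading of the claim, not its exact logic.

```python
def check_ladder_operation(frame, ladder_box, person_boxes, pose_model, posture_rule):
    """Return a list of ladder-related violations found in one video frame."""
    lx1, ly1, lx2, ly2 = ladder_box
    # Constructors whose box centre lies inside the ladder's spatial region.
    workers = [b for b in person_boxes
               if lx1 <= (b[0] + b[2]) / 2 <= lx2 and ly1 <= (b[1] + b[3]) / 2 <= ly2]
    violations = []
    if len(workers) == 1:
        violations.append("single-person ladder operation")          # nobody steadying the ladder
    elif len(workers) > 2:
        violations.append("more than one constructor on the ladder")
    for x1, y1, x2, y2 in workers:
        crop = frame[int(y1):int(y2), int(x1):int(x2)]               # cut the frame with the bounding box
        keypoints = pose_model(crop)                                 # posture detection on the crop
        if not posture_rule(keypoints):                              # non-standard action check
            violations.append("non-standard action on the ladder")
    return violations
```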
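Finally, the training pipeline of claim 9 could be wired together as in the sketch below, written against PyTorch. backbone (the gradient-splitting feature extractor), neck (the adaptive feature fusion network), head and assigner (the adaptive training sample selection step) are hypothetical modules passed in by the caller, and interpreting the claim's "attention loss function" as a focal-style loss is an assumption rather than the claimed form.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Focal-style loss: down-weights easy anchors so training attends to hard ones."""
    prob = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = prob * targets + (1 - prob) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()

def train(backbone, neck, head, assigner, loader, epochs=50, lr=1e-3):
    """One possible training loop: extract features, fuse, assign samples, optimise."""
    params = list(backbone.parameters()) + list(neck.parameters()) + list(head.parameters())
    optimizer = torch.optim.AdamW(params, lr=lr)
    for _ in range(epochs):
        for frames, gt_boxes in loader:
            feats = backbone(frames)               # multi-stage feature values
            fused = neck(feats)                    # adaptive fusion across stages
            logits, anchors = head(fused)          # per-anchor predictions
            targets = assigner(anchors, gt_boxes)  # positive/negative sample labels
            loss = focal_loss(logits, targets.float())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return backbone, neck, head
```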
CN202011058805.3A 2020-09-30 2020-09-30 Intelligent danger source identification system and method for outdoor construction site Active CN112200043B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011058805.3A CN112200043B (en) 2020-09-30 2020-09-30 Intelligent danger source identification system and method for outdoor construction site

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011058805.3A CN112200043B (en) 2020-09-30 2020-09-30 Intelligent danger source identification system and method for outdoor construction site

Publications (2)

Publication Number Publication Date
CN112200043A true CN112200043A (en) 2021-01-08
CN112200043B CN112200043B (en) 2022-04-19

Family

ID=74008240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011058805.3A Active CN112200043B (en) 2020-09-30 2020-09-30 Intelligent danger source identification system and method for outdoor construction site

Country Status (1)

Country Link
CN (1) CN112200043B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574830A (en) * 2016-02-04 2016-05-11 沈阳工业大学 Low-quality image enhancement method under extreme weather conditions
CN106412501A (en) * 2016-09-20 2017-02-15 华中科技大学 Construction safety behavior intelligent monitoring system based on video and monitoring method thereof
CN107145851A (en) * 2017-04-28 2017-09-08 西南科技大学 Constructions work area dangerous matter sources intelligent identifying system
CN110119656A (en) * 2018-02-07 2019-08-13 中国石油化工股份有限公司 Intelligent monitor system and the scene monitoring method violating the regulations of operation field personnel violating the regulations
CN109919036A (en) * 2019-01-18 2019-06-21 南京理工大学 Worker's work posture classification method based on time-domain analysis depth network
CN110135461A (en) * 2019-04-18 2019-08-16 南开大学 The method of the emotional image retrieval of perceived depth metric learning is paid attention to based on layering
CN111083640A (en) * 2019-07-25 2020-04-28 中国石油天然气股份有限公司 Intelligent supervision method and system for construction site
CN111093054A (en) * 2019-07-25 2020-05-01 中国石油天然气股份有限公司 Intelligent supervision method and system for construction site
CN110602449A (en) * 2019-09-01 2019-12-20 天津大学 Intelligent construction safety monitoring system method in large scene based on vision
CN111144263A (en) * 2019-12-20 2020-05-12 山东大学 Construction worker high-fall accident early warning method and device
CN111401746A (en) * 2020-03-17 2020-07-10 中交第一航务工程局有限公司 Intelligent construction site informatization management system
CN111598843A (en) * 2020-04-24 2020-08-28 国电南瑞科技股份有限公司 Power transformer respirator target defect detection method based on deep learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHIEN-YAO WANG 等: "CSPNET: A NEW BACKBONE THAT CAN ENHANCE LEARNING CAPABILITY OF CNN", 《ARXIV:1911.11929V1》 *
HAIKUAN WANG 等: "A Real-Time Safety Helmet Wearing Detection Approach Based on CSYOLOv3", 《APPLIED SCIENCES》 *
王文正 et al.: "Intelligent video recognition system for abnormal personnel behavior in drilling operations", 《Safety Health & Environment》 *
谈小峰 et al.: "Table tennis ball recognition based on an improved YOLOv4 algorithm", 《Technology Innovation and Application》 *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112804494A (en) * 2021-01-13 2021-05-14 广州穗能通能源科技有限责任公司 Power construction site monitoring method and system and storage medium
CN112907807A (en) * 2021-01-20 2021-06-04 广东电网有限责任公司 Transformer substation safety management and control method, device and system
CN112906651A (en) * 2021-03-25 2021-06-04 中国联合网络通信集团有限公司 Target detection method and device
CN112906651B (en) * 2021-03-25 2023-07-11 中国联合网络通信集团有限公司 Target detection method and device
CN113128353A (en) * 2021-03-26 2021-07-16 安徽大学 Emotion sensing method and system for natural human-computer interaction
CN113128353B (en) * 2021-03-26 2023-10-24 安徽大学 Emotion perception method and system oriented to natural man-machine interaction
CN113283753A (en) * 2021-05-27 2021-08-20 中铁建工集团有限公司 Safety management system for personnel on construction site
CN113538847A (en) * 2021-07-13 2021-10-22 金华高等研究院 Artificial intelligence monitoring device that can automatically regulated
CN114285976A (en) * 2021-12-27 2022-04-05 深圳市海洋王铁路照明技术有限公司 File management method, device and equipment of camera shooting illumination equipment and storage medium
CN115249392A (en) * 2022-04-29 2022-10-28 中铁建(无锡)工程科技发展有限公司 Intelligent visitor system of wisdom building site
CN114936799A (en) * 2022-06-16 2022-08-23 黄冈强源电力设计有限公司 Risk identification method and system in cement fiberboard construction process
CN115150552A (en) * 2022-06-23 2022-10-04 中国华能集团清洁能源技术研究院有限公司 Constructor safety monitoring method, system and device based on deep learning self-adaption
CN115205929A (en) * 2022-06-23 2022-10-18 池州市安安新材科技有限公司 Authentication method and system for avoiding false control of electric spark cutting machine tool workbench
CN115240362A (en) * 2022-06-30 2022-10-25 江西大正智能信息技术有限公司 Monitoring alarm system and method of intelligent construction site management and control platform
CN115406488A (en) * 2022-09-13 2022-11-29 北京千尧新能源科技开发有限公司 Offshore operation platform, boarding corridor bridge safety early warning method and related equipment
CN115406488B (en) * 2022-09-13 2023-10-10 北京千尧新能源科技开发有限公司 Offshore operation platform, boarding corridor safety pre-warning method and related equipment
CN115376275A (en) * 2022-10-25 2022-11-22 山东工程职业技术大学 Construction safety warning method and system based on image processing
CN115376275B (en) * 2022-10-25 2023-02-10 山东工程职业技术大学 Construction safety warning method and system based on image processing
CN115731670A (en) * 2022-11-15 2023-03-03 广州一洲信息技术有限公司 Safety alarm method, device and monitoring system for construction site
CN116311361A (en) * 2023-03-02 2023-06-23 北京化工大学 Dangerous source indoor staff positioning method based on pixel-level labeling
CN116311361B (en) * 2023-03-02 2023-09-15 北京化工大学 Dangerous source indoor staff positioning method based on pixel-level labeling
CN116343450A (en) * 2023-03-14 2023-06-27 东莞市合诚建设有限公司 Engineering canopy frame use safety monitoring method, system, equipment and storage medium
CN116311088A (en) * 2023-05-24 2023-06-23 山东亿昌装配式建筑科技有限公司 Construction safety monitoring method based on construction site
CN117197726A (en) * 2023-11-07 2023-12-08 四川三思德科技有限公司 Important personnel accurate management and control system and method
CN117197726B (en) * 2023-11-07 2024-02-09 四川三思德科技有限公司 Important personnel accurate management and control system and method
CN117557108A (en) * 2024-01-10 2024-02-13 中国南方电网有限责任公司超高压输电公司电力科研院 Training method and device for intelligent identification model of power operation risk

Also Published As

Publication number Publication date
CN112200043B (en) 2022-04-19

Similar Documents

Publication Publication Date Title
CN112200043B (en) Intelligent danger source identification system and method for outdoor construction site
CN110119676B (en) Driver fatigue detection method based on neural network
CN111178183B (en) Face detection method and related device
CN105678213B (en) Dual-mode mask person event automatic detection method based on video feature statistics
TWI441096B (en) 2014-06-11 Motion detection method for complex scenes
Yimyam et al. Face detection criminals through CCTV cameras
CN111353338B (en) Energy efficiency improvement method based on business hall video monitoring
WO2020233000A1 (en) Facial recognition method and apparatus, and computer-readable storage medium
CN101923637A (en) Mobile terminal as well as human face detection method and device thereof
CN114998830A (en) Wearing detection method and system for safety helmet of transformer substation personnel
CN117576632B (en) Multi-mode AI large model-based power grid monitoring fire early warning system and method
CN113177439B (en) Pedestrian crossing road guardrail detection method
CN113052139A (en) Deep learning double-flow network-based climbing behavior detection method and system
CN117423157A (en) Mine abnormal video action understanding method combining migration learning and regional invasion
CN115862128A (en) Human body skeleton-based customer abnormal behavior identification method
CN107403192B (en) Multi-classifier-based rapid target detection method and system
CN114694090A (en) Campus abnormal behavior detection method based on improved PBAS algorithm and YOLOv5
CN115909144A (en) Method and system for detecting abnormity of surveillance video based on counterstudy
CN114997279A (en) Construction worker dangerous area intrusion detection method based on improved Yolov5 model
CN115631457A (en) Man-machine cooperation abnormity detection method and system in building construction monitoring video
JP3305551B2 (en) Specific symmetric object judgment method
CN112733722A (en) Gesture recognition method, device and system and computer readable storage medium
Zhou et al. Research and implementation of forest fire detection algorithm improvement
CN111582001A (en) Method and system for identifying suspicious people based on emotion perception
Subramanian et al. Fuzzy logic based content protection for image resizing by seam carving

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant