CN113052125A - Construction site violation image recognition and alarm method - Google Patents

Construction site violation image recognition and alarm method

Info

Publication number
CN113052125A
Authority
CN
China
Prior art keywords
alarm
equipment
construction site
primary
model
Prior art date
Legal status
Granted
Application number
CN202110384848.9A
Other languages
Chinese (zh)
Other versions
CN113052125B (en)
Inventor
李天翼
陈满意
衣丰超
宋云平
曹玲
刘帅
Current Assignee
Inner Mongolia Kedian Data Service Co ltd
Original Assignee
Inner Mongolia Kedian Data Service Co ltd
Priority date
Filing date
Publication date
Application filed by Inner Mongolia Kedian Data Service Co ltd
Priority to CN202110384848.9A
Publication of CN113052125A
Application granted
Publication of CN113052125B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 7/00: Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00
    • G08B 7/06: Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00 using electric transmission, e.g. involving audible and visible signalling through the use of sound and light sources

Abstract

The invention provides a construction site violation image recognition and alarm method and system. The method comprises the following steps: acquiring construction site video images as a data source through image acquisition equipment arranged on the construction site; detecting the data source with a primary model to obtain a first detection result; inputting the first detection result into a secondary detection network for secondary classification detection; generating alarm information metadata according to the detection result of the secondary detection network; and sending the alarm information metadata to the field alarm equipment. The method adopts a multi-level neural network recognition technology: two layers of YOLOv5 models are used for recognition, the target detection result of the first-level network is taken as the input of the second-level network, and the second-level network is optimized by reducing its input size and network width, which greatly improves the detection precision of small targets in large-size images.

Description

Construction site violation image recognition and alarm method
Technical Field
The invention relates to the technical field of image recognition, in particular to a construction site violation image recognition and alarm method.
Background
At present, video monitoring systems are characterized by manual operation and after-the-fact processing; that is, current operation and maintenance work is generally performed manually. Monitoring personnel watch each video channel manually in a central monitoring room. Because the number of monitoring screens is limited, one operator often monitors several cameras on a single screen at the same time, or picks cameras at random to watch, so some monitoring points are overlooked or ignored. In addition, human attention is limited: people tire easily and are distracted by other things. According to research, after watching the same video for 22 minutes a person misses 95% of the action changes in the picture, so the probability that manual monitoring misses important information is very high. Intelligent video analysis technology can free users from monotonous monitoring work, avoids the drop in attention caused by watching monitoring video for long periods, realizes round-the-clock 24-hour monitoring, and is receiving more and more attention from users.
With the continuous promotion of concepts such as the Internet of Things, edge computing and cloud computing, the development of deep learning technology, and the progress of supporting software and hardware, video analysis based on deep learning has gradually become practical, and video monitoring systems have evolved through digitalization and networking towards high definition and intelligence. An intelligent video monitoring system can automatically detect, analyze and identify abnormal situations in the monitored image without human intervention and raise a pre-alarm or alarm in time. The development prospects of this direction are widely recognized in the video monitoring industry, and the role and value it plays in security monitoring will become increasingly evident.
Disclosure of Invention
The invention aims to provide a construction site violation image recognition and alarm method that adopts a multi-level neural network recognition technology: two layers of YOLOv5 models are used for recognition, the target detection result of the first-level network is taken as the input of the second-level network, and the second-level network is optimized by reducing its input size and network width, which greatly improves the detection precision of small targets in large-size images.
The embodiment of the invention provides a construction site violation image identification and alarm method, which comprises the following steps:
acquiring a construction site video image as a data source through image acquisition equipment arranged on a construction site;
detecting a data source based on a primary model to obtain a first detection result;
inputting the first detection result into a secondary detection network for secondary classification detection;
generating alarm information metadata according to the detection result of the secondary detection network;
and sending the alarm information metadata to the field alarm equipment.
Preferably, the primary model is a YOLOV5 model; the secondary model comprises: YOLO (V3-V5), KNN, AlexNet, VGGNet, GoogleNet, ResNet, R-CNN, Fast R-CNN, SSD, FCN, UNet, SegNet, PSPNet;
the model training process of the primary model and the secondary model is as follows:
and acquiring an original video image of a construction site through image acquisition equipment to generate an original data set.
And performing primary image annotation on the original data set to form a primary annotation set.
Dividing a training set and a verification set according to a preset proportion for the primary label set, inputting the training set and the verification set into a YOLOV5 initial model for model training, and obtaining a trained primary model;
extracting the images of the primary label set according to primary classification to obtain a plurality of secondary original data sets;
labeling the secondary data sets to form a plurality of secondary labeling sets;
dividing the secondary label set into a plurality of training sets and verification sets according to a preset proportion, selecting proper networks according to different identification categories for model training, and obtaining a plurality of secondary models through model training
Wherein, the primary labeling content is one or a plurality of combinations of personnel, vehicles, meters and signboard.
Preferably, the image acquisition equipment comprises one or a combination of: a camera, a deployable surveillance ball camera and a network camera;
the field alarm equipment comprises one or a combination of: handheld devices, a large monitoring screen and acousto-optic alarm devices distributed on the site in an array.
Preferably, the generating of the alarm information metadata according to the detection result of the secondary detection network includes:
constructing a virtual space based on image acquisition equipment which is distributed and arranged on a construction site in an array manner;
determining a first position of the violation target in the virtual space based on the distance between the shooting end of the data source corresponding to the detection result and the violation target;
determining the alarm volume of the acousto-optic alarm equipment in the field alarm equipment based on the first position of the violation target in the virtual space and the second position of the acousto-optic alarm equipment in the field alarm equipment; the calculation formula is as follows:
[alarm-volume formula: image not reproduced]
wherein D_i is the alarm volume of the i-th acousto-optic alarm device; L_0 is a preset distance threshold; L_i is the distance between the first position of the violation target in the virtual space and the second position of the i-th field alarm device; R is the preset maximum volume;
determining the warning light intensity of the acousto-optic warning device in the field warning device based on the first position of the violation target in the virtual space and the second position of the acousto-optic warning device in the field warning device; the calculation formula is as follows:
[warning-light-intensity formula: image not reproduced]
wherein T_i is the warning light intensity of the i-th acousto-optic alarm device; H is the preset maximum light intensity.
Preferably, the sending of the alarm information metadata to the field alarm device includes:
acquiring position information of handheld equipment positioned on a construction site;
mapping the position information of the handheld device to a virtual space to obtain a third position of the handheld device;
matching the first position with each third position, and sending the alarm metadata to the handheld device when a match is found; the handheld device then raises an alarm;
wherein matching the first location with the third location comprises:
calculating the distance value between the first position and the third position according to the following formula:
A_j = sqrt( Σ_{k=1}^{n} (x_k - y_k)^2 )
wherein A_j is the distance between the third position of the j-th handheld device and the first position; x_k is the k-th dimension parameter value of the third position; y_k is the k-th dimension parameter value of the first position; and n is the dimension of the first position or the third position;
and when the minimum distance value is smaller than or equal to a preset threshold value, determining that the first position is matched with a third position corresponding to the minimum distance value.
The invention also provides a construction site violation image recognition and alarm system, which comprises:
the data acquisition module is used for acquiring a construction site video image as a data source through image acquisition equipment arranged on a construction site;
the primary detection module is used for detecting the data source based on the primary model to obtain a first detection result;
the secondary detection module is used for inputting the first detection result into a secondary detection network for secondary classification detection;
the alarm generating module is used for generating alarm information metadata according to the detection result of the secondary detection network;
and the sending module is used for sending the alarm information metadata to the field alarm equipment.
Preferably, the primary model is a YOLOV5 model; the secondary model comprises: YOLO (V3-V5), KNN, AlexNet, VGGNet, GoogleNet, ResNet, R-CNN, Fast R-CNN, SSD, FCN, UNet, SegNet, PSPNet;
the model training process of the primary model and the secondary model is as follows:
and acquiring an original video image of a construction site through image acquisition equipment to generate an original data set.
And performing primary image annotation on the original data set to form a primary annotation set.
Dividing a training set and a verification set according to a preset proportion for the primary label set, inputting the training set and the verification set into a YOLOV5 initial model for model training, and obtaining a trained primary model;
extracting the images of the primary label set according to primary classification to obtain a plurality of secondary original data sets;
labeling the secondary data sets to form a plurality of secondary labeling sets;
and dividing the secondary labeling set into a plurality of training sets and verification sets according to a preset proportion, selecting proper networks according to different identification categories for model training, and obtaining a plurality of secondary models through model training, wherein the primary labeling content is one or combination of a person, a vehicle, a meter and a signboard.
Preferably, the image acquisition equipment comprises one or a combination of: a camera, a deployable surveillance ball camera and a network camera;
the field alarm equipment comprises one or a combination of: handheld devices, a large monitoring screen and acousto-optic alarm devices distributed on the site in an array.
Preferably, the alarm generation module performs the following operations:
constructing a virtual space based on image acquisition equipment which is distributed and arranged on a construction site in an array manner;
determining a first position of the violation target in the virtual space based on the distance between the shooting end of the data source corresponding to the detection result and the violation target;
determining the alarm volume of the acousto-optic alarm equipment in the field alarm equipment based on the first position of the violation target in the virtual space and the second position of the acousto-optic alarm equipment in the field alarm equipment; the calculation formula is as follows:
[alarm-volume formula: image not reproduced]
wherein D_i is the alarm volume of the i-th acousto-optic alarm device; L_0 is a preset distance threshold; L_i is the distance between the first position of the violation target in the virtual space and the second position of the i-th field alarm device; R is the preset maximum volume;
determining the warning light intensity of the acousto-optic warning device in the field warning device based on the first position of the violation target in the virtual space and the second position of the acousto-optic warning device in the field warning device; the calculation formula is as follows:
[warning-light-intensity formula: image not reproduced]
wherein T_i is the warning light intensity of the i-th acousto-optic alarm device; H is the preset maximum light intensity.
Preferably, the sending module performs the following operations:
acquiring position information of handheld equipment positioned on a construction site;
mapping the position information of the handheld device to a virtual space to obtain a third position of the handheld device;
matching the first position with each third position, and sending the alarm metadata to the handheld device when a match is found; the handheld device then raises an alarm;
wherein matching the first location with the third location comprises:
calculating the distance value between the first position and the third position according to the following formula:
A_j = sqrt( Σ_{k=1}^{n} (x_k - y_k)^2 )
wherein A_j is the distance between the third position of the j-th handheld device and the first position; x_k is the k-th dimension parameter value of the third position; y_k is the k-th dimension parameter value of the first position; and n is the dimension of the first position or the third position;
and when the minimum distance value is smaller than or equal to a preset threshold value, determining that the first position is matched with a third position corresponding to the minimum distance value.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of a construction site violation image identification warning method in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a model training process for a primary model and a secondary model in an embodiment of the invention;
FIG. 3 is a schematic diagram of a model training process for a primary model and a secondary model in an embodiment of the invention;
FIG. 4 is a schematic diagram of a model training process of a primary model and a secondary model in an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
The embodiment of the invention provides a construction site violation image identification and alarm method, which comprises the following steps of:
step S1: acquiring a construction site video image as a data source through image acquisition equipment arranged on a construction site;
step S2: detecting a data source based on a primary model to obtain a first detection result;
step S3: inputting the first detection result into a secondary detection network for secondary classification detection;
step S4: generating alarm information metadata according to the detection result of the secondary detection network;
step S5: and sending the alarm information metadata to the field alarm equipment.
The working principle and the beneficial effects of the technical scheme are as follows:
Video monitoring equipment such as cameras, deployable surveillance ball cameras and network cameras is used to acquire construction site video image data as the data source; an edge computing device or an image recognition server acquires and decodes the video streams, which support H264 and H265 coding.
The edge computing device or image recognition server detects the images with the primary model and detects objects such as personnel, vehicles, meters and signboards.
Object tracking algorithms may optionally be applied; candidate algorithms include KLT, IOU and DCF.
The KLT tracker uses a CPU-based implementation of the Kanade-Lucas-Tomasi (KLT) tracker algorithm.
The IOU tracker uses the Intersection-over-Union (IOU) value between detector bounding boxes in two consecutive frames to associate detections between the frames or to assign a new ID.
DCF (Discriminative Correlation Filter) tracker.
The target regions detected by the primary model are cropped out and used as the data source of the secondary model, and are input into the secondary detection network for secondary classification detection; the detection contents include personnel identity verification, whether a safety helmet is worn, whether a person is smoking, whether work clothes are worn, signboard OCR (optical character recognition), meter recognition, and the like.
Alarm information metadata is generated according to the secondary network detection result. The alarm information is sent to a message middleware and published in a publish-subscribe mode; handheld devices, the service processing system, the monitoring large screen and the field acousto-optic control system subscribe to the alarm information and carry out subsequent processing.
Through the multi-stage neural network detection technology, the invention solves the poor detection precision and low accuracy of traditional single-stage image recognition networks on small targets. Alarm information is generated according to the violation type in the detection result and is sent to the handheld devices, the monitoring large screen, the service background and the acousto-optic devices, which alarm with text, voice, image, sound-and-light or combined alerts according to the device type, so as to improve the safety monitoring level of the construction site and reduce the probability of accidents.
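The two-stage detection flow described above can be sketched in Python as follows. This is only an illustrative sketch: primary_model and secondary_models are assumed callables standing in for the trained YOLOv5 networks, and the alarm metadata fields shown are assumptions rather than the exact fields of the invention.

```python
# Sketch of the two-stage detection pipeline (illustrative only).
# `primary_model` and `secondary_models` are assumed to be callables that
# wrap the trained networks; their interfaces are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Detection:
    label: str                      # e.g. "person", "vehicle", "meter", "signboard"
    box: Tuple[int, int, int, int]  # (x1, y1, x2, y2) in the original frame
    score: float

def two_stage_detect(frame,
                     primary_model: Callable[[object], List[Detection]],
                     secondary_models: Dict[str, Callable[[object], List[Detection]]]
                     ) -> List[dict]:
    """Run primary detection, crop each hit, and run the matching secondary model."""
    alarms = []
    for det in primary_model(frame):                      # first stage: large, easy targets
        x1, y1, x2, y2 = det.box
        crop = frame[y1:y2, x1:x2]                        # the crop becomes the 2nd-stage input
        second = secondary_models.get(det.label)
        if second is None:
            continue
        for violation in second(crop):                    # second stage: fine-grained check
            alarms.append({                               # alarm metadata fields are assumed
                "violation_type": violation.label,
                "confidence": violation.score,
                "primary_box": det.box,
            })
    return alarms
```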
In one embodiment, the primary model is the YOLOV5 model; the secondary model comprises: YOLO (V3-V5), KNN, AlexNet, VGGNet, GoogleNet, ResNet, R-CNN, Fast R-CNN, SSD, FCN, UNet, SegNet, PSPNet;
as shown in fig. 2 to 4, the model training process of the primary model and the secondary model is as follows:
and acquiring an original video image of a construction site through image acquisition equipment to generate an original data set.
And performing primary image annotation on the original data set to form a primary annotation set.
Dividing a training set and a verification set according to a preset proportion for the primary label set, inputting the training set and the verification set into a YOLOV5 initial model for model training, and obtaining a trained primary model;
extracting the images of the primary label set according to primary classification to obtain a plurality of secondary original data sets;
labeling the secondary data sets to form a plurality of secondary labeling sets;
and dividing the secondary labeling set into a plurality of training sets and verification sets according to a preset proportion, selecting proper networks according to different identification categories for model training, and obtaining a plurality of secondary models through model training, wherein the primary labeling content is one or combination of a person, a vehicle, a meter and a signboard.
The working principle and the beneficial effects of the technical scheme are as follows:
Original video images of the construction site are collected through image acquisition equipment such as cameras, deployable surveillance ball cameras and network cameras to generate an original data set.
Primary image annotation is performed on the original data set; the primary annotation contents are easily recognized large targets such as personnel, vehicles, meters and signboards, forming a primary annotation set.
The primary annotation set is divided into a training set and a verification set at a ratio of 8:2 and sent to YOLOv5 for model training, giving the primary target detection model.
The images of the primary annotation set are extracted according to the primary classification into a plurality of secondary original data sets, which are annotated to form a plurality of secondary annotation sets (the number depends on the primary classification).
The secondary annotation sets are divided into a plurality of training sets and verification sets (depending on the primary classification) at a ratio of 8:2, and a suitable network is selected for model training according to the recognition category; the candidate network models include but are not limited to YOLO (V3-V5), KNN, AlexNet, VGGNet, GoogleNet, ResNet, R-CNN, Fast R-CNN, SSD, FCN, UNet, SegNet and PSPNet. A plurality of secondary models are obtained through model training.
The primary model and the secondary models are deployed to the image recognition system for multi-stage real-time video analysis of the construction site.
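The 8:2 division of an annotation set into a training set and a verification set can be sketched as follows; the directory layout and file extension are assumptions for illustration only.

```python
# Sketch of the 8:2 train/verification split applied to an annotation set.
# The on-disk layout (one image file per annotated sample) is an assumption.
import random
from pathlib import Path

def split_dataset(image_dir: str, train_ratio: float = 0.8, seed: int = 0):
    """Return (train_files, val_files) split at the given ratio."""
    files = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(files)          # deterministic shuffle for reproducibility
    cut = int(len(files) * train_ratio)
    return files[:cut], files[cut:]

# Example: an 8:2 split of the primary annotation set (path is hypothetical).
# train_files, val_files = split_dataset("primary_annotation_set/images")
```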
Real-time video analysis is as follows:
the original video image of the construction site is collected through image collecting equipment such as a camera, a ball controller, a network camera and the like, and the image size is 1920x 1080.
And (3) sending the image to a primary network for target detection, converting the size of the input image to 640x640, keeping the aspect ratio, supplementing 0 to the vacant area, and detecting the personnel target.
And extracting the coordinate position of the person through the information of the person target bounding box identified by the primary network, and intercepting the image of the person target on the original image as the input of the secondary network.
And (4) converting the personnel target image into 416x416, keeping the aspect ratio, supplementing 0 to the vacant area, inputting the personnel target image into a secondary network for smoke target detection, and storing the result into matedata to return to the detection system.
The primary network uses YOLOv5 as the core target detection model, trained on the image data source to recognize easily recognized large targets such as people, vehicles and signboards.
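A rough Python sketch of the preprocessing described above (resize to 640x640 or 416x416 keeping the aspect ratio, pad the vacant area with zeros, and crop the person box as the second-stage input), using OpenCV and NumPy. The centred placement of the padding and the 3-channel frame are assumptions; the text only states that the vacant area is filled with zeros.

```python
# Letterbox resize (keep aspect ratio, pad vacant area with zeros) and person crop.
# Centred padding and a 3-channel BGR frame are assumptions.
import cv2
import numpy as np

def letterbox(image: np.ndarray, size: int) -> np.ndarray:
    h, w = image.shape[:2]
    scale = size / max(h, w)                                   # keep the aspect ratio
    resized = cv2.resize(image, (int(round(w * scale)), int(round(h * scale))))
    canvas = np.zeros((size, size, 3), dtype=image.dtype)      # zero-padded square canvas
    top = (size - resized.shape[0]) // 2
    left = (size - resized.shape[1]) // 2
    canvas[top:top + resized.shape[0], left:left + resized.shape[1]] = resized
    return canvas

def crop_person(image: np.ndarray, box) -> np.ndarray:
    """Crop a person bounding box (x1, y1, x2, y2) from the original frame."""
    x1, y1, x2, y2 = [int(v) for v in box]
    return image[y1:y2, x1:x2]

# frame = cv2.imread("frame_1920x1080.jpg")                    # hypothetical file name
# primary_input = letterbox(frame, 640)                        # input to the primary network
# person = crop_person(frame, (500, 200, 700, 800))
# secondary_input = letterbox(person, 416)                     # input to the secondary network
```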
The gradient descent algorithm and the main hyper-parameters used for model training are as follows:
the initial learning rate α, the parameter to be optimized w, the first-order momentum control parameter β1, the second-order momentum control parameter β2, and the objective function f(w).
Then, iterative optimization is started. At each epoch t:
calculating the gradient of the objective function with respect to the current parameter:
g_t = ∇f(w_t)
calculating first-order momentum and second-order momentum according to historical gradients:
m_t = φ(g_1, g_2, …, g_t); V_t = ψ(g_1, g_2, …, g_t)
this example uses the Adam algorithm, first order momentum:
m_t = β1·m_(t-1) + (1 - β1)·g_t
second-order momentum:
V_t = β2·V_(t-1) + (1 - β2)·g_t^2
calculating the falling gradient at the current moment:
η_t = α·m_t / √V_t
updating according to the gradient of descent:
w_(t+1) = w_t - η_t
default values of the hyper-parameters are as follows:
α=0.001
β1=0.937
β2=0.8
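With the defaults above, a single Adam update step can be written compactly as in the following sketch; the bias-correction terms are omitted as in the formulas above, and the small eps added to the denominator is a standard numerical safeguard assumed here.

```python
# One Adam update step for a single scalar parameter, following the formulas
# above (bias correction omitted; eps added for numerical stability).
import math

def adam_step(w, grad, m, v, alpha=0.001, beta1=0.937, beta2=0.8, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad            # first-order momentum m_t
    v = beta2 * v + (1 - beta2) * grad ** 2       # second-order momentum V_t
    eta = alpha * m / (math.sqrt(v) + eps)        # descent step eta_t
    w = w - eta                                   # parameter update w_(t+1)
    return w, m, v

# Example: minimize f(w) = (w - 3)^2 starting from w = 0.
w, m, v = 0.0, 0.0, 0.0
for _ in range(200):
    grad = 2 * (w - 3)
    w, m, v = adam_step(w, grad, m, v)
```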
The primary model input size is 640x640;
the secondary model input size is 416x416;
the depth selectable value of the primary model is as follows: 0.33, 0.67, 1.0, 1.33, in this case 0.33;
optional values for the primary model width: 0.50, 0.75, 1.0, 1.25, in this case 0.50;
the depth selectable value of the secondary model is as follows: 0.33, 0.67, 1.0, 1.33, in this case 0.33;
optional values for the secondary model width: 0.50, 0.75, 1.0, 1.25, in this case 0.50;
the evaluation indexes used for model training comprise:
true Positives (TP), the number of instances that are correctly classified as positive, i.e., the number of instances that are actually positive and classified as positive by the classifier (number of samples);
false Positives (FP), the number of instances that are incorrectly divided into positive instances, i.e., instances that are actually negative but are divided into positive instances by the classifier;
false Negatives (FN), the number of instances that are wrongly divided into negative cases, i.e. the number of instances that are actually positive cases but are divided into negative cases by the classifier;
True Negatives (TN): the number of instances that are correctly classified as negative, i.e., the number of instances that are actually negative and are classified as negative by the classifier.
Precision (P): the proportion of predicted positive samples that are actually positive, calculated as:
P = TP / (TP + FP)
Recall (R): the proportion of actual positive samples that are correctly predicted, calculated as:
R = TP / (TP + FN)
AP: AP means Average Precision.
mAP: the mAP (Mean Average Precision) is the average of the AP values over the individual classes of the verification set and is used as the index for measuring detection precision in object detection:
mAP = ( Σ_i AP_i ) / N, where N is the number of classes averaged.
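A small sketch of how these evaluation indexes are computed from the counts defined above; the per-class AP values fed into the mAP average are assumed to be given.

```python
# Precision, recall, and mAP from raw counts / per-class AP values.
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp) if tp + fp else 0.0     # P = TP / (TP + FP)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn) if tp + fn else 0.0     # R = TP / (TP + FN)

def mean_average_precision(ap_per_class) -> float:
    ap_per_class = list(ap_per_class)
    return sum(ap_per_class) / len(ap_per_class)  # mAP = mean of per-class AP

# Example values (illustrative only):
# precision(90, 10) -> 0.9, recall(90, 30) -> 0.75
# mean_average_precision([0.8, 0.6, 0.7]) -> 0.7
```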
The loss function used for model training is:
[formula images for the bounding-box regression loss l_box, the objectness confidence loss l_obj and the classification loss l_cls are not reproduced]
loss = l_box + l_obj + l_cls
wherein:
x, y represent the center coordinates.
w, h represent width and height.
S represents the grid size; S^2 takes the values 13x13, 26x26 and 52x52.
B: the boxes predicted per grid cell.
1_{i,j}^{obj}: equal to 1 if the box at (i, j) contains a target, otherwise 0.
1_{i,j}^{noobj}: equal to 1 if the box at (i, j) contains no target, otherwise 0.
The specific calculation formula of BCE (binary cross-entropy) is as follows:
BCE(p, y) = -[ y·log(p) + (1 - y)·log(1 - p) ]
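For reference, a direct implementation of the binary cross-entropy term; clipping the prediction away from 0 and 1 is a standard numerical safeguard assumed here, not something stated above.

```python
# Binary cross-entropy for a single prediction/label pair.
import math

def bce(pred: float, label: float, eps: float = 1e-7) -> float:
    p = min(max(pred, eps), 1.0 - eps)            # clip to avoid log(0)
    return -(label * math.log(p) + (1.0 - label) * math.log(1.0 - p))

# Example: a confident correct prediction gives a small loss,
# a confident wrong prediction gives a large one.
# bce(0.9, 1.0) -> ~0.105,  bce(0.9, 0.0) -> ~2.303
```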
in order to ensure the fluency of real-time video analysis, frame skipping processing is carried out on multi-channel video analysis processing, and the specific algorithm is as follows:
P=m·n
where P means that each video source is detected once every P frames;
m is the basic frame-detection threshold, with value range m ≥ 1;
n is the number of video sources, with value range n ≥ 0.
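A minimal sketch of the frame-skipping rule P = m·n applied to a per-channel frame counter; how skipped frames are handled elsewhere (for example, for display) is outside this sketch.

```python
# Frame skipping for multi-channel analysis: each channel is analysed
# once every P = m * n frames, where m >= 1 and n is the number of sources.
def should_detect(frame_index: int, m: int, n: int) -> bool:
    p = m * n                     # detection period P
    return p > 0 and frame_index % p == 0

# Example with m = 2 and n = 4 video sources: detect every 8th frame.
# [i for i in range(20) if should_detect(i, 2, 4)] -> [0, 8, 16]
```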
In one embodiment, the image acquisition equipment comprises one or a combination of: a camera, a deployable surveillance ball camera and a network camera;
the field alarm equipment comprises one or a combination of: handheld devices, a large monitoring screen and acousto-optic alarm devices distributed on the site in an array.
The working principle and the beneficial effects of the technical scheme are as follows:
After receiving the service alarm information, the handheld device pops up an alarm window on its interface in real time, prompts the field operator with vibration and voice reminders, and displays the current alarm image; the operator can tap the alarm window to view the detailed content and the video monitoring information, including the real-time monitoring stream and the alarm video within a configurable time range (1 second to 15 minutes) before and after the time of occurrence.
After the monitoring large screen receives the service alarm information, a text alarm window or an image/video alarm window pops up on the interface in real time and the video monitoring of the place where the alarm was generated is displayed in real time, so that service personnel can learn the on-site construction situation there in time; the large-screen operator can open a rectification work order in real time and send it to the manager of that place to request rectification.
The monitoring large-screen operator can also start the camera voice call function to talk with the construction site in real time, learn the on-site situation and urge rectification.
After receiving the service alarm information, the acousto-optic control system sends alarm signals to the site construction personnel through sound and various lights according to the alarm type and alarm settings.
In one embodiment, generating the alarm information metadata according to the detection result of the secondary detection network comprises:
constructing a virtual space based on image acquisition equipment which is distributed and arranged on a construction site in an array manner;
determining a first position of the violation target in the virtual space based on the distance between the shooting end of the data source corresponding to the detection result and the violation target;
determining the alarm volume of the acousto-optic alarm equipment in the field alarm equipment based on the first position of the violation target in the virtual space and the second position of the acousto-optic alarm equipment in the field alarm equipment; the calculation formula is as follows:
[alarm-volume formula: image not reproduced]
wherein D_i is the alarm volume of the i-th acousto-optic alarm device; L_0 is a preset distance threshold; L_i is the distance between the first position of the violation target in the virtual space and the second position of the i-th field alarm device; R is the preset maximum volume;
determining the warning light intensity of the acousto-optic warning device in the field warning device based on the first position of the violation target in the virtual space and the second position of the acousto-optic warning device in the field warning device; the calculation formula is as follows:
[warning-light-intensity formula: image not reproduced]
wherein T_i is the warning light intensity of the i-th acousto-optic alarm device; H is the preset maximum light intensity.
The working principle and the beneficial effects of the technical scheme are as follows:
Field video acquisition devices and acousto-optic alarm devices are distributed across the construction site. The volume and light intensity of the acousto-optic alarm devices near the position of the violation target are set to the maximum, while for devices farther away the volume and light intensity depend on the distance to the violation target. With the alarm volume and light set in this way, law enforcement personnel and field workers can locate the violation target from the sound and light intensity of the field alarms. In addition, after the violation target moves, the volume and light intensity of each acousto-optic alarm device are adjusted in real time according to its distance from the violation target, so that the acousto-optic alarms trace and indicate the position of the violation target.
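The exact volume and light-intensity formulas are given only as formula images above, so the sketch below assumes one plausible behaviour consistent with the stated principle: maximum output within the distance threshold L_0 of the violation target and a falloff proportional to L_0/L_i beyond it.

```python
# Illustrative volume/light-intensity assignment per acousto-optic alarm device.
# The invention's exact formulas are in images not reproduced above; the
# behaviour sketched here (maximum R or H within L_0, assumed L_0/L_i falloff
# beyond it) only mirrors the stated principle.
import math

def device_distance(first_pos, second_pos) -> float:
    return math.dist(first_pos, second_pos)        # Euclidean distance in virtual space

def alarm_volume(first_pos, second_pos, l0: float, r_max: float) -> float:
    li = device_distance(first_pos, second_pos)
    if li <= l0:
        return r_max                               # nearby devices alarm at full volume
    return max(0.0, r_max * l0 / li)               # assumed falloff with distance

def alarm_light(first_pos, second_pos, l0: float, h_max: float) -> float:
    li = device_distance(first_pos, second_pos)
    if li <= l0:
        return h_max
    return max(0.0, h_max * l0 / li)

# Example: violation at (10, 5), device at (40, 45), threshold 20 m, max volume 100.
# alarm_volume((10, 5), (40, 45), 20, 100) -> 40.0
```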
In one embodiment, sending alarm information metadata to a field alarm device includes:
acquiring position information of handheld equipment positioned on a construction site;
mapping the position information of the handheld device to a virtual space to obtain a third position of the handheld device;
matching the first position with each third position, and sending the alarm metadata to the handheld device when a match is found; the handheld device then raises an alarm;
wherein matching the first location with the third location comprises:
calculating the distance value between the first position and the third position according to the following formula:
A_j = sqrt( Σ_{k=1}^{n} (x_k - y_k)^2 )
wherein A_j is the distance between the third position of the j-th handheld device and the first position; x_k is the k-th dimension parameter value of the third position; y_k is the k-th dimension parameter value of the first position; and n is the dimension of the first position or the third position;
and when the minimum distance value is smaller than or equal to a preset threshold value, determining that the first position is matched with a third position corresponding to the minimum distance value.
The working principle and the beneficial effects of the technical scheme are as follows:
The position of each field handheld device is matched with the position of the violation target to determine the violating person, so that the violation alarm information is delivered precisely to that person and the violation can be stopped in time.
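The matching step can be sketched as follows: compute the distance A_j from each handheld device's third position to the violation target's first position, and deliver the alarm to the closest device if that distance is within the preset threshold.

```python
# Match the violation target's first position against handheld-device positions
# and deliver the alarm metadata to the closest device within the threshold.
import math
from typing import Dict, Optional, Sequence

def closest_handheld(first_pos: Sequence[float],
                     third_positions: Dict[str, Sequence[float]],
                     threshold: float) -> Optional[str]:
    best_id, best_dist = None, float("inf")
    for device_id, pos in third_positions.items():
        a_j = math.sqrt(sum((x - y) ** 2 for x, y in zip(pos, first_pos)))  # A_j
        if a_j < best_dist:
            best_id, best_dist = device_id, a_j
    return best_id if best_dist <= threshold else None   # no match: no targeted alarm

# Example (device IDs are hypothetical):
# closest_handheld((3.0, 4.0, 0.0),
#                  {"dev-1": (3.5, 4.2, 0.0), "dev-2": (30.0, 2.0, 0.0)},
#                  threshold=2.0) -> "dev-1"
```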
In one embodiment, when the detection result obtained after image detection by the secondary detection network in step S3 corresponds to only one field image acquisition device, i.e., only one image acquisition device on the construction site has captured the violation image, that device is taken as the first image acquisition device, and the position area in the virtual space where the violation target may exist is determined based on the first relative position relation between the violation target and the first image acquisition device;
calling at least one second image acquisition device around to shoot the position area; carrying out image analysis on the shot picture, and determining a second relative position relation of the violation target relative to the second image acquisition equipment;
and determining the position of the violation target in the virtual space based on the first relative position relationship, the second relative position relationship, the setting position of the first image acquisition device and the setting position of the second image acquisition device.
The working principle and the beneficial effects of the technical scheme are as follows:
The violation target is photographed by a plurality of image acquisition devices to achieve accurate positioning of the violation target, which provides an accurate position basis for precisely delivering the alarm to the handheld device. The first relative position relation includes distance, angle and the like; the second relative position relation likewise includes distance, angle and the like.
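The geometry used to combine the two relative position relations is not fixed above; the sketch below assumes each relation is reduced to a bearing angle in the ground plane and intersects the two rays from the known camera positions, which is one way to realize the described positioning.

```python
# Locate a target from two cameras with known positions and measured bearing
# angles (one possible realisation; the geometry above is not fixed).
import math
from typing import Optional, Tuple

def intersect_bearings(cam1: Tuple[float, float], angle1: float,
                       cam2: Tuple[float, float], angle2: float
                       ) -> Optional[Tuple[float, float]]:
    """Intersect two ground-plane rays; angles are in radians from the x-axis."""
    d1 = (math.cos(angle1), math.sin(angle1))
    d2 = (math.cos(angle2), math.sin(angle2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]          # cross product of the directions
    if abs(denom) < 1e-9:
        return None                                # parallel rays: no unique fix
    dx, dy = cam2[0] - cam1[0], cam2[1] - cam1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom          # distance along the first ray
    return cam1[0] + t * d1[0], cam1[1] + t * d1[1]

# Example: cameras at (0, 0) and (10, 0), target seen at 45 and 135 degrees:
# intersect_bearings((0, 0), math.radians(45), (10, 0), math.radians(135)) -> (5.0, 5.0)
```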
In one embodiment, based on the situation of site construction, the danger coefficients of all the position points are determined for the virtual space;
determining the danger coefficient X at the position of the violation target, and setting the violation judgment threshold t_0 for the violation target accordingly; the setting formula is as follows:
[threshold-setting formula: image not reproduced]
wherein t is a preset initial judgment threshold value;
In step S4, the violation judgment uses the judgment threshold t_0 determined for the position of the violation target: after a violation is identified in the image, the violation target is tracked, and the alarm information metadata is generated only when the duration of the violation exceeds the judgment threshold. This effectively avoids false alarms. For example, in an area with a low danger coefficient, a worker who finds that the safety helmet is not worn securely may take it off, tighten the strap and put it back on; this situation does not require an alarm. In an area with a high danger coefficient, however, such a delay is not allowed and an alarm must be raised. This intelligent distinction is achieved mainly by adjusting the threshold according to the danger coefficient, which improves the intelligence of the system.
Determining the danger coefficient of each position point in the virtual space based on the condition of site construction; the method comprises the following steps:
The danger coefficient is determined according to the distance from the construction area: inside the construction area the danger coefficient is 100; in the area within 10 meters of the boundary of the construction area it is 90; from 10 to 20 meters it is 80; from 20 to 30 meters it is 50; and so on. When there are several construction areas on the construction site, the danger coefficient of a position is judged according to its distance from the nearest construction area.
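A sketch of the danger-coefficient lookup and the duration-based violation judgment; the value beyond 30 meters and the exact shape of the threshold-setting formula (whose image is not reproduced above) are assumptions.

```python
# Danger coefficient from the distance to the nearest construction area, and an
# assumed threshold adjustment (the exact t_0 formula is in an image that is
# not reproduced; scaling the threshold down as X grows is only an assumption).
def danger_coefficient(distance_to_nearest_area: float) -> int:
    if distance_to_nearest_area <= 0:
        return 100          # inside a construction area
    if distance_to_nearest_area <= 10:
        return 90
    if distance_to_nearest_area <= 20:
        return 80
    if distance_to_nearest_area <= 30:
        return 50
    return 20               # far from any construction area (assumed tail value)

def violation_threshold(t_initial: float, x: int) -> float:
    return t_initial * (100 - x) / 100.0 + 1.0   # assumed shape: shrinks as X grows

def should_alarm(violation_duration: float, t_initial: float, x: int) -> bool:
    return violation_duration > violation_threshold(t_initial, x)

# Example: a 10-second helmet violation triggers an alarm at X = 100 but not at X = 50.
# should_alarm(10, 30, 100) -> True ; should_alarm(10, 30, 50) -> False
```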
The invention also provides a construction site violation image recognition and alarm system, which comprises:
the data acquisition module is used for acquiring a construction site video image as a data source through image acquisition equipment arranged on a construction site;
the primary detection module is used for detecting the data source based on the primary model to obtain a first detection result;
the secondary detection module is used for inputting the first detection result into a secondary detection network for secondary classification detection;
the alarm generating module is used for generating alarm information metadata according to the detection result of the secondary detection network;
and the sending module is used for sending the alarm information metadata to the field alarm equipment.
The working principle and the beneficial effects of the technical scheme are as follows:
Video monitoring equipment such as cameras, deployable surveillance ball cameras and network cameras is used to acquire construction site video image data as the data source; an edge computing device or an image recognition server acquires and decodes the video streams, which support H264 and H265 coding.
The edge computing device or image recognition server detects the images with the primary model and detects objects such as personnel, vehicles, meters and signboards.
Object tracking algorithms may optionally be applied; candidate algorithms include KLT, IOU and DCF.
The KLT tracker uses a CPU-based implementation of the Kanade-Lucas-Tomasi (KLT) tracker algorithm.
The IOU tracker uses the Intersection-over-Union (IOU) value between detector bounding boxes in two consecutive frames to associate detections between the frames or to assign a new ID.
DCF (Discriminative Correlation Filter) tracker.
The target regions detected by the primary model are cropped out and used as the data source of the secondary model, and are input into the secondary detection network for secondary classification detection; the detection contents include personnel identity verification, whether a safety helmet is worn, whether a person is smoking, whether work clothes are worn, signboard OCR (optical character recognition), meter recognition, and the like.
Alarm information metadata is generated according to the secondary network detection result. The alarm information is sent to a message middleware and published in a publish-subscribe mode; handheld devices, the service processing system, the monitoring large screen and the field acousto-optic control system subscribe to the alarm information and carry out subsequent processing.
Through the multi-stage neural network detection technology, the invention solves the poor detection precision and low accuracy of traditional single-stage image recognition networks on small targets. Alarm information is generated according to the violation type in the detection result and is sent to the handheld devices, the monitoring large screen, the service background and the acousto-optic devices, which alarm with text, voice, image, sound-and-light or combined alerts according to the device type, so as to improve the safety monitoring level of the construction site and reduce the probability of accidents.
In one embodiment, the primary model is the YOLOV5 model; the secondary model comprises: YOLO (V3-V5), KNN, AlexNet, VGGNet, GoogleNet, ResNet, R-CNN, Fast R-CNN, SSD, FCN, UNet, SegNet, PSPNet;
the model training process of the primary model and the secondary model is as follows:
and acquiring an original video image of a construction site through image acquisition equipment to generate an original data set.
And performing primary image annotation on the original data set to form a primary annotation set.
Dividing a training set and a verification set according to a preset proportion for the primary label set, inputting the training set and the verification set into a YOLOV5 initial model for model training, and obtaining a trained primary model;
extracting the images of the primary label set according to primary classification to obtain a plurality of secondary original data sets;
labeling the secondary data sets to form a plurality of secondary labeling sets;
and dividing the secondary labeling set into a plurality of training sets and verification sets according to a preset proportion, selecting proper networks according to different identification categories for model training, and obtaining a plurality of secondary models through model training, wherein the primary labeling content is one or combination of a person, a vehicle, a meter and a signboard.
In one embodiment, the image acquisition equipment comprises one or a combination of: a camera, a deployable surveillance ball camera and a network camera;
the field alarm equipment comprises one or a combination of: handheld devices, a large monitoring screen and acousto-optic alarm devices distributed on the site in an array.
In one embodiment, the alert generation module performs the following operations:
constructing a virtual space based on image acquisition equipment which is distributed and arranged on a construction site in an array manner;
determining a first position of the violation target in the virtual space based on the distance between the shooting end of the data source corresponding to the detection result and the violation target;
determining the alarm volume of the acousto-optic alarm equipment in the field alarm equipment based on the first position of the violation target in the virtual space and the second position of the acousto-optic alarm equipment in the field alarm equipment; the calculation formula is as follows:
[alarm-volume formula: image not reproduced]
wherein D_i is the alarm volume of the i-th acousto-optic alarm device; L_0 is a preset distance threshold; L_i is the distance between the first position of the violation target in the virtual space and the second position of the i-th field alarm device; R is the preset maximum volume;
determining the warning light intensity of the acousto-optic warning device in the field warning device based on the first position of the violation target in the virtual space and the second position of the acousto-optic warning device in the field warning device; the calculation formula is as follows:
[warning-light-intensity formula: image not reproduced]
wherein T_i is the warning light intensity of the i-th acousto-optic alarm device; H is the preset maximum light intensity.
The working principle and the beneficial effects of the technical scheme are as follows:
Field video acquisition devices and acousto-optic alarm devices are distributed across the construction site. The volume and light intensity of the acousto-optic alarm devices near the position of the violation target are set to the maximum, while for devices farther away the volume and light intensity depend on the distance to the violation target. With the alarm volume and light set in this way, law enforcement personnel and field workers can locate the violation target from the sound and light intensity of the field alarms. In addition, after the violation target moves, the volume and light intensity of each acousto-optic alarm device are adjusted in real time according to its distance from the violation target, so that the acousto-optic alarms trace and indicate the position of the violation target.
In one embodiment, the sending module performs the following operations:
acquiring position information of handheld equipment positioned on a construction site;
mapping the position information of the handheld device to a virtual space to obtain a third position of the handheld device;
matching the first position with each third position, and sending the alarm metadata to the handheld device when a match is found; the handheld device then raises an alarm;
wherein matching the first location with the third location comprises:
calculating the distance value between the first position and the third position according to the following formula:
A_j = sqrt( Σ_{k=1}^{n} (x_k - y_k)^2 )
wherein A_j is the distance between the third position of the j-th handheld device and the first position; x_k is the k-th dimension parameter value of the third position; y_k is the k-th dimension parameter value of the first position; and n is the dimension of the first position or the third position;
and when the minimum distance value is smaller than or equal to a preset threshold value, determining that the first position is matched with a third position corresponding to the minimum distance value.
The working principle and the beneficial effects of the technical scheme are as follows:
The position of each field handheld device is matched with the position of the violation target to determine the violating person, so that the violation alarm information is delivered precisely to that person and the violation can be stopped in time.
In one embodiment, the positioning module is configured to determine whether the detection result obtained after image detection by the secondary detection network corresponds to only one field image acquisition device; when only one image acquisition device on the construction site has captured the violation image, that device is taken as the first image acquisition device, and the position area in the virtual space where the violation target may exist is determined based on the first relative position relation between the violation target and the first image acquisition device;
calling at least one second image acquisition device around to shoot the position area; carrying out image analysis on the shot picture, and determining a second relative position relation of the violation target relative to the second image acquisition equipment;
and determining the position of the violation target in the virtual space based on the first relative position relationship, the second relative position relationship, the setting position of the first image acquisition device and the setting position of the second image acquisition device.
The working principle and the beneficial effects of the technical scheme are as follows:
The violation target is photographed by a plurality of image acquisition devices to achieve accurate positioning of the violation target, which provides an accurate position basis for precisely delivering the alarm to the handheld device. The first relative position relation includes distance, angle and the like; the second relative position relation likewise includes distance, angle and the like.
In one embodiment, the threshold adjustment module is configured to determine a risk coefficient of each location point in the virtual space based on a situation of on-site construction;
determining the danger coefficient X at the position of the violation target, and setting the violation judgment threshold t_0 for the violation target accordingly; the setting formula is as follows:
[threshold-setting formula: image not reproduced]
wherein t is a preset initial judgment threshold value;
In step S4, the violation judgment uses the judgment threshold t_0 determined for the position of the violation target: after a violation is identified in the image, the violation target is tracked, and the alarm information metadata is generated only when the duration of the violation exceeds the judgment threshold. This effectively avoids false alarms. For example, in an area with a low danger coefficient, a worker who finds that the safety helmet is not worn securely may take it off, tighten the strap and put it back on; this situation does not require an alarm. In an area with a high danger coefficient, however, such a delay is not allowed and an alarm must be raised. This intelligent distinction is achieved mainly by adjusting the threshold according to the danger coefficient, which improves the intelligence of the system.
Determining the danger coefficient of each position point in the virtual space based on the condition of site construction; the method comprises the following steps:
The danger coefficient is determined according to the distance from the construction area: inside the construction area the danger coefficient is 100; in the area within 10 meters of the boundary of the construction area it is 90; from 10 to 20 meters it is 80; from 20 to 30 meters it is 50; and so on. When there are several construction areas on the construction site, the danger coefficient of a position is judged according to its distance from the nearest construction area.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A construction site violation image recognition alarm method is characterized by comprising the following steps:
acquiring a construction site video image as a data source through image acquisition equipment arranged on a construction site;
detecting the data source based on a primary model to obtain a first detection result;
inputting the first detection result into a secondary detection network for secondary classification detection;
generating alarm information metadata according to the detection result of the secondary detection network;
and sending the alarm information metadata to a field alarm device.
2. The construction site violation image recognition alarm method of claim 1 wherein said primary model is the YOLOV5 model; the secondary model comprises: YOLO (V3-V5), KNN, AlexNet, VGGNet, GoogleNet, ResNet, R-CNN, Fast R-CNN, SSD, FCN, UNet, SegNet, PSPNet;
the model training process of the primary model and the secondary model is as follows:
acquiring an original video image of a construction site through image acquisition equipment to generate an original data set;
performing primary image annotation on the original data set to form a primary annotation set;
dividing a training set and a verification set of the primary label set according to a preset proportion, inputting the training set and the verification set into a YOLOV5 initial model for model training, and obtaining the trained primary model;
extracting the images of the primary label set according to primary classification to obtain a plurality of secondary original data sets;
labeling the secondary data sets to form a plurality of secondary labeling sets;
dividing the secondary label set into a plurality of training sets and verification sets according to a preset proportion, selecting suitable networks for model training according to the different recognition categories, and obtaining a plurality of secondary models through model training;
wherein the primary labeling content is one or a combination of personnel, vehicles, meters and signboards.
3. The construction site violation image recognition alarm method of claim 1, wherein said image acquisition equipment comprises one or a combination of: a camera, a deployable surveillance ball camera and a network camera;
the field alarm equipment comprises one or a combination of: handheld devices, a large monitoring screen and acousto-optic alarm devices distributed on the site in an array.
4. The construction site violation image recognition alarm method of claim 1 wherein said generating alarm information metadata based on the detection results of said secondary detection network comprises:
constructing a virtual space based on the image acquisition equipment arranged in an array on the construction site;
determining a first position of the violation target in the virtual space based on the distance between the shooting end of the data source corresponding to the detection result and the violation target;
determining the alarm volume of the audible and visual alarm devices in the field alarm device based on the first position of the violation target in the virtual space and the second position of each audible and visual alarm device; the calculation formula is as follows:
[Formula not reproduced in the text; see image FDA0003014376110000021 of the original publication]
wherein D_i is the alarm volume of the ith audible and visual alarm device; L_0 is a preset distance threshold; L_i is the distance value between the first position of the violation target in the virtual space and the second position of the field alarm device; and R is the preset maximum volume;
determining the warning light intensity of the audible and visual alarm devices in the field alarm device based on the first position of the violation target in the virtual space and the second position of each audible and visual alarm device; the calculation formula is as follows:
[Formula not reproduced in the text; see image FDA0003014376110000022 of the original publication]
wherein T_i is the warning light intensity of the ith audible and visual alarm device, and H is the preset maximum light intensity.
5. The construction site violation image recognition alarm method of claim 4, wherein said sending the alarm information metadata to the field alarm device comprises:
acquiring position information of handheld equipment positioned on a construction site;
mapping the position information of the handheld device to the virtual space to obtain a third position of the handheld device;
matching the first position with each third position, and when a match is found, sending the alarm information metadata to the matched handheld device, which then issues an alarm;
wherein matching the first location with the third location comprises:
calculating a distance value between the first position and the third position, wherein the calculation formula is as follows:
A_j = sqrt( Σ_{k=1}^{n} (x_k - y_k)^2 )
wherein A_j is the distance value between the third position of the jth handheld device and the first position; x_k is the value of the kth-dimension parameter of the third position; y_k is the value of the kth-dimension parameter of the first position; and n is the dimension of the first position or the third position;
and when the minimum distance value is smaller than or equal to a preset threshold value, determining that the first position is matched with the third position corresponding to the minimum distance value.
6. A construction site violation image recognition alarm system, characterized by comprising:
the data acquisition module is used for acquiring a construction site video image as a data source through image acquisition equipment arranged on a construction site;
the primary detection module is used for detecting the data source based on a primary model to obtain a first detection result;
the secondary detection module is used for inputting the first detection result into a secondary detection network for secondary classification detection;
the alarm generating module is used for generating alarm information metadata according to the detection result of the secondary detection network;
and the sending module is used for sending the alarm information metadata to field alarm equipment.
7. The construction site violation image recognition alarm system of claim 6, wherein the primary model is a YOLOv5 model, and the secondary models comprise one or more of: YOLO (v3-v5), KNN, AlexNet, VGGNet, GoogLeNet, ResNet, R-CNN, Fast R-CNN, SSD, FCN, UNet, SegNet, and PSPNet;
the model training process of the primary model and the secondary models is as follows:
acquiring original video images of the construction site through the image acquisition equipment to generate an original data set;
performing primary image annotation on the original data set to form a primary annotation set;
dividing the primary annotation set into a training set and a validation set according to a preset proportion, and inputting them into an initial YOLOv5 model for training to obtain the trained primary model;
extracting images from the primary annotation set according to the primary classification to obtain a plurality of secondary original data sets;
annotating the secondary original data sets to form a plurality of secondary annotation sets;
dividing each secondary annotation set into a training set and a validation set according to a preset proportion, selecting a suitable network for each identification category, and obtaining a plurality of secondary models through model training;
wherein the primary annotation content is one or a combination of several of personnel, vehicles, meters, and signboards.
8. The construction site violation image recognition alarm system of claim 6, wherein the image acquisition equipment comprises one or a combination of: a camera, a portable deployment ball camera, and a network camera;
the field alarm device comprises one or a combination of: handheld devices, a large monitoring screen, and audible and visual alarm devices arranged in an array on the site.
9. The construction site violation image recognition alarm system of claim 6 wherein said alarm generation module performs the following operations:
constructing a virtual space based on the image acquisition equipment arranged in an array on the construction site;
determining a first position of the violation target in the virtual space based on the distance between the shooting end of the data source corresponding to the detection result and the violation target;
determining the alarm volume of the audible and visual alarm devices in the field alarm device based on the first position of the violation target in the virtual space and the second position of each audible and visual alarm device; the calculation formula is as follows:
[Formula not reproduced in the text; see image FDA0003014376110000041 of the original publication]
wherein D_i is the alarm volume of the ith audible and visual alarm device; L_0 is a preset distance threshold; L_i is the distance value between the first position of the violation target in the virtual space and the second position of the field alarm device; and R is the preset maximum volume;
determining the warning light intensity of the audible and visual alarm devices in the field alarm device based on the first position of the violation target in the virtual space and the second position of each audible and visual alarm device; the calculation formula is as follows:
[Formula not reproduced in the text; see image FDA0003014376110000051 of the original publication]
wherein T_i is the warning light intensity of the ith audible and visual alarm device, and H is the preset maximum light intensity.
10. The construction site violation image recognition alarm system of claim 9, wherein said sending module performs the following operations:
acquiring position information of handheld equipment positioned on a construction site;
mapping the position information of the handheld device to the virtual space to obtain a third position of the handheld device;
matching the first position with each third position, and when a match is found, sending the alarm information metadata to the matched handheld device, which then issues an alarm;
wherein matching the first location with the third location comprises:
calculating a distance value between the first position and the third position, wherein the calculation formula is as follows:
A_j = sqrt( Σ_{k=1}^{n} (x_k - y_k)^2 )
wherein A_j is the distance value between the third position of the jth handheld device and the first position; x_k is the value of the kth-dimension parameter of the third position; y_k is the value of the kth-dimension parameter of the first position; and n is the dimension of the first position or the third position;
and when the minimum distance value is smaller than or equal to a preset threshold value, determining that the first position is matched with the third position corresponding to the minimum distance value.
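To make the geometry in claims 4 and 5 (and the mirrored claims 9 and 10) concrete, the sketch below computes per-device alarm settings and matches the nearest handheld device in the virtual space. The distance calculation follows the Euclidean form given for A_j; the volume and light-intensity formulas are contained in images not reproduced in this text, so the linear fall-off used here (full output at the target, zero beyond the threshold L_0) is only an assumed stand-in with the same variables, and all thresholds and function names are illustrative.

```python
import math
from typing import Dict, Optional, Sequence, Tuple

Position = Tuple[float, float, float]   # coordinates in the virtual space

def euclidean_distance(p: Sequence[float], q: Sequence[float]) -> float:
    """A_j = sqrt(sum_k (x_k - y_k)^2): distance between two virtual-space positions."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))

def alarm_volume(distance: float, l0: float, r_max: float) -> float:
    """Assumed linear fall-off with the patent's variables L_i, L_0, R (not the patented formula)."""
    if distance >= l0:
        return 0.0
    return r_max * (1.0 - distance / l0)

def warning_light_intensity(distance: float, l0: float, h_max: float) -> float:
    """Same assumed attenuation shape, scaled by the maximum light intensity H."""
    if distance >= l0:
        return 0.0
    return h_max * (1.0 - distance / l0)

def dispatch_alarm(
    violation_pos: Position,
    alarm_devices: Dict[str, Position],
    handhelds: Dict[str, Position],
    l0: float = 30.0,
    r_max: float = 100.0,
    h_max: float = 100.0,
    match_threshold: float = 15.0,
) -> Tuple[Dict[str, Tuple[float, float]], Optional[str]]:
    """Compute per-device (volume, light intensity) settings and pick the nearest handheld.

    A handheld device is matched only if its minimum distance to the violation
    target is within the preset matching threshold.
    """
    settings = {}
    for dev_id, dev_pos in alarm_devices.items():
        d = euclidean_distance(violation_pos, dev_pos)
        settings[dev_id] = (alarm_volume(d, l0, r_max),
                            warning_light_intensity(d, l0, h_max))

    matched: Optional[str] = None
    if handhelds:
        nearest_id = min(handhelds,
                         key=lambda hid: euclidean_distance(violation_pos, handhelds[hid]))
        if euclidean_distance(violation_pos, handhelds[nearest_id]) <= match_threshold:
            matched = nearest_id
    return settings, matched
```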
CN202110384848.9A 2021-04-09 2021-04-09 Construction site violation image recognition and alarm method Active CN113052125B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110384848.9A CN113052125B (en) 2021-04-09 2021-04-09 Construction site violation image recognition and alarm method

Publications (2)

Publication Number Publication Date
CN113052125A true CN113052125A (en) 2021-06-29
CN113052125B CN113052125B (en) 2022-10-28

Family

ID=76518972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110384848.9A Active CN113052125B (en) 2021-04-09 2021-04-09 Construction site violation image recognition and alarm method

Country Status (1)

Country Link
CN (1) CN113052125B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150302744A1 (en) * 2013-09-18 2015-10-22 Shihao Lin Real time notification and confirmation system and method for vehicle traffic violation technical field
CN107194396A (en) * 2017-05-08 2017-09-22 武汉大学 Method for early warning is recognized based on the specific architecture against regulations in land resources video monitoring system
US20190026564A1 (en) * 2017-07-19 2019-01-24 Pegatron Corporation Video surveillance system and video surveillance method
CN108376246A (en) * 2018-02-05 2018-08-07 南京蓝泰交通设施有限责任公司 A kind of identification of plurality of human faces and tracking system and method
CN110119656A (en) * 2018-02-07 2019-08-13 中国石油化工股份有限公司 Intelligent monitor system and the scene monitoring method violating the regulations of operation field personnel violating the regulations
CN111091098A (en) * 2019-12-20 2020-05-01 浙江大华技术股份有限公司 Training method and detection method of detection model and related device
CN111650204A (en) * 2020-05-11 2020-09-11 安徽继远软件有限公司 Transmission line hardware defect detection method and system based on cascade target detection
CN111753780A (en) * 2020-06-29 2020-10-09 广东电网有限责任公司清远供电局 Transformer substation violation detection system and violation detection method
CN112149511A (en) * 2020-08-27 2020-12-29 深圳市点创科技有限公司 Method, terminal and device for detecting violation of driver based on neural network
CN112183265A (en) * 2020-09-17 2021-01-05 国家电网有限公司 Electric power construction video monitoring and alarming method and system based on image recognition
CN112613454A (en) * 2020-12-29 2021-04-06 国网山东省电力公司建设公司 Electric power infrastructure construction site violation identification method and system

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
M. SHYU et al.: "A novel anomaly detection scheme based on principal component classifier", PROC ICDM FOUNDATION & NEW DIRECTION OF DATA MINING WORKSHOP *
LIU Zhenyu et al.: "A sentiment analysis model based on active learning and multiple supervised learning methods", Journal of China Academy of Electronics and Information Technology *
JIANG Xiaoyun et al.: "Design and implementation of an SMS-based automatic short-term early warning system for meteorological disasters", Computer Measurement & Control *
SUN Yi: "Research and implementation of a personnel type recognition method for construction sites", China Master's Theses Full-text Database (Engineering Science and Technology II) *
ZHANG Kai et al.: "Development of automatic test equipment for an aircraft light warning system", New Technology & New Instruments *
WANG Huiwei et al.: "Real-time intelligent violation recognition and early warning monitoring of construction video for pumped-storage power stations", Water Resources and Hydropower Engineering *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782988A (en) * 2022-03-29 2022-07-22 西安交通大学 Construction environment-oriented multi-stage safety early warning method
CN114782988B (en) * 2022-03-29 2024-04-02 西安交通大学 Multistage safety early warning method oriented to construction environment
CN115482217A (en) * 2022-09-21 2022-12-16 内蒙古科电数据服务有限公司 Electric shock prevention video detection method for transformer substation based on Gaussian mixture model separation algorithm
CN115482217B (en) * 2022-09-21 2024-05-10 内蒙古科电数据服务有限公司 Transformer substation electric shock prevention video detection method based on Gaussian mixture model separation algorithm

Also Published As

Publication number Publication date
CN113052125B (en) 2022-10-28

Similar Documents

Publication Publication Date Title
CN110738127B (en) Helmet identification method based on unsupervised deep learning neural network algorithm
CN108090458B (en) Human body falling detection method and device
CN108921159B (en) Method and device for detecting wearing condition of safety helmet
CN108319926A (en) A kind of the safety cap wearing detecting system and detection method of building-site
CN111488799B (en) Falling object identification method and system based on image identification
CN101751744B (en) Detection and early warning method of smoke
CN109300471A (en) Merge place intelligent video monitoring method, the apparatus and system of sound collection identification
CN110084165B (en) Intelligent identification and early warning method for abnormal events in open scene of power field based on edge calculation
CN112396658B (en) Indoor personnel positioning method and system based on video
CN106128022B (en) A kind of wisdom gold eyeball identification violent action alarm method
CN110390229B (en) Face picture screening method and device, electronic equipment and storage medium
CN112235537B (en) Transformer substation field operation safety early warning method
CN209543514U (en) Monitoring and alarm system based on recognition of face
CN104966304A (en) Kalman filtering and nonparametric background model-based multi-target detection tracking method
CN110458794B (en) Quality detection method and device for accessories of rail train
CN112329691A (en) Monitoring video analysis method and device, electronic equipment and storage medium
CN110008804B (en) Elevator monitoring key frame obtaining and detecting method based on deep learning
CN111785050A (en) Expressway fatigue driving early warning device and method
CN113052125B (en) Construction site violation image recognition and alarm method
CN115620192A (en) Method and device for detecting wearing of safety rope in aerial work
Hermina et al. A Novel Approach to Detect Social Distancing Among People in College Campus
CN112016509B (en) Personnel station abnormality reminding method and device
CN112580470A (en) City visual perception method and device, electronic equipment and storage medium
CN114979567B (en) Object and region interaction method and system applied to video intelligent monitoring
CN111598040B (en) Construction worker identity recognition and safety helmet wearing detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant