CN112434828A - Intelligent identification method for safety protection in 5T operation and maintenance - Google Patents


Info

Publication number
CN112434828A
Authority
CN
China
Prior art keywords: protective clothing, target detection, safety helmet, executing, detection network
Prior art date
Legal status
Granted
Application number
CN202011319418.0A
Other languages
Chinese (zh)
Other versions
CN112434828B (en)
Inventor
叶彦斐
林志峰
姜磊
胡文杰
涂娟
童先洲
华琦
Current Assignee
Nanjing Fudao Software Co Ltd
Original Assignee
Nanjing Fudao Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Fudao Software Co Ltd
Priority to CN202011319418.0A
Publication of CN112434828A
Application granted
Publication of CN112434828B
Status: Active

Classifications

    • G06Q10/20 Administration of product repair or maintenance
    • G06F18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G06N3/045 Combinations of networks
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G06Q50/26 Government or public services
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Human Resources & Organizations (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Biophysics (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Primary Health Care (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Human Computer Interaction (AREA)
  • Alarm Systems (AREA)

Abstract

The invention discloses an intelligent identification method for safety protection in 5T operation and maintenance. A control module acquires configuration parameters and recognition task start/stop commands, the configuration parameters comprising: recognition duration, camera IP address, camera user name, camera password, external port number of the video stream, number of stored pictures per event, and state reporting period. A video stream management module pulls the video stream from the correspondingly numbered camera according to the configuration parameters obtained from the control module, and an intelligent recognition and analysis module performs intelligent recognition and event analysis on the safety helmets and protective clothing of workers in the video frames. The method realizes automatic identification of the safety helmets and protective clothing worn by workers on the railway 5T operation and maintenance site and uploading of identification events; configuration information can be set remotely, recognition tasks can be started and stopped, and the system running state can be monitored, realizing intelligent management of railway 5T operation and maintenance.

Description

Intelligent identification method for safety protection in 5T operation and maintenance
Technical Field
The invention relates to the field of railway safety monitoring and operation and maintenance, in particular to an intelligent identification method for safety protection in 5T operation and maintenance.
Background
With the increase in railway lines and the widening of their coverage, more and more detection stations housing 5T equipment are being built along operating railway lines. The 5T system is a vehicle safety early-warning system established by the relevant railway departments in China to keep pace with the development of modern railways; normal operation of the 5T detection stations is directly related to the safety and efficiency of daily railway operation and is therefore of great significance to safe railway operation.
During daily operation and maintenance work at a 5T detection station, workers must correctly wear safety helmets and protective clothing. Although the existing video monitoring system in a 5T detection station can display the working conditions of on-site staff in real time, it cannot identify in real time whether staff are wearing safety helmets and protective clothing during operations, so 24-hour monitoring and manual inspection are still required. Because the energy of the watch staff is limited, they are prone to fatigue and distraction, and incidents that potentially threaten safe production occur frequently.
Disclosure of Invention
The invention discloses an intelligent identification method for safety protection in 5T operation and maintenance, based on an identification unit comprising a control module, a video stream management module, an intelligent recognition and analysis module and an integrated management module. Through a Thrift interface of the control module, the 5T detection upper computer can set configuration information, start and stop recognition tasks, and monitor the system running state. The video stream management module pulls the video stream from the camera and maintains the camera state according to the configuration information and commands sent by the control module. The intelligent recognition and analysis module intelligently recognizes operators, safety helmets and protective clothing by means of the safety protection identification method in 5T operation and maintenance, and performs event analysis for safety helmets and protective clothing. The integrated management module is responsible for uploading events to the platform and periodically deleting locally cached events.
The specific flow of the identification method is as follows:
(1) the control module acquires configuration parameters and recognition task start-stop commands, the configuration parameters comprising: recognition duration, camera IP address, camera user name, camera password, external port number of the video stream, number of stored pictures per event, and state reporting period;
(2) judging whether to start the identification operation, if so, turning to the step (3), otherwise, returning to the step (1);
(3) acquiring environmental illumination intensity data;
(4) the video stream management module pulls the video stream from the corresponding numbered camera according to the configuration parameters obtained from the control module;
(5) judging whether video frames have failed to be acquired for 600 consecutive seconds; if so, reporting camera abnormal-state information; otherwise, turning to step (6);
(6) the intelligent recognition analysis module carries out intelligent recognition and event analysis on safety helmets and protective clothing of workers in the video frames;
(7) the comprehensive management module uploads the generated event through a post request;
(8) and the integrated management module deletes the locally cached events periodically.
Preferably, the video stream pulling in the overall process specifically comprises the following steps:
(4-1) acquiring the current video frame from the correspondingly numbered camera determined by the configuration parameters according to the RTSP standard streaming protocol;
(4-2) judging whether the current frame of the camera is successfully acquired, if so, executing the step (4-3), otherwise, directly turning to the step (4-4);
(4-3) outputting the current frame data of the camera to a picture data queue;
(4-4) performing a camera status maintenance operation;
(4-5) after waiting for 1 second, go to step (4-1).
Preferably, the specific steps of maintaining the camera status in the pull video stream flow include:
(4-4-1) reading the current state of the camera and the number of times of abnormal states of the camera;
(4-4-2) judging whether the current state of the camera is normal, if so, executing the step (4-4-3), otherwise, executing the step (4-4-4);
(4-4-3) setting the number of times of abnormal states of the camera to be 0;
(4-4-4) adding 1 to the abnormal times of the camera;
(4-4-5) judging whether the number of times of the abnormal state of the camera is more than 600, and if the number of times of the abnormal state of the camera is more than 600, executing the step (4-4-6);
(4-4-6) setting the camera status as abnormal.
Preferably, the intelligent recognition in the overall process adopts a method in which a YOLOv4 target detection network is cascaded with an improved YOLOv3-Tiny target detection network, and the wearing of personal safety helmets and protective clothing is recognized based on a plurality of network recognition models matched to different illumination intensities. The specific steps are as follows:
(6A-1) acquiring a video frame from the data queue;
(6A-2) judging whether the video frame is successfully acquired, if so, executing the step (6A-4), otherwise, executing the step (6A-3);
(6A-3) waiting until the data queue has data and going to step (6A-1);
(6A-4) inputting the video frames into a YOLOv4 target detection network for personnel detection;
(6A-5) judging whether the person is detected according to the credibility of the detected person output by the YOLOv4 target detection network, if no person is detected, executing the step (6A-1), otherwise, executing the step (6A-6);
(6A-6) cutting the human target frame area detected in the video frame according to the coordinate parameter of the human target frame output by the YOLOv4 target detection network;
(6A-7) judging whether the ambient light intensity is less than 0.0018Lux, if so, executing the step (6A-8), otherwise, executing the step (6A-9);
(6A-8) inputting the cut-out image of the area of the human target frame into a modified YOLOv3-Tiny target detection network suitable for night recognition for detecting human safety helmets and protective clothing, and then executing the step (6A-20);
(6A-9) judging whether the ambient illumination intensity is less than 0.0022Lux, if so, executing the step (6A-10), otherwise, executing the step (6A-11);
(6A-10) respectively inputting the cut personnel target frame area images into an improved YOLOv3-Tiny target detection network suitable for night recognition and daytime low-light intensity recognition to respectively detect personnel safety helmets and protective clothing, and performing fusion judgment on adjacent protective states based on recognition results of two adjacent models, and then executing the step (6A-20);
(6A-11) judging whether the ambient light intensity is less than 9Lux, if so, executing the step (6A-12), otherwise, executing the step (6A-13);
(6A-12) inputting the cut-out image of the human target frame area into a modified YOLOv3-Tiny target detection network suitable for daytime low-light intensity recognition for detecting human safety helmets and protective clothing, and then executing the step (6A-20);
(6A-13) judging whether the ambient light intensity is less than 11Lux, if so, executing the step (6A-14), otherwise, executing the step (6A-15);
(6A-14) respectively inputting the cut personnel target frame area images into an improved YOLOv3-Tiny target detection network suitable for identifying the daytime low illumination intensity and the daytime illumination intensity to respectively detect the personnel safety helmet and the protective clothing, and carrying out fusion judgment on adjacent protective states based on the identification results of two adjacent models, and then executing the step (6A-20);
(6A-15) judging whether the ambient illumination intensity is less than 90Lux, if so, executing the step (6A-16), otherwise, executing the step (6A-17);
(6A-16) inputting the cut-out image of the area of the human target frame into a modified YOLOv3-Tiny target detection network suitable for the identification of the illumination intensity in the daytime for the detection of the safety helmet and the protective clothing of the human, and then executing the step (6A-20);
(6A-17) judging whether the ambient light intensity is less than 110Lux, if so, executing the step (6A-18), otherwise, executing the step (6A-19);
(6A-18) respectively inputting the cut personnel target frame area images into an improved YOLOv3-Tiny target detection network suitable for identifying the illumination intensity in the daytime and the strong illumination intensity in the daytime to respectively detect the personnel safety helmet and the protective clothing, and carrying out fusion judgment on adjacent protective states based on the identification results of two adjacent models, and then executing the step (6A-20);
(6A-19) inputting the cut image of the area of the human target frame into a modified YOLOv3-Tiny target detection network suitable for daytime strong illumination intensity recognition to detect human safety helmets and protective clothing;
(6A-20) storing the wearing information of the safety helmet and the protective clothing of the current frame personnel.
Preferably, the method for cascading the improved YOLOv3-Tiny target detection network with the YOLOv4 target detection network firstly adopts the YOLOv4 target detection network to detect the personnel, and comprises the following specific steps:
(6A-4-1) extracting a video frame containing an operator from monitoring videos shot by a monitoring camera under different illumination intensities on an operation site, and establishing an operator image data set;
(6A-4-2) marking the personnel in the images by using the LabelImg tool to obtain corresponding data set files in XML format, and converting the XML-format data set into a txt-format data set suitable for the YOLOv4 target detection network;
(6A-4-3) building a YOLOv4 target detection network by using the darknet deep learning framework, with the following steps:
1) a BackBone part of a YOLOv4 target detection network is built by adopting a CSPDarknet53 network structure, an activation function of the BackBone part uses a Mish activation function, and the formula is as follows:
f(x) = x · tanh(ln(1 + e^x))
wherein x is the input value of the network layer where the activation function is located and tanh() is the hyperbolic tangent function; the curve of the Mish activation function is smooth, allowing better information to enter the neural network and thus yielding better accuracy and generalization; in addition, the DropBlock method is adopted to randomly discard feature-map image information so as to relieve overfitting;
2) constructing a Neck part of a YOLOv4 target detection network by adopting an SPP module and an FPN + PAN structure;
3) the CIOU_LOSS function is adopted as the target-frame regression loss function of the YOLOv4 target detection network, making prediction-frame regression faster and more accurate; its formula is

CIOU_LOSS = 1 - IOU + Distance_2^2 / Distance_C^2 + v^2 / ((1 - IOU) + v)

where IOU is the intersection-over-union of the target detection prediction frame and the real frame, Distance_C is the diagonal distance of the minimum enclosing rectangle of the prediction frame and the real frame, Distance_2 is the Euclidean distance between the center points of the prediction frame and the real frame, and v is a parameter measuring the consistency of the aspect ratios of the prediction frame and the real frame;
4) the YOLOv4 target detection network adopts the DIOU_nms target frame screening method;
(6A-4-4) carrying out object classification training on the YOLOv4 target detection network by adopting a COCO image data set to obtain a part of trained YOLOv4 network model;
(6A-4-5) on the basis of the result of the step (6A-4-4), training a YOLOv4 target detection network by using the manufactured field worker image data set to obtain a YOLOv4 network model capable of being used for field worker detection;
(6A-4-6) inputting the video frame into a YOLOv4 target detection network, and detecting the credibility of the personnel and the coordinate parameters of the personnel target frame.
Preferably, the method of cascading a modified YOLOv3-Tiny target detection network through a YOLOv4 target detection network and then performing personnel safety helmet and protective clothing detection through the modified YOLOv3-Tiny target detection network based on a plurality of network model weights matched with different illumination intensities is characterized by comprising the following steps:
(6A-8-1) extracting video frames containing personal safety caps and protective clothing from monitoring videos shot by monitoring cameras under different illumination intensities in an operation site, respectively establishing an image data set of the daytime low-light-intensity personal safety caps and protective clothing, an image data set of the daytime medium-light-intensity personal safety caps and protective clothing, an image data set of the daytime high-light-intensity personal safety caps and protective clothing, and an image data set of the nighttime personal safety caps and protective clothing, and expanding the data sets by utilizing a Mosaic data enhancement mode;
(6A-8-2) marking the personnel safety helmets and protective clothing in the images by using the LabelImg tool to obtain corresponding data set files in XML format, and converting the XML-format data set into a txt-format data set suitable for the YOLOv3-Tiny target detection network;
(6A-8-3) constructing an improved YOLOv3-Tiny target detection network by using the darknet deep learning framework, with the following steps:
1) carrying out network model modification and pruning operation by taking a YOLOv3-Tiny target detection network as a basic framework;
2) replacing the original backbone network of YOLOv3-Tiny with the Google EfficientNet-B0 deep convolutional neural network, removing layers 132-135 of the EfficientNet-B0 deep convolutional neural network, and adding 2 convolutional layers, 1 shortcut layer, 1 convolutional layer and a YOLO layer after layer 131;
3) on the basis of the network obtained in step 2), sequentially connecting 1 route layer, 1 convolutional layer, 1 down-sampling layer, 1 shortcut layer, 1 convolutional layer, 2 shortcut layers, 1 convolutional layer and 1 YOLO layer after layer 133 of the network to obtain the improved YOLOv3-Tiny target detection network;
(6A-8-4) clustering calculation is carried out on the length and width parameters of the real frames of the safety helmet and the protective clothing in the data set of the safety helmet and the protective clothing by using a k-means algorithm, and the original prior frame length and width data of the YOLOv3-Tiny target detection network are replaced by the length and width data obtained by real frame clustering so as to improve the detection rate of the target frame;
(6A-8-5) training the improved YOLOv3-Tiny target detection network by adopting the manufactured data set of the safety helmet of the person with low light intensity in the daytime and the protective clothing to obtain a network model suitable for detecting the safety helmet of the person and the protective clothing under the low light intensity in the daytime;
(6A-8-6) training the improved YOLOv3-Tiny target detection network by adopting the manufactured data set of the light intensity personnel safety helmet and the protective clothing in the daytime to obtain a network model suitable for the personnel safety helmet and the protective clothing detection under the illumination intensity in the daytime;
(6A-8-7) training the improved YOLOv3-Tiny target detection network by adopting the manufactured data set of the personnel safety helmet with strong light intensity in the daytime and the protective clothing to obtain a network model suitable for detecting the personnel safety helmet and the protective clothing under the strong light intensity in the daytime;
(6A-8-8) training the improved YOLOv3-Tiny target detection network by using the manufactured night personnel safety helmet and protective clothing data set to obtain a network model for detecting the night personnel safety helmet and protective clothing;
(6A-8-9) inputting the cut personnel target area into an improved YOLOv3-Tiny target detection network suitable for different illumination intensities according to the illumination intensity data of the field environment, and obtaining the reliability of the safety helmet and the protective clothing worn by the field personnel and the coordinate parameters of the target frame of the safety helmet and the protective clothing.
Preferably, in the process of recognizing the wearing of safety helmets and protective clothing with a plurality of network recognition models matched to different illumination intensities, if the illumination value measured by the safety protection identification unit lies within the neighborhood of the critical value distinguishing two illumination intensity recognition models, the wearing of the safety helmet and protective clothing is judged by fusing the recognition results of the adjacent illumination intensity models: the recognition results of the low-level illumination intensity model closest to the measured illumination value and of the high-level illumination intensity model closest to the measured illumination value are obtained separately, and the wearing of the safety helmet and protective clothing is then determined by fusion calculation. The specific fusion judgment process is as follows:
(6A-10-1) the critical light intensity value distinguishing two adjacent illumination intensity recognition models is denoted x_l; the corresponding neighborhood lower-limit light intensity value is x_ll = 0.9x_l and the neighborhood upper-limit light intensity value is x_lh = 1.1x_l. If the current light intensity value is x, the confidence weight of the low-level illumination intensity model is recorded as
w_l = (x_lh - x) / (x_lh - x_ll)
Confidence weights for high-level illumination intensity model identification are recorded as
w_h = (x - x_ll) / (x_lh - x_ll)
(6A-10-2) the personnel safety helmet and protective clothing are recognized with the improved YOLOv3-Tiny low-level illumination intensity recognition model, giving a confidence h1 that the safety helmet is worn and a confidence c1 that the protective clothing is worn; the weighted confidence that the person is wearing the safety helmet is m1(A) = h1·w_l, the weighted confidence of not wearing the safety helmet is m1(B) = (1 - h1)·w_l, the weighted confidence that the helmet-wearing state is unknown is m1(C) = 1 - w_l, the weighted confidence of wearing the protective clothing is m1(D) = c1·w_l, the weighted confidence of not wearing the protective clothing is m1(E) = (1 - c1)·w_l, and the weighted confidence that the protective-clothing-wearing state is unknown is m1(F) = 1 - w_l;
(6A-10-3) the personnel safety helmet and protective clothing are recognized with the improved YOLOv3-Tiny high-level illumination intensity recognition model, giving a confidence h2 that the safety helmet is worn and a confidence c2 that the protective clothing is worn; the weighted confidence that the person is wearing the safety helmet is m2(A) = h2·w_h, the weighted confidence of not wearing the safety helmet is m2(B) = (1 - h2)·w_h, the weighted confidence that the helmet-wearing state is unknown is m2(C) = 1 - w_h, the weighted confidence of wearing the protective clothing is m2(D) = c2·w_h, the weighted confidence of not wearing the protective clothing is m2(E) = (1 - c2)·w_h, and the weighted confidence that the protective-clothing-wearing state is unknown is m2(F) = 1 - w_h;
(6A-10-4) based on the fusion of the recognition results of the two adjacent illumination intensity recognition models, the credibility m(A) of wearing the safety helmet, the credibility m(B) of not wearing the safety helmet, the credibility m(D) of wearing the protective clothing and the credibility m(E) of not wearing the protective clothing are calculated from m1(·) and m2(·);
(the four fusion formulas appear as images in the original document)
(6A-10-5) comparing m(A) with m(B): if m(A) ≥ m(B), the fusion judgment is that the safety helmet is worn; if m(A) < m(B), the fusion judgment is that the safety helmet is not worn;
(6A-10-6) comparing m(D) with m(E): if m(D) ≥ m(E), the fusion judgment is that the protective clothing is worn; if m(D) < m(E), the fusion judgment is that the protective clothing is not worn.
Preferably, the event analysis in the overall process comprises the following specific steps:
(6B-1) reading the identification results of the personnel safety helmet and the protective clothing of the current video frame;
(6B-2) judging whether the current video frame camera ip belongs to a certain event in the event task dictionary, and if the current video frame camera ip belongs to the certain event in the event task dictionary, executing the step (6B-3); otherwise, executing the step (6B-4);
(6B-3) putting the current video frame data into the video frame data queue corresponding to the event;
(6B-4) creating a new event task, and putting the current video frame data into a video frame data queue corresponding to the event;
(6B-5) judging whether the number of frames in the video frame data queue equals 60; if it does not equal 60, repeating step (6B-5);
(6B-6) counting, over the video frame data queue, the number of frames in which personnel are not wearing protective clothing and the number in which they are not wearing safety helmets;
(6B-7) judging whether the count of frames without protective clothing or without a safety helmet is more than 70% of the total number of frames in the video frame data queue; if so, executing step (6B-8); if not, turning to step (6B-9);
(6B-8) performing an event upload operation;
and (6B-9) releasing the resources.
The event uploading method comprises the following specific steps:
(6B-8-1) inputting pictures and video information needing to be uploaded;
(6B-8-2) uploading the event;
(6B-8-3) judging whether the event uploading is successful, if so, ending the flow, otherwise, turning to the step (6B-8-4);
and (6B-8-4) saving the picture and video information needing to be uploaded to the local.
Preferably, the specific steps of periodically deleting the local cache event in the overall process are as follows:
(8-1) judging whether a cache event exists locally, if not, turning to the step (8-2), otherwise, turning to the step (8-3);
(8-2) after waiting a fixed time, going to step (8-1);
(8-3) uploading the event;
(8-4) judging whether the event uploading is successful, if so, turning to the step (8-5), otherwise, turning to the step (8-2);
and (8-5) deleting the local cache event.
Advantageous effects
1. The intelligent safety protection identification method realizes real-time automatic identification of the safety helmets and protective clothing worn by workers on the railway 5T operation and maintenance site and uploading of identification events; the 5T detection upper computer can remotely set configuration information, start and stop recognition tasks and monitor the system running state, realizing intelligent management of railway 5T operation and maintenance;
2. the intelligent safety protection identification method periodically deletes locally cached events, greatly saving local storage space and reducing investment;
3. in the intelligent safety protection identification method, the target detection network structure is modified and designed specifically for the railway 5T operation and maintenance application scenario; the wearing of personnel safety helmets and protective clothing is recognized by cascading a YOLOv4 target detection network with an improved YOLOv3-Tiny target detection network and using a plurality of network model weights matched to different illumination intensities, which significantly improves target classification capability and detection accuracy under different environmental backgrounds while preserving recognition speed;
4. the intelligent safety protection identification method provides a fusion judgment method based on adjacent illumination intensity models for the case in which the illumination value measured by the safety protection identification unit lies within the neighborhood of a model's critical value and model mismatch may occur, realizing smooth switching between the different illumination intensity models and ensuring recognition accuracy.
Drawings
FIG. 1 is a functional block diagram of an identification unit according to the present invention
FIG. 2 is an overall flow chart of the algorithm of the present invention
FIG. 3 is a flow chart of a camera pull video stream according to the present invention
FIG. 4 is a flowchart illustrating the maintenance of the status of a camera according to the present invention
FIG. 5 is a flow chart of intelligent recognition according to the present invention
FIG. 6 is a flow chart of event analysis according to the present invention
FIG. 7 is a flowchart illustrating an event upload process according to the present invention
FIG. 8 is a flowchart illustrating a periodic deletion of a local cache event according to the present invention
Detailed Description
Fig. 1 shows a structure diagram of a functional module of a protection identification unit, which includes:
1. control module
The 5T detection upper computer can set configuration information, start and stop recognition tasks and monitor the system running state through the Thrift interface of the control module (realizing remote operation and real-time monitoring of the system state).
2. Video stream management module
And the video stream management module captures the video stream from the camera and performs camera state maintenance according to the configuration information and the command sent by the control module.
3. Intelligent recognition analysis module
The intelligent recognition and analysis module is mainly responsible for intelligent recognition of operators, safety helmets and protective clothing, and for event analysis of safety helmets and protective clothing (realizing real-time automatic recognition and event analysis of the wearing of safety helmets and protective clothing by workers on the 5T operation and maintenance site).
4. Integrated management module
The integrated management module is responsible for uploading the events to the platform and deleting the events of the local cache periodically (the local storage space can be greatly saved and the investment can be saved).
Second, protection identification method flow
1. With reference to fig. 2, the overall process:
(1) the control module acquires configuration parameters and recognition task start-stop commands, the configuration parameters comprising: recognition duration, camera IP address, camera user name, camera password, external port number of the video stream, number of stored pictures per event, and state reporting period;
(2) judging whether to start the identification operation, and if the identification task is started, turning to the step (3); otherwise, returning to the step (1);
(3) acquiring environmental illumination intensity data;
(4) the video stream management module pulls the video stream from the corresponding numbered camera according to the configuration parameters obtained from the control module;
(5) judging whether video frames have failed to be acquired for 600 consecutive seconds; if so, reporting camera abnormal-state information; otherwise, turning to step (6);
(6) the intelligent recognition analysis module carries out intelligent recognition and event analysis on safety helmets and protective clothing of workers in the video frames;
(7) the comprehensive management module uploads the generated event through a post request;
(8) and the integrated management module deletes the locally cached events periodically.
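For illustration, a minimal Python sketch of the configuration parameters handled in step (1) follows; the field names and example values are assumptions chosen for readability, since the patent lists only the parameter meanings, not their identifiers.

from dataclasses import dataclass

@dataclass
class RecognitionConfig:
    # Field names are illustrative assumptions mirroring the parameter list above.
    recognition_duration_s: int    # recognition duration
    camera_ip: str                 # IP address of the camera
    camera_user: str               # user name of the camera
    camera_password: str           # password of the camera
    video_stream_port: int         # external port number of the video stream
    pictures_per_event: int        # number of stored pictures for each event
    status_report_period_s: int    # state reporting period

example_config = RecognitionConfig(
    recognition_duration_s=3600,
    camera_ip="192.168.1.64",
    camera_user="admin",
    camera_password="********",
    video_stream_port=554,
    pictures_per_event=10,
    status_report_period_s=60,
)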
2. With reference to fig. 3, the specific steps of the camera pulling the video stream are as follows:
(4-1) acquiring the current video frame from the correspondingly numbered camera determined by the configuration parameters according to the RTSP standard streaming protocol;
(4-2) judging whether the current frame of the camera is successfully acquired, if so, executing the step (4-3), otherwise, directly turning to the step (4-4);
(4-3) outputting the current frame data of the camera to a picture data queue;
(4-4) performing a camera status maintenance operation;
(4-5) after waiting for 1 second, go to step (4-1).
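A minimal Python sketch of steps (4-1) to (4-5) follows, assuming the RTSP stream is read with OpenCV (the patent does not prescribe a library); the URL layout, queue and function names are illustrative assumptions.

import queue
import time
from typing import Callable

import cv2  # assumption: OpenCV is used here to read the RTSP stream

frame_queue: "queue.Queue" = queue.Queue(maxsize=100)  # the picture data queue

def pull_video_stream(ip: str, user: str, password: str, port: int,
                      on_result: Callable[[bool], None]) -> None:
    """Steps (4-1)..(4-5): fetch one frame per second and queue it."""
    url = f"rtsp://{user}:{password}@{ip}:{port}/stream"  # URL layout is an assumption
    cap = cv2.VideoCapture(url)
    while True:
        ok, frame = cap.read()              # (4-1) acquire the current video frame
        if ok:                              # (4-2) acquisition successful?
            frame_queue.put(frame)          # (4-3) output the frame to the picture data queue
        else:
            cap.release()
            cap = cv2.VideoCapture(url)     # try to reconnect on failure
        on_result(ok)                       # (4-4) camera status maintenance (next sketch)
        time.sleep(1)                       # (4-5) wait 1 second, then go back to (4-1)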
3. With reference to fig. 4, the specific steps of maintaining the camera state are as follows:
(4-4-1) reading the current state of the camera and the number of times of abnormal states of the camera;
(4-4-2) judging whether the current state of the camera is normal or not, and if the current state of the camera is normal, executing the step (4-4-3); otherwise, executing the step (4-4-4);
(4-4-3) setting the number of times of abnormal states of the camera to be 0;
(4-4-4) adding 1 to the abnormal times of the camera;
(4-4-5) judging whether the number of times of abnormal state of the camera is more than 600, if the number of times of abnormal state of the camera is more than 600, executing the step (4-4-6)
(4-4-6) setting the camera status as abnormal.
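A minimal Python sketch of the camera status maintenance in steps (4-4-1) to (4-4-6) follows; the class and attribute names are illustrative assumptions. Passing CameraStatus().maintain as the on_result callback connects it to the frame-pulling sketch above.

class CameraStatus:
    """Counts consecutive acquisition failures; more than 600 (about 600 s) marks the camera abnormal."""

    def __init__(self) -> None:
        self.status = "normal"
        self.abnormal_count = 0            # (4-4-1) current state and abnormal-state count

    def maintain(self, frame_ok: bool) -> None:
        if frame_ok:                       # (4-4-2) is the current state normal?
            self.abnormal_count = 0        # (4-4-3) reset the abnormal-state count
        else:
            self.abnormal_count += 1       # (4-4-4) add 1 to the abnormal count
            if self.abnormal_count > 600:  # (4-4-5) more than 600 consecutive failures?
                self.status = "abnormal"   # (4-4-6) set the camera status as abnormal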
4. With reference to fig. 5, intelligent recognition identifies personnel safety helmets and protective clothing by cascading the YOLOv4 target detection network with the improved YOLOv3-Tiny target detection network and using a plurality of network model weights matched to different illumination intensities (compared with detecting helmet- and clothing-wearing personnel directly with a single network model weight, this significantly improves recognition accuracy in complex background images and provides strong robustness and adaptability); a sketch of the illumination-threshold model selection follows this list. The specific steps are as follows:
(6A-1) acquiring a video frame from the data queue;
(6A-2) judging whether the video frame is successfully acquired, if so, executing the step (6A-4), otherwise, executing the step (6A-3);
(6A-3) waiting until the data queue has data and going to step (6A-1);
(6A-4) inputting the video frames into a YOLOv4 target detection network for personnel detection;
(6A-5) judging whether the person is detected according to the credibility of the detected person output by the YOLOv4 target detection network, if no person is detected, executing the step (6A-1), otherwise, executing the step (6A-6);
(6A-6) cutting the human target frame area detected in the video frame according to the coordinate parameter of the human target frame output by the YOLOv4 target detection network;
(6A-7) judging whether the ambient light intensity is less than 0.0018Lux, if so, executing the step (6A-8), otherwise, executing the step (6A-9);
(6A-8) inputting the cut-out image of the area of the human target frame into a modified YOLOv3-Tiny target detection network suitable for night recognition for detecting human safety helmets and protective clothing, and then executing the step (6A-20);
(6A-9) judging whether the ambient illumination intensity is less than 0.0022Lux, if so, executing the step (6A-10), otherwise, executing the step (6A-11);
(6A-10) respectively inputting the cut personnel target frame area images into an improved YOLOv3-Tiny target detection network suitable for night recognition and daytime low-light intensity recognition to respectively detect personnel safety helmets and protective clothing, and performing fusion judgment on adjacent protective states based on recognition results of two adjacent models, and then executing the step (6A-20);
(6A-11) judging whether the ambient light intensity is less than 9Lux, if so, executing the step (6A-12), otherwise, executing the step (6A-13);
(6A-12) inputting the cut-out image of the human target frame area into a modified YOLOv3-Tiny target detection network suitable for daytime low-light intensity recognition for detecting human safety helmets and protective clothing, and then executing the step (6A-20);
(6A-13) judging whether the ambient light intensity is less than 11Lux, if so, executing the step (6A-14), otherwise, executing the step (6A-15);
(6A-14) respectively inputting the cut personnel target frame area images into an improved YOLOv3-Tiny target detection network suitable for identifying the daytime low illumination intensity and the daytime illumination intensity to respectively detect the personnel safety helmet and the protective clothing, and carrying out fusion judgment on adjacent protective states based on the identification results of two adjacent models, and then executing the step (6A-20);
(6A-15) judging whether the ambient illumination intensity is less than 90Lux, if so, executing the step (6A-16), otherwise, executing the step (6A-17);
(6A-16) inputting the cut-out image of the area of the human target frame into a modified YOLOv3-Tiny target detection network suitable for the identification of the illumination intensity in the daytime for the detection of the safety helmet and the protective clothing of the human, and then executing the step (6A-20);
(6A-17) judging whether the ambient light intensity is less than 110Lux, if so, executing the step (6A-18), otherwise, executing the step (6A-19);
(6A-18) respectively inputting the cut personnel target frame area images into an improved YOLOv3-Tiny target detection network suitable for identifying the illumination intensity in the daytime and the strong illumination intensity in the daytime to respectively detect the personnel safety helmet and the protective clothing, and carrying out fusion judgment on adjacent protective states based on the identification results of two adjacent models, and then executing the step (6A-20);
(6A-19) inputting the cut image of the area of the human target frame into a modified YOLOv3-Tiny target detection network suitable for daytime strong illumination intensity recognition to detect human safety helmets and protective clothing;
(6A-20) storing the wearing information of the safety helmet and the protective clothing of the current frame personnel.
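A minimal Python sketch of the illumination-threshold model selection in steps (6A-7) to (6A-19) follows; the model key names are illustrative assumptions, and a returned pair of keys means that the two adjacent models are both run and their results fused as in steps (6A-10), (6A-14) and (6A-18).

def select_models(lux: float) -> tuple:
    """Choose the improved YOLOv3-Tiny model(s) matching the ambient illumination."""
    if lux < 0.0018:
        return ("night",)                    # (6A-8)  night model only
    if lux < 0.0022:
        return ("night", "day_low")          # (6A-10) adjacent-model fusion
    if lux < 9:
        return ("day_low",)                  # (6A-12) daytime low light intensity
    if lux < 11:
        return ("day_low", "day_medium")     # (6A-14) adjacent-model fusion
    if lux < 90:
        return ("day_medium",)               # (6A-16) daytime medium light intensity
    if lux < 110:
        return ("day_medium", "day_high")    # (6A-18) adjacent-model fusion
    return ("day_high",)                     # (6A-19) daytime strong light intensity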
4.1 the YOLOv4 target detection network carries out personnel detection (YOLOv4 is faster than YOLOv3 in detection speed and higher in identification precision), and is characterized by comprising the following steps:
(6A-4-1) extracting a video frame containing an operator from monitoring videos shot by a monitoring camera under different illumination intensities on an operation site, and establishing an operator image data set;
(6A-4-2) marking the personnel in the images by using the LabelImg tool to obtain corresponding data set files in XML format, and converting the XML-format data set into a txt-format data set suitable for the YOLOv4 target detection network;
(6A-4-3) building a YOLOv4 target detection network by using the darknet deep learning framework, with the following steps:
1) a BackBone part of a YOLOv4 target detection network is built by adopting a CSPDarknet53 network structure, an activation function of the BackBone part uses a Mish activation function, and the formula is as follows:
f(x) = x · tanh(ln(1 + e^x))
wherein x is the input value of the network layer where the activation function is located; the curve of the Mish activation function is smooth, allowing better information to enter the neural network and thus yielding better accuracy and generalization; in addition, the DropBlock method is adopted to randomly discard feature-map image information so as to relieve overfitting;
2) constructing a Neck part of a YOLOv4 target detection network by adopting an SPP module and an FPN + PAN structure;
3) the CIOU_LOSS function is adopted as the target-frame regression loss function of the YOLOv4 target detection network, making prediction-frame regression faster and more accurate; its formula is

CIOU_LOSS = 1 - IOU + Distance_2^2 / Distance_C^2 + v^2 / ((1 - IOU) + v)

where IOU is the intersection-over-union of the target detection prediction frame and the real frame, Distance_C is the diagonal distance of the minimum enclosing rectangle of the prediction frame and the real frame, Distance_2 is the Euclidean distance between the center points of the prediction frame and the real frame, and v is a parameter measuring the consistency of the aspect ratios of the prediction frame and the real frame;
4) the YOLOv4 target detection network employs the DIOU_nms target frame screening method (for overlapping targets, DIOU_nms performs better than traditional NMS).
(6A-4-4) carrying out object classification training of the YOLOv4 target detection network with the COCO image data set (the COCO data set contains about 200,000 images captured from complex everyday scenes, with 80 target categories and more than 500,000 target labels, and is currently the most widely used public target detection data set) to obtain a partially trained YOLOv4 network model;
(6A-4-5) on the basis of the result of the step (6A-4-4), training a YOLOv4 target detection network by using the manufactured field worker image data set to obtain a YOLOv4 network model capable of being used for field worker detection;
(6A-4-6) inputting the video frame into a YOLOv4 target detection network to obtain the credibility of the detected personnel in the video frame and the coordinate parameters of the personnel target frame.
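A minimal Python sketch of steps (6A-4) to (6A-6) follows, assuming the trained YOLOv4 person model is loaded through OpenCV's darknet importer (the patent builds and trains the network with darknet but does not prescribe an inference API); the file names and the 608x608 input size are placeholders, and the DIOU_nms target-frame screening is omitted for brevity.

import cv2
import numpy as np

# Placeholder file names for the trained YOLOv4 person-detection model.
person_net = cv2.dnn.readNetFromDarknet("yolov4-person.cfg", "yolov4-person.weights")

def detect_and_crop_persons(frame: np.ndarray, conf_thresh: float = 0.5) -> list:
    """Steps (6A-4)..(6A-6): detect persons and crop their target-frame regions."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (608, 608), swapRB=True, crop=False)
    person_net.setInput(blob)
    outputs = person_net.forward(person_net.getUnconnectedOutLayersNames())
    crops = []
    for out in outputs:
        for det in out:                     # det = [cx, cy, bw, bh, objectness, class scores...]
            score = float(det[4]) * float(det[5:].max())
            if score < conf_thresh:         # (6A-5) credibility of the detected person
                continue
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            x1, y1 = max(int(cx - bw / 2), 0), max(int(cy - bh / 2), 0)
            x2, y2 = min(int(cx + bw / 2), w), min(int(cy + bh / 2), h)
            crops.append(frame[y1:y2, x1:x2])   # (6A-6) cut the person target-frame area
    return crops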
4.2 The improved YOLOv3-Tiny target detection network detects personnel safety helmets and protective clothing based on a plurality of network model weights matched to different illumination intensities (modifying the backbone network increases the depth of the original network, and using different network model weights for recognition under different illumination intensities greatly improves target classification capability and detection accuracy under different environmental backgrounds while preserving recognition speed), with the following steps:
(6A-8-1) extracting video frames containing personal safety caps and protective clothing from monitoring videos shot by monitoring cameras under different illumination intensities in an operation site, respectively establishing an image data set of the daytime low-light-intensity personal safety caps and protective clothing, an image data set of the daytime medium-light-intensity personal safety caps and protective clothing, an image data set of the daytime high-light-intensity personal safety caps and protective clothing, and an image data set of the nighttime personal safety caps and protective clothing, and expanding the data sets by utilizing a Mosaic data enhancement mode (splicing 4 pictures into 1 picture through random zooming, random cutting and random arrangement);
(6A-8-2) marking the personnel safety helmets and protective clothing in the images by using the LabelImg tool to obtain corresponding data set files in XML format, and converting the XML-format data set into a txt-format data set suitable for the YOLOv3-Tiny target detection network;
(6A-8-3) constructing an improved YOLOv3-Tiny target detection network by using the darknet deep learning framework, with the following steps:
1) carrying out network model modification and pruning operation by taking a YOLOv3-Tiny target detection network as a basic framework;
2) replacing the original backbone network of YOLOv3-Tiny with the Google EfficientNet-B0 deep convolutional neural network, removing layers 132-135 of the EfficientNet-B0 deep convolutional neural network, and adding 2 convolutional layers, 1 shortcut layer, 1 convolutional layer and a YOLO layer after layer 131;
3) on the basis of the network obtained in step 2), sequentially connecting 1 route layer, 1 convolutional layer, 1 down-sampling layer, 1 shortcut layer, 1 convolutional layer, 2 shortcut layers, 1 convolutional layer and 1 YOLO layer after layer 133 of the network to obtain the improved YOLOv3-Tiny target detection network;
(6A-8-4) performing clustering calculation on the real-frame length and width parameters of the safety helmets and protective clothing in the safety helmet and protective clothing data sets with the k-means algorithm (for a given data set, the data are divided into k clusters by numerical distance so that within-cluster distances are as small as possible and between-cluster distances are as large as possible; a clustering sketch follows this list), and replacing the original prior-frame length and width data of the YOLOv3-Tiny target detection network with the length and width data obtained by real-frame clustering so as to improve the target-frame detection rate;
(6A-8-5) training the improved YOLOv3-Tiny target detection network by adopting the manufactured data set of the safety helmet of the person with low light intensity in the daytime and the protective clothing to obtain a network model suitable for detecting the safety helmet of the person and the protective clothing under the low light intensity in the daytime;
(6A-8-6) training the improved YOLOv3-Tiny target detection network by adopting the manufactured data set of the light intensity personnel safety helmet and the protective clothing in the daytime to obtain a network model suitable for the personnel safety helmet and the protective clothing detection under the illumination intensity in the daytime;
(6A-8-7) training the improved YOLOv3-Tiny target detection network by adopting the manufactured data set of the personnel safety helmet with strong light intensity in the daytime and the protective clothing to obtain a network model suitable for detecting the personnel safety helmet and the protective clothing under the strong light intensity in the daytime;
(6A-8-8) training the improved YOLOv3-Tiny target detection network by using the manufactured night personnel safety helmet and protective clothing data set to obtain a network model for detecting the night personnel safety helmet and protective clothing;
(6A-8-9) inputting the cut personnel target area into an improved YOLOv3-Tiny target detection network suitable for different illumination intensities according to the illumination intensity data of the field environment, and obtaining the reliability of the safety helmet and the protective clothing worn by the field personnel and the coordinate parameters of the target frame of the safety helmet and the protective clothing.
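A minimal Python sketch of the prior-frame clustering in step (6A-8-4) follows; a plain Euclidean k-means over (width, height) pairs is assumed, and k = 6 is an illustrative choice matching the six default anchors of YOLOv3-Tiny.

import numpy as np

def cluster_anchor_boxes(box_wh: np.ndarray, k: int = 6, iters: int = 100) -> np.ndarray:
    """Step (6A-8-4): k-means over real-frame (width, height) pairs, shape (N, 2)."""
    rng = np.random.default_rng(0)
    centers = box_wh[rng.choice(len(box_wh), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign every real frame to its nearest cluster center
        dist = np.linalg.norm(box_wh[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # move each center to the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centers[j] = box_wh[labels == j].mean(axis=0)
    return centers  # replaces the original prior-frame width/height of YOLOv3-Tiny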
5. Adjacent protection state fusion judgment (ensuring smooth model switching)
Because the illumination sensor may have measurement errors and illumination values at different positions in the actual environment may differ, model mismatch may occur when the illumination value measured by the safety protection identification unit lies within the neighborhood of the critical value of an illumination intensity recognition model. To ensure recognition accuracy, the recognition results of the low-level illumination intensity recognition model closest to the measured illumination value and of the high-level illumination intensity recognition model closest to the measured illumination value can be obtained separately, and the wearing of the safety helmet and protective clothing is then judged by fusion calculation. The specific fusion judgment process is as follows:
(6A-10-1) the critical light intensity value distinguishing two adjacent illumination intensity recognition models is denoted x_l (the critical light intensity values distinguishing the night recognition model, the daytime low-light-intensity recognition model, the daytime medium-light-intensity recognition model and the daytime strong-light-intensity recognition model are 0.002lx, 10lx and 100lx respectively); the corresponding neighborhood lower-limit light intensity value is x_ll = 0.9x_l and the neighborhood upper-limit light intensity value is x_lh = 1.1x_l. If the measured light intensity value is x, the confidence weight of the low-level illumination intensity model is recorded as
w_l = (x_lh - x) / (x_lh - x_ll)
Confidence weights for high-level illumination intensity model identification are recorded as
w_h = (x - x_ll) / (x_lh - x_ll)
(6A-10-2) the personnel safety helmet and protective clothing are recognized with the improved YOLOv3-Tiny low-level illumination intensity recognition model, giving a confidence h1 that the safety helmet is worn and a confidence c1 that the protective clothing is worn; the weighted confidence that the person is wearing the safety helmet is m1(A) = h1·w_l, the weighted confidence of not wearing the safety helmet is m1(B) = (1 - h1)·w_l, the weighted confidence that the helmet-wearing state is unknown is m1(C) = 1 - w_l, the weighted confidence of wearing the protective clothing is m1(D) = c1·w_l, the weighted confidence of not wearing the protective clothing is m1(E) = (1 - c1)·w_l, and the weighted confidence that the protective-clothing-wearing state is unknown is m1(F) = 1 - w_l;
(6A-10-3) the personnel safety helmet and protective clothing are recognized with the improved YOLOv3-Tiny high-level illumination intensity recognition model, giving a confidence h2 that the safety helmet is worn and a confidence c2 that the protective clothing is worn; the weighted confidence that the person is wearing the safety helmet is m2(A) = h2·w_h, the weighted confidence of not wearing the safety helmet is m2(B) = (1 - h2)·w_h, the weighted confidence that the helmet-wearing state is unknown is m2(C) = 1 - w_h, the weighted confidence of wearing the protective clothing is m2(D) = c2·w_h, the weighted confidence of not wearing the protective clothing is m2(E) = (1 - c2)·w_h, and the weighted confidence that the protective-clothing-wearing state is unknown is m2(F) = 1 - w_h;
(6A-10-4) based on the fusion of the recognition results of the two adjacent illumination intensity recognition models, the credibility m(A) of wearing the safety helmet, the credibility m(B) of not wearing the safety helmet, the credibility m(D) of wearing the protective clothing and the credibility m(E) of not wearing the protective clothing are calculated from m1(·) and m2(·);
(the four fusion formulas appear as images in the original document)
(6A-10-5) comparing m(A) with m(B): if m(A) ≥ m(B), the fusion judgment is that the safety helmet is worn; if m(A) < m(B), the fusion judgment is that the safety helmet is not worn;
(6A-10-6) comparing m(D) with m(E): if m(D) ≥ m(E), the fusion judgment is that the protective clothing is worn; if m(D) < m(E), the fusion judgment is that the protective clothing is not worn.
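A minimal Python sketch of the adjacent-model fusion follows. The weight and fusion formulas themselves appear only as images in the original document, so the sketch assumes a linear cross-fade for w_l and w_h over the 0.9x_l to 1.1x_l neighborhood and a Dempster-Shafer-style combination of the weighted confidences; both are assumptions consistent with the definitions of m1(·) and m2(·) above, not the patent's exact formulas.

def fuse_adjacent_models(x: float, x_l: float,
                         h1: float, c1: float, h2: float, c2: float) -> dict:
    """Steps (6A-10-1)..(6A-10-6) under the assumptions stated above.

    x       measured illumination, inside [0.9*x_l, 1.1*x_l]
    h1, c1  helmet / clothing confidences from the low-level model
    h2, c2  helmet / clothing confidences from the high-level model
    """
    x_ll, x_lh = 0.9 * x_l, 1.1 * x_l
    w_l = (x_lh - x) / (x_lh - x_ll)        # assumed linear cross-fade weights
    w_h = (x - x_ll) / (x_lh - x_ll)

    def combine(p1: float, p2: float) -> tuple:
        # weighted confidences: worn / not worn / unknown, per model
        a1, b1, u1 = p1 * w_l, (1 - p1) * w_l, 1 - w_l
        a2, b2, u2 = p2 * w_h, (1 - p2) * w_h, 1 - w_h
        conflict = a1 * b2 + b1 * a2        # assumed Dempster-Shafer conflict mass
        norm = 1 - conflict if conflict < 1 else 1e-9
        worn = (a1 * a2 + a1 * u2 + u1 * a2) / norm
        not_worn = (b1 * b2 + b1 * u2 + u1 * b2) / norm
        return worn, not_worn

    mA, mB = combine(h1, h2)                # helmet: m(A) versus m(B)
    mD, mE = combine(c1, c2)                # protective clothing: m(D) versus m(E)
    return {"helmet_worn": mA >= mB, "clothing_worn": mD >= mE}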
6. In connection with FIG. 6, the event analysis has the following steps:
(6B-1) reading the identification results of the personnel safety helmet and the protective clothing of the current video frame;
(6B-2) judging whether the current video frame camera ip belongs to a certain event in the event task dictionary, and if the current video frame camera ip belongs to the certain event in the event task dictionary, executing the step (6B-3); otherwise, executing the step (6B-4);
(6B-3) putting the current video frame data into the video frame data queue corresponding to the event;
(6B-4) creating a new event task, and putting the current video frame data into a video frame data queue corresponding to the event;
(6B-5) judging whether the number of data items in the video frame data queue is equal to 60, and if it is not equal to 60, going to step (6B-5);
(6B-6) counting the numbers of persons not wearing protective clothing and not wearing a safety helmet in the video frame data queue;
(6B-7) judging whether the number of instances of unworn protective clothing or unworn safety helmets exceeds 70% of the total number of data items in the video frame data queue, and if not, going to step (6B-9);
(6B-8) performing an event upload operation;
and (6B-9) releasing the resources.
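A minimal sketch of the event analysis loop above: the queue length of 60 frames and the 70% threshold come from the text, while the per-camera task dictionary, the result fields and the upload helper are assumptions.

```python
from collections import defaultdict

QUEUE_LEN = 60          # (6B-5): number of frames collected per event task
VIOLATION_RATIO = 0.7   # (6B-7): threshold on the share of violating frames

event_tasks = defaultdict(list)   # camera ip -> queue of per-frame recognition results

def upload_event(camera_ip, frames):
    # Placeholder for the event upload flow sketched in the next section.
    print(f"uploading event for camera {camera_ip}: {len(frames)} frames")

def analyse_frame(camera_ip, frame_result):
    """frame_result: dict with booleans 'helmet_worn' and 'clothing_worn'."""
    queue = event_tasks[camera_ip]      # (6B-2)/(6B-4): reuse or create the event task
    queue.append(frame_result)          # (6B-3): put the frame result into the queue
    if len(queue) < QUEUE_LEN:          # (6B-5): keep collecting until 60 frames
        return
    violations = sum(1 for r in queue
                     if not r["helmet_worn"] or not r["clothing_worn"])   # (6B-6)
    if violations > VIOLATION_RATIO * QUEUE_LEN:                          # (6B-7)
        upload_event(camera_ip, queue)                                    # (6B-8)
    event_tasks.pop(camera_ip, None)                                      # (6B-9)
```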
6.1 In conjunction with FIG. 7, event upload has the following steps:
(6B-8-1) inputting pictures and video information needing to be uploaded;
(6B-8-2) uploading the event;
(6B-8-3) judging whether the event uploading is successful, if so, ending the flow, otherwise, turning to the step (6B-8-4);
and (6B-8-4) saving the picture and video information to be uploaded to local storage.
7. With reference to fig. 8, the periodic deletion of local cache events has the following steps:
(8-1) judging whether a cache event exists locally, if not, turning to the step (8-2), otherwise, turning to the step (8-3);
(8-2) after waiting a fixed time, going to step (8-1);
(8-3) uploading the event;
(8-4) judging whether the event uploading is successful, if so, turning to the step (8-5), otherwise, turning to the step (8-2);
and (8-5) deleting the local cache event.
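The upload and cache-cleanup flows above can be sketched as follows, assuming the post request mentioned in the overall process is an HTTP POST; the endpoint URL, cache directory and payload layout are placeholders, not values from the patent.

```python
import json
import os
import time

import requests

UPLOAD_URL = "http://example.local/api/events"   # hypothetical endpoint
CACHE_DIR = "event_cache"                         # hypothetical local cache directory

def post_event(payload):
    """(6B-8-2)/(6B-8-3): upload one event and report whether it succeeded."""
    try:
        resp = requests.post(UPLOAD_URL, json=payload, timeout=10)
        return resp.status_code == 200
    except requests.RequestException:
        return False

def upload_or_cache(payload):
    """(6B-8-1)..(6B-8-4): upload the event, saving it locally on failure."""
    if not post_event(payload):
        os.makedirs(CACHE_DIR, exist_ok=True)
        name = os.path.join(CACHE_DIR, f"{int(time.time() * 1000)}.json")
        with open(name, "w") as f:
            json.dump(payload, f)

def retry_cached_events(period_s=60):
    """(8-1)..(8-5): periodically re-upload cached events, deleting them on success."""
    while True:
        files = os.listdir(CACHE_DIR) if os.path.isdir(CACHE_DIR) else []
        for name in files:
            path = os.path.join(CACHE_DIR, name)
            with open(path) as f:
                payload = json.load(f)
            if post_event(payload):
                os.remove(path)          # (8-5): delete the local cache event
        time.sleep(period_s)             # (8-2): wait a fixed time before re-checking
```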
An example of the cascade identification algorithm is given below:
A YOLOv4 target detection network is cascaded with an improved YOLOv3-Tiny target detection network, and personal safety helmets and protective clothing are identified based on multiple network model weights matched to different light intensities, with the following steps:
(6A-1) acquiring a video frame from the data queue;
(6A-2) judging whether the video frame is successfully acquired, if so, executing the step (6A-4), otherwise, executing the step (6A-3);
(6A-3) waiting until the data queue has data and going to step (6A-1);
(6A-4) inputting the video frames into a YOLOv4 target detection network for personnel detection;
(6A-5) judging whether the person is detected according to the credibility of the detected person output by the YOLOv4 target detection network, if no person is detected, executing the step (6A-1), otherwise, executing the step (6A-6);
(6A-6) cutting the human target frame area detected in the video frame according to the coordinate parameter of the human target frame output by the YOLOv4 target detection network;
(6A-7) judging whether the ambient light intensity is less than 0.0018Lux, if so, executing the step (6A-8), otherwise, executing the step (6A-9);
(6A-8) inputting the cut-out image of the area of the human target frame into a modified YOLOv3-Tiny target detection network suitable for night recognition for detecting human safety helmets and protective clothing, and then executing the step (6A-20);
(6A-9) judging whether the ambient illumination intensity is less than 0.0022Lux, if so, executing the step (6A-10), otherwise, executing the step (6A-11);
(6A-10) respectively inputting the cut personnel target frame area images into the improved YOLOv3-Tiny target detection networks suitable for night recognition and for daytime weak light intensity recognition to detect the personnel safety helmet and protective clothing, performing fusion judgment of the protective state based on the recognition results of the two adjacent models, and then executing the step (6A-20);
(6A-11) judging whether the ambient light intensity is less than 9Lux, if so, executing the step (6A-12), otherwise, executing the step (6A-13);
(6A-12) inputting the cut-out image of the human target frame area into a modified YOLOv3-Tiny target detection network suitable for daytime low-light intensity recognition for detecting human safety helmets and protective clothing, and then executing the step (6A-20);
(6A-13) judging whether the ambient light intensity is less than 11Lux, if so, executing the step (6A-14), otherwise, executing the step (6A-15);
(6A-14) respectively inputting the cut personnel target frame area images into the improved YOLOv3-Tiny target detection networks suitable for daytime weak illumination intensity recognition and for daytime medium illumination intensity recognition to detect the personnel safety helmet and protective clothing, performing fusion judgment of the protective state based on the recognition results of the two adjacent models, and then executing the step (6A-20);
(6A-15) judging whether the ambient illumination intensity is less than 90Lux, if so, executing the step (6A-16), otherwise, executing the step (6A-17);
(6A-16) inputting the cut image of the personnel target frame area into the improved YOLOv3-Tiny target detection network suitable for daytime medium illumination intensity recognition to detect the personnel safety helmet and protective clothing, and then executing the step (6A-20);
(6A-17) judging whether the ambient light intensity is less than 110Lux, if so, executing the step (6A-18), otherwise, executing the step (6A-19);
(6A-18) respectively inputting the cut personnel target frame area images into the improved YOLOv3-Tiny target detection networks suitable for daytime medium illumination intensity recognition and for daytime strong illumination intensity recognition to detect the personnel safety helmet and protective clothing, performing fusion judgment of the protective state based on the recognition results of the two adjacent models, and then executing the step (6A-20);
(6A-19) inputting the cut image of the area of the human target frame into a modified YOLOv3-Tiny target detection network suitable for daytime strong illumination intensity recognition to detect human safety helmets and protective clothing;
(6A-20) storing the wearing information of the safety helmet and the protective clothing of the current frame personnel.
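The routing in steps (6A-7) to (6A-19) can be summarized as follows: each threshold is a ±10% neighborhood around the critical values 0.002 lx, 10 lx and 100 lx, and inside a neighborhood the two adjacent models are run and fused. The model names below are placeholders for the four trained weight files.

```python
MODELS = ["night", "day_weak", "day_medium", "day_strong"]
BOUNDARIES = [0.002, 10.0, 100.0]   # critical light intensity values x_l (lx)

def select_models(lux):
    """Return the single applicable model, or the two adjacent models whose
    results must be fused when lux falls inside a boundary neighborhood."""
    for i, x_l in enumerate(BOUNDARIES):
        if lux < 0.9 * x_l:                   # clearly below the boundary
            return [MODELS[i]]
        if lux < 1.1 * x_l:                   # inside the +/-10% neighborhood
            return [MODELS[i], MODELS[i + 1]]
    return [MODELS[-1]]                       # above the last boundary

# Examples matching the thresholds in the text:
#   select_models(0.0015) -> ['night']                  (< 0.0018 lx)
#   select_models(0.0021) -> ['night', 'day_weak']      (0.0018 .. 0.0022 lx)
#   select_models(50)     -> ['day_medium']
#   select_models(105)    -> ['day_medium', 'day_strong']
```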
1.1 The method for detecting personnel by using the YOLOv4 target detection network comprises the following steps:
(6A-4-1) extracting 10,000 color video frames containing operators from videos shot by monitoring cameras of operation sites with a daytime illumination intensity of 0.002-10 Lux, 10,000 color video frames containing operators from videos shot at operation sites with a daytime illumination intensity of 10-100 Lux, 20,000 color video frames containing operators from videos shot at operation sites with a daytime illumination intensity above 100 Lux, and 10,000 infrared gray-level video frames containing operators from videos shot by monitoring cameras of operation sites at night; establishing an operator image data set and dividing it into a training set and a verification set at a ratio of 9:1;
(6A-4-2) establishing a train.txt file containing the storage paths of all pictures in the training set, and a val.txt file containing the storage paths of all pictures in the verification set; marking the personnel in the data set images with the LabelImg tool, labeling each personnel area as person, to obtain corresponding data set files in XML format; and converting the XML-format data set into a txt-format data set suitable for the YOLOv4 target detection network with a python script program;
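A sketch of the kind of python conversion script mentioned in step (6A-4-2), assuming the usual LabelImg (Pascal VOC) XML layout; the paths and the single-entry class list are illustrative.

```python
import xml.etree.ElementTree as ET

CLASSES = ["person"]   # category list matching yolov4.names

def convert_annotation(xml_path, txt_path):
    """Convert one LabelImg XML file into a YOLO-format txt label file."""
    root = ET.parse(xml_path).getroot()
    w = float(root.find("size/width").text)
    h = float(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        cls = obj.find("name").text
        if cls not in CLASSES:
            continue
        box = obj.find("bndbox")
        xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
        xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
        # YOLO txt format: class_id cx cy bw bh, all normalized to [0, 1]
        cx, cy = (xmin + xmax) / 2 / w, (ymin + ymax) / 2 / h
        bw, bh = (xmax - xmin) / w, (ymax - ymin) / h
        lines.append(f"{CLASSES.index(cls)} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}")
    with open(txt_path, "w") as f:
        f.write("\n".join(lines))
```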
(6A-4-3) building a YOLOv4 target detection network with the darknet deep learning framework, comprising the following steps:
1) a BackBone part of a YOLOv4 target detection network is built by adopting a CSPDarknet53 network structure, an activation function of the BackBone part uses a Mish activation function, and the formula is as follows:
f(x) = x·tanh(log(1 + e^x))
wherein x is the input value of the network layer where the activation function is located and tanh() is the hyperbolic tangent function; the curve of the Mish activation function is smooth, which allows better information to flow into the neural network and thus yields better accuracy and generalization;
2) constructing a Neck part of a YOLOv4 target detection network by adopting an SPP module and an FPN + PAN structure;
3) the CIOU_Loss function is adopted as the target frame regression loss function of the YOLOv4 target detection network, which makes prediction frame regression faster and more accurate; the formula is
CIOU_Loss = 1 - IOU + (Distance_2)^2 / (Distance_c)^2 + V^2 / ((1 - IOU) + V)
wherein IOU is the intersection-over-union of the target detection prediction frame and the real frame, Distance_c is the diagonal length of the minimum enclosing rectangle of the prediction frame and the real frame, Distance_2 is the Euclidean distance between the center points of the prediction frame and the real frame, and V is a parameter measuring the consistency of the aspect ratios of the prediction frame and the real frame (a small numerical sketch of the Mish activation and this loss is given after this list);
4) the YOLOv4 target detection network adopts the DIOU_nms target frame screening method (when detecting overlapping targets, DIOU_nms performs better than traditional NMS).
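A small numerical sketch of the two formulas above: the Mish expression follows the formula given in the text, while the CIOU_Loss expression reconstructs the standard CIoU formulation from the quantities defined above (the original equation is an image), so it should be read as an interpretation rather than a verbatim copy.

```python
import math

def mish(x):
    """Mish activation: f(x) = x * tanh(log(1 + e^x))."""
    return x * math.tanh(math.log(1.0 + math.exp(x)))

def ciou_loss(iou, distance_2, distance_c, v):
    """Assumed CIOU_Loss: iou is the prediction/ground-truth intersection-over-union,
    distance_2 the distance between box centers, distance_c the diagonal of the
    minimum enclosing rectangle, v the aspect-ratio consistency term."""
    return 1.0 - iou + (distance_2 ** 2) / (distance_c ** 2) + (v ** 2) / ((1.0 - iou) + v)

print(mish(1.0))   # ~0.865
```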
(6A-4-4) establishing a yolov4.names file, in which each line is the category name of an object to be identified, and setting the first line to person;
establishing a yolov4.data file, which stores information such as the number of identification categories, the training set file address, the verification set file address and the name file address; the number of categories in the yolov4.data file is set to 1, the training set file address is set to the address of the "train.txt" file, the verification set file address is set to the address of the "val.txt" file, and the name file address is set to the address of the "yolov4.names" file.
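Generating the two configuration files described above can be sketched as follows, using the usual darknet conventions; the train/val paths are placeholders, and the backup entry is a field darknet normally expects rather than something stated in the text.

```python
# Write the category name file: one class per line, first line "person".
with open("yolov4.names", "w") as f:
    f.write("person\n")

# Write the data file holding the category count and the file addresses.
data_cfg = "\n".join([
    "classes = 1",              # number of identification categories
    "train = data/train.txt",   # training set file address (placeholder path)
    "valid = data/val.txt",     # verification set file address (placeholder path)
    "names = yolov4.names",     # name file address
    "backup = backup/",         # folder darknet uses to store weights during training
])
with open("yolov4.data", "w") as f:
    f.write(data_cfg + "\n")
```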
Carrying out object classification training on the YOLOv4 target detection network with the COCO image data set (the COCO data set comprises 200,000 images captured from complex daily scenes, has 80 target categories and more than 500,000 target labels, and is currently the most widely used public target detection data set), training iteratively on 64 images per batch and iterating 500,000 times, to obtain a partially trained YOLOv4 network model;
(6A-4-5) on the basis of the result of step (6A-4-4), training the YOLOv4 target detection network with the prepared image data set of field workers, training iteratively on 64 pictures per batch and iterating 200,000 times, to obtain a YOLOv4 network model usable for field worker detection;
(6A-4-6) inputting the video frame into the YOLOv4 target detection network to obtain the reliability of the detected personnel in the video frame and the coordinate parameters of the personnel target frames.
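A sketch of steps (6A-4-6) and (6A-6): running the trained YOLOv4 model on a frame and cropping the detected person boxes for the cascaded stage. Loading the darknet weights through OpenCV's DNN module, the 416x416 input size and the file names are assumptions.

```python
import cv2

net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")   # placeholder files
layer_names = net.getUnconnectedOutLayersNames()

def detect_and_crop_persons(frame, conf_thresh=0.5):
    """Return the cropped person regions of one BGR video frame (NMS omitted)."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    crops = []
    for output in net.forward(layer_names):
        for det in output:                    # det: [cx, cy, bw, bh, objectness, person]
            conf = float(det[4] * det[5])     # reliability of the single "person" class
            if conf < conf_thresh:
                continue
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            x1, y1 = max(int(cx - bw / 2), 0), max(int(cy - bh / 2), 0)
            x2, y2 = min(int(cx + bw / 2), w), min(int(cy + bh / 2), h)
            crops.append(frame[y1:y2, x1:x2])  # (6A-6): cut the person target frame area
    return crops
```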
1.2 The improved YOLOv3-Tiny target detection network detects personnel safety helmets and protective clothing based on multiple network model weights matched to different illumination intensities, with the following specific steps:
(6A-8-1) extracting 10,000 color video frames containing safety helmets and protective clothing from videos shot by monitoring cameras of operation sites with a daytime illumination intensity of 0.002 Lux-10 Lux, 10,000 color video frames containing safety helmets and protective clothing from videos shot at operation sites with a daytime illumination intensity of 10 Lux-100 Lux, 20,000 color video frames containing safety helmets and protective clothing from videos shot at operation sites with a daytime illumination intensity above 100 Lux, and 10,000 infrared gray-level video frames containing safety helmets and protective clothing from videos shot by monitoring cameras of operation sites at night; respectively establishing an image data set of daytime weak light intensity personnel safety helmets and protective clothing, an image data set of daytime medium light intensity personnel safety helmets and protective clothing, an image data set of daytime strong light intensity personnel safety helmets and protective clothing, and an image data set of night personnel safety helmets and protective clothing; expanding the four data sets with the Mosaic data enhancement method, splicing 4 pictures into 1 new picture by random zooming, random cropping and random arrangement to strengthen the recognition of small targets (a sketch of this splicing is given after the file list below); and dividing each of the four data sets into a training set and a verification set at a ratio of 9:1;
establishing a train0.txt file which comprises storage paths of all pictures in image training sets of safety helmets and protective clothing for personnel with weak light intensity in the daytime; establishing a val0.txt file which comprises storage paths of all pictures of a safety helmet of a person with weak light intensity in the daytime and an image verification set of a protective clothing;
establishing a train1.txt file containing the storage paths of all pictures in the daytime medium light intensity personnel safety helmet and protective clothing image training set; establishing a val1.txt file containing the storage paths of all pictures in the daytime medium light intensity personnel safety helmet and protective clothing image verification set;
establishing a train2.txt file which comprises storage paths of all pictures in image training sets of safety helmets and protective clothing for personnel with strong light intensity in the daytime; establishing a val2.txt file which comprises storage paths of all pictures of a daytime high-light-intensity personnel safety helmet and a protective clothing image verification set;
establishing a train3.txt file which comprises storage paths of all pictures in image training sets of night personnel safety helmets and protective clothing; establishing a val3.txt file which comprises storage paths of all pictures of night personnel safety helmet and protective clothing image verification sets;
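The Mosaic splicing mentioned in step (6A-8-1) can be sketched as follows: four pictures are randomly zoomed, cropped and arranged into one new picture. The output size, jitter ranges and padding color are assumptions, and a full implementation would also transform the box labels in the same way.

```python
import random

import cv2
import numpy as np

def mosaic(images, out_size=416):
    """Splice 4 BGR images into one out_size x out_size mosaic image."""
    assert len(images) == 4
    cx = random.randint(out_size // 4, 3 * out_size // 4)   # random arrangement point
    cy = random.randint(out_size // 4, 3 * out_size // 4)
    canvas = np.zeros((out_size, out_size, 3), dtype=np.uint8)
    regions = [(0, 0, cx, cy), (cx, 0, out_size, cy),
               (0, cy, cx, out_size), (cx, cy, out_size, out_size)]
    for img, (x1, y1, x2, y2) in zip(images, regions):
        scale = random.uniform(0.6, 1.4)                     # random zooming
        img = cv2.resize(img, None, fx=scale, fy=scale)
        tw, th = x2 - x1, y2 - y1
        h, w = img.shape[:2]
        # pad if the scaled image is smaller than the target region
        img = cv2.copyMakeBorder(img, 0, max(th - h, 0), 0, max(tw - w, 0),
                                 cv2.BORDER_CONSTANT, value=(114, 114, 114))
        h, w = img.shape[:2]
        ox, oy = random.randint(0, w - tw), random.randint(0, h - th)   # random cropping
        canvas[y1:y2, x1:x2] = img[oy:oy + th, ox:ox + tw]
    return canvas
```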
(6A-8-2) using the LabelImg tool to mark the personnel safety helmets and protective clothing in the data set images, labeling a head area with a helmet as hat, a head area without a helmet as head, and a protective clothing area as cloth, to obtain corresponding data set files in XML format; and converting the XML-format data set into a txt-format data set suitable for the YOLOv3-Tiny target detection network with a python script file;
(6A-8-3) constructing an improved YOLOv3-Tiny target detection network with the darknet deep learning framework, comprising the following steps:
1) carrying out network model modification and pruning operation by taking a YOLOv3-Tiny target detection network as a basic framework;
2) replacing the original backbone network of YOLOv3-Tiny with the Google EfficientNet-B0 deep convolutional neural network, removing layers 132-135 of the EfficientNet-B0 network, and adding 2 convolutional layers, 1 shortcut layer, 1 convolutional layer and a YOLO layer in sequence after layer 131;
3) on the basis of the network obtained in step 2), connecting 1 route layer, 1 convolutional layer, 1 down-sampling layer, 1 shortcut layer, 1 convolutional layer, 2 shortcut layers, 1 convolutional layer and 1 YOLO layer in sequence after layer 133 of the network, to obtain the improved YOLOv3-Tiny target detection network;
Establishing a yolov3-tiny.names file, in which each line is the category name of an object to be identified, setting the first line to hat, the second line to head and the third line to cloth;
establishing a yolov3-tiny.data file, which stores information such as the number of identification categories, the training set file address, the verification set file address and the name file address; classes in the yolov3-tiny.data file is set to 3, the training set file address is set to the address of the "train.txt" file, the verification set file address is set to the address of the "val.txt" file, and the name file address is set to the address of the "yolov3-tiny.names" file.
(6A-8-4) using the k-means algorithm (for a given data set, the data are divided into k clusters according to the distances between values, making the intra-cluster distances as small as possible and the inter-cluster distances as large as possible) to cluster the real frame width and height parameters of the safety helmets and protective clothing in the safety helmet and protective clothing data sets, and replacing the original prior frame width and height data of the YOLOv3-Tiny target detection network with the width and height data obtained by real frame clustering, so as to improve the target frame detection rate;
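The anchor clustering in step (6A-8-4) can be sketched as plain k-means over (width, height) pairs with k = 6, the number of prior boxes YOLOv3-Tiny uses; many YOLO implementations use 1 - IoU as the distance instead, so this is an illustrative variant rather than the exact procedure.

```python
import random

def kmeans_anchors(wh_pairs, k=6, iters=100):
    """Cluster real-frame (width, height) pairs and return k anchor centroids."""
    centroids = random.sample(wh_pairs, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in wh_pairs:
            # assign each box to the nearest centroid (squared Euclidean distance)
            j = min(range(k), key=lambda i: (w - centroids[i][0]) ** 2
                                            + (h - centroids[i][1]) ** 2)
            clusters[j].append((w, h))
        for i, cl in enumerate(clusters):
            if cl:   # recompute each centroid as the mean of its cluster
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return sorted(centroids)   # replaces the original prior-frame width/height data
```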
(6A-8-5) training the improved YOLOv3-Tiny target detection network with the prepared daytime weak light intensity personnel safety helmet and protective clothing data set, training iteratively on 64 pictures per batch and iterating 50,000 times, to obtain a network model for daytime weak light intensity personnel safety helmet and protective clothing detection;
(6A-8-6) training the improved YOLOv3-Tiny target detection network with the prepared daytime medium light intensity personnel safety helmet and protective clothing data set, training iteratively on 64 pictures per batch and iterating 50,000 times, to obtain a network model usable for safety helmet and protective clothing detection in a daytime medium light intensity environment;
(6A-8-7) training the improved YOLOv3-Tiny target detection network with the prepared daytime strong light intensity personnel safety helmet and protective clothing data set, training iteratively on 64 pictures per batch and iterating 100,000 times, to obtain a network model usable for safety helmet and protective clothing detection in a daytime strong light intensity environment;
(6A-8-8) training the improved YOLOv3-Tiny target detection network with the prepared night personnel safety helmet and protective clothing data set, training iteratively on 64 pictures per batch and iterating 50,000 times, to obtain a network model usable for night personnel safety helmet and protective clothing detection;
(6A-8-9) inputting the cut personnel target area into an improved YOLOv3-Tiny target detection network suitable for different illumination intensities according to the illumination intensity data of the field environment, and obtaining the reliability of the safety helmet and the protective clothing worn by the field personnel and the coordinate parameters of the target frame of the safety helmet and the protective clothing.

Claims (9)

1. An intelligent identification method for safety protection in 5T operation and maintenance is characterized by comprising the following procedures:
(1) the control module acquires configuration parameters and identifies a task start-stop command, wherein the configuration parameters comprise: identifying time length, an ip address of a camera, a user name of the camera, a password of the camera, an external port number of a video stream, the number of stored pictures of each event and a state reporting period;
(2) judging whether to start the identification operation, if so, turning to the step (3), otherwise, returning to the step (1);
(3) acquiring environmental illumination intensity data;
(4) the video stream management module pulls the video stream from the corresponding numbered camera according to the configuration parameters obtained from the control module;
(5) judging whether video frames have failed to be acquired for 600 consecutive seconds, and if so, reporting camera abnormal-state information; otherwise, going to step (6);
(6) the intelligent recognition analysis module carries out intelligent recognition and event analysis on safety helmets and protective clothing of workers in the video frames;
(7) the comprehensive management module uploads the generated event through a post request;
(8) and the integrated management module deletes the locally cached events periodically.
2. The method of claim 1, wherein the pull video stream in the overall process comprises the following specific steps:
(4-1) acquiring the current video frame from the correspondingly numbered camera determined by the configuration parameters according to the RTSP standard streaming protocol;
(4-2) judging whether the current frame of the camera is successfully acquired, if so, executing the step (4-3), otherwise, directly turning to the step (4-4);
(4-3) outputting the current frame data of the camera to a picture data queue;
(4-4) performing a camera status maintenance operation;
(4-5) after waiting for 1 second, go to step (4-1).
3. The method according to claim 2, wherein the specific steps of maintaining the camera status in the pull video stream process are as follows:
(4-4-1) reading the current state of the camera and the number of times of abnormal states of the camera;
(4-4-2) judging whether the current state of the camera is normal, if so, executing the step (4-4-3), otherwise, executing the step (4-4-4);
(4-4-3) setting the number of times of abnormal states of the camera to be 0;
(4-4-4) adding 1 to the abnormal times of the camera;
(4-4-5) judging whether the number of times of the abnormal state of the camera is more than 600, and if the number of times of the abnormal state of the camera is more than 600, executing the step (4-4-6);
(4-4-6) setting the camera status as abnormal.
4. The method of claim 1, wherein the intelligent identification in the overall process adopts a method of cascading a YOLOv4 target detection network with an improved YOLOv3-Tiny target detection network and identifies the wearing of personal safety helmets and protective clothing based on a plurality of network identification models matched to different illumination intensities, with the following specific steps:
(6A-1) acquiring a video frame from the data queue;
(6A-2) judging whether the video frame is successfully acquired, if so, executing the step (6A-4), otherwise, executing the step (6A-3);
(6A-3) waiting until the data queue has data and going to step (6A-1);
(6A-4) inputting the video frames into a YOLOv4 target detection network for personnel detection;
(6A-5) judging whether the person is detected according to the credibility of the detected person output by the YOLOv4 target detection network, if no person is detected, executing the step (6A-1), otherwise, executing the step (6A-6);
(6A-6) cutting the human target frame area detected in the video frame according to the coordinate parameter of the human target frame output by the YOLOv4 target detection network;
(6A-7) judging whether the ambient light intensity is less than 0.0018Lux, if so, executing the step (6A-8), otherwise, executing the step (6A-9);
(6A-8) inputting the cut-out image of the area of the human target frame into a modified YOLOv3-Tiny target detection network suitable for night recognition for detecting human safety helmets and protective clothing, and then executing the step (6A-20);
(6A-9) judging whether the ambient illumination intensity is less than 0.0022Lux, if so, executing the step (6A-10), otherwise, executing the step (6A-11);
(6A-10) respectively inputting the cut personnel target frame area images into the improved YOLOv3-Tiny target detection networks suitable for night recognition and for daytime weak light intensity recognition to detect the personnel safety helmet and protective clothing, performing fusion judgment of the protective state based on the recognition results of the two adjacent models, and then executing the step (6A-20);
(6A-11) judging whether the ambient light intensity is less than 9Lux, if so, executing the step (6A-12), otherwise, executing the step (6A-13);
(6A-12) inputting the cut-out image of the human target frame area into a modified YOLOv3-Tiny target detection network suitable for daytime low-light intensity recognition for detecting human safety helmets and protective clothing, and then executing the step (6A-20);
(6A-13) judging whether the ambient light intensity is less than 11Lux, if so, executing the step (6A-14), otherwise, executing the step (6A-15);
(6A-14) respectively inputting the cut personnel target frame area images into the improved YOLOv3-Tiny target detection networks suitable for daytime weak illumination intensity recognition and for daytime medium illumination intensity recognition to detect the personnel safety helmet and protective clothing, performing fusion judgment of the protective state based on the recognition results of the two adjacent models, and then executing the step (6A-20);
(6A-15) judging whether the ambient illumination intensity is less than 90Lux, if so, executing the step (6A-16), otherwise, executing the step (6A-17);
(6A-16) inputting the cut image of the personnel target frame area into the improved YOLOv3-Tiny target detection network suitable for daytime medium illumination intensity recognition to detect the personnel safety helmet and protective clothing, and then executing the step (6A-20);
(6A-17) judging whether the ambient light intensity is less than 110Lux, if so, executing the step (6A-18), otherwise, executing the step (6A-19);
(6A-18) respectively inputting the cut personnel target frame area images into the improved YOLOv3-Tiny target detection networks suitable for daytime medium illumination intensity recognition and for daytime strong illumination intensity recognition to detect the personnel safety helmet and protective clothing, performing fusion judgment of the protective state based on the recognition results of the two adjacent models, and then executing the step (6A-20);
(6A-19) inputting the cut image of the area of the human target frame into a modified YOLOv3-Tiny target detection network suitable for daytime strong illumination intensity recognition to detect human safety helmets and protective clothing;
(6A-20) storing the wearing information of the safety helmet and the protective clothing of the current frame personnel.
5. The method of claim 4, wherein the method of cascading the improved YOLOv3-Tiny target detection network with the YOLOv4 target detection network is implemented by firstly using the YOLOv4 target detection network to perform human detection, and comprises the following steps:
(6A-4-1) extracting a video frame containing an operator from monitoring videos shot by a monitoring camera under different illumination intensities on an operation site, and establishing an operator image data set;
(6A-4-2) marking the personnel in the images by using the LabelImg tool to obtain corresponding data set files in XML format, and converting the XML-format data set into a txt-format data set suitable for the YOLOv4 target detection network;
(6A-4-3) building a YOLOv4 target detection network with the darknet deep learning framework, comprising the following steps:
1) a BackBone part of a YOLOv4 target detection network is built by adopting a CSPDarknet53 network structure, an activation function of the BackBone part uses a Mish activation function, and the formula is as follows:
f(x) = x·tanh(log(1 + e^x))
wherein x is the input value of the network layer where the activation function is located and tanh() is the hyperbolic tangent function; the curve of the Mish activation function is smooth, which allows better information to flow into the neural network and thus yields better accuracy and generalization; and the Dropblock method is adopted to randomly discard part of the feature map image information so as to alleviate overfitting;
2) constructing a Neck part of a YOLOv4 target detection network by adopting an SPP module and an FPN + PAN structure;
3) the CIOU_Loss function is adopted as the target frame regression loss function of the YOLOv4 target detection network, which makes prediction frame regression faster and more accurate; the formula is
CIOU_Loss = 1 - IOU + (Distance_2)^2 / (Distance_c)^2 + V^2 / ((1 - IOU) + V)
wherein IOU is the intersection-over-union of the target detection prediction frame and the real frame, Distance_c is the diagonal length of the minimum enclosing rectangle of the prediction frame and the real frame, Distance_2 is the Euclidean distance between the center points of the prediction frame and the real frame, and V is a parameter measuring the consistency of the aspect ratios of the prediction frame and the real frame;
4) the YOLOv4 target detection network adopts the DIOU_nms target frame screening method;
(6A-4-4) carrying out object classification training on the YOLOv4 target detection network with the COCO image data set to obtain a partially trained YOLOv4 network model;
(6A-4-5) on the basis of the result of the step (6A-4-4), training a YOLOv4 target detection network by using the manufactured field worker image data set to obtain a YOLOv4 network model capable of being used for field worker detection;
(6A-4-6) inputting the video frame into a YOLOv4 target detection network, and detecting the credibility of the personnel and the coordinate parameters of the personnel target frame.
6. The method of claim 4, wherein in the method of cascading a YOLOv4 target detection network with an improved YOLOv3-Tiny target detection network, the improved YOLOv3-Tiny target detection network then performs the personal safety helmet and protective clothing detection based on a plurality of network model weights matched to different illumination intensities, characterized by the following steps:
(6A-8-1) extracting video frames containing personal safety caps and protective clothing from monitoring videos shot by monitoring cameras under different illumination intensities in an operation site, respectively establishing an image data set of the daytime low-light-intensity personal safety caps and protective clothing, an image data set of the daytime medium-light-intensity personal safety caps and protective clothing, an image data set of the daytime high-light-intensity personal safety caps and protective clothing, and an image data set of the nighttime personal safety caps and protective clothing, and expanding the data sets by utilizing a Mosaic data enhancement mode;
(6A-8-2) marking the personnel safety helmets and protective clothing in the images with the LabelImg tool to obtain corresponding data set files in XML format, and converting the XML-format data set into a txt-format data set suitable for the YOLOv3-Tiny target detection network;
(6A-8-3) constructing an improved YOLOv3-Tiny target detection network with the darknet deep learning framework, comprising the following steps:
1) carrying out network model modification and pruning operation by taking a YOLOv3-Tiny target detection network as a basic framework;
2) replacing the original backbone network of YOLOv3-Tiny with the Google EfficientNet-B0 deep convolutional neural network, removing layers 132-135 of the EfficientNet-B0 network, and adding 2 convolutional layers, 1 shortcut layer, 1 convolutional layer and a YOLO layer in sequence after layer 131;
3) on the basis of the network obtained in step 2), connecting 1 route layer, 1 convolutional layer, 1 down-sampling layer, 1 shortcut layer, 1 convolutional layer, 2 shortcut layers, 1 convolutional layer and 1 YOLO layer in sequence after layer 133 of the network, to obtain the improved YOLOv3-Tiny target detection network;
(6A-8-4) clustering calculation is carried out on the length and width parameters of the real frames of the safety helmet and the protective clothing in the data set of the safety helmet and the protective clothing by using a k-means algorithm, and the original prior frame length and width data of the YOLOv3-Tiny target detection network are replaced by the length and width data obtained by real frame clustering so as to improve the detection rate of the target frame;
(6A-8-5) training the improved YOLOv3-Tiny target detection network by adopting the manufactured data set of the safety helmet of the person with low light intensity in the daytime and the protective clothing to obtain a network model suitable for detecting the safety helmet of the person and the protective clothing under the low light intensity in the daytime;
(6A-8-6) training the improved YOLOv3-Tiny target detection network with the prepared daytime medium light intensity personnel safety helmet and protective clothing data set to obtain a network model suitable for personnel safety helmet and protective clothing detection under daytime medium illumination intensity;
(6A-8-7) training the improved YOLOv3-Tiny target detection network by adopting the manufactured data set of the personnel safety helmet with strong light intensity in the daytime and the protective clothing to obtain a network model suitable for detecting the personnel safety helmet and the protective clothing under the strong light intensity in the daytime;
(6A-8-8) training the improved YOLOv3-Tiny target detection network by using the manufactured night personnel safety helmet and protective clothing data set to obtain a network model for detecting the night personnel safety helmet and protective clothing;
(6A-8-9) inputting the cut personnel target area into an improved YOLOv3-Tiny target detection network suitable for different illumination intensities according to the illumination intensity data of the field environment, and obtaining the reliability of the safety helmet and the protective clothing worn by the field personnel and the coordinate parameters of the target frame of the safety helmet and the protective clothing.
7. The method according to claim 4, wherein, in the step of identifying the wearing of the personal safety helmet and protective clothing with a plurality of network identification models matched to different illumination intensities, if the illumination value measured by the safety protection identification unit is in the neighborhood of the application discrimination value of an illumination intensity identification model, a method of fusing the identification results of the adjacent illumination intensity identification models is adopted: first, the identification result of the low-level illumination intensity identification model closest to the measured illumination value and the identification result of the high-level illumination intensity identification model closest to the measured illumination value are obtained respectively, and then the wearing conditions of the safety helmet and the protective clothing are judged by fusion calculation; the specific fusion judgment process is as follows:
(6A-10-1) recording the critical light intensity value distinguishing the two adjacent illumination intensity recognition models as x_l; the corresponding neighborhood lower limit light intensity value is x_ll = 0.9·x_l and the neighborhood upper limit light intensity value is x_lh = 1.1·x_l; if the current light intensity value is x, the confidence weight of the low-level illumination intensity model identification is recorded as w_l, and the confidence weight of the high-level illumination intensity model identification is recorded as w_h;
[The defining formulas for w_l and w_h over the neighborhood [x_ll, x_lh] are given only as equation images in the original publication and are not reproduced here.]
(6A-10-2) identifying the safety helmet and the protective clothing of the person based on the improved YOLOv3-Tiny low-level illumination intensity identification model, obtaining a reliability h_1 that the person wears a safety helmet and a reliability c_1 that the person wears protective clothing; the weighted confidence that the person wears a safety helmet is m_1(A) = h_1·w_l, the weighted confidence that no safety helmet is worn is m_1(B) = (1 - h_1)·w_l, the weighted confidence that the helmet wearing state is unknown is m_1(C) = 1 - w_l, the weighted confidence that protective clothing is worn is m_1(D) = c_1·w_l, the weighted confidence that no protective clothing is worn is m_1(E) = (1 - c_1)·w_l, and the weighted confidence that the protective clothing wearing state is unknown is m_1(F) = 1 - w_l;
(6A-10-3) identifying the safety helmet and the protective clothing of the person based on the improved YOLOv3-Tiny high-level illumination intensity identification model, obtaining a reliability h_2 that the person wears a safety helmet and a reliability c_2 that the person wears protective clothing; the weighted confidence that the person wears a safety helmet is m_2(A) = h_2·w_h, the weighted confidence that no safety helmet is worn is m_2(B) = (1 - h_2)·w_h, the weighted confidence that the helmet wearing state is unknown is m_2(C) = 1 - w_h, the weighted confidence that protective clothing is worn is m_2(D) = c_2·w_h, the weighted confidence that no protective clothing is worn is m_2(E) = (1 - c_2)·w_h, and the weighted confidence that the protective clothing wearing state is unknown is m_2(F) = 1 - w_h;
(6A-10-4) based on the fusion of the recognition results of the two adjacent illumination intensity recognition models, calculating the fused reliability m(A) that the safety helmet is worn, the fused reliability m(B) that the safety helmet is not worn, the fused reliability m(D) that the protective clothing is worn, and the fused reliability m(E) that the protective clothing is not worn, wherein
[The four fusion formulas for m(A), m(B), m(D) and m(E) are given only as equation images in the original publication and are not reproduced here.]
(6A-10-5) comparing m(A) with m(B): if m(A) ≥ m(B), the fused judgment is that the safety helmet is worn, and if m(A) < m(B), the fused judgment is that the safety helmet is not worn;
(6A-10-6) comparing m(D) with m(E): if m(D) ≥ m(E), the fused judgment is that the protective clothing is worn, and if m(D) < m(E), the fused judgment is that the protective clothing is not worn.
8. The method according to claim 1, wherein the event analysis in the overall process comprises the following specific steps:
(6B-1) reading the identification results of the personnel safety helmet and the protective clothing of the current video frame;
(6B-2) judging whether the current video frame camera ip belongs to a certain event in the event task dictionary, and if the current video frame camera ip belongs to the certain event in the event task dictionary, executing the step (6B-3); otherwise, executing the step (6B-4);
(6B-3) putting the current video frame data into the video frame data queue corresponding to the event;
(6B-4) creating a new event task, and putting the current video frame data into a video frame data queue corresponding to the event;
(6B-5) judging whether the number of data items in the video frame data queue is equal to 60, and if it is not equal to 60, going to step (6B-5);
(6B-6) counting the numbers of persons not wearing protective clothing and not wearing a safety helmet in the video frame data queue;
(6B-7) judging whether the number of instances of unworn protective clothing or unworn safety helmets exceeds 70% of the total number of data items in the video frame data queue, and if not, going to step (6B-9);
(6B-8) performing an event upload operation;
and (6B-9) releasing the resources.
The event uploading method comprises the following specific steps:
(6B-8-1) inputting pictures and video information needing to be uploaded;
(6B-8-2) uploading the event;
(6B-8-3) judging whether the event uploading is successful, if so, ending the flow, otherwise, turning to the step (6B-8-4);
and (6B-8-4) saving the picture and video information to be uploaded to local storage.
9. The method according to claim 1, wherein the specific steps of the overall process for periodically deleting the local cache event are as follows:
(8-1) judging whether a cache event exists locally, if not, turning to the step (8-2), otherwise, turning to the step (8-3);
(8-2) after waiting a fixed time, going to step (8-1);
(8-3) uploading the event;
(8-4) judging whether the event uploading is successful, if so, turning to the step (8-5), otherwise, turning to the step (8-2);
and (8-5) deleting the local cache event.
CN202011319418.0A 2020-11-23 2020-11-23 Intelligent safety protection identification method in 5T operation and maintenance Active CN112434828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011319418.0A CN112434828B (en) 2020-11-23 2020-11-23 Intelligent safety protection identification method in 5T operation and maintenance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011319418.0A CN112434828B (en) 2020-11-23 2020-11-23 Intelligent safety protection identification method in 5T operation and maintenance

Publications (2)

Publication Number Publication Date
CN112434828A true CN112434828A (en) 2021-03-02
CN112434828B CN112434828B (en) 2023-05-16

Family

ID=74693579

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011319418.0A Active CN112434828B (en) 2020-11-23 2020-11-23 Intelligent safety protection identification method in 5T operation and maintenance

Country Status (1)

Country Link
CN (1) CN112434828B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537007A (en) * 2021-07-02 2021-10-22 中国铁道科学研究院集团有限公司电子计算技术研究所 Non-worker intrusion detection and alarm method and device applied to railway platform
CN113971811A (en) * 2021-11-16 2022-01-25 北京国泰星云科技有限公司 Intelligent container feature identification method based on machine vision and deep learning
CN114285976A (en) * 2021-12-27 2022-04-05 深圳市海洋王铁路照明技术有限公司 File management method, device and equipment of camera shooting illumination equipment and storage medium
CN114723418A (en) * 2022-04-29 2022-07-08 河南鑫安利职业健康科技有限公司 Protective equipment wearing safety investigation system based on big data for chemical industrial production
CN116311633A (en) * 2023-05-19 2023-06-23 安徽数智建造研究院有限公司 Tunnel constructor management method based on face recognition technology

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109672863A (en) * 2018-12-24 2019-04-23 海安常州大学高新技术研发中心 A kind of construction personnel's safety equipment intelligent monitoring method based on image recognition
WO2019144575A1 (en) * 2018-01-24 2019-08-01 中山大学 Fast pedestrian detection method and device
US20190325584A1 (en) * 2018-04-18 2019-10-24 Tg-17, Llc Systems and Methods for Real-Time Adjustment of Neural Networks for Autonomous Tracking and Localization of Moving Subject
CN110807429A (en) * 2019-10-23 2020-02-18 西安科技大学 Construction safety detection method and system based on tiny-YOLOv3
CN111259682A (en) * 2018-11-30 2020-06-09 百度在线网络技术(北京)有限公司 Method and device for monitoring the safety of a construction site
CN111598040A (en) * 2020-05-25 2020-08-28 中建三局第二建设工程有限责任公司 Construction worker identity identification and safety helmet wearing detection method and system
CN111898541A (en) * 2020-07-31 2020-11-06 中科蓝海(扬州)智能视觉科技有限公司 Intelligent visual monitoring and warning system for safety operation of gantry crane

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019144575A1 (en) * 2018-01-24 2019-08-01 中山大学 Fast pedestrian detection method and device
US20190325584A1 (en) * 2018-04-18 2019-10-24 Tg-17, Llc Systems and Methods for Real-Time Adjustment of Neural Networks for Autonomous Tracking and Localization of Moving Subject
CN111259682A (en) * 2018-11-30 2020-06-09 百度在线网络技术(北京)有限公司 Method and device for monitoring the safety of a construction site
CN109672863A (en) * 2018-12-24 2019-04-23 海安常州大学高新技术研发中心 A kind of construction personnel's safety equipment intelligent monitoring method based on image recognition
CN110807429A (en) * 2019-10-23 2020-02-18 西安科技大学 Construction safety detection method and system based on tiny-YOLOv3
CN111598040A (en) * 2020-05-25 2020-08-28 中建三局第二建设工程有限责任公司 Construction worker identity identification and safety helmet wearing detection method and system
CN111898541A (en) * 2020-07-31 2020-11-06 中科蓝海(扬州)智能视觉科技有限公司 Intelligent visual monitoring and warning system for safety operation of gantry crane

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537007A (en) * 2021-07-02 2021-10-22 中国铁道科学研究院集团有限公司电子计算技术研究所 Non-worker intrusion detection and alarm method and device applied to railway platform
CN113971811A (en) * 2021-11-16 2022-01-25 北京国泰星云科技有限公司 Intelligent container feature identification method based on machine vision and deep learning
CN114285976A (en) * 2021-12-27 2022-04-05 深圳市海洋王铁路照明技术有限公司 File management method, device and equipment of camera shooting illumination equipment and storage medium
CN114723418A (en) * 2022-04-29 2022-07-08 河南鑫安利职业健康科技有限公司 Protective equipment wearing safety investigation system based on big data for chemical industrial production
CN116311633A (en) * 2023-05-19 2023-06-23 安徽数智建造研究院有限公司 Tunnel constructor management method based on face recognition technology
CN116311633B (en) * 2023-05-19 2023-08-04 安徽数智建造研究院有限公司 Tunnel constructor management method based on face recognition technology

Also Published As

Publication number Publication date
CN112434828B (en) 2023-05-16

Similar Documents

Publication Publication Date Title
CN112434828B (en) Intelligent safety protection identification method in 5T operation and maintenance
CN112434827B (en) Safety protection recognition unit in 5T operation and maintenance
CN108319926A (en) A kind of the safety cap wearing detecting system and detection method of building-site
CN113903081A (en) Visual identification artificial intelligence alarm method and device for images of hydraulic power plant
WO2021139049A1 (en) Detection method, detection apparatus, monitoring device, and computer readable storage medium
CN106951889A (en) Underground high risk zone moving target monitoring and management system
CN111383429A (en) Method, system, device and storage medium for detecting dress of workers in construction site
CN113516076A (en) Improved lightweight YOLO v4 safety protection detection method based on attention mechanism
CN109672863A (en) A kind of construction personnel&#39;s safety equipment intelligent monitoring method based on image recognition
CN112149513A (en) Industrial manufacturing site safety helmet wearing identification system and method based on deep learning
CN110458794B (en) Quality detection method and device for accessories of rail train
CN115761537B (en) Power transmission line foreign matter intrusion identification method oriented to dynamic feature supplementing mechanism
CN112112629A (en) Safety business management system and method in drilling operation process
CN113850562B (en) Intelligent side station supervision method and system
CN112183472A (en) Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet
CN115035088A (en) Helmet wearing detection method based on yolov5 and posture estimation
CN113807240A (en) Intelligent transformer substation personnel dressing monitoring method based on uncooperative face recognition
CN116846059A (en) Edge detection system for power grid inspection and monitoring
CN112287823A (en) Facial mask identification method based on video monitoring
CN113191273A (en) Oil field well site video target detection and identification method and system based on neural network
CN116246424A (en) Old people&#39;s behavioral safety monitored control system
CN115223249A (en) Quick analysis and identification method for unsafe behaviors of underground personnel based on machine vision
CN114997279A (en) Construction worker dangerous area intrusion detection method based on improved Yolov5 model
CN105095891A (en) Human face capturing method, device and system
CN114067396A (en) Vision learning-based digital management system and method for live-in project field test

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant