CN116740011A - YOLOv5-based power distribution room construction safety protection measure identification method - Google Patents


Info

Publication number
CN116740011A
CN116740011A (application CN202310679302.5A)
Authority
CN
China
Prior art keywords
image
protection
prediction
personnel
boundary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310679302.5A
Other languages
Chinese (zh)
Inventor
陈泽涛
陈申宇
苏崇文
邓泽航
刘秦铭
王增煜
芮庆涛
陈维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd filed Critical Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority claimed from CN202310679302.5A
Publication of CN116740011A
Legal status: Pending


Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T7/00 Image analysis
                    • G06T7/0002 Inspection of images, e.g. flaw detection
                • G06T2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T2207/20 Special algorithmic details
                        • G06T2207/20081 Training; Learning
                        • G06T2207/20084 Artificial neural networks [ANN]
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N3/00 Computing arrangements based on biological models
                    • G06N3/02 Neural networks
                        • G06N3/04 Architecture, e.g. interconnection topology
                            • G06N3/0464 Convolutional networks [CNN, ConvNet]
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V10/00 Arrangements for image or video recognition or understanding
                    • G06V10/70 Arrangements using pattern recognition or machine learning
                        • G06V10/764 Arrangements using classification, e.g. of video objects
                        • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
                            • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
                            • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
                                • G06V10/806 Fusion of extracted features
                        • G06V10/82 Arrangements using neural networks
                • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
                • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
                    • G06V2201/07 Target detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
            • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
                • Y04S10/00 Systems supporting electrical power generation, transmission or distribution
                    • Y04S10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

According to the YOLOv5-based power distribution room construction safety protection measure identification method, when an image is to be identified, an image of a tester working in a power distribution room is acquired and a protection measure identification model is determined. The model is obtained by cascading a personnel detection network and a protection detection network, both constructed based on YOLOv5. The image to be identified is input into the personnel detection network to obtain a target personnel boundary box for the tester; the image is then cropped according to that boundary box, and the cropped image is input into the protection detection network to obtain a target protection boundary box for the protection measures. Because the image is detected in stages, first by the personnel detection network and then by the protection detection network, influence factors such as background interference are reduced, false detections and missed detections decrease, and the detection accuracy for small targets improves, thereby improving the accuracy of the identification result.

Description

YOLOv5-based power distribution room construction safety protection measure identification method
Technical Field
The present application relates to the technical field of digital image processing, and in particular to a power distribution room construction safety protection measure identification method based on YOLOv5.
Background
As the demand of daily life and production for electric power grows, ever higher quality requirements are placed on the power supply provided by power supply departments. The power distribution room is the most important supply node in a power distribution network, and the process of electric power production and operation is complex. During daily inspection and maintenance, testers often come into contact with high-voltage power equipment; improper operation or failure to wear protective equipment can easily lead to safety accidents. Real-time detection of the safety protection of testers working in a power distribution room is therefore necessary.
With the development of image processing technology, deep learning has made good progress in the field of target detection. Classical target detection algorithms fall mainly into two types, single-stage and two-stage: single-stage algorithms include YOLO and SSD, while two-stage algorithms include R-CNN, Mask R-CNN, and Faster R-CNN. The single-stage YOLO algorithm runs fast, but because a power distribution room camera shoots from a long distance, the gloves and operating rods worn by testers appear small in the picture, the targets occupy few pixels, and the feature information they carry is weak, so the accuracy of the recognition result is low.
Disclosure of Invention
The present application aims to address at least one of the above technical defects, in particular the defect in the prior art that, because a power distribution room camera shoots from a long distance, the gloves and operating rods worn by testers appear small in the picture, the targets occupy few pixels, and the feature information they carry is weak, so the accuracy of the identification result is low.
The application provides a power distribution room construction safety protection measure identification method based on YOLOv5, which comprises the following steps:
acquiring an image to be identified of a tester during operation of a power distribution room;
determining a protective measure identification model, wherein the protective measure identification model is obtained by cascading two small target detection networks constructed based on YOLOv5, and the small target detection networks are a personnel detection network and a protective detection network respectively;
inputting the image to be identified into the personnel detection network to obtain a target personnel boundary box of the tester in the image to be identified, which is output by the personnel detection network;
cutting the image to be identified according to the target personnel boundary box to obtain a tester image;
inputting the tester image into the protection detection network to obtain a target protection boundary box in the tester image output by the protection detection network;
mapping the target protection boundary box into the image to be identified, and identifying the protection measures within the target protection boundary box in the mapped image to be identified.
Optionally, the safeguard recognition model is obtained by cascading two small target detection networks constructed based on YOLOv 5; the small target detection network is a personnel detection network and a protection detection network respectively;
the determining a safeguard recognition model includes:
acquiring a sample image set of a tester during operation of a power distribution room, wherein the sample image set comprises a sample personnel image and a sample protection image; the sample personnel image comprises a real personnel boundary box for marking the tester and is used for training the personnel detection network; the sample protection image comprises a real protection boundary box for marking the protection measures of the testers and is used for training the protection detection network;
sequentially inputting the sample images in the sample image set into a preset initial recognition model to obtain a plurality of prediction boundary frames output by the initial recognition model and a prediction label for each prediction boundary frame, wherein the prediction boundary frames comprise prediction personnel boundary frames corresponding to the sample personnel images and prediction protection boundary frames corresponding to the sample protection images;
training the initial recognition model with the objective that the prediction personnel boundary frame approaches the real personnel boundary frame and the prediction protection boundary frame approaches the real protection boundary frame, until the initial recognition model meets a preset training end condition, thereby obtaining the protection measure recognition model.
Optionally, the small target detection network comprises an input layer, a backbone network, a feature fusion network and a prediction layer;
the detection process of the small target detection network comprises the following steps:
inputting the sample image into the input layer, carrying out data enhancement on a target detection object in the sample image set, and outputting an enhanced image;
extracting structural features of the target detection object in the enhanced image through the backbone network, and outputting a feature map formed by the structural features;
performing feature fusion on a plurality of feature graphs generated in the forward propagation process of the small target detection network by using the feature fusion network to obtain a plurality of fusion feature graphs;
and inputting each fusion feature map into the prediction layer, and outputting to obtain a plurality of prediction boundary boxes and prediction labels of each prediction boundary box.
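The four stages above can be sketched as a simple pipeline. The following is an illustrative Python skeleton, not the patent's implementation: every function is a stub (the backbone, for instance, merely labels four feature scales P3 through P6, names assumed here for illustration), and it only shows how data flows from the input layer through the backbone and fusion network to the prediction layer.

```python
# Illustrative sketch of the four-stage detection flow (input layer ->
# backbone -> feature fusion -> prediction layer). All functions are stubs.

def input_layer(sample):
    # Data enhancement placeholder (e.g. mosaic augmentation in YOLOv5).
    return {"image": sample, "augmented": True}

def backbone(enhanced):
    # A real backbone emits feature maps at several strides; here each
    # "map" is just a (stride, name) tag standing in for a tensor.
    return [(8, "P3"), (16, "P4"), (32, "P5"), (64, "P6")]

def feature_fusion(feature_maps):
    # FPN + PAN would mix the maps; the stub passes them through tagged.
    return [("fused", m) for m in feature_maps]

def prediction_layer(fused_maps):
    # One predicted box with a label per fused map, as a stand-in.
    return [{"box": (0, 0, 1, 1), "label": name, "conf": 0.5}
            for _, (_, name) in fused_maps]

def detect(sample):
    """Run the full stubbed pipeline on one sample."""
    return prediction_layer(feature_fusion(backbone(input_layer(sample))))
```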
Optionally, the small target detection network includes 4 detection branch structures of different scales;
the step of extracting structural features of the target detection object in the enhanced image through the backbone network, and outputting a feature map formed by the structural features, includes:
and executing slicing operation and convolution operation on the enhanced image through a backbone network by utilizing each detection branch structure, and extracting the structural characteristics of the target detection object in the enhanced image so as to obtain a characteristic diagram corresponding to each structural characteristic.
Optionally, the feature fusion network includes a feature pyramid network and a path aggregation network;
the step of performing, by the feature fusion network, feature fusion on the feature maps generated during forward propagation of the small target detection network to obtain a plurality of fused feature maps includes:
performing top-down upsampling fusion, by the feature pyramid network, on the 4 feature maps output by the backbone network to obtain 4 deep feature maps of different scales;
carrying out bottom-up downsampling fusion on the 4 deep feature maps by using the path aggregation network to obtain 4 fused feature maps of different scales.
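The two fusion passes can be illustrated with scalar stand-ins for the feature maps, where up/downsampling is stubbed out as identity and fusion as addition. This is only meant to show the top-down-then-bottom-up ordering over the four scales, not a real network:

```python
def fpn_top_down(c3, c4, c5, c6):
    """Top-down pathway: each deeper (coarser) map is upsampled (stubbed
    as identity here) and fused into the next shallower map by addition."""
    p6 = c6
    p5 = c5 + p6  # c5 fused with upsample(p6)
    p4 = c4 + p5
    p3 = c3 + p4
    return p3, p4, p5, p6

def pan_bottom_up(p3, p4, p5, p6):
    """Bottom-up pathway: each shallower map is downsampled (stubbed as
    identity) and fused back into the next deeper map."""
    n3 = p3
    n4 = p4 + n3
    n5 = p5 + n4
    n6 = p6 + n5
    return n3, n4, n5, n6
```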
Optionally, the prediction tag includes a confidence level of a prediction bounding box;
the determining the safeguard recognition model further comprises:
acquiring the confidence of each prediction personnel boundary frame among the prediction boundary frames, and selecting the prediction personnel boundary frame with the maximum confidence as the prediction personnel boundary frame corresponding to the sample personnel image;
acquiring the confidence of each prediction protection boundary frame among the prediction boundary frames, and selecting the prediction protection boundary frame with the maximum confidence as the prediction protection boundary frame corresponding to the sample protection image.
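Selecting the highest-confidence box, as described for both branches, amounts to a single argmax over confidences. In the sketch below, the dictionary layout of a predicted box is an assumption for illustration:

```python
def select_best_box(boxes):
    """Pick the predicted bounding box with the highest confidence, as
    described for both the personnel and the protection branch. Each box
    is assumed to be a dict containing a 'conf' field."""
    if not boxes:
        return None
    return max(boxes, key=lambda b: b["conf"])
```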
Optionally, the prediction tag includes a category of a prediction protection boundary box, a prediction position of the prediction protection boundary box in the sample personnel image; the categories of the prediction protection boundary frames comprise a prediction glove boundary frame and a prediction operation rod boundary frame;
the training the protection detection network with the prediction protection boundary box approaching to the real protection boundary box as a target includes:
calculating a classification loss value for each prediction protection boundary frame according to its category;
calculating a position regression loss value for each prediction protection boundary frame based on its predicted position in the sample personnel image;
determining the relative positions of prediction protection boundary frames belonging to different categories, and calculating a relative position loss value for each relative position;
updating the parameters of the protection detection network based on the classification loss values, the position regression loss values, and the relative position loss values.
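The patent names three loss terms but gives no formulas, so the sketch below makes assumptions: the relative-position term is illustrated as an L1 penalty on the difference between the true and predicted center offsets of the glove box and the operating-rod box, and the total loss as a plain weighted sum with illustrative weights. None of this is the patent's actual loss function.

```python
def box_center(box):
    # Box layout assumed to be (x1, y1, x2, y2).
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def relative_position_loss(glove_box, rod_box, pred_glove, pred_rod):
    """Illustrative relative-position term: penalise the difference
    between the true and predicted center offsets of the two categories
    (glove vs operating rod). The formula is an assumption."""
    gx, gy = box_center(glove_box)
    rx, ry = box_center(rod_box)
    pgx, pgy = box_center(pred_glove)
    prx, pry = box_center(pred_rod)
    true_dx, true_dy = rx - gx, ry - gy
    pred_dx, pred_dy = prx - pgx, pry - pgy
    return abs(true_dx - pred_dx) + abs(true_dy - pred_dy)

def total_loss(cls_loss, box_loss, relpos_loss, w_cls=1, w_box=1, w_rel=1):
    """Weighted sum of the three loss terms named in the text; the
    weights are illustrative hyperparameters, not values from the patent."""
    return w_cls * cls_loss + w_box * box_loss + w_rel * relpos_loss
```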
The present application also provides a power distribution room construction safety protection measure identification device based on YOLOv5, which comprises:
the image acquisition module is used for acquiring an image to be identified when a tester works in the power distribution room;
the model determining module is used for determining a protective measure identification model, wherein the protective measure identification model is obtained by cascading two small target detection networks constructed based on YOLOv5, and the small target detection networks are a personnel detection network and a protective detection network respectively;
the personnel detection module is used for inputting the image to be identified into the personnel detection network to obtain a target personnel boundary box of the tester in the image to be identified, which is output by the personnel detection network;
the image clipping module is used for clipping the image to be identified according to the target personnel boundary box to obtain a tester image;
the protection detection module is used for inputting the tester image into the protection detection network to obtain the target protection boundary box in the tester image output by the protection detection network;
and the protection recognition module is used for mapping the target protection boundary box into the image to be identified and identifying the protection measures within the target protection boundary box in the mapped image to be identified.
The present application also provides a storage medium having stored therein computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the power distribution room construction safety protection measure identification method described in any of the above embodiments.
The present application also provides a computer device comprising: one or more processors, and memory;
the memory has stored therein computer readable instructions which, when executed by the one or more processors, perform the steps of the power distribution room construction safety protection measure identification method of any of the above embodiments.
From the above technical solutions, the embodiment of the present application has the following advantages:
According to the YOLOv5-based power distribution room construction safety protection measure identification method, when an image is to be identified, an image of a tester working in the power distribution room is acquired and a protection measure identification model is determined. Because the model is obtained by cascading two small target detection networks constructed based on YOLOv5, namely a personnel detection network for detecting the tester in the image and a protection detection network for detecting the tester's protection measures, the image to be identified can be input into the personnel detection network to obtain the target personnel boundary box of the tester. The tester image is then obtained by cropping according to the target personnel boundary box, which eliminates irrelevant image regions and enlarges the target region. After the tester image is obtained, it is input into the protection detection network to obtain the target protection boundary box in the tester image; finally, the target protection boundary box is mapped back into the image to be identified, and the protection measures within it are identified.
In the protection measure identification model obtained by cascading the personnel detection network and the protection detection network, after the personnel detection network detects the tester in the image to be identified, the protection detection network further detects the tester's safety protection measures based on that detection result. This stage-by-stage detection reduces influence factors such as background interference, thereby reducing false detections and missed detections, lowering the difficulty of small target detection, improving the detection precision for small targets, and improving the accuracy of the identification result.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show only some embodiments of the present application; other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic flow chart of a power distribution room construction safety protection measure identification method based on YOLOv5 provided by the embodiment of the application;
FIG. 2 is a schematic structural diagram of a power distribution room construction safety protection measure identification device based on YOLOv5 according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the internal structure of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
With the development of image processing technology, deep learning has made good progress in the field of target detection. Classical target detection algorithms fall mainly into two types, single-stage and two-stage: single-stage algorithms include YOLO and SSD, while two-stage algorithms include R-CNN, Mask R-CNN, and Faster R-CNN. The single-stage YOLO algorithm runs fast, but because a power distribution room camera shoots from a long distance, the gloves and operating rods worn by testers appear small in the picture, the targets occupy few pixels, and the feature information they carry is weak, so the accuracy of the recognition result is low.
The technical solution provided by the present application aims to solve the above technical problems, as detailed below:
In one embodiment, as shown in FIG. 1 (a schematic flow chart of the YOLOv5-based power distribution room construction safety protection measure identification method provided by an embodiment of the present application), the method specifically comprises the following steps:
s110: and acquiring an image to be identified of a tester during operation of the power distribution room.
In this step, a field image of a tester working in the power distribution room is acquired and used as the image to be identified. Here, power distribution room operation means that the tester is inspecting and maintaining the power equipment in the room; the power equipment may be high-voltage equipment or other types of equipment, which is not limited here.
It can be understood that, when acquiring the image to be identified, a camera can be used to shoot the tester's operation site, with a capture interval configured so that the camera automatically captures a field image at fixed intervals. Further, after the image to be identified is obtained, it can be preprocessed; the preprocessing includes, but is not limited to, sharpening, denoising, and rotating the image.
Sharpening compensates the outline of the image, enhancing its edges and grey-level transitions to make it clearer; it can be divided into spatial-domain and frequency-domain processing, and works by highlighting edges, contours, or certain linear target features so as to increase the contrast between a feature's edge and the surrounding pixels. Denoising reduces the noise in a digital image: during digitization and transmission, images are often disturbed by the imaging equipment and external environmental noise, so the acquired image generally contains noise, which is an important source of interference; removing it improves the authenticity and accuracy of the image. Rotation forms a new image by rotating the original through a certain angle about a chosen point.
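As a concrete example of the spatial-domain sharpening described above, the following pure-Python sketch adds a discrete Laplacian back onto the image. A real pipeline would more likely use OpenCV's `cv2.filter2D` with a sharpening kernel; borders are left untouched here for brevity.

```python
def sharpen(image):
    """Spatial-domain sharpening sketch: add the discrete Laplacian of
    each interior pixel back onto the image, boosting edges and grey-level
    jumps. `image` is a list of rows of grey values; border pixels are
    copied unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (4 * image[y][x] - image[y - 1][x] - image[y + 1][x]
                   - image[y][x - 1] - image[y][x + 1])
            out[y][x] = image[y][x] + lap
    return out
```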
S120: a safeguard recognition model is determined.
In this step, once the image of the tester working in the power distribution room is obtained, a corresponding protection measure identification model can be determined according to the protection measures the tester should have when working in the power distribution room. The model is used to detect the positions of the protection measures in the image to be identified, so as to identify the corresponding protection result and output it.
It can be understood that, to further improve the target recognition rate and detection rate, the present application obtains an initial recognition model by cascading two small target detection networks constructed based on YOLOv5, and then trains the initial recognition model to obtain the protection measure identification model. The two networks are a personnel detection network, which detects the position of the tester in the image to be identified, and a protection detection network, which detects the tester's protection measures.
S130: inputting the image to be identified into a personnel detection network to obtain a target personnel boundary box of the tester in the image to be identified, wherein the target personnel boundary box is output by the personnel detection network.
In this step, after the image to be identified is obtained and the protection measure identification model is determined through S110 and S120, the obtained image to be identified may be input into the personnel detection network in the protection measure identification model, so as to obtain the target personnel bounding box of the tester in the image to be identified, which is output by the personnel detection network.
It can be understood that the target personnel boundary box in the present application refers to the boundary box marked at the position of a tester detected in the image to be identified. When the personnel detection network detects and marks boxes in the image, several initial personnel boundary boxes and a confidence for each are obtained; the initial box with the highest confidence is then selected as the target personnel boundary box, giving the final output of the personnel detection network.
Further, when marking personnel boundary boxes, the LabelImg image annotation tool can be used to mark a box at the tester's position; the annotation shape may be a rectangle, circle, line, and so on, which is not limited here. After a tester has been boxed, the position coordinates of the box can be obtained and recorded, so that the tester's position in the image to be identified can be determined later.
S140: and cutting the image to be identified according to the target personnel boundary box to obtain a tester image.
In this step, after the target personnel boundary box is determined in S130, the image to be identified can be cropped according to the position coordinates of the box in the image, and the cropped image can be preprocessed, for example by scale transformation, to obtain the tester image.
Further, when a plurality of testers exist in the field image, the position coordinates of the target personnel boundary box of each tester in the image to be identified can be obtained, and cutting is performed according to each position coordinate to obtain a plurality of tester images.
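Cropping one sub-image per detected person can be sketched as follows, representing the image as a list of pixel rows and each box as (x1, y1, x2, y2) pixel coordinates. This layout is an assumption for illustration; the patent does not fix a data format.

```python
def crop_persons(image, person_boxes):
    """Cut one sub-image per detected person bounding box, as in S140.
    `image` is a list of pixel rows; each box is (x1, y1, x2, y2)."""
    crops = []
    for x1, y1, x2, y2 in person_boxes:
        # Slice the rows first (vertical extent), then the columns.
        crops.append([row[x1:x2] for row in image[y1:y2]])
    return crops
```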
S150: and inputting the image of the tester into a protection detection network to obtain a target protection boundary box in the image of the tester output by the protection detection network.
In this step, after the tester image is obtained through S140, the obtained tester image may be input into the protection detection network in the protective measure recognition model, so as to obtain the target protection boundary box in the tester image output by the protection detection network.
It can be understood that the target protection boundary box in this application is a boundary box that marks the position of a protective measure of the tester detected in the tester image. Since the protective measures of the tester are concentrated on the hands, namely wearing gloves and holding the operating lever, this application can directly detect the hand state of the tester and thereby identify the corresponding protective measures.
Specifically, when the target protection boundary box in the tester image is detected and marked through the protection detection network, a plurality of marking results and the confidence corresponding to each marking result may be obtained; according to these confidences, the boundary box corresponding to the marking result with the highest confidence may then be selected as the target protection boundary box, which constitutes the final output of the protection detection network.
S160: and mapping the target protection boundary box into the image to be identified, and identifying the protection measures in the target protection boundary box in the mapped image to be identified.
In this step, after the target protection boundary box is obtained through S150, the target protection boundary box may be mapped into the image to be identified, and then the protection measures in the target protection boundary box are identified in the mapped image to be identified, so as to determine whether the operation of the tester is standard according to the identification result, thereby implementing real-time supervision on the safety protection of the tester operating in the power distribution room.
It can be understood that the target protection boundary box output by the protection detection network is marked in the cropped tester image, so before the protective measures in the target protection boundary box are identified, the target protection boundary box needs to be mapped back into the initial image to be identified; a warning can then be issued to the corresponding tester when the tester's operation is not standard. When the target protection boundary box is mapped into the initial image to be identified, boundary boxes of different categories may be marked in different colors; for example, red may mark a tester, green may mark a glove, and blue may mark an operating lever, which is not limited here.
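The mapping back is a simple coordinate translation by the crop's top-left corner; a sketch (the tuple formats are illustrative assumptions):

```python
def map_box_to_original(box_in_crop, crop_origin):
    """Translate a target protection boundary box from tester-image (cropped)
    coordinates back into the initial image to be identified.

    `crop_origin` is the (x, y) of the crop's top-left corner in the original
    image; boxes are (x1, y1, x2, y2)."""
    ox, oy = crop_origin
    x1, y1, x2, y2 = box_in_crop
    return (x1 + ox, y1 + oy, x2 + ox, y2 + oy)

# a glove box found at (5, 5)-(20, 30) inside a crop whose top-left
# corner sits at (100, 50) in the original image
mapped = map_box_to_original((5, 5, 20, 30), (100, 50))
```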
Further, the recognition results of the protective measure recognition model may include: wearing gloves, not wearing gloves, holding the operating lever, and not holding the operating lever. The protective measure recognition model may first judge whether the tester is holding the operating lever; if not, the tester's operation is deemed standard. If so, it further judges whether the tester is wearing gloves; if gloves are worn, the operation is standard. If the tester is holding the operating lever without wearing gloves, the operation is not standard, and a warning may be sent to the tester at this time.
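The decision rule above can be sketched as a small function (names and the warning message are illustrative, not taken from the application):

```python
def check_operation(holds_lever, wears_gloves):
    """Apply the rule described above: not holding the operating lever is
    treated as standard; holding it is standard only with gloves on.
    Returns (is_standard, warning_message_or_None)."""
    if not holds_lever:
        return True, None
    if wears_gloves:
        return True, None
    return False, "warning: operating lever held without gloves"
```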
In the above embodiment, when the image to be identified is recognized, the image to be identified of a tester during operation of the power distribution room and the protective measure recognition model may first be obtained. Since the protective measure recognition model is obtained by cascading two small target detection networks constructed based on YOLOv5, namely a personnel detection network for detecting the tester in the image to be identified and a protection detection network for detecting the protective measures of the tester, the image to be identified may be input into the personnel detection network to obtain the target personnel boundary box of the tester output by the personnel detection network. The image to be identified may then be cut according to the target personnel boundary box to obtain the tester image, which eliminates irrelevant image areas and enlarges the target area, thereby improving recognition accuracy. After the tester image is obtained, it may be input into the protection detection network to obtain the target protection boundary box in the tester image output by the protection detection network, so that the target protection boundary box can be mapped into the image to be identified and the protective measures in the target protection boundary box can be identified in the mapped image to be identified.
According to the protective measure recognition model obtained by cascading the personnel detection network and the protection detection network, after the personnel detection network detects the tester in the image to be identified, the protection detection network further detects the safety protective measures of the tester based on the detection result of the personnel detection network. This stage-by-stage detection reduces influencing factors such as background interference, thereby reducing false detections and missed detections, lowering the difficulty of small target detection, improving the detection precision of small targets, and improving the accuracy of the recognition result.
In one embodiment, determining the safeguard recognition model in S120 may include:
S121: and acquiring a sample image set of a tester during operation of the power distribution room.
In this step, when determining the protective measure recognition model, a tester operating in the power distribution room may be photographed at multiple angles by a camera, and the captured field images may be subjected to preprocessing such as quality inspection to obtain sample personnel images; the tester may then be cropped out of the sample personnel images and preprocessed to obtain sample protection images, and the sample personnel images and sample protection images are combined to obtain the sample image set.
Further, the sample personnel image in the application can comprise a real personnel boundary box for marking the test personnel and is used for training the personnel detection network, so that the personnel detection network can learn the real personnel boundary box marked in the sample personnel image in the training process; the sample protection image can comprise a real protection boundary box marked on the protection measures of the testers and is used for training the protection detection network, so that the protection detection network can learn the real protection boundary box marked in the sample protection image in the training process.
S122: and sequentially inputting the sample images in the sample image set into a preset initial recognition model to obtain a plurality of prediction boundary boxes and prediction labels of each prediction boundary box output by the initial recognition model, wherein the prediction boundary boxes comprise a prediction personnel boundary box corresponding to the sample personnel image and a prediction protection boundary box corresponding to the sample protection image.
In this step, after the sample image set is obtained, sample images in the sample image set may be sequentially input into a preset initial recognition model, so as to recognize each sample image in the sample image set through the initial recognition model, and output a plurality of prediction bounding boxes and prediction labels of each prediction bounding box.
Furthermore, since the initial recognition model is obtained by cascading the personnel detection network and the protection detection network, the two networks can be trained separately with the sample personnel images and sample protection images in the sample image set: the sample personnel images are input into the personnel detection network to obtain a plurality of prediction personnel boundary boxes and the prediction label of each prediction personnel boundary box output by the personnel detection network, and the sample protection images are input into the protection detection network to obtain a plurality of prediction protection boundary boxes and the prediction label of each prediction protection boundary box output by the protection detection network.
It should be noted that the plurality of prediction bounding boxes output by the initial recognition model of this application may include prediction personnel bounding boxes and prediction protection bounding boxes, and the prediction protection bounding boxes may further include prediction glove bounding boxes and prediction operating lever bounding boxes. The prediction label corresponding to each prediction bounding box may be the category and confidence of the prediction bounding box, its predicted position in the sample image, and the like.
S123: and training the initial recognition model with the goals that the prediction personnel boundary box approaches the real personnel boundary box and the prediction protection boundary box approaches the real protection boundary box, until the initial recognition model meets a preset training ending condition, so as to obtain the protective measure recognition model.
In this step, when the initial recognition model is trained, an originally acquired sample image may be input. The sample image may be a preprocessed image, where the preprocessing includes, but is not limited to, sharpening, denoising, and rotating the image; through preprocessing, the sample image is scaled to a suitable size, which facilitates training and output of the initial recognition model.
Specifically, after a sample image is input into the initial recognition model, the prediction personnel boundary box and the prediction protection boundary box output by the initial recognition model may be obtained. The initial recognition model may then be trained by updating its parameters with the goal that the prediction personnel boundary box approaches the real personnel boundary box and the prediction protection boundary box approaches the real protection boundary box. When the initial recognition model after parameter updating meets the preset training ending condition, it may be taken as the protective measure recognition model; the preset training ending condition may be a loss value threshold, a preset number of iterations, or the like, which is not limited here.
In one embodiment, the small target detection network may include an input layer, a backbone network, a feature fusion network, and a prediction layer; the small target detection network may be the personnel detection network or the protection detection network. The detection process of the small target detection network may include:
s1221: and inputting the sample image into an input layer to perform data enhancement on the target detection objects in the sample image set, and outputting an enhanced image.
S1222: and extracting structural features of the target detection object in the enhanced image through a backbone network, and outputting to obtain a feature map formed by the structural features.
S1223: and carrying out feature fusion on a plurality of feature graphs generated in the forward propagation process of the small target detection network by using a feature fusion network to obtain a plurality of fusion feature graphs.
S1224: and inputting each fusion feature map into a prediction layer, and outputting to obtain a plurality of prediction boundary boxes and prediction labels of each prediction boundary box.
In this embodiment, when the small target detection network detects a sample image, the sample image may be input into the input layer to enhance the pixels of the image area corresponding to the target detection object, so as to obtain enhanced data. The enhanced data may then be input into the backbone network, which performs slicing and convolution operations on it, extracts the structural features of the target detection object, and outputs a feature map formed by the structural features. The feature fusion network may then perform feature fusion on the plurality of feature maps generated by the small target detection network during forward propagation to obtain a plurality of fused feature maps, and each fused feature map is input into the prediction layer to obtain the prediction bounding box and prediction label corresponding to each fused feature map.
It can be understood that the input layer, serving as the input end of the initial recognition model, provides functions such as Mosaic data enhancement and adaptive anchor box calculation, and can perform data enhancement on the sample image, such as scaling, color space adjustment, and Mosaic enhancement, so that small target detection objects in the sample data can be detected accurately. The backbone network adopts CSP and Focus structures to extract rich feature information from the feature map, where the Focus structure performs the slicing operation and the CSP structure performs the convolution operation. The feature fusion network adopts a PANet structure composed of FPN and PAN: the FPN is a feature pyramid network that fuses feature maps top-down by upsampling, and the PAN is a path aggregation network that fuses feature maps bottom-up by downsampling. The prediction layer applies anchor boxes on the fused feature maps and generates the final output vectors with class probabilities, objectness scores, and bounding boxes, namely the plurality of prediction bounding boxes and the prediction label of each prediction bounding box in this application.
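The Focus slicing operation mentioned above rearranges pixels rather than discarding them; a sketch using NumPy (an implementation assumption — YOLOv5 itself implements this as a PyTorch layer):

```python
import numpy as np

def focus_slice(x):
    """YOLOv5-style Focus slicing: sample every second pixel along height and
    width and stack the four sub-images on the channel axis, turning a
    (C, H, W) tensor into (4C, H/2, W/2) without losing information."""
    return np.concatenate(
        [x[:, 0::2, 0::2], x[:, 1::2, 0::2],
         x[:, 0::2, 1::2], x[:, 1::2, 1::2]], axis=0)

x = np.arange(3 * 8 * 8).reshape(3, 8, 8)
y = focus_slice(x)  # shape (12, 4, 4)
```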
In one embodiment, the small target detection network may include 4 different scale detection branch structures; in S1222, the structural features of the target detection object in the enhanced image are extracted through the backbone network, and a feature map formed by the structural features is output, which may include:
S2221: and executing slicing operation and convolution operation on the enhanced image through a backbone network by utilizing each detection branch structure, and extracting the structural characteristics of the target detection object in the enhanced image so as to obtain a characteristic diagram corresponding to each structural characteristic.
In this embodiment, after the enhanced image is input into the backbone network, slicing operation and convolution operation may be performed on the enhanced image by using 4 detection branch structures with different scales, so as to extract structural features of the target detection object in the enhanced image, thereby obtaining a feature map corresponding to each structural feature.
It can be understood that a small target detection network constructed from YOLOv5 has only 3 detection branch structures of different scales, outputting 3 feature maps with resolutions of 20×20, 40×40, and 80×80. Using only these 3 scales makes insufficient use of shallow information and loses some information about target detection objects. Therefore, the original three feature extraction layers can be expanded to four, adding an output feature map with a resolution of 160×160, which enriches the feature information the model extracts from the image to be identified and thus greatly improves the target recognition rate and the accuracy of the recognition result.
In one embodiment, the feature fusion network includes a feature pyramid network and a path aggregation network; in S1223, performing feature fusion on the plurality of feature maps generated in the forward propagation process by using the feature fusion network to obtain a plurality of fused feature maps may include:
s2231: and 4 feature graphs output by the backbone network are subjected to up-sampling fusion from top to bottom by utilizing the feature pyramid network, so that 4 deep feature graphs with different scales are obtained.
S2232: and (3) performing bottom-up downsampling fusion on the 4 deep feature maps by using a path aggregation network to obtain 4 fused feature maps with different scales.
In this embodiment, after obtaining 4 feature graphs output by the backbone network, the specifications of the 4 feature graphs may be arranged from deep to shallow, and the feature pyramid network is used to perform top-down upsampling fusion on the feature graphs to obtain 4 deep feature graphs with different scales, and then the path aggregation network may be used to perform bottom-up downsampling fusion on the 4 deep feature graphs to obtain 4 fused feature graphs with different scales.
It can be understood that, in the feature pyramid network, the feature maps extracted by the backbone network are upsampled and fused, from deep to shallow, with the bottom feature maps of corresponding size through a Concat connection operation, forming effective information. However, when the feature pyramid network transmits information in this way, the shallow fused maps are difficult to fuse back into the deep feature maps. Therefore, a path aggregation network structure is added on the basis of the feature pyramid network, introducing a bottom-up path that continuously downsamples and fuses the fused feature maps of the feature pyramid network from bottom to top, realizing reverse fusion into the deep feature maps and thus obtaining richer feature information of the sample image.
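The top-down/bottom-up fusion order can be sketched with stand-in operations (nearest-neighbour upsampling, strided downsampling, and element-wise addition in place of the Concat-and-convolve used by the real network — all assumptions for illustration):

```python
import numpy as np

def upsample2x(f):
    # nearest-neighbour upsampling, standing in for the FPN top-down path
    return f.repeat(2, axis=-2).repeat(2, axis=-1)

def downsample2x(f):
    # strided subsampling, standing in for the PAN bottom-up path
    return f[..., ::2, ::2]

# four backbone feature maps, deep to shallow: 20x20, 40x40, 80x80, 160x160
feats = [np.ones((1, s, s)) for s in (20, 40, 80, 160)]

# feature pyramid network: fuse each shallower map with the upsampled deeper one
td = [feats[0]]
for f in feats[1:]:
    td.append(f + upsample2x(td[-1]))

# path aggregation network: fuse each deeper map with the downsampled shallower one
bu = [td[-1]]
for f in reversed(td[:-1]):
    bu.append(f + downsample2x(bu[-1]))
```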
In one embodiment, the prediction tag includes a confidence level of the prediction bounding box; determining the safeguard recognition model in S120 may further include:
s124: and acquiring the confidence coefficient of each predictor boundary box in the predictor boundary boxes, and selecting the predictor boundary box with the highest confidence coefficient as the predictor boundary box corresponding to the sample personnel image.
S125: and acquiring the confidence coefficient of each prediction protection boundary frame in the prediction boundary frames, and selecting the prediction protection boundary frame with the maximum confidence coefficient as the prediction protection boundary frame corresponding to the sample protection image.
In this embodiment, after a sample image is input into the initial recognition model, the prediction boundary boxes output by the initial recognition model may be obtained, which consist of prediction personnel boundary boxes and prediction protection boundary boxes. For each category, the confidence of each prediction boundary box may be obtained and the confidences sorted; the prediction personnel boundary box with the highest confidence is selected as the prediction personnel boundary box corresponding to the sample personnel image, and the prediction protection boundary box with the highest confidence is selected as the prediction protection boundary box corresponding to the sample protection image.
It can be understood that the prediction label of a prediction boundary box includes the confidence corresponding to that box, where the confidence is a probability value. After a plurality of prediction boundary boxes are obtained, they may be grouped by category and then sorted from high to low by confidence, so that the prediction boundary box with the highest confidence is selected as the output result of the small target detection network and redundant boxes are removed, thereby improving the detection accuracy of the small target detection network.
In one embodiment, the prediction label comprises the category of the prediction protection boundary box and the predicted position of the prediction protection boundary box in the sample protection image; the categories of the prediction protection boundary boxes comprise the prediction glove boundary box and the prediction operating lever boundary box; training the protection detection network in S123 with the goal that the prediction protection boundary box approaches the real protection boundary box may include:
s1231: and calculating the classification loss value of each prediction protection boundary box according to the classification of the prediction protection boundary box.
S1232: a positional regression loss value for each predictive protection bounding box is calculated based on the predicted position of the predictive protection bounding box in the sample personnel image.
S1233: and determining the relative positions of the prediction protection boundary boxes belonging to different categories, and calculating the relative position loss value of each relative position.
S1234: parameters in the protection detection network are updated based on the classification loss value, the location regression loss value, and the relative location loss value.
In this embodiment, the target detection object of the protection detection network includes a glove and an operation lever, so that the prediction protection boundary box output by the protection detection network includes both a prediction glove boundary box and a prediction operation lever boundary box, and when the protection detection network is trained, the classification loss value, the position regression loss value and the relative position loss value of the prediction protection boundary box in the protection detection network can be calculated respectively, so as to update the parameters in the protection detection network according to the classification loss value, the position regression loss value and the relative position loss value.
Specifically, when the protection detection network is trained with the goal that the prediction protection boundary box approaches the real protection boundary box, the classification loss value between the category corresponding to each prediction protection boundary box output by the protection detection network and the category corresponding to each real protection boundary box marked in the sample protection image may be calculated through a classification loss function. The position regression loss value between the predicted position of each prediction protection boundary box in the sample protection image and the real position of each real protection boundary box marked in the sample protection image may be calculated through a regression loss function. The relative position between the prediction glove boundary box and the prediction operating lever boundary box output by the protection detection network, the relative position between the corresponding real boundary boxes marked in the sample protection image, and the relative position loss value between these two relative positions may be calculated through a relative position loss function. Finally, after the classification loss value, the position regression loss value, and the relative position loss value are calculated, the corresponding gradients may be obtained through chain-rule backpropagation, and the parameters in the protection detection network may be updated accordingly.
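The application does not give the loss formulas, so the following is only one plausible reading, with squared-error stand-ins for the three terms (a real network would typically use cross-entropy for classification and an IoU-based regression loss):

```python
def box_center(b):
    x1, y1, x2, y2 = b
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def relative_position_loss(pred_glove, pred_lever, true_glove, true_lever):
    """Penalise the difference between the predicted glove-to-lever centre
    offset and the annotated one (squared error) -- an assumed formulation."""
    (gx, gy), (lx, ly) = box_center(pred_glove), box_center(pred_lever)
    (tgx, tgy), (tlx, tly) = box_center(true_glove), box_center(true_lever)
    dx = (lx - gx) - (tlx - tgx)
    dy = (ly - gy) - (tly - tgy)
    return dx * dx + dy * dy

def total_loss(cls_loss, reg_loss, rel_loss, weights=(1.0, 1.0, 1.0)):
    # weighted sum of the three terms; the weights are illustrative
    wc, wr, wp = weights
    return wc * cls_loss + wr * reg_loss + wp * rel_loss
```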
The power distribution room construction safety protection measure identification device provided by the embodiment of the application is described below, and the power distribution room construction safety protection measure identification device described below and the power distribution room construction safety protection measure identification method described above can be correspondingly referred to each other.
In one embodiment, as shown in fig. 2, fig. 2 is a schematic diagram of a power distribution room construction safety protection measure identification device according to an embodiment of the present application; the application also provides a power distribution room construction safety protection measure identification device, which comprises an image acquisition module 210, a model determination module 220, a personnel detection module 230, an image cutting module 240, a protection detection module 250 and a protection identification module 260, specifically as follows:
the image acquisition module is used for acquiring images to be identified of a tester during operation of the power distribution room.
The model determining module is used for determining a protective measure identification model, the protective measure identification model is obtained by cascading two small target detection networks constructed based on YOLOv5, and the small target detection networks are a personnel detection network and a protective detection network respectively.
And the personnel detection module is used for inputting the image to be identified into the personnel detection network to obtain a target personnel boundary box of the tester in the image to be identified, which is output by the personnel detection network.
And the image clipping module is used for clipping the image to be identified according to the target personnel boundary box to obtain a tester image.
And the protection detection module is used for inputting the image of the tester into the protection detection network to obtain a target protection boundary box in the image of the tester output by the protection detection network.
And the protection recognition module is used for mapping the target protection boundary box into the image to be recognized and recognizing the protection measures in the target protection boundary box in the mapped image to be recognized.
In the above embodiment, when the image to be identified is recognized, the image to be identified of a tester during operation of the power distribution room and the protective measure recognition model may first be obtained. Since the protective measure recognition model is obtained by cascading two small target detection networks constructed based on YOLOv5, namely a personnel detection network for detecting the tester in the image to be identified and a protection detection network for detecting the protective measures of the tester, the image to be identified may be input into the personnel detection network to obtain the target personnel boundary box of the tester output by the personnel detection network. The image to be identified may then be cut according to the target personnel boundary box to obtain the tester image, which eliminates irrelevant image areas and enlarges the target area, thereby improving recognition accuracy. After the tester image is obtained, it may be input into the protection detection network to obtain the target protection boundary box in the tester image output by the protection detection network, so that the target protection boundary box can be mapped into the image to be identified and the protective measures in the target protection boundary box can be identified in the mapped image to be identified.
According to the protective measure recognition model obtained by cascading the personnel detection network and the protection detection network, after the personnel detection network detects the tester in the image to be identified, the protection detection network further detects the safety protective measures of the tester based on the detection result of the personnel detection network. This stage-by-stage detection reduces influencing factors such as background interference, thereby reducing false detections and missed detections, lowering the difficulty of small target detection, improving the detection precision of small targets, and improving the accuracy of the recognition result.
In one embodiment, the present application also provides a storage medium having stored therein computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the power distribution room construction safety protection measure identification method described in any of the above embodiments.
In one embodiment, the present application also provides a computer device having stored therein computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the power distribution room construction safety protection measure identification method described in any of the above embodiments.
Schematically, as shown in fig. 3, fig. 3 is a schematic diagram of the internal structure of a computer device according to an embodiment of the present application, and the computer device 300 may be provided as a server. Referring to FIG. 3, the computer device 300 includes a processing component 302 that further includes one or more processors, and memory resources represented by a memory 301 for storing instructions, such as applications, executable by the processing component 302. The application program stored in the memory 301 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 302 is configured to execute instructions to perform the power distribution room construction safety protection measure identification method of any of the embodiments described above.
The computer device 300 may also include a power supply component 303 configured to perform power management of the computer device 300, a wired or wireless network interface 304 configured to connect the computer device 300 to a network, and an input/output (I/O) interface 305. The computer device 300 may operate based on an operating system stored in the memory 301, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
It will be appreciated by those skilled in the art that the structure shown in FIG. 3 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In this specification, the embodiments are described in a progressive manner, each focusing on its differences from the other embodiments; the embodiments may be combined as needed, and identical or similar parts may be cross-referenced.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A YOLOv5-based power distribution room construction safety protection measure identification method, characterized by comprising the following steps:
acquiring an image to be identified of a tester during operation in a power distribution room;
determining a protective measure identification model, wherein the protective measure identification model is obtained by cascading two small target detection networks constructed based on YOLOv5, and the small target detection networks are a personnel detection network and a protective detection network respectively;
inputting the image to be identified into the personnel detection network to obtain a target personnel boundary box of the tester in the image to be identified, which is output by the personnel detection network;
cutting the image to be identified according to the target personnel boundary box to obtain a tester image;
inputting the tester image into the protection detection network to obtain a target protection boundary box in the tester image output by the protection detection network;
and mapping the target protection boundary box into the image to be identified, and identifying the protection measures in the target protection boundary box in the mapped image to be identified.
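The crop-and-map-back cascade of claim 1 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names and the (x1, y1, x2, y2) box format are illustrative assumptions, and the two YOLOv5 networks are abstracted away.

```python
def crop_to_person(image, person_box):
    """Crop the full image to the target personnel boundary box.
    `image` is a nested list (rows of pixels); box is (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = person_box
    return [row[x1:x2] for row in image[y1:y2]]

def map_box_to_original(person_box, protection_box):
    """Map a protection box detected inside the cropped tester image
    back into the coordinate frame of the full image to be identified,
    by offsetting it with the person box's top-left corner."""
    px1, py1, _, _ = person_box
    bx1, by1, bx2, by2 = protection_box
    return (px1 + bx1, py1 + by1, px1 + bx2, py1 + by2)
```

In the cascade, the protection detection network only ever sees crop-local coordinates, so the mapping step is what lets the final protective-measure boxes be drawn on the original camera frame.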
2. The power distribution room construction safety protection measure identification method according to claim 1, wherein the protection measure identification model is obtained by cascading two small target detection networks constructed based on YOLOv 5; the small target detection network is a personnel detection network and a protection detection network respectively;
the determining a safeguard recognition model includes:
acquiring a sample image set of a tester during operation of a power distribution room, wherein the sample image set comprises a sample personnel image and a sample protection image; the sample personnel image comprises a real personnel boundary box for marking the tester and is used for training the personnel detection network; the sample protection image comprises a real protection boundary box for marking the protection measures of the testers and is used for training the protection detection network;
sequentially inputting sample images in the sample image set into a preset initial recognition model to obtain a plurality of prediction boundary boxes output by the initial recognition model and prediction labels of the prediction boundary boxes, wherein the prediction boundary boxes comprise a prediction personnel boundary box corresponding to the sample personnel image and a prediction protection boundary box corresponding to the sample protection image;
and training the initial recognition model by taking the prediction personnel boundary frame approaching the real personnel boundary frame and taking the prediction protection boundary frame approaching the real protection boundary frame as a target until the initial recognition model meets the preset training ending condition, so as to obtain a protection measure recognition model.
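"The prediction box approaching the real box" in claim 2 is conventionally measured by intersection-over-union (IoU) between the two boxes. YOLOv5 itself trains with a CIoU-based regression loss; the plain IoU below is a minimal sketch of the overlap measure, under the assumption of axis-aligned (x1, y1, x2, y2) boxes:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    1.0 means the prediction coincides with the ground truth; 0.0 means
    no overlap. 1 - IoU can serve as a simple regression loss."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

Training drives IoU toward 1 for both the personnel and the protection branches until the preset end condition (e.g. loss plateau or epoch budget) is met.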
3. The method for identifying the safety protection measures for the construction of the power distribution room according to claim 2, wherein the small target detection network comprises an input layer, a backbone network, a feature fusion network and a prediction layer;
the detection process of the small target detection network comprises the following steps:
inputting the sample image into the input layer, carrying out data enhancement on a target detection object in the sample image, and outputting an enhanced image;
extracting structural features of the target detection object in the enhanced image through the backbone network, and outputting a feature map formed by the structural features;
performing feature fusion on a plurality of feature maps generated in the forward propagation process of the small target detection network by using the feature fusion network to obtain a plurality of fused feature maps;
and inputting each fusion feature map into the prediction layer, and outputting to obtain a plurality of prediction boundary boxes and prediction labels of each prediction boundary box.
4. The method for identifying the safety protection measures for the construction of the power distribution room according to claim 3, wherein the small target detection network comprises 4 detection branch structures with different scales;
the step of extracting structural features of the target detection object in the enhanced image through the backbone network, and outputting a feature map formed by the structural features, includes:
and performing a slicing operation and a convolution operation on the enhanced image through the backbone network by using each detection branch structure, to extract the structural features of the target detection object in the enhanced image and obtain a feature map corresponding to each structural feature.
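The slicing operation in claim 4 matches the Focus-style slicing used in early YOLOv5 releases: the input is split into four interleaved sub-images (even/odd rows × even/odd columns) that are stacked along the channel dimension, halving the spatial resolution without losing pixels. The sketch below shows this on a single-channel image as nested lists; a real backbone would follow it with a convolution:

```python
def focus_slice(img):
    """Focus-style slicing: an H x W single-channel image becomes four
    H/2 x W/2 'channels' (even/odd rows crossed with even/odd columns).
    Spatial information is moved into the channel dimension losslessly."""
    slices = []
    for r0 in (0, 1):          # even rows, then odd rows
        for c0 in (0, 1):      # even columns, then odd columns
            slices.append([row[c0::2] for row in img[r0::2]])
    return slices
```

Every original pixel appears in exactly one of the four slices, which is why the subsequent convolution sees the full image content at half the resolution.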
5. The method for identifying the safety protection measures for the construction of the power distribution room according to claim 4, wherein the feature fusion network comprises a feature pyramid network and a path aggregation network;
the step of performing feature fusion on the feature maps generated in the forward propagation process of the small target detection network by using the feature fusion network to obtain a plurality of fused feature maps includes:
performing top-down up-sampling fusion on the 4 feature maps output by the backbone network by using the feature pyramid network to obtain 4 deep feature maps of different scales;
and performing bottom-up down-sampling fusion on the 4 deep feature maps by using the path aggregation network to obtain 4 fused feature maps of different scales.
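The two passes of claim 5 can be sketched schematically: the FPN fuses from the deepest map down to the shallowest, then the PAN fuses back up. Here each feature map is abstracted as a single value and `fuse` stands in for the real upsample/downsample + concatenation + convolution, so this shows only the order of information flow, not actual tensor math:

```python
def fpn_pan(backbone_feats, fuse=lambda deep, shallow: deep + shallow):
    """Schematic FPN (top-down) + PAN (bottom-up) fusion over 4 scales.
    `backbone_feats` is ordered shallow -> deep; returns 4 fused maps
    in the same order."""
    # Top-down pass: start at the deepest map, fold it into shallower ones.
    td = [backbone_feats[-1]]
    for f in reversed(backbone_feats[:-1]):
        td.append(fuse(td[-1], f))
    td.reverse()  # now ordered shallow -> deep
    # Bottom-up pass: start at the shallowest fused map, fold upward.
    bu = [td[0]]
    for f in td[1:]:
        bu.append(fuse(bu[-1], f))
    return bu
```

The point of the double pass is that every output scale ends up containing both deep semantic information (from the FPN direction) and precise localization detail (from the PAN direction), which benefits small-target detection.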
6. The power distribution room construction safety protection measure identification method according to claim 2, wherein the prediction label comprises a confidence level of the prediction boundary box;
the determining the safeguard recognition model further comprises:
acquiring the confidence level of each prediction personnel boundary box among the prediction boundary boxes, and selecting the prediction personnel boundary box with the maximum confidence level as the prediction personnel boundary box corresponding to the sample personnel image;
and acquiring the confidence level of each prediction protection boundary box among the prediction boundary boxes, and selecting the prediction protection boundary box with the maximum confidence level as the prediction protection boundary box corresponding to the sample protection image.
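The selection step of claim 6 is a simple argmax over confidence. A minimal sketch, assuming each prediction is a tuple whose last element is the confidence score:

```python
def select_best(boxes):
    """Select the predicted boundary box with the highest confidence.
    Each box is (x1, y1, x2, y2, confidence); applied separately to the
    personnel predictions and the protection predictions."""
    return max(boxes, key=lambda b: b[4])
```

In a full pipeline this would typically follow non-maximum suppression, so that only one box per object survives before the maximum-confidence pick.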
7. The method for identifying the safety protection measures for the construction of the power distribution room according to claim 2, wherein the prediction labels comprise the category of the prediction protection boundary box and the prediction position of the prediction protection boundary box in the sample personnel image; the categories of the prediction protection boundary frames comprise a prediction glove boundary frame and a prediction operation rod boundary frame;
the training the protection detection network with the prediction protection boundary box approaching to the real protection boundary box as a target includes:
calculating a classification loss value of each prediction protection boundary box according to the classification of the prediction protection boundary box;
calculating a position regression loss value of each prediction protection boundary frame based on the prediction position of the prediction protection boundary frame in the sample personnel image;
determining the relative positions of the prediction protection boundary frames belonging to different categories, and calculating the relative position loss value of each relative position;
updating parameters in the protection detection network based on the classification loss value, the location regression loss value, and the relative location loss value.
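Claim 7 combines three loss terms, including a relative-position loss between boxes of different categories (glove vs. operating rod). The patent does not give the exact formulas, so the sketch below is one plausible, illustrative form: the relative-position term penalizes squared deviation of the observed glove-to-rod center offset from an expected offset, and the total is a weighted sum with assumed unit weights.

```python
def box_center(b):
    """Center point of an (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = b
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def relative_position_loss(glove_box, rod_box, expected_offset):
    """Illustrative relative-position loss: squared error between the
    observed center offset (glove -> rod) and an expected offset.
    The exact formula is an assumption, not taken from the patent."""
    gx, gy = box_center(glove_box)
    rx, ry = box_center(rod_box)
    ex, ey = expected_offset
    return ((rx - gx) - ex) ** 2 + ((ry - gy) - ey) ** 2

def total_loss(cls_loss, reg_loss, rel_pos_loss, w=(1.0, 1.0, 1.0)):
    """Weighted sum of the classification, position-regression, and
    relative-position terms; the weights are illustrative."""
    return w[0] * cls_loss + w[1] * reg_loss + w[2] * rel_pos_loss
```

The design intent suggested by claim 7 is that geometric consistency between categories (a glove should sit near the grip end of the operating rod) provides a supervision signal beyond per-box classification and regression.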
8. A YOLOv5-based power distribution room construction safety protection measure identification device, characterized by comprising:
the image acquisition module is used for acquiring an image to be identified when a tester works in the power distribution room;
the model determining module is used for determining a protective measure identification model, wherein the protective measure identification model is obtained by cascading two small target detection networks constructed based on YOLOv5, and the small target detection networks are a personnel detection network and a protective detection network respectively;
the personnel detection module is used for inputting the image to be identified into the personnel detection network to obtain a target personnel boundary box of the tester in the image to be identified, which is output by the personnel detection network;
the image clipping module is used for clipping the image to be identified according to the target personnel boundary box to obtain a tester image;
the protection detection module is used for inputting the tester image into the protection detection network to obtain a target protection boundary box in the tester image output by the protection detection network;
and the protection recognition module is used for mapping the target protection boundary box into the image to be recognized and recognizing the protection measures in the target protection boundary box in the mapped image to be recognized.
9. A storage medium, characterized in that the storage medium has stored therein computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the power distribution room construction safety protection measure identification method of any one of claims 1 to 7.
10. A computer device, comprising: one or more processors, and memory;
the memory has stored therein computer readable instructions which, when executed by the one or more processors, perform the steps of the power distribution room construction safety protection measure identification method of any one of claims 1 to 7.
CN202310679302.5A 2023-06-08 2023-06-08 YOLOv5-based power distribution room construction safety protection measure identification method Pending CN116740011A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310679302.5A CN116740011A (en) 2023-06-08 2023-06-08 YOLOv5-based power distribution room construction safety protection measure identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310679302.5A CN116740011A (en) 2023-06-08 2023-06-08 YOLOv5-based power distribution room construction safety protection measure identification method

Publications (1)

Publication Number Publication Date
CN116740011A true CN116740011A (en) 2023-09-12

Family

ID=87910894

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310679302.5A Pending CN116740011A (en) 2023-06-08 2023-06-08 Yolov 5-based power distribution room construction safety protection measure identification method

Country Status (1)

Country Link
CN (1) CN116740011A (en)

Similar Documents

Publication Publication Date Title
CN112380952B (en) Power equipment infrared image real-time detection and identification method based on artificial intelligence
WO2021047232A1 (en) Interaction behavior recognition method, apparatus, computer device, and storage medium
JP7113657B2 (en) Information processing device, information processing method, and program
CN113269073B (en) Ship multi-target tracking method based on YOLO V5 algorithm
CN110569837B (en) Method and device for optimizing damage detection result
CN109784203B (en) Method for inspecting contraband in weak supervision X-ray image based on layered propagation and activation
CN110419048B (en) System for identifying defined objects
CN109858367B (en) Visual automatic detection method and system for worker through supporting unsafe behaviors
CN110264444B (en) Damage detection method and device based on weak segmentation
CN111611874B (en) Face mask wearing detection method based on ResNet and Canny
CN112149512A (en) Helmet wearing identification method based on two-stage deep learning
CN109506628A (en) Object distance measuring method under a kind of truck environment based on deep learning
CN108805016A (en) A kind of head and shoulder method for detecting area and device
CN112149514A (en) Method and system for detecting safety dressing of construction worker
CN111008576A (en) Pedestrian detection and model training and updating method, device and readable storage medium thereof
CN111191535A (en) Pedestrian detection model construction method based on deep learning and pedestrian detection method
CN115995056A (en) Automatic bridge disease identification method based on deep learning
CN112949510A (en) Human detection method based on fast R-CNN thermal infrared image
CN112084860A (en) Target object detection method and device and thermal power plant detection method and device
Yu YOLO V5s-based deep learning approach for concrete cracks detection
CN108985216B (en) Pedestrian head detection method based on multivariate logistic regression feature fusion
KR101391667B1 (en) A model learning and recognition method for object category recognition robust to scale changes
CN116740011A (en) YOLOv5-based power distribution room construction safety protection measure identification method
CN115375991A (en) Strong/weak illumination and fog environment self-adaptive target detection method
CN115273150A (en) Novel identification method and system for wearing safety helmet based on human body posture estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination