CN112541393A - Transformer substation personnel detection method and device based on deep learning - Google Patents

Transformer substation personnel detection method and device based on deep learning

Info

Publication number
CN112541393A
Authority
CN
China
Prior art keywords
transformer substation
deep learning
personnel
training
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011243836.6A
Other languages
Chinese (zh)
Inventor
蒋翊
章羽丰
童啸霄
周登
邓蔚
张磊
谢坚铿
刘伟波
徐暕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Zhejiang Electric Power Co Ltd
Shaoxing Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Shengzhou Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
State Grid Zhejiang Electric Power Co Ltd
Shaoxing Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Shengzhou Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Zhejiang Electric Power Co Ltd, Shaoxing Power Supply Co of State Grid Zhejiang Electric Power Co Ltd, Shengzhou Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority to CN202011243836.6A
Publication of CN112541393A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 20/00 - Scenes; scene-specific elements
    • G06V 20/40 - Scenes; scene-specific elements in video content
    • G06V 20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; extraction of features in feature space; blind source separation
    • G06F 18/214 - Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 - Classification techniques based on distances to training or reference patterns
    • G06F 18/24133 - Distances to prototypes
    • G06F 18/24137 - Distances to cluster centroids
    • G06F 18/2414 - Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06N 3/08 - Learning methods
    • G06N 3/084 - Backpropagation, e.g. using gradient descent

Abstract

The invention discloses a transformer substation personnel detection method based on deep learning, which comprises training a transformer substation personnel track and behavior detection model based on deep learning, and applying this model to detect transformer substation personnel. The invention also provides a transformer substation personnel detection device based on deep learning, which comprises a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor realizes the steps of the method when executing the computer program. The invention does not rely on features selected by human experts and can efficiently and accurately identify objects such as safety clothing, safety helmets and employee cards, a capability that closely matches the requirements of the current unattended operation mode of transformer substations.

Description

Transformer substation personnel detection method and device based on deep learning
Technical Field
The invention belongs to the technical field of power engineering, and particularly relates to a transformer substation safety monitoring technology.
Background
In recent years, substation automation systems have been widely deployed alongside the rapid development of power information technology, artificial intelligence and deep learning algorithms. The current duty mode transmits various real-time substation data to a dispatching center, which reduces the influence of human factors; however, phenomena such as illegal personnel intrusion and illegal operation behavior in the substation still have to be identified through manual analysis and monitoring. A duty mode that relies on workers watching video around the clock consumes a large amount of manpower, and because a substation runs continuously 24 hours a day it inevitably generates massive video data, so when certain information needs to be retrieved or counted the task cannot be completed manually. Improving the existing unattended mode of the substation toward a more intelligent management mode has therefore become an inevitable trend in the development of substation operation management.
Disclosure of Invention
Aiming at the limitations of manual video monitoring in the prior art, the invention aims to provide a transformer substation personnel detection method based on deep learning, so as to reduce manual labor and improve the detection efficiency of the transformer substation.
In order to solve the technical problems, the invention adopts the following technical scheme: a transformer substation personnel detection method based on deep learning comprises the steps of training a transformer substation personnel track and behavior detection model based on deep learning, and applying the transformer substation personnel track and behavior detection model based on deep learning to detect transformer substation personnel;
wherein, the model training comprises the following steps:
s11, acquiring image data of related personnel in the transformer substation to obtain a large sample data set;
s12, preprocessing the acquired image data, enhancing the image data by adopting Mosaic and self-adversarial training methods, and taking the verified image data as the training data set;
s13, extracting features by adopting CSPDarknet53 as the basic network, and then training the feature extractor to obtain a transformer substation personnel track and behavior detection model based on deep learning;
the model application comprises the following steps:
step S21: acquiring video data shot by a camera in real time, performing frame extraction processing on the video, and extracting real-time image data in the video data;
step S22: detecting the extracted key frame picture by using a transformer substation personnel track and behavior detection model based on deep learning;
step S23: and judging whether the identity and the operation behavior of the personnel in the transformer substation are abnormal or not according to the detection result.
Preferably, in step S11, the site is first photographed from different angles with cameras in the substation to obtain image sample data; the collected image data are then annotated, with the recognition classes of the model set to 4 object types: person, safety helmet, work clothes and employee card; finally, the annotated images are stored as the image data set required for model training. A python script file then splits the data set into a training set and a test set and generates the corresponding train.txt and test.txt files, which store the picture paths and names of the training and test pictures respectively.
Preferably, the CSP module in CSPDarknet53 divides the feature map of the base layer into two parts and then merges the two parts through a cross-stage hierarchy; an SPP block is added to CSPDarknet53 to enlarge the multi-scale receptive field; and PANet is used as the method for aggregating parameters from different layers of CSPDarknet53.
Preferably, in step S21, frames are extracted from the video samples at an interval of 10 s, and the OpenCV library is used to extract the key frames from the raw video stream, so as to obtain real-time image data from the video data.
Preferably, in step S23, whether a person in the substation is an outside person is judged by whether an employee card and work clothes are worn; whether a person in the substation is operating in violation of regulations is judged by whether the safety helmet is worn correctly; and alarm information is sent for outside personnel and illegal operation behavior.
The invention also provides a transformer substation personnel detection device based on deep learning, which comprises a memory, a processor and a computer program which is stored in the memory and can run on the processor, wherein the processor realizes the steps of the method when executing the computer program.
Deep learning takes the artificial neural network as its framework, simulates the way the human brain analyzes and learns, and realizes autonomous learning from data such as images, sounds and text. The deep-learning-based transformer substation personnel detection model Yolo-SE provided by the invention does not depend on human experts to select features and can efficiently and accurately identify objects such as safety clothing, safety helmets and employee cards, a capability well suited to the requirements of the current unattended operation mode of transformer substations. The algorithm is integrated into embedded hardware equipment, and by means of the recognition results, problems such as illegal intrusion, failure to wear articles such as safety helmets, and failure to operate at the specified time and place can be analyzed and alarmed. Compared with a traditional manually designed model, this intelligent monitoring analysis can discover suspicious personnel, supervise worker operations more promptly and prevent risks, thereby reducing the equipment maintenance workload of frontline personnel, saving labor cost, improving the efficiency of the transformer substation and ensuring its safety.
The following detailed description will explain the present invention and its advantages.
Drawings
The invention is further described with reference to the accompanying drawings and the detailed description below:
FIG. 1 is an overall flow diagram of the present invention;
FIG. 2 is a diagram of a model framework of the present invention;
FIG. 3 is a diagram of the residual block in CSPDarknet53.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments of the present invention, and it should be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, a transformer substation personnel detection method based on deep learning includes the steps of training a transformer substation personnel track and behavior detection model based on deep learning, and applying the transformer substation personnel track and behavior detection model based on deep learning to detect transformer substation personnel;
wherein, the model training comprises the following steps:
s11, acquiring image data of related personnel in the transformer substation to obtain a large sample data set;
s12, preprocessing the acquired image data, enhancing the image data by adopting Mosaic and self-adversarial training methods, and taking the verified image data as the training data set;
s13, extracting features by adopting CSPDarknet53 as the basic network, and then training the feature extractor to obtain a transformer substation personnel track and behavior detection model based on deep learning;
the model application comprises the following steps:
step S21: acquiring video data shot by a camera in real time, performing frame extraction processing on the video, and extracting real-time image data in the video data;
step S22: detecting the extracted key frame picture by using a transformer substation personnel track and behavior detection model based on deep learning;
step S23: and judging whether the identity and the operation behavior of the personnel in the transformer substation are abnormal or not according to the detection result.
The method comprises the following specific steps:
(I) Model training process
Step 1, collecting data and acquiring a data set, specifically:
In the substation, workers photograph the site from different angles with cameras to obtain a large amount of image sample data; the collected image data are then annotated, with the recognition classes of the model set to 4 object types, namely person, helmet, suit and card; finally, the annotated images are stored as the image data set required for model training. A python script file then splits the data set into a training set and a test set and generates the corresponding train.txt and test.txt files, which store the picture paths and names of the training and test pictures respectively.
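The split described above can be reproduced with a short python script. The sketch below is only an illustration: the directory name JPEGImages, the 9:1 split ratio and the file extensions are assumptions not fixed by the patent.

```python
import os
import random

# Hypothetical layout: annotated images live in JPEGImages/ (assumption).
image_dir = "JPEGImages"
train_ratio = 0.9  # assumed 9:1 train/test split; the patent does not fix a ratio

images = [f for f in os.listdir(image_dir) if f.lower().endswith((".jpg", ".png"))]
random.shuffle(images)
split = int(len(images) * train_ratio)

with open("train.txt", "w") as f_train, open("test.txt", "w") as f_test:
    for name in images[:split]:
        f_train.write(os.path.join(image_dir, name) + "\n")  # picture path and name
    for name in images[split:]:
        f_test.write(os.path.join(image_dir, name) + "\n")
```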
Step 2, data preprocessing, specifically comprising:
To improve the accuracy of model detection and reduce the influence of image damage and various kinds of noise pollution, the image data acquired in step 1 must be preprocessed. Because the image data set is affected by blurring introduced during video sampling or by extreme weather, some pictures lose their natural appearance or fail to meet requirements, and a series of preprocessing operations is needed to remove these influences. The Yolo-SE model provided by the invention adopts Mosaic and Self-Adversarial Training (SAT) for data enhancement. The Mosaic method fuses four training images, i.e. it splices four images with different semantic information into one image, so that the detector can detect targets outside their usual context, which enhances the robustness of the model. Concretely, four pictures are read each time, each of them is flipped, scaled and color-gamut shifted, the four are placed in the four quadrants, and finally the pictures and their bounding boxes are combined. In addition, because the BN (batch normalization) layer then computes activation statistics over four different images at once, the need for a large mini-batch when estimating the mean and variance is greatly reduced, which makes the model easier to train on a single GPU.
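As a rough illustration of the Mosaic step, the sketch below stitches four images into one canvas with a random split point and random horizontal flips. The canvas size, the split range and the omission of bounding-box remapping are simplifications, not the patented implementation.

```python
import random

import cv2
import numpy as np

def mosaic(paths, out_size=608):
    """Splice four images (given by file paths) into one Mosaic canvas."""
    # Split the canvas at a random centre point (assumed range 0.3-0.7).
    cx = int(out_size * random.uniform(0.3, 0.7))
    cy = int(out_size * random.uniform(0.3, 0.7))
    canvas = np.full((out_size, out_size, 3), 114, dtype=np.uint8)
    regions = [(0, 0, cx, cy), (cx, 0, out_size, cy),
               (0, cy, cx, out_size), (cx, cy, out_size, out_size)]
    for path, (x1, y1, x2, y2) in zip(paths, regions):
        img = cv2.imread(path)
        if random.random() < 0.5:                    # random horizontal flip
            img = cv2.flip(img, 1)
        img = cv2.resize(img, (x2 - x1, y2 - y1))    # scale into the quadrant
        canvas[y1:y2, x1:x2] = img
    # In the real pipeline the four images' bounding boxes would also be
    # remapped to the new coordinates and concatenated here.
    return canvas
```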
Self-Adversarial Training (SAT) operates in two stages of forward and backward propagation and provides a degree of resistance to adversarial attacks. In the first stage, the CNN alters the original image rather than the network weights through back propagation; in this way an adversarial attack on the current model is created and the targets appear to be absent from the image. In the second stage, the CNN trains the model on this new image with the original bounding boxes and class labels, which helps reduce overfitting.
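A minimal two-stage sketch of this idea in PyTorch (the framework is an assumption; the patent does not name one). Here `model` is assumed to be any detector whose forward pass returns a scalar training loss, and the perturbation size `eps` is purely illustrative.

```python
import torch

def sat_step(model, images, targets, optimizer, eps=0.01):
    # Stage 1: back-propagate to the *image*, not the weights, to build an
    # adversarial version of the input that hides the targets.
    images = images.clone().detach().requires_grad_(True)
    loss = model(images, targets)      # assumed interface: forward returns the loss
    loss.backward()
    adv_images = (images + eps * images.grad.sign()).clamp(0, 1).detach()

    # Stage 2: train the network weights on the perturbed image with the
    # original bounding boxes and class labels.
    optimizer.zero_grad()
    loss = model(adv_images, targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```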
Step 3, extracting features by using CSPDarknet53 as a basic network, and then training the feature extractor to obtain a transformer substation personnel detection model, wherein the method specifically comprises the following steps:
On the basis of the Yolo v3 model structure, the original Darknet53 is replaced by CSPDarknet53 (Cross Stage Partial Darknet53), which offers a better receptive field, parameter count and speed, as the main feature extraction network; an SPP (Spatial Pyramid Pooling) block is added, which enlarges the multi-scale receptive field without reducing the running speed; and FPN is replaced by PANet as the method for aggregating parameters from different layers. The main framework of the model is shown in fig. 2.
CSPDarknet53 contains 29 convolutional layers, has a 725 × 725 receptive field and 27.6M parameters, and the network is built from a series of improved residual resblock_body modules. The residual convolution in Darknet53 performs a 3×3 convolution with stride 2, stores that layer, then applies a 1×1 convolution followed by a 3×3 convolution and adds the stored layer to the result as the final output. CSPDarknet53 modifies the resblock_body structure by using the CSPNet design, which splits the original stack of residual blocks into two parts: the main part continues to stack the original residual blocks, but with the bottleneck layer removed; the other part, like a residual edge, is connected almost directly to the end with little processing, as shown in fig. 3. Each convolution part of CSPDarknet53 uses the dedicated DarknetConv2D structure, with L2 regularization applied to each convolution, followed by Batch Normalization and the Mish activation function. The CBM block in fig. 2 refers to exactly this sequence: Conv2d + BN + Mish.
Wherein, the formula of the Mish function is as follows:
Mish(x) = x × tanh(ln(1 + e^x))
The whole process can be described as follows: an image is input into the Darknet-specific two-dimensional convolution layer DarknetConv2D, where an L2-regularized convolution is performed followed by Batch Normalization, with the Mish function as the activation; at this point the feature map becomes smaller and the number of channels changes with num_filters. The newly generated feature map is then passed through five large residual block units, which contain 1, 2, 8, 8 and 4 small residual units respectively; each residual unit performs a pair of convolutions (1×1 and 3×3), and each block contains one convolution with stride 2, so the feature map size is halved five times, ending at 1/32 of the input image size (32 = 2^5). The Yolo-SE model provided by the invention extracts several feature layers for target detection; three feature layers are taken from the middle, middle-lower and bottom levels of the backbone CSPDarknet53. One part of these feature layers goes through five convolution operations and matrix concatenation to output the prediction corresponding to that layer; another part enters the SPP network for pooling and is then, after deconvolution and upsampling, combined with the other feature layers through convolution and matrix concatenation before the prediction is output.
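A minimal PyTorch sketch of the CBM unit (DarknetConv2D + BN + Mish) and the CSP residual block described above. PyTorch, the channel split in half, and the simplified residual unit are assumptions for illustration; L2 regularization would correspond to weight decay in the optimizer rather than appearing in the module itself.

```python
import torch
import torch.nn as nn

class Mish(nn.Module):
    def forward(self, x):
        # Mish(x) = x * tanh(ln(1 + e^x)) = x * tanh(softplus(x))
        return x * torch.tanh(nn.functional.softplus(x))

class CBM(nn.Module):
    """Conv2d + Batch Normalization + Mish (the 'CBM' unit in fig. 2)."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = Mish()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ResUnit(nn.Module):
    """Simplified residual unit: 1x1 then 3x3 convolution with a skip connection."""
    def __init__(self, c):
        super().__init__()
        self.block = nn.Sequential(CBM(c, c, 1), CBM(c, c, 3))

    def forward(self, x):
        return x + self.block(x)

class CSPBlock(nn.Module):
    """resblock_body: split into a residual trunk and a shortcut branch, then concatenate."""
    def __init__(self, c_in, c_out, n_units):
        super().__init__()
        self.down = CBM(c_in, c_out, 3, 2)          # stride-2 downsampling
        self.split_a = CBM(c_out, c_out // 2, 1)    # trunk branch
        self.split_b = CBM(c_out, c_out // 2, 1)    # "residual edge" branch
        self.trunk = nn.Sequential(*[ResUnit(c_out // 2) for _ in range(n_units)])
        self.fuse = CBM(c_out, c_out, 1)

    def forward(self, x):
        x = self.down(x)
        a = self.trunk(self.split_a(x))
        b = self.split_b(x)
        return self.fuse(torch.cat([a, b], dim=1))
```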
To enlarge the receptive field and separate out the most salient context features, the Yolo-SE model uses SPP and PANet (Path Aggregation Network) as its feature pyramid structure. The SPP block is inserted into the convolutions after the last feature layer of CSPDarknet53: after three convolution operations on that last feature layer, it is processed by three max-pooling operations of different scales, with kernel sizes 13×13, 9×9 and 5×5 respectively; the pooled outputs are then concatenated and convolved, and a 1×1 convolution reduces the result to 512 channels. The key characteristic of the PANet structure is repeated feature extraction: after entering PANet, the feature layers follow a top-down path as in the original FPN, and an additional bottom-up down-sampling enhancement path is added, so that each feature layer is fused with the others through adaptive feature pooling for the subsequent prediction.
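A sketch of the SPP block as described: three parallel stride-1 max-poolings with kernels 5, 9 and 13 (padded so the spatial size is preserved), concatenated with the input and reduced to 512 channels by a 1×1 convolution. The input channel count is an assumption.

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    def __init__(self, c_in=512, c_out=512):
        super().__init__()
        # Stride-1 max pooling with "same" padding keeps the feature map size.
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in (5, 9, 13)]
        )
        self.reduce = nn.Conv2d(c_in * 4, c_out, kernel_size=1)  # concat -> 512 channels

    def forward(self, x):
        pooled = [p(x) for p in self.pools]
        return self.reduce(torch.cat([x] + pooled, dim=1))
```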
A Head structure is used in the prediction stage; it makes predictions from the features obtained above to yield the required substation personnel detection model. Through the above process, 3 feature layers of different shapes are extracted, and the proposed Yolo-SE model assigns 3 prior boxes to each feature layer, so the three feature layers correspond to 3 prediction boxes at each position of grids of different sizes on each image.
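In a YOLO-style head, each of the three feature layers ends in a 1×1 convolution whose channel count encodes 3 prior boxes × (4 box offsets + 1 objectness score + number of classes); with the 4 classes used here (person, helmet, suit, card) that is 3 × (5 + 4) = 27 channels per grid cell. The sketch below only illustrates this output shape; the input size of 608 and the per-layer channel counts are assumptions.

```python
import torch
import torch.nn as nn

num_classes = 4                        # person, helmet, suit, card
out_channels = 3 * (5 + num_classes)   # 3 anchors x (x, y, w, h, objectness, classes) = 27

# One head per feature layer; the input channel counts (128/256/512) are assumptions.
heads = nn.ModuleList([nn.Conv2d(c, out_channels, kernel_size=1) for c in (128, 256, 512)])

# Example: a 608x608 input yields grids of 76x76, 38x38 and 19x19.
features = [torch.randn(1, c, s, s) for c, s in ((128, 76), (256, 38), (512, 19))]
predictions = [head(f) for head, f in zip(heads, features)]
print([p.shape for p in predictions])  # [1, 27, 76, 76], [1, 27, 38, 38], [1, 27, 19, 19]
```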
(II) Model application process
Step 1, acquiring video data in real time, performing frame extraction processing on the video, and extracting real-time image data in the video data, specifically:
Workers use dome (PTZ) cameras to shoot the scene from different angles and acquire the video data captured by the cameras in real time; the sampling duration is decided according to the length of the video sample, and as many samples as possible are collected so that the trained model is more accurate. Frames are extracted from the video samples at an interval of 10 s, and the OpenCV library is used to extract key frames from the raw video stream to obtain real-time image data from the video data.
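The 10 s frame extraction can be sketched with OpenCV as below; the stream URL and the fall-back frame rate are placeholders, not values specified by the patent.

```python
import cv2

def extract_keyframes(source, interval_s=10):
    """Grab one frame every interval_s seconds from a video file or stream."""
    cap = cv2.VideoCapture(source)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25   # fall back to 25 fps if metadata is missing
    step = int(fps * interval_s)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)            # real-time image data for the detector
        idx += 1
    cap.release()
    return frames

# e.g. extract_keyframes("rtsp://camera-ip/stream")  -- placeholder URL
```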
And 2, identifying and detecting the extracted key frame picture by using the trained substation personnel detection model.
Step 3, judging whether the personnel identity and the operation behavior in the transformer substation are abnormal or not according to the detection result, specifically:
According to the detection results of the model, whether a person in the substation is an outside person is judged by whether an employee card and work clothes are worn; whether a person in the substation is operating in violation of regulations is judged by whether the safety helmet is worn correctly; and alarm information is sent for outside personnel and illegal operation behavior.
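This decision logic can be expressed as simple rules over the detected classes. The sketch below assumes the detector returns, for each person, the set of class labels associated with that person; how detections are associated with a person is not specified by the patent.

```python
def judge(person_objects):
    """person_objects: set of labels detected on one person, e.g. {"card", "suit"}."""
    alarms = []
    if not ({"card", "suit"} <= person_objects):
        alarms.append("outside person: no employee card or work clothes detected")
    if "helmet" not in person_objects:
        alarms.append("illegal operation: safety helmet not worn")
    return alarms

# Example: a person wearing work clothes but no helmet or employee card
print(judge({"suit"}))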
Step 4, deployment and testing, specifically: the trained model is integrated on deep-learning-capable hardware, such as a mobile computing board, the relevant environment is configured, and the model is tested on site after deployment is completed.
The invention also provides a transformer substation personnel detection device based on deep learning, which comprises a memory, a processor and a computer program which is stored in the memory and can run on the processor, wherein the processor realizes the steps of the method when executing the computer program.
In summary, the invention adopting the above technical scheme has the following beneficial effects:
(1) The invention creatively applies a deep-learning-based target detection model to personnel detection in the substation scene and at the same time integrates the model into a mobile board, realizing an edge-computing application in which the system performs real-time detection and sends alarm information, greatly reducing labor intensity and improving the detection efficiency of the transformer substation.
(2) The invention adopts a new data enhancement scheme, namely the Mosaic and Self-Adversarial Training (SAT) methods. The Mosaic method fuses four pictures with different semantic information at once, so that the detector learns to detect targets outside their usual context, which enhances the robustness of the model and reduces the dependence on large mini-batches. SAT is a two-stage forward and backward propagation procedure that provides a degree of resistance to adversarial attacks. Processing the data with these two methods makes the model better suited to training on a single GPU.
(3) The network model is a brand-new Yolo-SE model that takes CSPDarknet53 as its backbone. The CSP module divides the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy, which reduces the amount of computation while maintaining accuracy and lowering memory cost. At the same time, the SPP block added to CSPDarknet53 enlarges the multi-scale receptive field without reducing the running speed, and the model uses PANet instead of FPN as the method of aggregating parameters from different layers, which strengthens low-level and high-level semantic information and effectively avoids the loss of useful feature information.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that the invention is not limited thereto, and may be embodied in other forms without departing from the spirit or essential characteristics thereof. Any modification which does not depart from the functional and structural principles of the present invention is intended to be included within the scope of the claims.

Claims (6)

1. A transformer substation personnel detection method based on deep learning is characterized in that: training a transformer substation personnel track and behavior detection model based on deep learning, and applying the transformer substation personnel track and behavior detection model based on deep learning to detect transformer substation personnel;
wherein, the model training comprises the following steps:
s11, acquiring image data of related personnel in the transformer substation to obtain a large sample data set;
s12, preprocessing the acquired image data, enhancing the image data by adopting Mosaic and self-adversarial training methods, and taking the verified image data as the training data set;
s13, extracting features by adopting CSPDarknet53 as the basic network, and then training the feature extractor to obtain a transformer substation personnel track and behavior detection model based on deep learning;
the model application comprises the following steps:
step S21: acquiring video data shot by a camera in real time, performing frame extraction processing on the video, and extracting real-time image data in the video data;
step S22: detecting the extracted key frame picture by using a transformer substation personnel track and behavior detection model based on deep learning;
step S23: and judging whether the identity and the operation behavior of the personnel in the transformer substation are abnormal or not according to the detection result.
2. The transformer substation personnel detection method based on deep learning of claim 1, characterized in that: in step S11, the site is first photographed from different angles with cameras in the substation to obtain image sample data; the collected image data are then annotated, with the recognition classes of the model set to 4 object types: person, safety helmet, work clothes and employee card; finally, the annotated images are stored as the image data set required for model training; a python script file then splits the data set into a training set and a test set and generates the corresponding train.txt and test.txt files, which store the picture paths and names of the training and test pictures respectively.
3. The transformer substation personnel detection method based on deep learning of claim 1, characterized in that: the CSP module in CSPDarknet53 divides the feature map of the base layer into two parts and then merges the two parts through a cross-stage hierarchy; an SPP block is added to CSPDarknet53 to enlarge the multi-scale receptive field; and PANet is used as the method for aggregating parameters from different layers of CSPDarknet53.
4. The transformer substation personnel detection method based on deep learning of claim 1, characterized in that: in step S21, frames are extracted from the video samples at an interval of 10 s, and the OpenCV library is used to extract key frames from the raw video stream, so as to obtain real-time image data from the video data.
5. The transformer substation personnel detection method based on deep learning of claim 1, characterized in that: in step S23, whether a person in the substation is an outside person is judged by whether an employee card and work clothes are worn; whether a person in the substation is operating in violation of regulations is judged by whether the safety helmet is worn correctly; and alarm information is sent for outside personnel and illegal operation behavior.
6. A deep learning based substation personnel detection apparatus comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that: the processor, when executing the computer program, realizes the steps of the method according to any one of claims 1 to 5.
CN202011243836.6A 2020-11-10 2020-11-10 Transformer substation personnel detection method and device based on deep learning Pending CN112541393A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011243836.6A CN112541393A (en) 2020-11-10 2020-11-10 Transformer substation personnel detection method and device based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011243836.6A CN112541393A (en) 2020-11-10 2020-11-10 Transformer substation personnel detection method and device based on deep learning

Publications (1)

Publication Number Publication Date
CN112541393A true CN112541393A (en) 2021-03-23

Family

ID=75014070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011243836.6A Pending CN112541393A (en) 2020-11-10 2020-11-10 Transformer substation personnel detection method and device based on deep learning

Country Status (1)

Country Link
CN (1) CN112541393A (en)

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Alexey Bochkovskiy et al., "YOLOv4: Optimal Speed and Accuracy of Object Detection", arXiv:2004.10934v1 *
张玮珑, "Research and Implementation of a Substation Video Analysis System" (变电站视频分析系统的研究与实现), China Master's Theses Full-text Database, Engineering Science and Technology II *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113239854A (en) * 2021-05-27 2021-08-10 北京环境特性研究所 Ship identity recognition method and system based on deep learning
CN113239854B (en) * 2021-05-27 2023-12-19 北京环境特性研究所 Ship identity recognition method and system based on deep learning
CN113553979A (en) * 2021-07-30 2021-10-26 国电汉川发电有限公司 Safety clothing detection method and system based on improved YOLO V5
CN113553979B (en) * 2021-07-30 2023-08-08 国电汉川发电有限公司 Safety clothing detection method and system based on improved YOLO V5
CN113486860A (en) * 2021-08-03 2021-10-08 云南大学 YOLOv 5-based safety protector wearing detection method and system
CN113781412A (en) * 2021-08-25 2021-12-10 南京航空航天大学 Chip redundancy detection system and method under X-ray high-resolution scanning image based on deep learning
CN114982663A (en) * 2022-06-02 2022-09-02 新瑞鹏宠物医疗集团有限公司 Method and device for managing wandering pets

Similar Documents

Publication Publication Date Title
CN112541393A (en) Transformer substation personnel detection method and device based on deep learning
CN107123131B (en) Moving target detection method based on deep learning
CN110807353A (en) Transformer substation foreign matter identification method, device and system based on deep learning
CN111582095B (en) Light-weight rapid detection method for abnormal behaviors of pedestrians
CN106919921B (en) Gait recognition method and system combining subspace learning and tensor neural network
CN110378221A (en) A kind of power grid wire clamp detects and defect identification method and device automatically
CN113553979B (en) Safety clothing detection method and system based on improved YOLO V5
CN110852222A (en) Campus corridor scene intelligent monitoring method based on target detection
CN111325133B (en) Image processing system based on artificial intelligent recognition
CN112163572A (en) Method and device for identifying object
CN111914676A (en) Human body tumbling detection method and device, electronic equipment and storage medium
CN111860457A (en) Fighting behavior recognition early warning method and recognition early warning system thereof
CN115620090A (en) Model training method, low-illumination target re-recognition method and device and terminal equipment
CN114881665A (en) Method and system for identifying electricity stealing suspected user based on target identification algorithm
CN113298077A (en) Transformer substation foreign matter identification and positioning method and device based on deep learning
CN112417989A (en) Invigilator violation identification method and system
CN115565137A (en) Improved YOLOv 5-based unsafe behavior detection and alarm method
CN116229341A (en) Method and system for analyzing and alarming suspicious behaviors in video monitoring among electrons
CN109977891A (en) A kind of object detection and recognition method neural network based
CN115862128A (en) Human body skeleton-based customer abnormal behavior identification method
CN115205786A (en) On-line automatic identification and alarm method for mobile phone pirate behavior
CN110427920B (en) Real-time pedestrian analysis method oriented to monitoring environment
CN115100592A (en) Method and device for identifying hidden danger of external damage of power transmission channel and storage medium
CN113822155A (en) Clustering-assisted weak surveillance video anomaly detection method and device
CN109657677A (en) A kind of detection method of electric power line pole tower, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210323