CN113298077A - Transformer substation foreign matter identification and positioning method and device based on deep learning - Google Patents

Transformer substation foreign matter identification and positioning method and device based on deep learning

Info

Publication number
CN113298077A
Authority
CN
China
Prior art keywords
image
images
foreign
transformer substation
foreign matter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110686935.XA
Other languages
Chinese (zh)
Inventor
翁书文
焦国锋
成美丽
蔡儒军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan Electric Power Design and Research Institute of PowerChina Co Ltd
Original Assignee
Hainan Electric Power Design and Research Institute of PowerChina Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hainan Electric Power Design and Research Institute of PowerChina Co Ltd
Priority to CN202110686935.XA
Publication of CN113298077A
Legal status: Pending (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a transformer substation foreign matter identification and positioning method and device based on deep learning. The method extracts not only the image features of the foreign objects themselves but also the background and equipment-state features of the substation patrol images, and applies deep learning to all three. The positions of equipment on which foreign objects tend to hang are analysed as key recognition areas in the image, and the features of foreign objects under occlusion are analysed before deep learning is carried out, so that the target detection algorithm established through continuous model training and verification is more accurate and better targeted. This overcomes the high uncertainty of image capture that limits existing image-based foreign object identification methods, greatly improves identification accuracy, lets the model learn the features of hidden danger images, raises identification efficiency, and greatly reduces the false detection and missed detection rates.

Description

Transformer substation foreign matter identification and positioning method and device based on deep learning
Technical Field
The invention relates to the field of transformer substation inspection, in particular to a transformer substation foreign matter identification and positioning method and device based on deep learning.
Background
Transformer substations are usually built in open, flat areas and their structures are relatively tall, so they are frequently invaded by external "foreign objects" that can hang on substation equipment, cause short circuits between the three phases and lead to power outage accidents. Nowadays substations generally use intelligent inspection robots for daily inspection; the photographed equipment states help substation operating personnel judge the hidden dangers present in the equipment.
At present, for the image recognition of hidden dangers caused by foreign objects hanging on outdoor primary equipment in intelligently inspected substations, one approach, for example patent CN111680609A, compares a daily patrol image with a reference image (a patrol image of the equipment in a normal state) to recognise the "foreign object" in the patrol image. However, because of differences in image background, equipment state and foreign object type, the images differ greatly, so image comparison makes it difficult to guarantee the accuracy of foreign object recognition. There are also deep-learning object detection methods, such as patent CN110188624A, which detect a foreign object in an image by extracting its features. However, the variety of foreign objects in a substation is large and they easily occlude the equipment they hang on, so the foreign object features shown in the image are highly random; simply applying a generic target detection algorithm makes accurate identification difficult, leads to a high missed detection rate, and leaves great potential safety hazards.
Disclosure of Invention
In order to improve the accuracy of foreign matter identification and avoid the influence of background, equipment state and shielding in images, the invention provides a transformer substation foreign matter identification and positioning method and device based on deep learning, which can reliably identify and position the hidden danger of primary equipment foreign matter suspension of a transformer substation.
In order to solve the technical problems, the technical scheme of the invention is as follows:
the invention provides a transformer substation foreign matter identification and positioning method based on deep learning, which comprises the following steps of:
s1: analyzing and extracting characteristics of the foreign body image and the patrol image: collecting foreign body images, patrol images without foreign bodies of the transformer substation and patrol images with foreign bodies; segmenting the image according to different scenes in the image; extracting foreign matter, background and equipment state characteristics;
s2: image splicing: splicing the extracted background, equipment state and foreign objects together, and synthesising, in the key recognition areas of the images, hidden danger images that simulate foreign objects intruding into the substation and hanging on equipment;
s3: image summarization and labeling: summarizing the simulated hidden danger images and the collected hidden danger images with hanging foreign objects, and for each hidden danger image taking Xmin, Ymin, Xmax and Ymax of the pixel coordinates of each foreign object, wherein Xmin represents the minimum abscissa, Ymin the minimum ordinate, Xmax the maximum abscissa and Ymax the maximum ordinate, and marking the position of the foreign object with the rectangle bounded by the four straight lines x = Xmin, y = Ymin, x = Xmax and y = Ymax;
s4: model training and verification: dividing all images into a training set, a verification set and a test set, selecting a target detection algorithm based on deep learning, and training, verifying and testing it on these sets to obtain a target detection model meeting the precision requirement;
s5: image detection and positioning: and identifying the shot daily patrol image through the target detection model and marking the position of the foreign matter.
Preferably, step S1 specifically includes the following steps:
s11: image collection: sorting out the foreign objects that occur frequently and cause serious harm in substations and collecting images of them; collecting patrol images shot by the inspection robot, covering different equipment states, shooting angles and time periods; and also collecting images of existing foreign-object scenes in the substation and simulating foreign-object scene images;
s12: image preprocessing: carrying out binarization processing on the image according to a preset threshold value, setting points with gray values higher than the threshold value in the image as black, and setting other pixel points as white;
s13: image segmentation: comparing the elements in the image, analysing the similarity of adjacent pixels, grouping pixels that satisfy a similarity criterion into pixel sets, and identifying the positions of these pixel blocks through segmentation, thereby determining the distribution of equipment positions and the various background elements in the image and achieving image segmentation;
s14: hidden danger characteristic extraction: and respectively extracting background features, foreign body features and equipment state features of the image.
Preferably, in step S12, the specific steps are:
in order to remove the influence of light spots, shadows and background differences among the images, the collected images are preprocessed by using a formula:
dst(x, y) = 0 (black),    if src(x, y) > thresh
dst(x, y) = 255 (white),  otherwise
in the formula, src(x, y) is the gray value before binarization, dst(x, y) is the gray value after binarization, and thresh is the preset threshold: pixels whose gray value is higher than the threshold are set to black, and all other pixels are set to white.
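For illustration, a minimal Python sketch of this inverse binarization using OpenCV follows; the threshold value of 127 and the file names are assumptions made only for the example, since the patent states only that the threshold is preset.

    import cv2

    # Read a patrol image as a grayscale array: src(x, y).
    src = cv2.imread("patrol_image.jpg", cv2.IMREAD_GRAYSCALE)

    thresh = 127  # assumed preset threshold
    # THRESH_BINARY_INV sets pixels brighter than the threshold to 0 (black)
    # and all other pixels to 255 (white), matching dst(x, y) above.
    _, dst = cv2.threshold(src, thresh, 255, cv2.THRESH_BINARY_INV)

    cv2.imwrite("patrol_image_binarized.jpg", dst)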
Preferably, in step S13, the method further includes: for the collected foreign object images, the number of the obtained foreign object images is increased by the methods of flipping, cropping, and scaling.
Preferably, in step S13, the method of flipping, cropping, and scaling is:
turning: rotating the image in 60-degree increments to obtain new images;
cutting: cropping the collected foreign object images, taking 30%, 60% and 90% of the image from the top, bottom, left and right respectively, so that 12 new cropped images are finally obtained from each foreign object image;
zooming: scaling the foreign object images to 30%, 50% and 150% of the original size respectively to obtain new images.
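By way of illustration, a Python sketch of this expansion using the Pillow library is given below; it interprets "turning every 60 degrees" as rotation in 60-degree steps, and the edge-wise cropping is one plausible reading of the percentages above.

    from PIL import Image

    def expand_foreign_object_image(path):
        """Sketch of the rotation/cropping/scaling expansion described above."""
        img = Image.open(path)
        w, h = img.size
        new_images = []

        # Rotate the image in 60-degree steps to obtain new images.
        for angle in (60, 120, 180, 240, 300):
            new_images.append(img.rotate(angle, expand=True))

        # Crop 30%, 60% and 90% of the image from the top, bottom, left and
        # right, giving 12 cropped images per source image.
        for frac in (0.3, 0.6, 0.9):
            new_images.append(img.crop((0, 0, w, int(h * frac))))        # top part
            new_images.append(img.crop((0, h - int(h * frac), w, h)))    # bottom part
            new_images.append(img.crop((0, 0, int(w * frac), h)))        # left part
            new_images.append(img.crop((w - int(w * frac), 0, w, h)))    # right part

        # Scale the image to 30%, 50% and 150% of its original size.
        for s in (0.3, 0.5, 1.5):
            new_images.append(img.resize((int(w * s), int(h * s))))

        return new_images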
Preferably, step S14 specifically includes the following steps:
s141: image background feature extraction, namely analysing and extracting the image features and distribution features of the identified patrol-image background elements;
s142: foreign object feature extraction, namely extracting features from the foreign object images obtained through collection and processing;
s143: equipment state feature extraction, namely analysing and extracting the image features of the images under different equipment states.
Preferably, in step S3, the method further includes: marking a label on each rectangle-marked foreign object in the image, wherein the label is any one or more of the following: honeycomb, bird nest, kite, balloon, plastic bag, leaves, plastic film and dust screen.
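A minimal sketch of how one such rectangle-plus-label annotation could be recorded is shown below; the JSON layout, coordinate values and file names are illustrative assumptions, not a format prescribed by the patent.

    import json

    # One annotation record for a hidden danger image: each foreign object is
    # described by its class label and the rectangle bounded by
    # x = Xmin, y = Ymin, x = Xmax and y = Ymax.
    annotation = {
        "image": "hidden_danger_0001.jpg",
        "objects": [
            {"label": "bird nest", "xmin": 412, "ymin": 230, "xmax": 498, "ymax": 301},
            {"label": "plastic bag", "xmin": 120, "ymin": 85, "xmax": 160, "ymax": 140},
        ],
    }

    with open("hidden_danger_0001.json", "w", encoding="utf-8") as f:
        json.dump(annotation, f, ensure_ascii=False, indent=2)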
Preferably, step S4 specifically includes the following steps:
s41: data division: dividing all images into a training set, a verification set and a test set, wherein the number accounts for 65%, 25% and 10% respectively;
s42: adjusting parameters in an algorithm: selecting a target detection algorithm based on a deep learning algorithm, and carrying out preliminary adjustment on parameters of the algorithm;
s43: training a model: training the algorithm on the basis of the training set images to obtain a target detection model of the foreign matter in the transformer substation;
s44: model verification: evaluating the accuracy of the target detection model on the verification set; if the accuracy does not meet the requirement, adjusting the parameters of the target detection algorithm, training the model again on the training set and evaluating it again on the verification set, and repeating this until the accuracy meets the requirement; then evaluating the accuracy of the model again on the test set; if it meets the requirement, keeping the model; if adjusting the algorithm parameters cannot make the accuracy meet the requirement, returning to step S1 to expand the samples and optimise their quality, and completing the training again according to the above flow until the model finally meets the accuracy requirement;
s45: model deployment: deploying the final target detection model obtained above to the developed Windows software for foreign object identification and positioning.
Preferably, step S5 specifically includes the following steps:
s51: image importing: importing shot daily patrol images;
s52: image preprocessing and segmentation: carrying out binarization processing on the image and segmenting the image according to different scenes in the image;
s53: image detection: according to the characteristics extracted from the foreign body image, a trained target detection model is applied to detect foreign bodies in the key identification area in the image;
s54: foreign object positioning: after the position of a foreign object in the image is detected, taking Xmin, Ymin, Xmax and Ymax of its pixel coordinates and marking the position of the foreign object with the rectangle bounded by the four straight lines x = Xmin, y = Ymin, x = Xmax and y = Ymax;
s55: hidden danger alarming: raising an alarm for the patrol images in which foreign objects are detected, and returning the hidden danger image marked with the foreign objects.
The invention also provides a transformer substation foreign matter identification and positioning device based on deep learning, which comprises:
foreign body image and patrol image analysis and extraction feature module: the system is used for collecting foreign body images, patrol images without foreign bodies of the transformer substation and patrol images with foreign bodies; segmenting the image according to different scenes in the image; extracting foreign matter, background and equipment state characteristics;
an image stitching module: the device is used for image splicing of the extracted background, the extracted equipment state and the extracted foreign matters, and simulating hidden danger images of the foreign matters intruding into the suspension equipment in the transformer substation in a key identification area of the images;
the image summarizing and labeling module: used for summarizing the simulated hidden danger images and the collected hidden danger images with hanging foreign objects, and for each hidden danger image taking Xmin, Ymin, Xmax and Ymax of the pixel coordinates of each foreign object, wherein Xmin represents the minimum abscissa, Ymin the minimum ordinate, Xmax the maximum abscissa and Ymax the maximum ordinate, and marking the position of the foreign object with the rectangle bounded by the four straight lines x = Xmin, y = Ymin, x = Xmax and y = Ymax;
a model training verification module: the method comprises the steps of dividing all images into a training set, a verification set and a test set, and selecting a target detection algorithm based on a deep learning algorithm to train, verify and test the training set to obtain a target detection model meeting the precision requirement;
the image detection positioning module: the system is used for identifying shot daily patrol images and marking foreign body positions through the target detection model.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects: the invention provides a transformer substation foreign matter identification and positioning method and device based on deep learning, which are suitable for intelligently inspected substations, can be applied to the operation and maintenance of primary substation equipment, and can reliably identify and locate the hidden danger of foreign objects hanging on primary substation equipment by processing the daily patrol images of the intelligent inspection robot. The method extracts not only the image features of the foreign objects but also the background and equipment-state features of the substation patrol images and applies deep learning to all three; the positions of equipment on which foreign objects tend to hang are analysed as key recognition areas in the image, and the features of foreign objects under occlusion are analysed before deep learning is carried out; through continuous model training and verification, the established target detection algorithm becomes more accurate and better targeted. This overcomes the high uncertainty of image capture in existing image-based foreign object identification methods, greatly improves identification accuracy, lets the model learn the features of hidden danger images, raises identification efficiency, and greatly reduces the false detection and missed detection rates.
Drawings
Fig. 1 is a flowchart of a transformer substation foreign object identification and positioning method based on deep learning in embodiment 1.
Fig. 2 is a flow chart of feature extraction by analyzing a foreign object image and a patrol image.
FIG. 3 is a diagram illustrating image preprocessing and image segmentation.
FIG. 4 is a flow chart of model training validation.
Fig. 5 is a flowchart of image detection positioning.
Fig. 6 is a distribution diagram of the inspection image recognition software modules in the embodiment 3.
FIG. 7 is a diagram illustrating feature learning and updating.
Fig. 8 is a hidden danger image with foreign matter marked thereon.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
As shown in fig. 1, the present embodiment provides a transformer substation foreign object identification and location method based on deep learning, including the following steps:
s1: analyzing and extracting characteristics of the foreign body image and the patrol image: collecting foreign body images, patrol images without foreign bodies of the transformer substation and patrol images with foreign bodies; segmenting the image according to different scenes in the image; extracting foreign matter, background and equipment state characteristics;
s2: image splicing: splicing the extracted background, equipment state and foreign objects together, and synthesising, in the key recognition areas of the images, hidden danger images that simulate foreign objects intruding into the substation and hanging on equipment (a paste-based sketch is given after this list);
s3: image summarization and labeling: summarizing the simulated hidden danger images and the collected hidden danger images with hanging foreign objects and numbering them uniformly; for each hidden danger image, taking Xmin, Ymin, Xmax and Ymax of the pixel coordinates of each foreign object, wherein Xmin represents the minimum abscissa, Ymin the minimum ordinate, Xmax the maximum abscissa and Ymax the maximum ordinate, and marking the position of the foreign object with the rectangle bounded by the four straight lines x = Xmin, y = Ymin, x = Xmax and y = Ymax; and marking a label on each rectangle-marked foreign object in the image, wherein the label is any one or more of the following: honeycomb, bird nest, kite, balloon, plastic bag, leaves, plastic film and dust screen.
S4: model training and verification: dividing all images into a training set, a verification set and a test set, selecting a target detection algorithm based on deep learning, and training, verifying and testing it on these sets to obtain a target detection model meeting the precision requirement;
s5: image detection and positioning: and identifying the shot daily patrol image through the target detection model and marking the position of the foreign matter.
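As referenced in step S2 above, the following Python sketch synthesises a hidden danger image by pasting a cut-out foreign object into a key recognition area of a patrol image; the alpha-mask compositing, paste position and file names are assumptions, since the patent does not fix a particular stitching technique.

    from PIL import Image

    # Background patrol image (carrying the equipment state) and a cut-out
    # foreign object image with transparency (RGBA) from feature extraction.
    background = Image.open("patrol_background.jpg").convert("RGB")
    foreign = Image.open("foreign_object_cutout.png").convert("RGBA")

    # Assumed key recognition area: the top-left corner of a position on the
    # equipment where foreign objects tend to hang.
    key_area_topleft = (640, 210)

    # Paste the foreign object into the key recognition area, using its own
    # alpha channel as the mask, to simulate a hidden danger image.
    simulated = background.copy()
    simulated.paste(foreign, key_area_topleft, mask=foreign)
    simulated.save("simulated_hidden_danger.jpg")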
As shown in fig. 2, in the implementation process, step S1 specifically includes the following steps:
s11: image collection: sorting out the foreign objects that occur frequently and cause serious harm in substations and collecting images of them, including honeycombs, bird nests, kites, balloons, plastic bags, leaves, plastic films and dust screens; the types of substation foreign objects can be expanded on this basis at a later stage; collecting patrol images shot by the inspection robot, covering different equipment states, shooting angles and time periods; and also collecting images of existing foreign-object scenes in the substation and simulating foreign-object scene images;
s12: image preprocessing: carrying out binarization processing on the image according to a preset threshold value, setting points with gray values higher than the threshold value in the image as black, and setting other pixel points as white;
in a specific implementation process, the step S12 specifically includes the following steps:
in order to remove the influence of light spots, shadows and background differences among the images, the collected images are preprocessed by using a formula:
dst(x, y) = 0 (black),    if src(x, y) > thresh
dst(x, y) = 255 (white),  otherwise
in the formula, src(x, y) is the gray value before binarization, dst(x, y) is the gray value after binarization, and thresh is the preset threshold: pixels whose gray value is higher than the threshold are set to black, and all other pixels are set to white. As shown in fig. 3, the background portion of the binarized image consists of white pixels, so interference from differences in background content can be effectively eliminated from the recognition result.
S13: image segmentation: comparing the elements in the image, analysing the similarity of adjacent pixels, grouping pixels that satisfy a similarity criterion into pixel sets, and rapidly comparing, segmenting and identifying the positions of these pixel blocks, thereby determining the distribution of equipment positions and the various background elements (such as sky, weeds and clouds) in the image and achieving image segmentation;
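A connected-components sketch of this grouping of similar adjacent pixels is given below, taking the binarized image from step S12 as input; OpenCV's connectedComponentsWithStats is used only for illustration, as the patent does not name a specific segmentation algorithm.

    import cv2

    # Binarized patrol image from step S12; its foreground is black, so invert
    # it first because connected-component labelling works on non-zero pixels.
    binary = cv2.imread("patrol_image_binarized.jpg", cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(binary, 127, 255, cv2.THRESH_BINARY_INV)

    # Group adjacent similar pixels into pixel sets (connected components) and
    # report the position and size of each block.
    num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)

    for i in range(1, num_labels):  # label 0 is the image background
        x, y, w, h, area = stats[i]
        print(f"block {i}: position=({x}, {y}), size={w}x{h}, area={area}")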
in a specific implementation process, step S13 further includes: for the collected foreign object images, the number of the obtained foreign object images is increased by the methods of flipping, cropping, and scaling. The method for turning, cutting and scaling comprises the following steps:
turning: rotating the image in 60-degree increments to obtain new images;
cutting: cropping the collected foreign object images, taking 30%, 60% and 90% of the image from the top, bottom, left and right respectively, so that 12 new cropped images are finally obtained from each foreign object image;
zooming: scaling the foreign object images to 30%, 50% and 150% of the original size respectively to obtain new images.
S14: hidden danger characteristic extraction: and respectively extracting background features, foreign body features and equipment state features of the image. The method specifically comprises the following steps:
s141: extracting image background features, namely analyzing and extracting image features and distribution features of identified background elements (including sky, weeds, white clouds and the like) of the patrol image;
s142: foreign object feature extraction, namely extracting features from the foreign object images obtained through collection and processing;
s143: equipment state feature extraction, namely analysing and extracting the image features of the images under different equipment states.
After the three parts of feature extraction, the key identification area and the foreign body feature of the image can be obtained, and irrelevant interference factors in the image can be eliminated.
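For illustration only, the sketch below extracts a fixed-length feature vector from a background, foreign object or equipment-state image crop using a pretrained ResNet-18 backbone from torchvision; the choice of network and preprocessing is an assumption, as the patent does not specify how the three kinds of features are computed.

    import torch
    from torchvision import models, transforms
    from PIL import Image

    # Assumed feature extractor: a pretrained ResNet-18 with its classification
    # head removed, so the output is a 512-dimensional feature vector.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()
    backbone.eval()

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    def extract_features(path):
        """Return a feature vector for a background, foreign object or
        equipment-state image crop."""
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            return backbone(img).squeeze(0)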
As shown in fig. 4, in the implementation process, step S4 specifically includes the following steps:
s41: data division: dividing all images into a training set, a verification set and a test set, wherein the number accounts for 65%, 25% and 10% respectively;
s42: adjusting parameters in an algorithm: selecting a target detection algorithm based on a deep learning algorithm, and carrying out preliminary adjustment on parameters of the algorithm;
s43: training a model: training the algorithm on the basis of the training set images to obtain a target detection model of the foreign matter in the transformer substation;
s44: model verification: evaluating the accuracy of the target detection model on the verification set; if the accuracy does not meet the requirement, adjusting the parameters of the target detection algorithm, training the model again on the training set and evaluating it again on the verification set, and repeating this until the accuracy meets the requirement; then evaluating the accuracy of the model again on the test set; if it meets the requirement, keeping the model; if adjusting the algorithm parameters cannot make the accuracy meet the requirement, returning to step S1 to expand the samples and optimise their quality, and completing the training again according to the above flow until the model finally meets the accuracy requirement (a sketch of this split/train/verify flow is given after step S45);
s45: model deployment: deploying the final target detection model obtained above to the developed Windows software for foreign object identification and positioning.
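A minimal sketch of the data split and the train/verify/test flow of steps S41 to S44 follows; the build_detector, train and evaluate helpers are hypothetical placeholders for whichever deep-learning target detection algorithm is chosen, and the 65%/25%/10% split follows step S41.

    import random

    def split_dataset(image_paths, seed=0):
        """Split all images into training (65%), verification (25%) and test (10%) sets."""
        paths = list(image_paths)
        random.Random(seed).shuffle(paths)
        n_train, n_val = int(0.65 * len(paths)), int(0.25 * len(paths))
        return paths[:n_train], paths[n_train:n_train + n_val], paths[n_train + n_val:]

    def train_until_accurate(images, required_accuracy, build_detector, train, evaluate,
                             max_rounds=10):
        """Hypothetical outline of steps S42-S44: adjust parameters, train, verify, test."""
        train_set, val_set, test_set = split_dataset(images)
        params = {"learning_rate": 1e-3, "epochs": 50}  # assumed initial parameters (S42)
        for _ in range(max_rounds):
            model = train(build_detector(params), train_set)    # S43: train on the training set
            if evaluate(model, val_set) < required_accuracy:    # S44: verify on the verification set
                params["learning_rate"] *= 0.5                  # adjust parameters and retrain
                continue
            if evaluate(model, test_set) >= required_accuracy:  # S44: evaluate again on the test set
                return model                                    # keep the model
        # Parameter adjustment alone is not enough: return to S1 and expand the samples.
        return None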
As shown in fig. 5, in the implementation process, step S5 specifically includes the following steps:
s51: image importing: importing shot daily patrol images;
s52: image preprocessing and segmentation: carrying out binarization processing on the image and segmenting the image according to different scenes in the image;
s53: image detection: according to the characteristics extracted from the foreign body image, a trained target detection model is applied to detect foreign bodies in the key identification area in the image;
s54: foreign object positioning: after the position of a foreign object in the image is detected, taking Xmin, Ymin, Xmax and Ymax of its pixel coordinates and marking the position of the foreign object with the rectangle bounded by the four straight lines x = Xmin, y = Ymin, x = Xmax and y = Ymax;
s55: hidden danger alarming: raising an alarm for the patrol images in which foreign objects are detected, and returning the hidden danger image marked with the foreign objects.
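A minimal sketch of steps S53 to S55 is given below, assuming the trained detector is exposed through a hypothetical detect(image) function that returns (label, xmin, ymin, xmax, ymax) tuples; the function and file naming are assumptions made only for illustration.

    import cv2

    def locate_and_mark(image_path, detect):
        """Detect foreign objects, mark each with its bounding rectangle and raise an alarm."""
        image = cv2.imread(image_path)
        detections = detect(image)  # hypothetical trained target detection model

        for label, xmin, ymin, xmax, ymax in detections:
            # Mark the foreign object with the rectangle bounded by
            # x = xmin, y = ymin, x = xmax and y = ymax, plus its class label.
            cv2.rectangle(image, (xmin, ymin), (xmax, ymax), (0, 0, 255), 2)
            cv2.putText(image, label, (xmin, max(ymin - 5, 0)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)

        if detections:  # hidden danger alarm: return the marked hidden danger image
            marked_path = image_path.replace(".jpg", "_hidden_danger.jpg")
            cv2.imwrite(marked_path, image)
            print(f"ALARM: foreign object(s) detected, marked image saved to {marked_path}")
        return detections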
The transformer substation foreign matter identification and positioning method based on deep learning of this embodiment is suitable for intelligently inspected substations, can be applied to the operation and maintenance of primary substation equipment, and reliably identifies and locates the hidden danger of foreign objects hanging on primary substation equipment by processing the daily patrol images of the intelligent inspection robot. The method extracts not only the image features of the foreign objects but also the background and equipment-state features of the substation patrol images and applies deep learning to all three; the positions of equipment on which foreign objects tend to hang are analysed as key recognition areas in the image, and the features of foreign objects under occlusion are analysed before deep learning is carried out; through continuous model training and verification, the established target detection algorithm becomes more accurate and better targeted. This overcomes the high uncertainty of image capture in existing image-based foreign object identification methods, greatly improves identification accuracy, lets the model learn the features of hidden danger images, raises identification efficiency, and greatly reduces the false detection and missed detection rates.
Example 2
This embodiment provides a transformer substation foreign matter identification and positioning device based on deep learning, comprising:
foreign body image and patrol image analysis and extraction feature module: the system is used for collecting foreign body images, patrol images without foreign bodies of the transformer substation and patrol images with foreign bodies; segmenting the image according to different scenes in the image; extracting foreign matter, background and equipment state characteristics;
an image stitching module: the device is used for image splicing of the extracted background, the extracted equipment state and the extracted foreign matters, and simulating hidden danger images of the foreign matters intruding into the suspension equipment in the transformer substation in a key identification area of the images;
the image summarizing and labeling module: used for summarizing the simulated hidden danger images and the collected hidden danger images with hanging foreign objects and numbering them uniformly; for each hidden danger image, taking Xmin, Ymin, Xmax and Ymax of the pixel coordinates of each foreign object, wherein Xmin represents the minimum abscissa, Ymin the minimum ordinate, Xmax the maximum abscissa and Ymax the maximum ordinate, and marking the position of the foreign object with the rectangle bounded by the four straight lines x = Xmin, y = Ymin, x = Xmax and y = Ymax;
a model training verification module: the method comprises the steps of dividing all images into a training set, a verification set and a test set, and selecting a target detection algorithm based on a deep learning algorithm to train, verify and test the training set to obtain a target detection model meeting the precision requirement;
the image detection positioning module: the system is used for identifying shot daily patrol images and marking foreign body positions through the target detection model.
The transformer substation foreign matter identification and positioning device based on deep learning in the embodiment is the corresponding device in embodiment 1, and has the same beneficial effects as the transformer substation foreign matter identification and positioning method based on deep learning in embodiment 1.
Example 3
As shown in fig. 6, in this embodiment, inspection image recognition V1.0 software is developed using the target detection model described in embodiment 1 and trialled at an intelligently inspected 500 kV substation (East Island station); actual tests show that the hidden danger recognition accuracy is high and the message analysis is accurate.
The inspection image recognition V1.0 software is divided into three modules: a function module, an operation module and a task recording module. The layout of the software modules is shown in fig. 6, feature learning and updating in fig. 7, and a hidden danger image marked with foreign objects in fig. 8.
Through actual test, the performance indexes are as follows:
image recognition efficiency: 500 patrol images can be imported for single patrol image identification, software can synchronously identify hidden troubles of 10 patrol images, and the identification speed of each image is within 0.1 second/piece.
Hidden danger identification accuracy: software identification of floating object hanging hidden dangers is carried out on three inspection images of a transformer substation at intervals, 50 samples of abnormal images hung by floating objects are extracted, the identification accuracy of the floating objects at the intervals is over 80%, the identification hidden danger characteristic accuracy is 100%, and finally the identification accuracy of the floating objects is not less than 85% through statistics.
The message analysis is accurate: and comparing and checking the processed messages in the equipment hidden danger identification operation process to obtain the accuracy of the software processed message data of 100 percent, and the analysis of the hidden danger position and the time is accurate.
The benefits of the method and the software are as follows:
(1) Economic benefits: by using the software to identify hidden dangers of equipment with hanging floating objects, operating personnel can find abnormal images quickly and accurately, which removes a large amount of image-review workload. Manual identification of these hidden dangers takes three people working together a full day, whereas the software needs only about 5 minutes, greatly reducing the burden on front-line staff and saving a large amount of labour cost. Moreover, with a troubleshooting accuracy above 85%, the efficiency of screening for floating-object hanging hidden dangers is improved, equipment hidden dangers can be found promptly and accurately, further deterioration is avoided, reliable and safe operation of the equipment is ensured, a large amount of load loss and economic loss can be avoided, and considerable economic benefits are produced.
(2) Social benefits: software identification replaces traditional manual identification, reduces the missed detection and false detection rates of operators in identifying hidden dangers, realises fast and accurate early warning of equipment hidden dangers, improves the operators' grasp of the on-site condition of substation equipment, reduces the probability of accidents, better guarantees equipment operation safety, and ensures the safe and stable operation of the power grid.
It should be understood that the above-described embodiments of the present invention are merely examples given to clearly illustrate the present invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description, and it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (10)

1. A transformer substation foreign matter identification and positioning method based on deep learning is characterized by comprising the following steps:
s1: analyzing and extracting characteristics of the foreign body image and the patrol image: collecting foreign body images, patrol images without foreign bodies of the transformer substation and patrol images with foreign bodies; segmenting the image according to different scenes in the image; extracting foreign matter, background and equipment state characteristics;
s2: image splicing: image splicing is carried out on the extracted background, the extracted equipment state and the extracted foreign matters, and hidden danger images of the transformer substation, which are used for simulating the foreign matters to break into the suspension equipment, are carried out in key identification areas of the images;
s3: image summarization and labeling: summarizing the simulated hidden danger images and the collected hidden danger images with hanging foreign objects, and for each hidden danger image taking Xmin, Ymin, Xmax and Ymax of the pixel coordinates of each foreign object, wherein Xmin represents the minimum abscissa, Ymin the minimum ordinate, Xmax the maximum abscissa and Ymax the maximum ordinate, and marking the position of the foreign object with the rectangle bounded by the four straight lines x = Xmin, y = Ymin, x = Xmax and y = Ymax;
s4: model training and verification: dividing all images into a training set, a verification set and a test set, and selecting a target detection algorithm based on a deep learning algorithm to train, verify and test the training set to obtain a target detection model meeting the precision requirement;
s5: image detection and positioning: and identifying the shot daily patrol image through the target detection model and marking the position of the foreign matter.
2. The transformer substation foreign matter identification and positioning method based on deep learning of claim 1, wherein step S1 specifically comprises the following steps:
s11: image collection: sorting out the foreign objects that occur frequently and cause serious harm in substations and collecting images of them; collecting patrol images shot by the inspection robot, covering different equipment states, shooting angles and time periods; and also collecting images of existing foreign-object scenes in the substation and simulating foreign-object scene images;
s12: image preprocessing: carrying out binarization processing on the image according to a preset threshold value, setting points with gray values higher than the threshold value in the image as black, and setting other pixel points as white;
s13: image segmentation: comparing the elements in the image, analysing the similarity of adjacent pixels, grouping pixels that satisfy a similarity criterion into pixel sets, and identifying the positions of these pixel blocks through segmentation, thereby determining the distribution of equipment positions and the various background elements in the image and achieving image segmentation;
s14: hidden danger characteristic extraction: and respectively extracting background features, foreign body features and equipment state features of the image.
3. The transformer substation foreign matter identification and positioning method based on deep learning of claim 2, wherein in step S12, the specific steps are as follows:
in order to remove the influence of light spots, shadows and background differences among the images, the collected images are preprocessed by using a formula:
dst(x, y) = 0 (black),    if src(x, y) > thresh
dst(x, y) = 255 (white),  otherwise
in the formula, src(x, y) is the gray value before binarization, dst(x, y) is the gray value after binarization, and thresh is the preset threshold: pixels whose gray value is higher than the threshold are set to black, and all other pixels are set to white.
4. The transformer substation foreign object identification and positioning method based on deep learning of claim 2, wherein in step S13, the method further comprises: for the collected foreign object images, the number of the obtained foreign object images is increased by the methods of flipping, cropping, and scaling.
5. The deep learning-based substation foreign object identification and positioning method according to claim 4, wherein in step S13, the method of turning, clipping and scaling is as follows:
turning: rotating the image in 60-degree increments to obtain new images;
cutting: cropping the collected foreign object images, taking 30%, 60% and 90% of the image from the top, bottom, left and right respectively, so that 12 new cropped images are finally obtained from each foreign object image;
zooming: scaling the foreign object images to 30%, 50% and 150% of the original size respectively to obtain new images.
6. The transformer substation foreign matter identification and positioning method based on deep learning of claim 2, wherein step S14 specifically comprises the following steps:
s141: image background feature extraction, namely analysing and extracting the image features and distribution features of the identified patrol-image background elements;
s142: foreign object feature extraction, namely extracting features from the foreign object images obtained through collection and processing;
s143: equipment state feature extraction, namely analysing and extracting the image features of the images under different equipment states.
7. The transformer substation foreign object identification and positioning method based on deep learning of claim 1, wherein in step S3, the method further comprises: marking a label on each rectangle-marked foreign object in the image, wherein the label is any one or more of the following: honeycomb, bird nest, kite, balloon, plastic bag, leaves, plastic film and dust screen.
8. The transformer substation foreign matter identification and positioning method based on deep learning of claim 1, wherein step S4 specifically comprises the following steps:
s41: data division: dividing all images into a training set, a verification set and a test set, wherein the number accounts for 65%, 25% and 10% respectively;
s42: adjusting parameters in an algorithm: selecting a target detection algorithm based on a deep learning algorithm, and carrying out preliminary adjustment on parameters of the algorithm;
s43: training a model: training the algorithm on the basis of the training set images to obtain a target detection model of the foreign matter in the transformer substation;
s44: model verification: evaluating the accuracy of the target detection model on the verification set; if the accuracy does not meet the requirement, adjusting the parameters of the target detection algorithm, training the model again on the training set and evaluating it again on the verification set, and repeating this until the accuracy meets the requirement; then evaluating the accuracy of the model again on the test set; if it meets the requirement, keeping the model; if adjusting the algorithm parameters cannot make the accuracy meet the requirement, returning to step S1 to expand the samples and optimise their quality, and completing the training again according to the above flow until the model finally meets the accuracy requirement;
s45: model deployment: deploying the final target detection model obtained above to the developed Windows software for foreign object identification and positioning.
9. The transformer substation foreign matter identification and positioning method based on deep learning of claim 1, wherein step S5 specifically comprises the following steps:
s51: image importing: importing shot daily patrol images;
s52: image preprocessing and segmentation: carrying out binarization processing on the image and segmenting the image according to different scenes in the image;
s53: image detection: according to the characteristics extracted from the foreign body image, a trained target detection model is applied to detect foreign bodies in the key identification area in the image;
s54: foreign object positioning: after the position of a foreign object in the image is detected, taking Xmin, Ymin, Xmax and Ymax of its pixel coordinates and marking the position of the foreign object with the rectangle bounded by the four straight lines x = Xmin, y = Ymin, x = Xmax and y = Ymax;
s55: hidden danger alarming: raising an alarm for the patrol images in which foreign objects are detected, and returning the hidden danger image marked with the foreign objects.
10. A transformer substation foreign matter identification and positioning device based on deep learning, characterized by comprising:
foreign body image and patrol image analysis and extraction feature module: the system is used for collecting foreign body images, patrol images without foreign bodies of the transformer substation and patrol images with foreign bodies; segmenting the image according to different scenes in the image; extracting foreign matter, background and equipment state characteristics;
an image stitching module: the device is used for image splicing of the extracted background, the extracted equipment state and the extracted foreign matters, and simulating hidden danger images of the foreign matters intruding into the suspension equipment in the transformer substation in a key identification area of the images;
the image summarizing and labeling module: used for summarizing the simulated hidden danger images and the collected hidden danger images with hanging foreign objects, and for each hidden danger image taking Xmin, Ymin, Xmax and Ymax of the pixel coordinates of each foreign object, wherein Xmin represents the minimum abscissa, Ymin the minimum ordinate, Xmax the maximum abscissa and Ymax the maximum ordinate, and marking the position of the foreign object with the rectangle bounded by the four straight lines x = Xmin, y = Ymin, x = Xmax and y = Ymax;
a model training verification module: the method comprises the steps of dividing all images into a training set, a verification set and a test set, and selecting a target detection algorithm based on a deep learning algorithm to train, verify and test the training set to obtain a target detection model meeting the precision requirement;
the image detection positioning module: the system is used for identifying shot daily patrol images and marking foreign body positions through the target detection model.
CN202110686935.XA 2021-06-21 2021-06-21 Transformer substation foreign matter identification and positioning method and device based on deep learning Pending CN113298077A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110686935.XA CN113298077A (en) 2021-06-21 2021-06-21 Transformer substation foreign matter identification and positioning method and device based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110686935.XA CN113298077A (en) 2021-06-21 2021-06-21 Transformer substation foreign matter identification and positioning method and device based on deep learning

Publications (1)

Publication Number Publication Date
CN113298077A true CN113298077A (en) 2021-08-24

Family

ID=77328948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110686935.XA Pending CN113298077A (en) 2021-06-21 2021-06-21 Transformer substation foreign matter identification and positioning method and device based on deep learning

Country Status (1)

Country Link
CN (1) CN113298077A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210004590A1 (en) * 2019-01-14 2021-01-07 Sourcewater, Inc. Image processing of aerial imagery for energy infrastructure site status analysis
CN110175982A (en) * 2019-04-16 2019-08-27 浙江大学城市学院 A kind of defect inspection method based on target detection
CN110807353A (en) * 2019-09-03 2020-02-18 国网辽宁省电力有限公司电力科学研究院 Transformer substation foreign matter identification method, device and system based on deep learning
CN110986889A (en) * 2019-12-24 2020-04-10 国网河南省电力公司检修公司 High-voltage substation panoramic monitoring method based on remote sensing image technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Li Junfeng, "Image recognition of power equipment combining deep learning and random forest", High Voltage Engineering *
Ma Jingyi et al., "Recognition and localization of small-scale intruding targets based on improved Faster RCNN", Electric Power *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114283385A (en) * 2021-12-29 2022-04-05 华南理工大学 Foreign matter data generation method and terminal
CN114782828A (en) * 2022-06-22 2022-07-22 国网山东省电力公司高青县供电公司 Foreign matter detection system based on deep learning
CN115000879A (en) * 2022-08-03 2022-09-02 国网山东省电力公司高青县供电公司 Power transmission line inspection robot and inspection method based on image learning
CN115000879B (en) * 2022-08-03 2022-11-11 国网山东省电力公司高青县供电公司 Power transmission line inspection robot based on image learning and inspection method

Similar Documents

Publication Publication Date Title
CN110807353B (en) Substation foreign matter identification method, device and system based on deep learning
CN111967393B (en) Safety helmet wearing detection method based on improved YOLOv4
CN113298077A (en) Transformer substation foreign matter identification and positioning method and device based on deep learning
CN111091544B (en) Method for detecting breakage fault of side integrated framework of railway wagon bogie
CN109344753A (en) A kind of tiny fitting recognition methods of Aerial Images transmission line of electricity based on deep learning
CN110458798B (en) Vibration damper defect visual detection method, system and medium based on key point detection
CN111339883A (en) Method for identifying and detecting abnormal behaviors in transformer substation based on artificial intelligence in complex scene
CN103442209A (en) Video monitoring method of electric transmission line
CN105354589A (en) Method and system for intelligently identifying insulator crack in catenary image
CN111222478A (en) Construction site safety protection detection method and system
CN113903081A (en) Visual identification artificial intelligence alarm method and device for images of hydraulic power plant
CN110133443B (en) Power transmission line component detection method, system and device based on parallel vision
CN112183438B (en) Image identification method for illegal behaviors based on small sample learning neural network
CN111639530B (en) Method and system for detecting and identifying power transmission tower and insulator of power transmission line
CN111862065B (en) Power transmission line diagnosis method and system based on multitask deep convolutional neural network
CN112183472A (en) Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet
CN112541393A (en) Transformer substation personnel detection method and device based on deep learning
CN111091110A (en) Wearing identification method of reflective vest based on artificial intelligence
CN108898098A (en) Early stage video smoke detection method based on monitor supervision platform
CN111695493A (en) Method and system for detecting hidden danger of power transmission line
CN116229052A (en) Method for detecting state change of substation equipment based on twin network
CN111199250A (en) Transformer substation air switch state checking method and device based on machine learning
CN107704818A (en) A kind of fire detection system based on video image
CN113111728A (en) Intelligent identification method and system for power production operation risk in transformer substation
CN117475353A (en) Video-based abnormal smoke identification method and system

Legal Events

Code  Title
PB01  Publication
SE01  Entry into force of request for substantive examination
RJ01  Rejection of invention patent application after publication (application publication date: 20210824)