CN114842426B - Transformer substation equipment state monitoring method and system based on accurate alignment camera shooting

Transformer substation equipment state monitoring method and system based on accurate alignment camera shooting

Info

Publication number
CN114842426B
CN114842426B (application CN202210785448.3A)
Authority
CN
China
Prior art keywords
target
image
equipment
target equipment
axis
Prior art date
Legal status
Active
Application number
CN202210785448.3A
Other languages
Chinese (zh)
Other versions
CN114842426A (en)
Inventor
黄汉生
甘文琪
陈方正
黄德华
刘永浩
李俊华
甘焯坤
覃肇安
吴立帆
陈壹锋
车濡均
Current Assignee
Guangdong Power Grid Co Ltd
Zhaoqing Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Zhaoqing Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhaoqing Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority to CN202210785448.3A
Publication of CN114842426A
Application granted
Publication of CN114842426B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/66Analysis of geometric attributes of image moments or centre of gravity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a substation equipment state monitoring method and system based on accurate alignment camera shooting, and relates to the technical field of substation monitoring.

Description

Transformer substation equipment state monitoring method and system based on accurate alignment camera shooting
Technical Field
The invention relates to the technical field of substation monitoring, in particular to a substation equipment state monitoring method and system based on accurate alignment camera shooting.
Background
Existing transformer substations use advanced camera monitoring systems to carry out automated patrols and intelligent analysis according to preset routes and preset points, but over time the camera pan-tilt wears to a certain degree, so the camera can no longer accurately align with the target. To solve this problem, one prior-art approach aligns substation target equipment with a camera that combines a wide-field-of-view lens and a telephoto zoom lens. The combined camera is mounted on a three-axis stability-augmented pan-tilt, the pan-tilt is mounted on an unmanned aerial vehicle, and the unmanned aerial vehicle patrols according to a preset route and preset points; at each patrol point the wide-field lens finds the target to be detected by means of an image processing module, the telephoto zoom lens adjusts its angle through the pan-tilt and zooms to align with the target and obtain a detailed image, and the image processing module identifies the target. This aerial inspection method improves inspection efficiency, but it is costly, is easily affected by preset-route planning, GPS positioning deviation and severe weather, and has low stability. Another prior-art approach uses an inspection robot to patrol the substation along a preset route: when the pan-tilt of the inspection robot reaches a preset position, the equipment to be inspected is photographed to obtain a first picture, the first picture is partitioned according to a preset partition scheme, the target is located by computing the similarity between each partition and an image of the target to be inspected, the camera angle is adjusted according to this localization so that the target lies at the centre of the image, and a detail image of the target is then captured at a second focal length. This method is affected by deviations in the robot's positioning and travelling posture, so the target to be detected is offset considerably within the imaging area and accuracy suffers; moreover, the target localization requires the images to be partitioned in advance according to a preset scheme, the partition scheme must be prepared manually, the partitions differ from scene to scene so the scheme migrates poorly, and determining the target position by computing the similarity between partition images and a target reference image is limited by the partitioning and therefore not highly accurate.
The invention provides a substation equipment state monitoring method and system based on accurate alignment camera shooting, aiming at the problems of the prior art: the unmanned-aerial-vehicle inspection mode is costly, is easily affected by preset-route planning, GPS positioning deviation and severe weather, and has low stability; the inspection-robot mode of patrolling the substation along a preset route is easily affected by deviations in the robot's positioning and travelling posture; and the need to partition images in advance affects the positioning accuracy of the target equipment.
Disclosure of Invention
The invention provides a substation equipment state monitoring method and system based on accurate alignment camera shooting, which are used for solving the technical problems that existing substation equipment state monitoring methods are costly, are easily affected by preset-route planning, GPS positioning deviation and severe weather, have low stability, are easily affected by deviations in robot positioning and travelling posture, and suffer reduced target-equipment positioning accuracy because images must be partitioned in advance.
In view of this, a first aspect of the present invention provides a substation equipment state monitoring method based on precise alignment camera shooting, including:
the method comprises the following steps of building a camera monitoring system for monitoring the state of the transformer substation equipment, wherein the camera monitoring system comprises a three-axis stability-increasing cradle head, a high-definition night vision network camera mounted on the three-axis stability-increasing cradle head and a background processing terminal in communication connection with the high-definition night vision network camera;
the method comprises the steps that a background processing terminal acquires images of transformer substation equipment through a high-definition night vision network camera to construct a first image data set and a second image data set, wherein image samples in the first image data set are image samples which are acquired by the high-definition night vision network camera and contain the transformer substation equipment after image preprocessing, and image samples in the second image data set are image samples obtained by cutting transformer substation equipment areas in the image samples in the first image data set;
the background processing terminal respectively trains a first Efficientdet network model and a second Efficientdet network model by using a first image data set and a second image data set to obtain an equipment body detection Efficientdet model for detecting a target position and an equipment state detection Efficientdet model for detecting a target state;
the background processing terminal controls the high-definition night vision network camera to collect images of preset points according to a preset patrol strategy;
the background processing terminal inputs the image of the preset point into the equipment body detection Efficientdet model to detect whether target equipment exists in the image of the preset point; if not, the image of the preset point is collected again; if yes, the central point of the target equipment is located according to the number of target devices present in the image of the preset point;
the background processing terminal calculates the x-axis position adjustment amount and the y-axis position adjustment amount of the target equipment according to the pixels of the central point of the target equipment;
the background processing terminal adjusts the target equipment to the central position of the image of the preset point according to the x-axis position adjustment amount and the y-axis position adjustment amount to obtain a first target image;
the background processing terminal calculates the ratio of the target equipment in the first target image according to the width and the height of the target equipment in the first target image;
the background processing terminal calculates the focal length variation of the camera according to the corresponding relation coefficient of the ratio and the pixel zoom coefficient, adjusts the focal length of the camera according to the focal length variation of the camera and automatically focuses the first target image to obtain a second target image;
and the background processing terminal inputs the second target image into the equipment state detection Efficientdet model to identify the state of the target equipment.
Optionally, after the background processing terminal inputs the second target image into the equipment state detection Efficientdet model to identify the state of the target equipment, the method further includes:
and judging whether the target equipment has abnormal state, if so, sending out abnormal state warning information of the target equipment, wherein the abnormal state warning information of the target equipment comprises the name and the state of the target equipment.
Optionally, the step in which the background processing terminal inputs the image of the preset point into the equipment body detection Efficientdet model to detect whether target equipment exists in the image of the preset point, collects the image of the preset point again if not, and locates the central point of the target equipment according to the number of target devices present in the image of the preset point if yes, includes:
the background processing terminal inputs the image of the preset point into the equipment body detection Efficientdet model to detect whether target devices exist in the image of the preset point; if not, the image of the preset point is collected again; if yes, it is judged whether one and only one target device exists in the image of the preset point; if so, the central point of the target device is located directly; otherwise, the confidence of each target device is calculated; when only one target device has a confidence greater than 0.8, its central point is located directly, and when more than one target device has a confidence greater than 0.8, the area of each such target device is calculated and the target device with the largest area is selected for locating the central point.
Optionally, the distance between the stationing position of the camera and the monitored equipment is calculated from the stationing position and the position of the monitored equipment as:

d = √((x₁ − x₂)² + (y₁ − y₂)²)

wherein (x₁, y₁) is the stationing position of the camera, (x₂, y₂) is the position of the monitored equipment, and d is the distance between the stationing position and the monitored equipment.
Optionally, the calculating, by the background processing terminal, the x-axis position adjustment amount and the y-axis position adjustment amount of the target device according to the pixel of the central point of the target device includes:
the background processing terminal calculates the central pixel deviation according to the pixel of the central point of the target device, the calculation formulas being:

Δx = xt − xc

Δy = yt − yc

wherein Δx is the x-axis central pixel deviation, Δy is the y-axis central pixel deviation, xt is the x-axis pixel of the central point of the target device, yt is the y-axis pixel of the central point of the target device, xc is the x-axis pixel of the central position of the image of the preset point, and yc is the y-axis pixel of the central position of the image of the preset point;

the background processing terminal calculates the x-axis position adjustment amount of the target device from the x-axis central pixel deviation Δx and a first correspondence coefficient:

Px = k₁ · Δx

wherein Px is the x-axis position adjustment amount of the target device and k₁ is the first correspondence coefficient, which is determined by the focal length f of the camera during shooting;

the background processing terminal calculates the y-axis position adjustment amount of the target device from the y-axis central pixel deviation Δy and a second correspondence coefficient:

Py = k₂ · Δy

wherein Py is the y-axis position adjustment amount of the target device and k₂ is the second correspondence coefficient, which is likewise determined by the focal length f of the camera during shooting.
optionally, the calculating, by the background processing terminal, a ratio of the target device in the first target image according to the width and the height of the target device in the first target image includes:
the background processing terminal inputs the first target image into the equipment body detection Efficientdet model to detect the width w and the height h of the target device;
the background processing terminal calculates a height ratio and a width ratio from the width w and the height h, and takes the maximum of the two as the occupation ratio of the target device in the image, the height ratio and the width ratio being calculated as:

a = h / H

b = w / W

wherein a is the height ratio, b is the width ratio, H is the height of the first target image, and W is the width of the first target image.
Optionally, the calculation formula of the focal length variation of the camera is as follows:

Δf = k₃ / c

wherein Δf is the focal length variation of the camera, k₃ is the correspondence coefficient between the occupation ratio and the pixel zoom factor, and c is the occupation ratio of the target device in the image.
the invention provides a substation equipment state monitoring system based on accurate alignment camera shooting, which comprises a triaxial stability augmentation holder, a high-definition night vision network camera mounted on the triaxial stability augmentation holder and a background processing terminal in communication connection with the high-definition night vision network camera;
the triaxial stability augmentation holder is used for carrying a high-definition night vision network camera and adjusting the visual angle of the high-definition night vision network camera;
the high-definition night vision network camera is used for inspecting and collecting images of the transformer substation equipment;
the background processing terminal is used for:
acquiring an image of the transformer substation equipment acquired by a high-definition night vision network camera, and constructing a first image data set and a second image data set, wherein an image sample in the first image data set is an image sample which is acquired by the high-definition night vision network camera and contains the transformer substation equipment after image preprocessing, and an image sample in the second image data set is an image sample obtained by cutting a transformer substation equipment area in the image sample in the first image data set;
respectively training a first Efficientdet network model and a second Efficientdet network model by using a first image data set and a second image data set to obtain an equipment body detection Efficientdet model for detecting a target position and an equipment state detection Efficientdet model for detecting a target state;
controlling a high-definition night vision network camera to acquire images of preset points according to a preset patrol strategy;
judging whether target equipment exists in the image of the preset point, if not, re-collecting the image of the preset point, and if so, positioning the central point of the target equipment according to the number of the target equipment existing in the image of the preset point;
calculating the x-axis position adjustment amount and the y-axis position adjustment amount of the target equipment according to the pixels of the central point of the target equipment;
adjusting the target equipment to the central position of the image of the preset point according to the x-axis position adjustment amount and the y-axis position adjustment amount to obtain a first target image;
inputting the first target image into the equipment body detection Efficientdet model to detect the width and the height of the target device and obtain the occupation ratio of the target device in the image;
calculating the focal length variation of the camera according to the corresponding relation coefficient of the ratio and the pixel zoom coefficient, adjusting the focal length of the camera according to the focal length variation of the camera, and automatically focusing the first target image to obtain a second target image;
and inputting the second target image into the equipment state detection Efficientdet model to identify the state of the target equipment.
Optionally, the background processing terminal is further configured to:
after the second target image is input into the equipment state detection Efficientdet model to identify the state of the target equipment, judging whether the target equipment is in an abnormal state; if so, sending out abnormal-state alarm information of the target equipment, wherein the abnormal-state alarm information comprises the name and the state of the target equipment.
Optionally, determining whether target equipment exists in the image of the preset point, if not, re-acquiring the image of the preset point, and if so, positioning a central point of the target equipment according to the number of the target equipment existing in the image of the preset point, including:
judging whether target equipment exists in the image of the preset point, if not, re-acquiring the image of the preset point, if so, judging whether the image of the preset point has only one target equipment, if so, directly positioning the central point of the target equipment, otherwise, calculating the confidence coefficient of each target equipment, directly positioning the central point of the target equipment when the confidence coefficient is greater than 0.8 and only one target equipment exists, and when more than one target equipment with the confidence coefficient greater than 0.8 is available, calculating the area of each target equipment, and taking the target equipment with the largest area to position the central point of the target equipment.
Optionally, calculating an x-axis position adjustment amount and a y-axis position adjustment amount of the target device according to the pixels of the center point of the target device includes:
calculating the central pixel deviation according to the pixels of the central point of the target equipment, wherein the calculation formula is as follows:
Δx = xt − xc

Δy = yt − yc

wherein Δx is the x-axis central pixel deviation, Δy is the y-axis central pixel deviation, xt is the x-axis pixel of the central point of the target device, yt is the y-axis pixel of the central point of the target device, xc is the x-axis pixel of the central position of the image of the preset point, and yc is the y-axis pixel of the central position of the image of the preset point;

the background processing terminal calculates the x-axis position adjustment amount of the target device from the x-axis central pixel deviation Δx and the first correspondence coefficient:

Px = k₁ · Δx

wherein Px is the x-axis position adjustment amount of the target device and k₁ is the first correspondence coefficient, which is determined by the focal length f of the camera during shooting;

the background processing terminal calculates the y-axis position adjustment amount of the target device from the y-axis central pixel deviation Δy and the second correspondence coefficient:

Py = k₂ · Δy

wherein Py is the y-axis position adjustment amount of the target device and k₂ is the second correspondence coefficient, which is likewise determined by the focal length f of the camera during shooting.
according to the technical scheme, the transformer substation equipment state monitoring method based on accurate alignment camera shooting has the following advantages:
the transformer substation equipment state monitoring method based on accurate alignment camera shooting provided by the invention comprises the steps of constructing two image data sets, respectively training two Efficientdet network models to obtain an equipment body detection Efficientdet model and an equipment state detection Efficientdet model, detecting the position of target equipment under a wide angle condition through the equipment body detection Efficientdet model for carrying out angle adjustment and focal length adjustment on an alignment target, identifying the state of the target equipment through the equipment state detection Efficientdet model, not partitioning a scene image, adapting to a new scene, having good mobility and high accuracy, reducing the cost by using a high-definition night vision network camera fixed by three-axis stabilization compared with a mode of carrying the camera and a patrol robot by an unmanned aerial vehicle, having no deviation problem caused by positioning and traveling postures, solving the problems that the existing transformer substation equipment state monitoring method is high in cost, easy to be influenced by positioning deviation of a preset course and extreme weather such as storm and rain, and the problem that the positioning deviation caused by positioning and traveling postures of a tripod head is easily influenced by the high positioning and the problem of the positioning and traveling posture of a robot.
The substation equipment state monitoring system based on the accurate alignment camera shooting is used for executing the substation equipment state monitoring method based on the accurate alignment camera shooting, the principle and the achieved technical effect of the substation equipment state monitoring system based on the accurate alignment camera shooting are the same as those of the substation equipment state monitoring method based on the accurate alignment camera shooting, and the details are not repeated herein.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other related drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of a substation equipment state monitoring method based on accurate alignment camera shooting according to the present invention;
fig. 2 is a schematic diagram of data processing logic of a background processing terminal of the substation equipment state monitoring method based on accurate alignment camera shooting provided in the present invention;
fig. 3 is a schematic structural diagram of a substation equipment state monitoring system based on accurate alignment camera shooting provided in the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
For easy understanding, referring to fig. 1, an embodiment of a substation device status monitoring method based on accurate alignment camera shooting is provided in the present invention, and includes:
101, a camera monitoring system for monitoring the state of the substation equipment is built, wherein the camera monitoring system comprises a triaxial stability-increasing cradle head, a high-definition night vision network camera installed on the triaxial stability-increasing cradle head and a background processing terminal in communication connection with the high-definition night vision network camera.
It should be noted that, in the embodiment of the present invention, a camera monitoring system for monitoring the state of a substation device is first built, where the camera monitoring system includes a triaxial stability enhancement cradle head, a high-definition night-vision network camera mounted on the triaxial stability enhancement cradle head, and a background processing terminal in communication connection with the high-definition night-vision network camera, and the triaxial stability enhancement cradle head carries the high-definition night-vision network camera and supports adjustment of a viewing angle of the high-definition night-vision network camera. The high-definition night vision network camera is arranged at a preset shooting position in the transformer substation through the triaxial stability-increasing cradle head and is used for inspecting and collecting images of transformer substation equipment. The background processing terminal is used for acquiring images acquired by the high-definition night vision network camera and performing preset processing analysis on the images.
102, the background processing terminal acquires images of the transformer substation equipment through the high-definition night vision network camera and constructs a first image data set and a second image data set, wherein image samples in the first image data set are image samples which are acquired by the high-definition night vision network camera and contain the transformer substation equipment after image preprocessing, and image samples in the second image data set are image samples obtained by cutting transformer substation equipment areas in the image samples in the first image data set.
It should be noted that after the background processing terminal acquires the images of the substation equipment collected by the high-definition night vision network camera, a first image data set and a second image data set are constructed. An image sample in the first image data set is an image sample containing substation equipment, collected by the high-definition night vision network camera and subjected to image preprocessing; an image sample in the second image data set is obtained by cutting the substation equipment region out of an image sample of the first image data set. The high-definition night vision network camera collects images of the substation equipment; after data cleaning, clearly identifiable images with a correct focal length are screened out and labelled, and the position, name and state of the equipment in each image are annotated to form the image samples of the first image data set. The equipment regions in the image samples of the first image data set are then cut out to obtain new image samples, which serve as the image samples of the second image data set; these samples are annotated with the equipment name and state.
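For illustration only, the following Python sketch shows one way the second image data set could be derived from the first by cropping the annotated equipment regions; the annotation file layout, field names and function name are assumptions of this sketch and are not prescribed by the embodiment.

```python
# Illustrative sketch only: cutting the annotated equipment regions out of the
# first image data set to form the second image data set. The annotation layout
# ("<image>.json" with an "objects" list holding "box", "name" and "state") is
# an assumption, not part of the patented method.
import json
from pathlib import Path
from PIL import Image

def build_second_dataset(first_dir: str, second_dir: str) -> None:
    src, dst = Path(first_dir), Path(second_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for ann_path in src.glob("*.json"):
        ann = json.loads(ann_path.read_text())
        img = Image.open(src / ann["image"])
        for i, obj in enumerate(ann["objects"]):
            x1, y1, x2, y2 = obj["box"]                 # annotated equipment position
            crop = img.crop((x1, y1, x2, y2))           # keep only the equipment region
            name = f'{ann_path.stem}_{i}_{obj["name"]}_{obj["state"]}.jpg'
            crop.save(dst / name)                       # sample keeps its name/state labels
```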
103, the background processing terminal respectively trains a first Efficientdet network model and a second Efficientdet network model by using the first image data set and the second image data set to obtain an equipment body detection Efficientdet model for detecting the target position and an equipment state detection Efficientdet model for detecting the target state.
It should be noted that the Efficientdet network model is a one-stage deep-learning object detection algorithm; compared with two-stage algorithms it detects faster while maintaining good recognition accuracy. The image samples in the first image data set and in the second image data set are each divided into an 80% training set, a 10% test set and a 10% validation set, and the first and second Efficientdet network models are trained respectively; after training, the equipment body detection Efficientdet model for detecting the target position and the equipment state detection Efficientdet model for detecting the target state are obtained.
Specifically, the first Efficientdet network model is trained with the first image data set and the second Efficientdet network model with the second image data set. The input picture size of both models is set to 600 × 600 and the maximum number of training rounds is 50, with a test performed after each round. The learning rate is adjusted by multiplying the current learning rate by 0.1 when a preset number of iterations is reached; here the learning rate is reduced after the 10th and the 15th training rounds respectively. Training is stopped when the loss value levels off, finally yielding the equipment body detection Efficientdet model and the equipment state detection Efficientdet model.
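As a minimal sketch of the training schedule just described (80/10/10 split, 600 × 600 inputs, at most 50 rounds, learning rate × 0.1 after rounds 10 and 15, a test after every round), the following assumes a PyTorch-style EfficientDet implementation whose forward pass returns the loss; the model, data loaders and evaluate() helper are placeholders assumed to exist elsewhere.

```python
# Sketch of the data split and training loop under the schedule described above.
import random
import torch
from torch.optim.lr_scheduler import MultiStepLR

def split_80_10_10(samples, seed=0):
    """80% training / 10% test / 10% validation split used for both data sets."""
    random.Random(seed).shuffle(samples)
    n = len(samples)
    return samples[: int(0.8 * n)], samples[int(0.8 * n): int(0.9 * n)], samples[int(0.9 * n):]

def train_detector(model, train_loader, test_loader, device="cuda"):
    model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
    scheduler = MultiStepLR(optimizer, milestones=[10, 15], gamma=0.1)   # LR x 0.1 twice
    for epoch in range(50):                     # at most 50 training rounds
        model.train()
        for images, targets in train_loader:    # images already resized to 600 x 600
            loss = model(images.to(device), targets)   # assumed: forward returns the loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()
        evaluate(model, test_loader)            # test once per round (assumed helper)
```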
The trained models are tested on the test sets: the equipment body detection Efficientdet model is tested with the test set split from the first image data set, and the equipment state detection Efficientdet model with the test set split from the second image data set. Evaluation indices such as recall and accuracy are analysed, parameters are adjusted for missed detections and false alarms found on the test set, and training is repeated until the evaluation indices reach a satisfactory level.
Because the target devices to be detected have different sizes in different task scenarios, 9 anchors with different sizes and aspect ratios are clustered from each training data set before the Efficientdet network models are trained. (In a target detection task the input image is passed through a backbone network to obtain a feature map, and every pixel on the feature map is an anchor point; equivalently, in the sliding-window view, the mapping point of the current sliding window back in the original pixel space is called an anchor. Simply put, an RPN (Region Proposal Network) generates a set of target frames with preset areas and aspect ratios for each position, i.e. each anchor, by sliding a window over the shared feature map; the 9 initial anchors comprise three areas (128×128, 256×256, 512×512), each with three aspect ratios (1:1, 1:2, 2:1).) Clustering the anchors to match the training data helps the models handle targets of different sizes.
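For illustration, a sketch of clustering 9 anchor sizes from the ground-truth box widths and heights of a training data set (k-means-style clustering under an IoU distance) follows; the patent does not specify the exact clustering procedure, so this is only an assumed example.

```python
# Assumed illustration of clustering 9 anchors (width, height) from a training
# data set's ground-truth boxes, using IoU between (w, h) pairs as similarity.
import numpy as np

def iou_wh(boxes: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """IoU between (w, h) pairs, assuming boxes and anchors share one corner."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def cluster_anchors(boxes_wh: np.ndarray, k: int = 9, iters: int = 100) -> np.ndarray:
    anchors = boxes_wh[np.random.choice(len(boxes_wh), k, replace=False)].astype(float)
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes_wh, anchors), axis=1)     # best-matching anchor
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = np.median(boxes_wh[assign == j], axis=0)
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]     # sorted by area
```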
And step 104, the background processing terminal controls the high-definition night vision network camera to collect images of preset points according to a preset patrol strategy.
It should be noted that the high-definition night vision network camera acquires an image of a preset point according to a preset patrol strategy (for example, rotates at an appointed angle of 30 °), and transmits the acquired image of the preset point to the background processing terminal.
And 105, the background processing terminal inputs the image of the preset point into the equipment body detection Efficientdet model to detect whether target equipment exists in the image of the preset point or not, if not, the image of the preset point is collected again, and if yes, the central point of the target equipment is positioned according to the number of the target equipment existing in the image of the preset point.
It should be noted that the background processing terminal may perform target device identification on the image of the preset point through the device body detection Efficientdet model, as shown in fig. 2, if the target device does not exist in the image of the preset point, the high-definition night vision network camera is controlled to continue to collect the image of the preset point according to the preset patrol strategy, and if the target device exists, the central point of the target device is located according to the number of the target devices existing in the image of the preset point. Specifically, whether there is only one target device in the image of the preset point is judged, if yes, the center point of the target device is directly positioned, otherwise, the confidence coefficient of each target device is calculated, when there is only one target device with the confidence coefficient larger than 0.8, the center point of the target device is directly positioned, and when there is more than one target device with the confidence coefficient larger than 0.8, the area of each target device is calculated, wherein the calculation mode is as follows:
S = w × h

wherein S is the area of the target device, w is the width of the target device, and h is the height of the target device.
The central point of the target device with the largest area is then located.
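To make the selection rule concrete, a short Python sketch follows; the detection tuple layout (centre x, centre y, width, height, confidence) and the fallback used when no detection exceeds confidence 0.8 are assumptions of this sketch.

```python
# Sketch of choosing which detected target device to centre on: a single
# detection is used directly; otherwise detections with confidence > 0.8 are
# kept and, if more than one remains, the one with the largest area S = w * h
# is selected. Tuple layout (cx, cy, w, h, confidence) is an assumption.
from typing import List, Optional, Tuple

Detection = Tuple[float, float, float, float, float]    # cx, cy, w, h, confidence

def select_target(detections: List[Detection]) -> Optional[Detection]:
    if not detections:
        return None                                      # no target: re-acquire the image
    if len(detections) == 1:
        return detections[0]                             # exactly one target device
    confident = [d for d in detections if d[4] > 0.8]    # confidence threshold from the text
    if len(confident) == 1:
        return confident[0]
    candidates = confident or detections                 # fallback is an assumption
    return max(candidates, key=lambda d: d[2] * d[3])    # largest area S = w * h

def centre_point(det: Detection) -> Tuple[float, float]:
    return det[0], det[1]                                # pixel centre of the target device
```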
And 106, calculating the x-axis position adjustment amount and the y-axis position adjustment amount of the target equipment by the background processing terminal according to the pixel of the central point of the target equipment.
After determining the target device of the image of the preset point and locating the center point of the target device, the target device needs to be adjusted to the center position of the image of the preset point. Therefore, the x-axis position adjustment amount and the y-axis position adjustment amount of the target device need to be calculated, and the target device is adjusted to the image center position of the preset point according to the x-axis position adjustment amount and the y-axis position adjustment amount of the target device.
Specifically, the central pixel deviation between the pixel of the central point of the target device detected by the equipment body detection Efficientdet model and the pixel of the central point of the image of the preset point is calculated as:

Δx = xt − xc

Δy = yt − yc

wherein Δx is the x-axis central pixel deviation, Δy is the y-axis central pixel deviation, xt is the x-axis pixel of the central point of the target device, yt is the y-axis pixel of the central point of the target device, xc is the x-axis pixel of the central position of the image of the preset point, and yc is the y-axis pixel of the central position of the image of the preset point.

The background processing terminal calculates the x-axis position adjustment amount of the target device from the x-axis central pixel deviation Δx and the first correspondence coefficient:

Px = k₁ · Δx

wherein Px is the x-axis position adjustment amount of the target device and k₁ is the first correspondence coefficient, which is determined by the focal length f of the camera during shooting.

The background processing terminal calculates the y-axis position adjustment amount of the target device from the y-axis central pixel deviation Δy and the second correspondence coefficient:

Py = k₂ · Δy

wherein Py is the y-axis position adjustment amount of the target device and k₂ is the second correspondence coefficient, which is likewise determined by the focal length f of the camera during shooting.
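A minimal sketch of this step follows, assuming the linear relation reconstructed above and that the correspondence coefficients scale inversely with the focal length; the patent's exact coefficient expressions are not reproduced here, so k1_per_f and k2_per_f are hypothetical calibration parameters.

```python
# Sketch of computing the centre-pixel deviation and the pan/tilt position
# adjustment. The inverse dependence of the correspondence coefficients on the
# focal length is an assumption for illustration only.
from typing import Tuple

def centre_deviation(target_cx: float, target_cy: float,
                     image_w: int, image_h: int) -> Tuple[float, float]:
    dx = target_cx - image_w / 2.0        # x-axis centre pixel deviation
    dy = target_cy - image_h / 2.0        # y-axis centre pixel deviation
    return dx, dy

def position_adjustment(dx: float, dy: float, focal_length: float,
                        k1_per_f: float, k2_per_f: float) -> Tuple[float, float]:
    k1 = k1_per_f / focal_length          # longer focal length -> smaller move per pixel
    k2 = k2_per_f / focal_length
    return k1 * dx, k2 * dy               # x-axis and y-axis position adjustment amounts
```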
And step 107, the background processing terminal adjusts the target device to the central position of the image of the preset point according to the x-axis position adjustment amount and the y-axis position adjustment amount to obtain a first target image.
It should be noted that, by adjusting the target device to the center position of the image of the preset point according to the x-axis position adjustment amount and the y-axis position adjustment amount, the first target image and the center point position information and the size information (the height and the width of the target device in pixels) of the target device in the first target image can be obtained.
And step 108, the background processing terminal calculates the ratio of the target equipment in the first target image according to the width and the height of the target equipment in the first target image.
It should be noted that the height ratio and the width ratio are calculated from the width w and the height h of the target device in the first target image, and the maximum of the two is taken as the occupation ratio of the target device in the image. The height ratio and the width ratio are calculated as:

a = h / H

b = w / W

wherein a is the height ratio, b is the width ratio, H is the height of the first target image, and W is the width of the first target image.

The maximum of the height ratio and the width ratio is taken as the occupation ratio of the target device in the image, namely:

c = max(a, b)

wherein c is the occupation ratio of the target device in the image.
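The occupation-ratio computation above translates directly into a short helper:

```python
# Direct transcription of the occupation-ratio formulas above:
# a = h / H, b = w / W, c = max(a, b).
def occupation_ratio(w: float, h: float, W: float, H: float) -> float:
    a = h / H          # height ratio of the target device in the first target image
    b = w / W          # width ratio
    return max(a, b)   # occupation ratio c of the target device in the image
```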
And step 109, the background processing terminal calculates the focal length variation of the camera according to the correspondence coefficient between the occupation ratio and the pixel zoom factor, adjusts the focal length of the camera according to the focal length variation and automatically focuses the first target image to obtain a second target image.
It should be noted that the focal length variation Δf of the camera is calculated from the occupation ratio c of the target device in the image and the correspondence coefficient k₃ between the occupation ratio and the pixel zoom factor. The calculation formula is as follows:

Δf = k₃ / c

wherein Δf is the focal length variation of the camera, k₃ is the correspondence coefficient between the occupation ratio and the pixel zoom factor, and c is the occupation ratio of the target device in the image.

The focal length of the camera is adjusted according to the focal length variation Δf and the first target image is automatically focused to obtain a clear second target image.
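A sketch of the zoom step follows; the relation Δf = k₃ / c mirrors the reconstruction used above (the original formula is not reproduced in this text), and the camera control methods are hypothetical.

```python
# Sketch of the zoom step: derive the focal-length variation from the occupation
# ratio and drive the camera. The relation delta_f = k3 / c and the camera
# control methods are assumptions for illustration only.
def focal_length_variation(c: float, k3: float) -> float:
    return k3 / c                                             # smaller ratio -> larger zoom-in change

def zoom_to_target(camera, c: float, k3: float):
    delta_f = focal_length_variation(c, k3)
    camera.set_focal_length(camera.focal_length + delta_f)   # hypothetical camera API
    camera.autofocus()                                        # focus to get a clear image
    return camera.capture()                                   # second target image
```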
And step 110, the background processing terminal inputs the second target image into the equipment state detection Efficientdet model to identify the state of the target equipment.
It should be noted that inputting the second target image into the equipment state detection Efficientdet model yields the state information of the target equipment in the second target image, thereby realizing state monitoring of the target equipment.
In one embodiment, after the state of the target device is identified, whether the target device has an abnormal state may be determined, and if so, abnormal state warning information of the target device is sent out to notify a manager to perform review and maintenance, where the abnormal state warning information of the target device includes the name and the state of the target device.
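Finally, an assumed end-to-end illustration of state recognition and the abnormal-state alert; the model output format, the example state labels and the notification channel are all hypothetical.

```python
# Assumed illustration of the final step: recognise the equipment state from the
# second target image and raise an alert containing the name and state when the
# state is abnormal. The model output format, the example labels and the notify
# callback are hypothetical.
ABNORMAL_STATES = {"oil_leak", "rust", "abnormal_open"}       # example labels only

def monitor_state(state_model, second_target_image, notify) -> tuple:
    name, state = state_model.predict(second_target_image)   # assumed (name, state) output
    if state in ABNORMAL_STATES:
        notify(f"Abnormal equipment state: {name} -> {state}")   # alert with name and state
    return name, state
```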
The substation equipment state monitoring method based on accurate alignment camera shooting provided by the embodiment of the invention constructs two image data sets and trains two Efficientdet network models with them to obtain an equipment body detection Efficientdet model and an equipment state detection Efficientdet model. The equipment body detection Efficientdet model detects the position of the target equipment under wide-angle conditions so that the angle and focal length can be adjusted to align with the target, and the equipment state detection Efficientdet model identifies the state of the target equipment. The scene image does not need to be partitioned, so the method adapts to new scenes, migrates well and is highly accurate. Compared with carrying a camera on an unmanned aerial vehicle or using an inspection robot, a high-definition night vision network camera fixed on a three-axis stability-augmented pan-tilt reduces cost, offers high stability, is not easily affected by preset-route planning, GPS positioning deviation or severe weather, and has no deviation caused by positioning and travelling posture. The method therefore solves the problems that existing substation equipment state monitoring methods are costly, are easily affected by preset-route planning, GPS positioning deviation and severe weather, have low stability, are easily affected by deviations in robot positioning and travelling posture, and suffer reduced target-equipment positioning accuracy because images must be partitioned in advance.
For easy understanding, please refer to fig. 3, an embodiment of a substation equipment state monitoring system based on accurate alignment camera shooting is further provided in the present invention, including a triaxial stability increasing pan-tilt, a high-definition night vision network camera mounted on the triaxial stability increasing pan-tilt, and a background processing terminal in communication connection with the high-definition night vision network camera;
the three-axis stability augmentation holder is used for carrying a high-definition night vision network camera and adjusting the visual angle of the high-definition night vision network camera;
the high-definition night vision network camera is used for inspecting and collecting images of the transformer substation equipment;
the background processing terminal is used for:
acquiring an image of the transformer substation equipment acquired by a high-definition night vision network camera, and constructing a first image data set and a second image data set, wherein an image sample in the first image data set is an image sample which is acquired by the high-definition night vision network camera and contains the transformer substation equipment after image preprocessing, and an image sample in the second image data set is an image sample obtained by cutting a transformer substation equipment area in the image sample in the first image data set;
respectively training a first Efficientdet network model and a second Efficientdet network model by using a first image data set and a second image data set to obtain an equipment body detection Efficientdet model for detecting a target position and an equipment state detection Efficientdet model for detecting a target state;
controlling a high-definition night vision network camera to acquire images of preset points according to a preset patrol strategy;
judging whether target equipment exists in the image of the preset point, if not, re-acquiring the image of the preset point, and if so, positioning the central point of the target equipment according to the number of the target equipment existing in the image of the preset point;
calculating the x-axis position adjustment amount and the y-axis position adjustment amount of the target equipment according to the pixels of the central point of the target equipment;
adjusting the target equipment to the central position of the image of the preset point according to the x-axis position adjustment amount and the y-axis position adjustment amount to obtain a first target image;
inputting the first target image into the equipment body detection Efficientdet model to detect the width and the height of the target device and obtain the occupation ratio of the target device in the image;
calculating the focal length variation of the camera according to the corresponding relation coefficient of the ratio and the pixel zoom coefficient, adjusting the focal length of the camera according to the focal length variation of the camera, and automatically focusing the first target image to obtain a second target image;
and inputting the second target image into the equipment state detection Efficientdet model to identify the state of the target equipment.
The background processing terminal is further used for:
after the second target image is input into the equipment state detection Efficientdet model to identify the state of the target equipment, judging whether the target equipment is in an abnormal state; if so, sending out abnormal-state alarm information of the target equipment, wherein the abnormal-state alarm information comprises the name and the state of the target equipment.
Judging whether target equipment exists in the image of the preset point, if not, re-acquiring the image of the preset point, and if so, positioning a central point of the target equipment according to the number of the target equipment existing in the image of the preset point, wherein the method comprises the following steps:
judging whether target equipment exists in the image of the preset point, if not, re-acquiring the image of the preset point, if so, judging whether the image of the preset point has only one target equipment, if so, directly positioning the central point of the target equipment, otherwise, calculating the confidence coefficient of each target equipment, directly positioning the central point of the target equipment when the confidence coefficient is greater than 0.8 and only one target equipment exists, and when more than one target equipment with the confidence coefficient greater than 0.8 is available, calculating the area of each target equipment, and taking the target equipment with the largest area to position the central point of the target equipment.
Calculating the x-axis position adjustment amount and the y-axis position adjustment amount of the target equipment according to the pixels of the central point of the target equipment, wherein the method comprises the following steps:
calculating the central pixel deviation according to the pixel of the central point of the target equipment, wherein the calculation formula is as follows:
Δx = xt − xc

Δy = yt − yc

wherein Δx is the x-axis central pixel deviation, Δy is the y-axis central pixel deviation, xt is the x-axis pixel of the central point of the target device, yt is the y-axis pixel of the central point of the target device, xc is the x-axis pixel of the central position of the image of the preset point, and yc is the y-axis pixel of the central position of the image of the preset point;

the background processing terminal calculates the x-axis position adjustment amount of the target device from the x-axis central pixel deviation Δx and the first correspondence coefficient:

Px = k₁ · Δx

wherein Px is the x-axis position adjustment amount of the target device and k₁ is the first correspondence coefficient, which is determined by the focal length f of the camera during shooting;

the background processing terminal calculates the y-axis position adjustment amount of the target device from the y-axis central pixel deviation Δy and the second correspondence coefficient:

Py = k₂ · Δy

wherein Py is the y-axis position adjustment amount of the target device and k₂ is the second correspondence coefficient, which is likewise determined by the focal length f of the camera during shooting.
Calculating the occupation ratio of the target device in the first target image according to the width and the height of the target device in the first target image, and the method comprises the following steps:
the background processing terminal inputs the first target image into the equipment body detection Efficientdet model to detect the width w and the height h of the target device;
the background processing terminal calculates a height ratio and a width ratio from the width w and the height h, and takes the maximum of the two as the occupation ratio of the target device in the image, the height ratio and the width ratio being calculated as:

a = h / H

b = w / W

wherein a is the height ratio, b is the width ratio, H is the height of the first target image, and W is the width of the first target image.
The calculation formula of the focal length variation of the camera is as follows:
Δf = k₃ / c

wherein Δf is the focal length variation of the camera, k₃ is the correspondence coefficient between the occupation ratio and the pixel zoom factor, and c is the occupation ratio of the target device in the image.
the substation equipment state monitoring system based on accurate alignment camera shooting provided by the embodiment of the invention constructs two image data sets, respectively trains two Efficientdet network models to obtain an equipment body detection Efficientdet model and an equipment state detection Efficientdet model, detects the position of target equipment under a wide angle condition through the equipment body detection Efficientdet model for carrying out angle adjustment and focal length adjustment on an alignment target, identifies the state of the target equipment through the equipment state detection Efficientdet model, does not need to partition a scene image, can adapt to a new scene, has good mobility and high accuracy, reduces the cost by using a three-axis stability-increasing fixed high-definition network camera compared with a mode that an unmanned aerial vehicle carries the camera and a patrol robot, does not have the problem of deviation caused by the positioning and the traveling posture, solves the problems that the existing substation equipment state monitoring method is easily influenced by high cost, is easily influenced by the weather positioning deviation planned by a high air route and a severe storm rain, and the problem that the positioning deviation caused by a positioning and the GPS positioning posture are easily influenced by a high positioning technology of a tripod head in advance.
The substation equipment state monitoring system based on accurate alignment camera shooting is used for executing the substation equipment state monitoring method based on accurate alignment camera shooting described above; its principle and technical effects are the same as those of the method and are not repeated herein.
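To make the overall flow easier to follow, here is a hedged sketch of one inspection cycle implied by the description; the camera and holder interfaces, the detector wrappers, and the retry limit are placeholders, and it reuses the helper sketches given earlier (select_target is sketched after claim 3 below).

def monitor_preset_point(camera, gimbal, body_model, state_model, max_attempts=3):
    # One cycle: detect the device body, center it, zoom, focus, classify its state.
    for _ in range(max_attempts):
        image = camera.capture()
        detections = body_model.detect(image)      # equipment body detection
        if not detections:
            continue                               # re-acquire the preset-point image
        target = select_target(detections)         # rule sketched after claim 3
        if target is None:
            continue
        dx, dy = center_pixel_deviation(target.cx, target.cy, image.width, image.height)
        px, py = position_adjustment(dx, dy, camera.focal_length)
        gimbal.adjust(px, py)                      # center the target
        first_image = camera.capture()             # first target image
        ratio = occupation_ratio(target.w, target.h, first_image.width, first_image.height)
        camera.zoom_by(focal_length_variation(ratio, camera.focal_length))
        camera.autofocus()
        second_image = camera.capture()            # second target image
        return state_model.classify(second_image)  # equipment state detection
    return None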
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A transformer substation equipment state monitoring method based on accurate alignment camera shooting is characterized by comprising the following steps:
the method comprises the following steps of building a camera monitoring system for monitoring the state of the transformer substation equipment, wherein the camera monitoring system comprises a triaxial stability increasing holder, a high-definition night vision network camera mounted on the triaxial stability increasing holder and a background processing terminal in communication connection with the high-definition night vision network camera;
the method comprises the steps that a background processing terminal acquires images of transformer substation equipment through a high-definition night vision network camera to construct a first image data set and a second image data set, wherein image samples in the first image data set are image samples which are acquired by the high-definition night vision network camera and contain the transformer substation equipment after image preprocessing, and image samples in the second image data set are image samples obtained by cutting transformer substation equipment areas in the image samples in the first image data set;
the background processing terminal respectively trains a first Efficientdet network model and a second Efficientdet network model by using a first image data set and a second image data set to obtain an equipment body detection Efficientdet model for detecting a target position and an equipment state detection Efficientdet model for detecting a target state;
the background processing terminal controls the high-definition night vision network camera to collect images of preset points according to a preset patrol strategy;
the background processing terminal inputs the image of the preset point into the equipment body detection Efficientdet model to detect whether target equipment exists in the image of the preset point; if not, the image of the preset point is re-acquired; if so, the center point of the target device is located according to the number of target devices present in the image of the preset point;
the background processing terminal calculates the x-axis position adjustment amount and the y-axis position adjustment amount of the target equipment according to the pixel of the central point of the target equipment;
the background processing terminal adjusts the target equipment to the central position of the image of the preset point according to the x-axis position adjustment amount and the y-axis position adjustment amount to obtain a first target image;
the background processing terminal calculates the proportion of the target equipment in the first target image according to the width and the height of the target equipment in the first target image;
the background processing terminal calculates the focal length variation of the camera according to the corresponding relation coefficient of the ratio and the pixel zoom coefficient, adjusts the focal length of the camera according to the focal length variation of the camera and automatically focuses the first target image to obtain a second target image;
the background processing terminal inputs the second target image into the equipment state detection Efficientdet model to identify the state of the target device;
the background processing terminal calculates the x-axis position adjustment amount and the y-axis position adjustment amount of the target device according to the pixel of the central point of the target device, and the method comprises the following steps:
the background processing terminal calculates the center pixel deviation from the pixel of the center point of the target device, the calculation formulas being:

Δx = x_t − x_0

Δy = y_t − y_0

wherein Δx is the x-axis center pixel deviation, Δy is the y-axis center pixel deviation, x_t is the x-axis pixel of the center point of the target device, y_t is the y-axis pixel of the center point of the target device, x_0 is the x-axis pixel of the center position of the image of the preset point, and y_0 is the y-axis pixel of the center position of the image of the preset point;
the background processing terminal calculates the x-axis position adjustment amount of the target device from the x-axis center pixel deviation Δx according to a first correspondence coefficient of the x-axis position adjustment amount:

P_x = k_1 · Δx

wherein P_x is the x-axis position adjustment amount of the target device, k_1 is the first correspondence coefficient, and k_1 is determined from f, the focal length of the camera during shooting;
the background processing terminal calculates the y-axis position adjustment amount of the target device from the y-axis center pixel deviation Δy according to a second correspondence coefficient of the y-axis position adjustment amount:

P_y = k_2 · Δy

wherein P_y is the y-axis position adjustment amount of the target device, k_2 is the second correspondence coefficient, and k_2 is determined from f, the focal length of the camera during shooting.
2. The substation equipment state monitoring method based on accurate alignment camera shooting according to claim 1, wherein after the background processing terminal inputs the second target image into the equipment state detection Efficientdet model and identifies the state of the target device, the method further comprises:
judging whether the state of the target device is abnormal, and if so, sending out target device state abnormality alarm information, wherein the target device state abnormality alarm information comprises the name and the state of the target device.
3. The substation equipment state monitoring method based on accurate alignment camera shooting according to claim 1, wherein a background processing terminal inputs an image of a preset point into an equipment body detection Efficientdet model to detect whether target equipment exists in the image of the preset point, if not, the image of the preset point is collected again, if so, a target equipment center point is positioned according to the number of the target equipment existing in the image of the preset point, and the method comprises the following steps:
the background processing terminal inputs the image of the preset point into the equipment body detection Efficientdet model to detect whether target equipment exists in the image of the preset point; if not, the image of the preset point is re-acquired; if so, it is judged whether there is one and only one target device in the image of the preset point; if there is only one, the center point of the target device is located directly; otherwise, the confidence coefficient of each target device is calculated, and when only one target device has a confidence coefficient greater than 0.8, the center point of that target device is located directly, while when more than one target device has a confidence coefficient greater than 0.8, the area of each such target device is calculated and the target device with the largest area is selected for locating its center point.
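As a reading aid, a minimal sketch of the selection rule in this claim; detection objects with confidence, w and h attributes are assumed, and the 0.8 threshold is the one stated above.

def select_target(detections, conf_threshold=0.8):
    # One detection: use it directly. Otherwise keep detections whose confidence
    # exceeds the threshold; if several remain, take the one with the largest area.
    if len(detections) == 1:
        return detections[0]
    confident = [d for d in detections if d.confidence > conf_threshold]
    if len(confident) == 1:
        return confident[0]
    return max(confident, key=lambda d: d.w * d.h) if confident else None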
4. The substation equipment state monitoring method based on accurate alignment camera shooting as claimed in claim 1, wherein the calculating of the proportion of the target equipment in the first target image by the background processing terminal according to the width and height of the target equipment in the first target image comprises:
the background processing terminal inputs the first target image into the equipment body detection Efficientdet model to detect the width w and height h of the target device;

the background processing terminal calculates a height ratio and a width ratio from the width w and the height h and takes the maximum of the two as the proportion of the target device in the image, the height ratio and the width ratio being calculated as:

a = h / H

b = w / W

wherein a is the height ratio, b is the width ratio, H is the height of the first target image, and W is the width of the first target image.
5. The substation equipment state monitoring method based on accurate alignment camera shooting as claimed in claim 4, wherein the focal length variation Δf of the camera is calculated from the proportion of the target device in the image according to a correspondence coefficient m between the proportion and the pixel zoom factor.
6. a transformer substation equipment state monitoring system based on accurate alignment camera shooting is characterized by comprising a triaxial stability-increasing cradle head, a high-definition night vision network camera installed on the triaxial stability-increasing cradle head and a background processing terminal in communication connection with the high-definition night vision network camera;
the three-axis stability augmentation holder is used for carrying a high-definition night vision network camera and adjusting the visual angle of the high-definition night vision network camera;
the high-definition night vision network camera is used for inspecting and collecting images of the transformer substation equipment;
the background processing terminal is used for:
acquiring an image of the transformer substation equipment acquired by a high-definition night vision network camera, and constructing a first image data set and a second image data set, wherein an image sample in the first image data set is an image sample which is acquired by the high-definition night vision network camera and contains the transformer substation equipment after image preprocessing, and an image sample in the second image data set is an image sample obtained by cutting a transformer substation equipment area in the image sample in the first image data set;
respectively training a first Efficientdet network model and a second Efficientdet network model by using a first image data set and a second image data set to obtain an equipment body detection Efficientdet model for detecting a target position and an equipment state detection Efficientdet model for detecting a target state;
controlling a high-definition night vision network camera to acquire images of preset points according to a preset patrol strategy;
judging whether target equipment exists in the image of the preset point, if not, re-collecting the image of the preset point, and if so, positioning the central point of the target equipment according to the number of the target equipment existing in the image of the preset point;
calculating the x-axis position adjustment amount and the y-axis position adjustment amount of the target equipment according to the pixels of the central point of the target equipment;
adjusting the target equipment to the central position of the image of the preset point according to the x-axis position adjustment amount and the y-axis position adjustment amount to obtain a first target image;
inputting the first target image into the equipment body detection Efficientdet model to detect the width and height of the target device and obtain the proportion of the target device in the image;
calculating the focal length variation of the camera according to the corresponding relation coefficient of the ratio and the pixel zoom coefficient, adjusting the focal length of the camera according to the focal length variation of the camera, and automatically focusing the first target image to obtain a second target image;
inputting the second target image into the equipment state detection Efficientdet model to identify the state of the target device;
calculating the x-axis position adjustment amount and the y-axis position adjustment amount of the target equipment according to the pixels of the central point of the target equipment, wherein the method comprises the following steps:
calculating the center pixel deviation from the pixel of the center point of the target device, the calculation formulas being:

Δx = x_t − x_0

Δy = y_t − y_0

wherein Δx is the x-axis center pixel deviation, Δy is the y-axis center pixel deviation, x_t is the x-axis pixel of the center point of the target device, y_t is the y-axis pixel of the center point of the target device, x_0 is the x-axis pixel of the center position of the image of the preset point, and y_0 is the y-axis pixel of the center position of the image of the preset point;
the background processing terminal calculates the x-axis position adjustment amount of the target device from the x-axis center pixel deviation Δx according to a first correspondence coefficient of the x-axis position adjustment amount:

P_x = k_1 · Δx

wherein P_x is the x-axis position adjustment amount of the target device, k_1 is the first correspondence coefficient, and k_1 is determined from f, the focal length of the camera during shooting;
the background processing terminal calculates the y-axis position adjustment amount of the target device from the y-axis center pixel deviation Δy according to a second correspondence coefficient of the y-axis position adjustment amount:

P_y = k_2 · Δy

wherein P_y is the y-axis position adjustment amount of the target device, k_2 is the second correspondence coefficient, and k_2 is determined from f, the focal length of the camera during shooting.
7. the substation equipment state monitoring system based on accurate alignment camera shooting of claim 6, wherein the background processing terminal is further configured to:
after inputting the second target image into the equipment state detection Efficientdet model and identifying the state of the target device, judging whether the state of the target device is abnormal, and if so, sending out target device state abnormality alarm information, wherein the target device state abnormality alarm information comprises the name and the state of the target device.
8. The substation equipment state monitoring system based on accurate alignment camera shooting according to claim 6, wherein judging whether target equipment exists in the image of the preset point, if not, re-acquiring the image of the preset point, if so, positioning a target equipment center point according to the number of the target equipment existing in the image of the preset point, and comprises:
judging whether target equipment exists in the image of the preset point; if not, re-acquiring the image of the preset point; if so, judging whether there is one and only one target device in the image of the preset point; if there is only one, directly locating the center point of the target device; otherwise, calculating the confidence coefficient of each target device, directly locating the center point of the target device when only one target device has a confidence coefficient greater than 0.8, and, when more than one target device has a confidence coefficient greater than 0.8, calculating the area of each such target device and taking the target device with the largest area to locate its center point.
CN202210785448.3A 2022-07-06 2022-07-06 Transformer substation equipment state monitoring method and system based on accurate alignment camera shooting Active CN114842426B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210785448.3A CN114842426B (en) 2022-07-06 2022-07-06 Transformer substation equipment state monitoring method and system based on accurate alignment camera shooting

Publications (2)

Publication Number Publication Date
CN114842426A CN114842426A (en) 2022-08-02
CN114842426B (en) 2022-10-04

Family

ID=82575308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210785448.3A Active CN114842426B (en) 2022-07-06 2022-07-06 Transformer substation equipment state monitoring method and system based on accurate alignment camera shooting

Country Status (1)

Country Link
CN (1) CN114842426B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111263066A (en) * 2020-02-18 2020-06-09 Oppo广东移动通信有限公司 Composition guiding method, composition guiding device, electronic equipment and storage medium
CN112561986A (en) * 2020-12-02 2021-03-26 南方电网电力科技股份有限公司 Secondary alignment method, device, equipment and storage medium for inspection robot holder
CN113643359A (en) * 2021-08-26 2021-11-12 广州文远知行科技有限公司 Target object positioning method, device, equipment and storage medium
WO2022021739A1 (en) * 2020-07-30 2022-02-03 国网智能科技股份有限公司 Humanoid inspection operation method and system for semantic intelligent substation robot
CN114114002A (en) * 2021-11-26 2022-03-01 国网安徽省电力有限公司马鞍山供电公司 Online fault diagnosis and state evaluation method for oxide arrester
CN114627360A (en) * 2020-12-14 2022-06-14 国电南瑞科技股份有限公司 Substation equipment defect identification method based on cascade detection model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110807353B (en) * 2019-09-03 2023-12-19 国网辽宁省电力有限公司电力科学研究院 Substation foreign matter identification method, device and system based on deep learning

Also Published As

Publication number Publication date
CN114842426A (en) 2022-08-02

Similar Documents

Publication Publication Date Title
WO2022082856A1 (en) Method and system for automatically identifying and tracking inspection target, and robot
CN108279428B (en) Map data evaluating device and system, data acquisition system, acquisition vehicle and acquisition base station
CN110418957B (en) Method and device for monitoring the condition of a facility having an operating means
CN113759960B (en) Unmanned aerial vehicle-based fan blade and tower barrel inspection identification system and method
CN112164015A (en) Monocular vision autonomous inspection image acquisition method and device and power inspection unmanned aerial vehicle
CN110207832A (en) High-tension line cruising inspection system and its method for inspecting based on unmanned plane
CN112327906A (en) Intelligent automatic inspection system based on unmanned aerial vehicle
CN111311597A (en) Unmanned aerial vehicle inspection method and system for defective insulator
CN111311967A (en) Unmanned aerial vehicle-based power line inspection system and method
CN110085029A (en) Highway cruising inspection system and method based on rail mounted crusing robot
CN109979468B (en) Lightning stroke optical path monitoring system and method
KR102061264B1 (en) Unexpected incident detecting system using vehicle position information based on C-ITS
CN112802004B (en) Portable intelligent video detection device for health of power transmission line and pole tower
CN111244822B (en) Fixed-wing unmanned aerial vehicle line patrol method, system and device in complex geographic environment
CN113759961A (en) Power transmission line panoramic inspection method and system based on unmanned aerial vehicle AI inspection control
CN109712188A (en) A kind of method for tracking target and device
CN112067137A (en) Automatic power line temperature measurement method based on unmanned aerial vehicle line patrol
CN111046121A (en) Environment monitoring method, device and system
CN113763484A (en) Ship target positioning and speed estimation method based on video image analysis technology
WO2020239088A1 (en) Insurance claim processing method and apparatus
WO2022247597A1 (en) Papi flight inspection method and system based on unmanned aerial vehicle
CN114708520A (en) Method for recognizing and processing electric power fitting defect images on power transmission line
CN113780246A (en) Unmanned aerial vehicle three-dimensional track monitoring method and system and three-dimensional monitoring device
CN113449688B (en) Power transmission tree obstacle recognition system based on image and laser point cloud data fusion
CN114842426B (en) Transformer substation equipment state monitoring method and system based on accurate alignment camera shooting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240411

Address after: No. 757, Dongfeng East Road, Yuexiu District, Guangzhou City, Guangdong Province, 510699

Patentee after: GUANGDONG POWER GRID Co.,Ltd.

Country or region after: China

Patentee after: GUANGDONG POWER GRID Co.,Ltd. ZHAOQING POWER SUPPLY BUREAU

Address before: 526000 88 Xin'an Road, 77 Duanzhou District, Zhaoqing City, Guangdong Province

Patentee before: GUANGDONG POWER GRID Co.,Ltd. ZHAOQING POWER SUPPLY BUREAU

Country or region before: China