CN116596902A - Registration and accurate detection method and system for distribution network component image - Google Patents

Registration and accurate detection method and system for distribution network component image

Info

Publication number
CN116596902A
CN116596902A
Authority
CN
China
Prior art keywords
visible light
image
light image
distribution network
zooming
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310601611.0A
Other languages
Chinese (zh)
Inventor
黄志鸿
张辉
陶岩
吴晟
肖剑
杜瑞
曹意宏
徐先勇
张可人
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electric Power Research Institute of State Grid Hunan Electric Power Co Ltd
State Grid Hunan Electric Power Co Ltd
Hunan Xiangdian Test Research Institute Co Ltd
Original Assignee
Electric Power Research Institute of State Grid Hunan Electric Power Co Ltd
State Grid Hunan Electric Power Co Ltd
Hunan Xiangdian Test Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electric Power Research Institute of State Grid Hunan Electric Power Co Ltd, State Grid Hunan Electric Power Co Ltd, and Hunan Xiangdian Test Research Institute Co Ltd
Priority to CN202310601611.0A
Publication of CN116596902A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4007 Interpolation-based scaling, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00 Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
    • Y04S10/52 Outage or fault management, e.g. fault detection or location

Abstract

The application discloses a registration and accurate detection method and system for distribution network component images. The method comprises: obtaining paired visible light and infrared images of a distribution network line; cropping and registering the visible light image according to the infrared image to obtain a registered visible light image; interpolating the infrared image to the same size as the registered visible light image; feeding the registered visible light image into a target detection network to obtain detection boxes; and drawing the detection boxes into the interpolated infrared image and extracting the temperature information of the detected targets within the detection boxes for judging thermal faults of distribution network components. The application aims to make full and effective use of the beneficial information of both the visible light image and the infrared image, realize high-precision detection of distribution network components, effectively reduce false and missed detections, provide a good basis for subsequent temperature interpretation and thermal fault discrimination, and effectively solve the low accuracy of existing infrared detection methods.

Description

Registration and accurate detection method and system for distribution network component image
Technical Field
The application belongs to the technical field of computer vision, and particularly relates to a registration and accurate detection method and system for distribution network component images.
Background
With the development of China's power system, distribution network components have become indispensable parts of the power system. Distribution network components generate a certain amount of heat during operation, and when a component fails its temperature becomes abnormal. Thermal imaging technology can achieve rapid, non-contact detection of thermal faults in distribution network components by measuring the temperature distribution on the surface of the target object, and has therefore become an effective fault detection means. However, conventional thermal imaging technology suffers from a narrow detection range, low detection accuracy, and low detection efficiency in practice, and cannot meet the requirements of practical applications.
In recent years, the rapid development of unmanned aerial vehicle (UAV) technology has provided a new approach to fault detection of distribution network components. UAVs offer wide coverage, flexible flight, and portability, and can photograph distribution network components from all directions and at multiple angles to acquire richer thermal image information. Supported by UAV image processing technology, fault detection of distribution network components can become more accurate and efficient. However, because image sources are diverse and image quality is uneven, some thermal fault information in UAV-acquired images cannot be detected well, which limits the effectiveness of UAV deployment. For example, judging thermal fault information requires photographing the target with an infrared camera that has thermal imaging capability, and the captured infrared image contains temperature information; but accurately discriminating a component's thermal fault requires distinguishing the component from other objects or the background in the image. Due to the low resolution and low contrast of infrared images, common target detection methods often struggle to meet this requirement. A high-precision target detection method for typical spatial components of the distribution network in infrared images is therefore needed to achieve real-time, accurate detection and thereby effectively improve the accuracy and efficiency of UAV-based thermal fault detection of distribution network components.
Disclosure of Invention
The technical problem to be solved by the application is as follows: in view of the problems in the prior art, the application provides a registration and accurate detection method and system for distribution network component images, which aim to make full and effective use of the beneficial information of two images, namely the visible light image and the infrared image, realize high-precision detection of distribution network components, effectively reduce false and missed detections, provide a good basis for subsequent temperature interpretation and thermal fault discrimination, and effectively solve the low accuracy of existing infrared detection methods.
In order to solve the technical problems, the application adopts the following technical scheme:
a registration and accurate detection method for a distribution network component image comprises the following steps:
s101, visible light images and infrared images of the paired distribution network lines are obtained;
s102, cutting and registering the visible light image according to the infrared image to obtain a registered visible light image;
s103, interpolating the infrared image to the same size of the registered visible light image;
s104, sending the registered visible light image into a target detection network to obtain a detection frame, a target category and a confidence coefficient;
s105, drawing the detection frame into the interpolated infrared image, and extracting temperature information of a detection target in the detection frame in the infrared image for judging the thermal faults of the distribution network component.
Optionally, performing cropping registration on the visible light image according to the infrared image in step S102 comprises: judging whether the visible light image is zoomed; if not, directly cropping the designated crop region from the visible light image as the registered visible light image; otherwise, given the corresponding infrared image, the crop region (x_1, y_1, x_2, y_2), and the zoomed focal length f′ of the visible light image, where (x_1, y_1) are the upper-left corner coordinates of the crop region and (x_2, y_2) are its lower-right corner coordinates, obtaining the zoomed crop region and cropping it from the visible light image as the registered visible light image.
Optionally, obtaining the zoomed crop region comprises:
S201, determining the center coordinates (x_c, y_c) of the crop region:
x_c = (x_1 + x_2)/2, y_c = (y_1 + y_2)/2;
S202, taking the image center as the new coordinate origin, so that the center coordinates (x_c, y_c) of the crop region are expressed with the image center as origin:
x_c = x_c − W/2, y_c = y_c − H/2,
where W and H are the width and height of the non-zoomed visible light image, respectively;
S203, obtaining the center coordinates (x_c′, y_c′) of the zoomed crop region according to the following formula:
x_c′ = x_c · z, y_c′ = y_c · z,
where z is the zoom factor and z = f′/f, f′ being the focal length after zooming and f the focal length without zooming;
S204, moving the coordinate origin back to the upper-left corner of the image according to the following formula:
x_c′ = x_c′ + W′/2, y_c′ = y_c′ + H′/2,
where W′ is the width of the zoomed visible light image and H′ is its height; when the zoom mode is optical zoom, W′ is the same as the non-zoomed width W and H′ is the same as the non-zoomed height H;
S205, calculating the coordinates (x_1′, y_1′, x_2′, y_2′) of the zoomed crop region according to the following formula, where (x_1′, y_1′) are its upper-left corner coordinates and (x_2′, y_2′) its lower-right corner coordinates:
x_1′ = x_c′ − w′/2, y_1′ = y_c′ − h′/2, x_2′ = x_c′ + w′/2, y_2′ = y_c′ + h′/2,
where h′ is the height of the zoomed crop region and w′ its width, with w′ = w·z and h′ = h·z, w being the width of the crop region before zooming and h its height.
Optionally, judging whether the visible light image is zoomed comprises: obtaining the zoomed focal length f′ from the metadata of the visible light image, and computing the zoom factor z = f′/f with the focal length f obtained in advance from a non-zoomed visible light image; if z ≠ 1, the visible light image is judged to be zoomed, otherwise not.
Optionally, bilinear interpolation is adopted when interpolating the infrared image in step S103.
Optionally, the target detection network in step S104 is a YOLOv5 convolutional neural network.
Optionally, obtaining the paired visible light and infrared images of the distribution network line in step S101 refers to obtaining paired visible light and infrared images of the distribution network line collected by an unmanned aerial vehicle.
In addition, the application also provides an unmanned aerial vehicle, comprising a vehicle body with a visible light camera and an infrared camera, a microprocessor and a memory connected to each other being arranged in the vehicle body, the visible light camera and the infrared camera each being connected to the microprocessor, and the microprocessor being programmed or configured to execute the above registration and accurate detection method for distribution network component images.
In addition, the application also provides a registration and accurate detection system for distribution network component images, comprising a computer device with a microprocessor and a memory connected to each other, the microprocessor being programmed or configured to execute the above registration and accurate detection method for distribution network component images.
Furthermore, the application provides a computer-readable storage medium storing a computer program that is programmed or configured to be executed by a microprocessor to perform the above registration and accurate detection method for distribution network component images.
Compared with the prior art, the application has the following advantages: for a pair of visible light and infrared images, the visible light image is cropped and registered according to the infrared image to obtain a registered visible light image; the infrared image is interpolated to the same size as the registered visible light image; and the registered visible light image is fed into a target detection network to obtain detection boxes, target classes, and confidences. By performing target detection on the higher-resolution visible light image, higher accuracy can be achieved, and by transferring the detection boxes onto the infrared image, the temperature information of the detected targets within the boxes is obtained. The beneficial information of both images is thereby used effectively, high-precision detection of distribution network components is realized, false and missed detections are effectively reduced, a good basis is provided for subsequent temperature interpretation and thermal fault discrimination, and the low accuracy of existing infrared detection methods is effectively overcome.
Drawings
FIG. 1 is a schematic diagram of a basic flow of a method according to an embodiment of the present application.
Detailed Description
As shown in FIG. 1, the registration and accurate detection method for distribution network component images in this embodiment comprises:
s101, visible light images and infrared images of the paired distribution network lines are obtained;
s102, cutting and registering the visible light image according to the infrared image to obtain a registered visible light image;
s103, interpolating the infrared image to the same size of the registered visible light image;
s104, sending the registered visible light image into a target detection network to obtain a detection frame, a target category and a confidence coefficient;
s105, drawing the detection frame into the interpolated infrared image, and extracting temperature information of a detection target in the detection frame in the infrared image for judging the thermal faults of the distribution network component.
In this embodiment, when step S101 obtains the paired visible light and infrared images of the distribution network line, the resolution of the infrared image is 640×512 and the resolution of the visible light image is 4000×3000.
In this embodiment, performing cropping registration on the visible light image according to the infrared image in step S102 comprises: judging whether the visible light image is zoomed; if not, directly cropping the designated crop region from the visible light image as the registered visible light image; otherwise, given the corresponding infrared image, the crop region (x_1, y_1, x_2, y_2), and the zoomed focal length f′ of the visible light image, where (x_1, y_1) are the upper-left corner coordinates of the crop region and (x_2, y_2) are its lower-right corner coordinates, obtaining the zoomed crop region and cropping it from the visible light image as the registered visible light image. The crop region (x_1, y_1, x_2, y_2) is obtained by manually registering one pair of infrared and visible light images, cropping the visible light image, and recording the crop coordinates; the zoomed crop region is then computed so that the cropped visible light image achieves registration.
The transformation of the crop region's center coordinates and of its width and height can be calculated from the change in focal length using a mathematical model. To facilitate computation in code, the upper-left corner of the image is taken as the coordinate origin, (x_1, y_1) are the upper-left corner coordinates of the crop region, and (x_2, y_2) its lower-right corner coordinates, so that the crop region before zooming is (x_1, y_1, x_2, y_2) and the image size is H×W. In this embodiment, obtaining the zoomed crop region comprises:
s201, determining the center coordinates (x c ,y c ):
x c =(x 1+ x 2 )/2,y c =(t 1+ y 2 )/2,
S202, the center coordinates (x c ,y c ) Moving to the center point of the image in situ; so that the center coordinates (x c ,y c ) The center coordinates of the clipping region are obtained when the center of the image is taken as the origin:
x c =x c -W/2,y c =y c -H/2,
in the above formula, W and H are the width and height of the visible light image without zooming, respectively;
s203, since the optical zoom does not change the resolution of the image, the center coordinates (x 'of the zoomed clipping region can be obtained according to the following formula' c ,y c ′):
x′ c =x c *z,y′ c =y c *z,
Wherein z is a zoom multiple and has z=f '/f, where f' is a focal length after zooming and f is a focal length without zooming;
s204, moving the origin of coordinates to the upper left corner of the image according to the following equation:
x′ c =x′ c +W′/2,y′ c =y′ c +H′/2,
in the above formula, W 'is the width of the zoomed visible light image, H' is the height of the zoomed visible light image, and when the zoom mode is optical zoom, the width W 'of the zoomed visible light image is the same as the width W of the non-zoomed visible light image, and the height H' of the zoomed visible light image is the same as the height H of the non-zoomed visible light image;
s205, calculating coordinates (x 'of the zoomed clipping region according to the following formula' 1 ,y′ 1 ,x′ 2 ,y′ 2 ) Wherein (x' 1 ,y′ 1 ) For the upper left corner coordinates of the zoomed cropped region, (x' 2 ,y′ 2 ) The right lower corner coordinates of the zoomed clipping region:
x′ 1 =x′ c -w′/2,y′ 1 =y′ c -h′/2,x′ 2 =x′ c +w′/2,y′ 2 =y′ c -h′/2,
in the above formula, h 'is the height of the cropping area after zooming, w' is the width of the cropping area after zooming, and w '=w×z, h' =h×z, where w is the width of the cropping area before zooming, and h is the height of the cropping area before zooming. By the method, the purpose of completing self-adaptive clipping registration of the visible light image can be achieved.
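For illustration, steps S201 to S205 condense into a short function. The following Python sketch is not part of the patent text; the variable names follow the formulas above, and it assumes optical zoom (image resolution unchanged):

```python
def zoomed_crop_region(x1, y1, x2, y2, W, H, z):
    """Map a crop region registered at the base focal length f to the
    equivalent region in an image shot at zoom factor z = f'/f.
    Assumes optical zoom, so the image resolution (W, H) is unchanged."""
    # S201: center of the original crop region (origin at the upper-left corner)
    xc = (x1 + x2) / 2.0
    yc = (y1 + y2) / 2.0
    # S202: move the origin to the image center
    xc -= W / 2.0
    yc -= H / 2.0
    # S203: optical zoom scales the centered coordinates by z
    xc *= z
    yc *= z
    # S204: move the origin back to the upper-left corner
    xc += W / 2.0
    yc += H / 2.0
    # S205: the crop width and height also scale by z
    w = (x2 - x1) * z
    h = (y2 - y1) * z
    return (xc - w / 2.0, yc - h / 2.0, xc + w / 2.0, yc + h / 2.0)
```

With z = 1 the function returns the original crop region unchanged, which matches the no-zoom branch of step S102.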
The visible light camera of the unmanned aerial vehicle may zoom during shooting, so fixed crop coordinates are not directly usable; the focal length of each captured visible light image is therefore obtained from the image's metadata. The metadata of a visible light image stores the camera model, focal length, and other shooting information, and the focal length read from each image's metadata is used to judge whether the image is zoomed relative to the initial image. Obtaining the focal length from the metadata of a visible light image can be expressed as:
f = exifread(I_v),
where f is the focal length, exifread is a known function for reading image EXIF information, and I_v is the visible light image. Furthermore,
n = z/f,
where n is approximately constant and z is the zoom factor: for the same optical zoom camera, the ratio between zoom factor and focal length is unchanged, so whether zooming exists can be judged from the focal length alone. In this embodiment, judging whether the visible light image is zoomed comprises: obtaining the zoomed focal length f′ from the metadata of the visible light image, and computing the zoom factor z = f′/f with the focal length f obtained in advance from a non-zoomed visible light image; if z ≠ 1, the visible light image is judged to be zoomed, otherwise not. The non-zoomed focal length f is obtained once from the metadata of a non-zoomed visible light image and then used as a parameter.
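As a concrete sketch of this step, the focal length can be read with the third-party exifread package; the patent only states that a function exifread(I_v) returns the focal length, so the package choice and the EXIF tag name below are assumptions:

```python
import exifread  # third-party package (pip install exifread); an assumed choice

def zoom_factor(image_path, f_base):
    """Read the focal length f' from the image metadata and return
    z = f'/f_base, where f_base is the non-zoomed focal length."""
    with open(image_path, "rb") as fh:
        tags = exifread.process_file(fh, details=False)
    focal = tags["EXIF FocalLength"].values[0]  # stored as a rational number
    f_prime = focal.num / focal.den
    return f_prime / f_base  # z != 1 means the image is zoomed
```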
In this embodiment, the visible light image is initially 4000×3000, the infrared image is 640×512, and the registered visible light image obtained after cropping and registration becomes 2230×1750. To enable the subsequent detection box transfer, the infrared image must be interpolated to 2230×1750 so that its resolution matches that of the registered visible light image; the detection box coordinates then correspond to the same positions in both images. It should be noted that in step S103 any suitable interpolation method may be used for the infrared image; bilinear interpolation is used in this embodiment.
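A minimal sketch of this step with OpenCV (an assumed tooling choice; note that cv2.resize takes the target size as (width, height)):

```python
import cv2

ir_image = cv2.imread("infrared.jpg")  # hypothetical path to the 640x512 infrared frame
# Bilinear interpolation to the size of the registered visible light image.
ir_interp = cv2.resize(ir_image, (2230, 1750), interpolation=cv2.INTER_LINEAR)
```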
The target detection network in step S104 may be any suitable convolutional neural network. For example, as an alternative implementation, the target detection network in step S104 of this embodiment is a YOLOv5 convolutional neural network. The registered 2230×1750 visible light image is fed into the YOLOv5 network to obtain the detection results (detection boxes, target classes, and confidences), and the detection boxes are drawn into the infrared image by their coordinates. Detection is performed on the visible light image because its high spatial resolution allows higher accuracy; the detection boxes are transferred to the infrared image because thermal fault judgment is ultimately made through temperature interpretation, and detecting directly on the infrared image would yield very low accuracy and many false detections.
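One common way to run a YOLOv5 model is through the ultralytics torch.hub packaging; the patent does not prescribe a specific implementation, so the sketch below (with a stock checkpoint rather than a network trained on distribution network components) is only an assumption about tooling:

```python
import torch

# Load a pretrained YOLOv5 model; in practice this would be a checkpoint
# fine-tuned on distribution network component images.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
results = model("registered_visible.jpg")  # hypothetical path to the 2230x1750 image
detections = results.xyxy[0]  # each row: x1, y1, x2, y2, confidence, class index
```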
The processing of the registered visible light image by the YOLOv5 convolutional neural network comprises the following steps:
1. Image preprocessing: the input image is preprocessed, e.g., scaled, cropped, and normalized.
2. Feature extraction: extracting features at different levels of the image with a deep convolutional neural network (CNN), which can be expressed as:
F = CNN(x),
where x is the input image and F is the set of features extracted at multiple levels, namely F = {F_1, F_2, ..., F_n}.
3. Feature fusion: fusing features of different levels to detect objects at different scales, which can be expressed as:
m = merge(F_1, F_2, ..., F_n),
where m is the fused feature and merge is the fusion operation;
4. Target prediction: performing target prediction on the feature map, including target position, class, and confidence. A single detection result can be expressed as:
p_c, p_c1, p_c2, ..., p_cn, b_x, b_y, b_w, b_h = predict(m),
where p_c is the confidence (probability that a target exists), p_c1, p_c2, ..., p_cn are the probabilities of the n classes, (b_x, b_y, b_w, b_h) are the detection box position and size, with (b_x, b_y) the center point coordinates and (b_w, b_h) the width and height, and predict denotes target prediction;
5. Non-maximum suppression (NMS): filtering the prediction results to remove overlapping boxes and boxes with low confidence:
boxes = apply_nms(p_c, p_c1, p_c2, ..., p_cn, b_x, b_y, b_w, b_h),
where boxes is the filtered list of target boxes and apply_nms denotes non-maximum suppression.
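Step 5 corresponds to standard confidence filtering followed by non-maximum suppression. A minimal sketch using torchvision (the thresholds are illustrative, not values from the patent):

```python
import torch
from torchvision.ops import nms

def apply_nms(boxes, scores, conf_thres=0.25, iou_thres=0.45):
    """boxes: (N, 4) float tensor in (x1, y1, x2, y2) format;
    scores: (N,) confidences. Returns the surviving boxes and scores."""
    keep = scores > conf_thres            # drop low-confidence boxes
    boxes, scores = boxes[keep], scores[keep]
    idx = nms(boxes, scores, iou_thres)   # suppress overlapping boxes
    return boxes[idx], scores[idx]
```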
In the YOLOv5 convolutional neural network, each predicted detection box is represented by its center point coordinates, width, and height, and the probability of the target class must also be predicted. Specifically, for each grid cell the model predicts three anchor boxes of different sizes, and for each anchor box it outputs a target presence score and a class probability distribution vector. By thresholding the target presence score, the anchor boxes that contain objects can be determined and converted into detection boxes.
The center coordinates, width, and height of a detection box can be calculated by the following formulas:
b_x = (sigmoid(t_x) + c_x) * stride
b_y = (sigmoid(t_y) + c_y) * stride
b_w = p_w * e^(t_w) * anchor_w
b_h = p_h * e^(t_h) * anchor_h
where t_x, t_y, t_w, and t_h are the four parameters predicted by the model, c_x and c_y are the center coordinates of the current grid cell on the feature map (with the upper-left corner as origin), stride is the downsampling factor of the feature map relative to the original image, p_w and p_h are the normalized width and height, and anchor_w and anchor_h are the width and height of the current anchor box.
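The decoding formulas translate directly into code. A scalar sketch for one prediction (illustrative only; real implementations vectorize this over the whole feature map):

```python
import math

def decode_box(t_x, t_y, t_w, t_h, c_x, c_y, stride, p_w, p_h, anchor_w, anchor_h):
    """Decode one raw prediction (t_x, t_y, t_w, t_h) into a detection box
    (center x, center y, width, height), following the formulas above."""
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    b_x = (sigmoid(t_x) + c_x) * stride
    b_y = (sigmoid(t_y) + c_y) * stride
    b_w = p_w * math.exp(t_w) * anchor_w
    b_h = p_h * math.exp(t_h) * anchor_h
    return b_x, b_y, b_w, b_h
```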
The probabilities p_c1, p_c2, ..., p_cn of the n classes can be calculated from the following formula:
Pr(class_i | object) = sigmoid(t_i),
where t_i is the model's raw prediction (logit) for the i-th class, sigmoid denotes the sigmoid function, and Pr(class_i | object) is the probability that the target object belongs to class i. Since each anchor box predicts the probability of only one class, the class probabilities of the three anchor boxes must be combined; typically, the class with the highest probability is selected as the final prediction. Finally, the three anchor boxes with their prediction scores and class probabilities are combined to obtain the detection box and predicted class of each object. Detection boxes of the same position and size are then drawn on the infrared image using the boxes' center coordinates, widths, and heights, and different classes are distinguished by box color. It should be noted that YOLOv5 is an existing convolutional neural network; this embodiment only applies the YOLOv5 network and does not modify it. Although the method of this embodiment does not detect on the infrared image directly, it ultimately produces a class prediction map on the infrared image, so the detection effect is the same as detecting on the infrared image, while the detection accuracy is greatly improved and false and missed detections are effectively reduced.
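To make the box transfer of step S105 concrete, the following sketch draws the detection boxes onto the interpolated infrared image and reads temperature statistics inside each box. The per-pixel temperature map temp is an assumed input (recovering it from a radiometric file is camera-specific), and the color palette is illustrative:

```python
import cv2

def transfer_boxes(ir_img, temp, boxes, classes, names):
    """Draw class-colored boxes on the infrared image and return
    (class name, max temperature, mean temperature) per detected target."""
    palette = [(0, 0, 255), (0, 255, 0), (255, 0, 0)]  # illustrative colors
    readings = []
    for (x1, y1, x2, y2), cls in zip(boxes, classes):
        x1, y1, x2, y2 = map(int, (x1, y1, x2, y2))
        cv2.rectangle(ir_img, (x1, y1), (x2, y2), palette[cls % len(palette)], 2)
        roi = temp[y1:y2, x1:x2]  # temperature values inside the detection box
        readings.append((names[cls], float(roi.max()), float(roi.mean())))
    return ir_img, readings
```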
As an optional implementation, obtaining the paired visible light and infrared images of the distribution network line in step S101 of this embodiment refers to obtaining paired images collected by an unmanned aerial vehicle. Needless to say, step S101 does not depend on a specific collection method.
In summary, in the registration and accurate detection method for distribution network component images of this embodiment, cross-modal registration is performed on the visible light and infrared images by the adaptive cropping registration method; the registered visible light image is fed into the detection network; and the center coordinates, width, height, and class information of the detection boxes predicted by the network are obtained and drawn on the infrared image. Target detection is thus performed with the high-resolution visible light image while temperature interpretation uses the infrared image carrying temperature information, efficiently exploiting the beneficial information of both images. Detection boxes can be drawn on the infrared image accurately and in real time, enabling high-quality temperature interpretation and thus accurate thermal fault judgment of typical spatial components of the distribution network.
In addition, this embodiment also provides an unmanned aerial vehicle, comprising a vehicle body with a visible light camera and an infrared camera, a microprocessor and a memory connected to each other being arranged in the vehicle body, the visible light camera and the infrared camera each being connected to the microprocessor, and the microprocessor being programmed or configured to execute the above registration and accurate detection method for distribution network component images, so that real-time inspection and detection of the distribution network can be realized.
In addition, this embodiment also provides a registration and accurate detection system for distribution network component images, comprising a computer device with a microprocessor and a memory connected to each other, the microprocessor being programmed or configured to execute the above registration and accurate detection method for distribution network component images. This provides an image processing solution independent of the image acquisition device; the computer device may be deployed offline, networked, or even in the cloud as required.
In addition, this embodiment further provides a computer-readable storage medium storing a computer program that is programmed or configured to be executed by a microprocessor to perform the above registration and accurate detection method for distribution network component images.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present application, and the protection scope of the present application is not limited to the above examples, and all technical solutions belonging to the concept of the present application belong to the protection scope of the present application. It should be noted that modifications and adaptations to the present application may occur to one skilled in the art without departing from the principles of the present application and are intended to be within the scope of the present application.

Claims (10)

1. A registration and accurate detection method for distribution network component images, characterized by comprising the following steps:
S101, obtaining paired visible light and infrared images of a distribution network line;
S102, cropping and registering the visible light image according to the infrared image to obtain a registered visible light image;
S103, interpolating the infrared image to the same size as the registered visible light image;
S104, feeding the registered visible light image into a target detection network to obtain detection boxes, target classes, and confidences;
S105, drawing the detection boxes into the interpolated infrared image, and extracting the temperature information of the detected targets within the detection boxes in the infrared image for judging thermal faults of distribution network components.
2. The registration and accurate detection method for distribution network component images according to claim 1, wherein performing cropping registration on the visible light image according to the infrared image in step S102 comprises: judging whether the visible light image is zoomed; if not, directly cropping the designated crop region from the visible light image as the registered visible light image; otherwise, given the corresponding infrared image, the crop region (x_1, y_1, x_2, y_2), and the zoomed focal length f′ of the visible light image, where (x_1, y_1) are the upper-left corner coordinates of the crop region and (x_2, y_2) are its lower-right corner coordinates, obtaining the zoomed crop region and cropping it from the visible light image as the registered visible light image.
3. The registration and accurate detection method for distribution network component images according to claim 2, wherein obtaining the zoomed crop region comprises:
S201, determining the center coordinates (x_c, y_c) of the crop region:
x_c = (x_1 + x_2)/2, y_c = (y_1 + y_2)/2;
S202, taking the image center as the new coordinate origin, so that the center coordinates (x_c, y_c) of the crop region are expressed with the image center as origin:
x_c = x_c − W/2, y_c = y_c − H/2,
where W and H are the width and height of the non-zoomed visible light image, respectively;
S203, obtaining the center coordinates (x_c′, y_c′) of the zoomed crop region according to the following formula:
x_c′ = x_c · z, y_c′ = y_c · z,
where z is the zoom factor and z = f′/f, f′ being the focal length after zooming and f the focal length without zooming;
S204, moving the coordinate origin back to the upper-left corner of the image according to the following formula:
x_c′ = x_c′ + W′/2, y_c′ = y_c′ + H′/2,
where W′ is the width of the zoomed visible light image and H′ is its height; when the zoom mode is optical zoom, W′ is the same as the non-zoomed width W and H′ is the same as the non-zoomed height H;
S205, calculating the coordinates (x_1′, y_1′, x_2′, y_2′) of the zoomed crop region according to the following formula, where (x_1′, y_1′) are its upper-left corner coordinates and (x_2′, y_2′) its lower-right corner coordinates:
x_1′ = x_c′ − w′/2, y_1′ = y_c′ − h′/2, x_2′ = x_c′ + w′/2, y_2′ = y_c′ + h′/2,
where h′ is the height of the zoomed crop region and w′ its width, with w′ = w·z and h′ = h·z, w being the width of the crop region before zooming and h its height.
4. The registration and accurate detection method for distribution network component images according to claim 2, wherein judging whether the visible light image is zoomed comprises: obtaining the zoomed focal length f′ from the metadata of the visible light image, and computing the zoom factor z = f′/f with the focal length f obtained in advance from a non-zoomed visible light image; if z ≠ 1, the visible light image is judged to be zoomed, otherwise not.
5. The registration and accurate detection method for distribution network component images according to claim 1, wherein bilinear interpolation is adopted when interpolating the infrared image in step S103.
6. The registration and accurate detection method for distribution network component images according to claim 1, wherein the target detection network in step S104 is a YOLOv5 convolutional neural network.
7. The registration and accurate detection method for distribution network component images according to claim 1, wherein obtaining the paired visible light and infrared images of the distribution network line in step S101 refers to obtaining paired visible light and infrared images collected by an unmanned aerial vehicle.
8. An unmanned aerial vehicle, comprising a vehicle body with a visible light camera and an infrared camera, a microprocessor and a memory connected to each other being arranged in the vehicle body, and the visible light camera and the infrared camera each being connected to the microprocessor, characterized in that the microprocessor is programmed or configured to execute the registration and accurate detection method for distribution network component images according to any one of claims 1 to 7.
9. A registration and accurate detection system for distribution network component images, comprising a computer device with a microprocessor and a memory connected to each other, characterized in that the microprocessor is programmed or configured to execute the registration and accurate detection method for distribution network component images according to any one of claims 1 to 7.
10. A computer-readable storage medium having a computer program stored therein, characterized in that the computer program is programmed or configured to be executed by a microprocessor to perform the registration and accurate detection method for distribution network component images according to any one of claims 1 to 7.
CN202310601611.0A 2023-05-25 2023-05-25 Registration and accurate detection method and system for distribution network component image Pending CN116596902A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310601611.0A CN116596902A (en) 2023-05-25 2023-05-25 Registration and accurate detection method and system for distribution network component image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310601611.0A CN116596902A (en) 2023-05-25 2023-05-25 Registration and accurate detection method and system for distribution network component image

Publications (1)

Publication Number Publication Date
CN116596902A (en) 2023-08-15

Family

ID=87605965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310601611.0A Pending CN116596902A (en) 2023-05-25 2023-05-25 Registration and accurate detection method and system for distribution network component image

Country Status (1)

Country Link
CN (1) CN116596902A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination