CN112581386B - Full-automatic lightning arrester detection and tracking method - Google Patents


Publication number: CN112581386B
Authority: CN (China)
Legal status: Active
Application number: CN202011386930.7A
Other languages: Chinese (zh)
Other versions: CN112581386A
Inventors: 罗威, 高俊彦, 余田甜, 朱亦曼, 郑先杰, 郭毓, 吴益飞, 郭健, 吴巍
Current Assignee: Nanjing University of Science and Technology
Original Assignee: Nanjing University of Science and Technology
Application filed by Nanjing University of Science and Technology
Priority to CN202011386930.7A
Publication of CN112581386A
Application granted; publication of CN112581386B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence


Abstract

The invention discloses a fully automatic lightning arrester detection and tracking method. First, an improved YOLOv3 network is trained for the lightning arrester, together with a SiamMask-based general video target tracking network. A real-time image data stream is then collected; Gaussian smoothing is applied to the image, the detection network predicts on the initial frame, and the target center-point coordinates and the width, height, and confidence of the target frame are obtained. If the confidence exceeds a threshold, the prediction output of the detection network is tightened and used as the input of the tracking network, which predicts the subsequent frames while the trajectory of the predicted center point is recorded. The invention realizes automatic detection and real-time tracking of the arrester against a complex environmental background: it detects and tracks the arrester stably, accurately, and quickly, provides target position information for subsequent manipulator operation, and helps improve the working efficiency of the live working robot.

Description

Full-automatic lightning arrester detection and tracking method
Technical Field
The invention belongs to the technical field of video image information processing, and particularly relates to a full-automatic lightning arrester detecting and tracking method.
Background
Distribution lines have developed rapidly in recent years, and maintenance work on them has increased significantly. The most typical task is replacing the lightning arrester. At present this work is done mainly by hand: to avoid the large economic losses caused by a power outage, workers must take numerous safety measures and work on live lines. Although the voltage of the distribution network is lower than that of the transmission network, it still threatens the personal safety of workers; moreover, the distribution network environment is characterized by high altitude, complex lines, and dense equipment, so workers face great psychological pressure and labor intensity during operation. In short, manual live working is not only inefficient but also carries safety hazards.
Using a live working robot to perform the arrester replacement autonomously instead of manually presupposes that the robot can detect the position of the arrester target stably, accurately, and in real time throughout the operation. From the target position information, the position of the target relative to the end effector of the mechanical arm is obtained through pose conversion, so that the mechanical arm can be controlled to move the end effector to the target position for subsequent grabbing and other operations. When the mechanical arm platform shakes during high-altitude operation, the target position must be tracked in real time to avoid the accumulation of error between the measured and actual target positions caused by the shaking. Existing lightning arrester target tracking methods are not autonomous: they require human participation, are easily disturbed by the external environment, and measure with poor stability and low precision in complex environments, resulting in low working efficiency.
Disclosure of Invention
The invention aims to provide a fully automatic lightning arrester detection and tracking method that solves the problems of slow target detection and easily lost targets in the vision system of a mechanical arm replacing a lightning arrester, and achieves autonomous detection and real-time tracking of the target.
The technical solution realizing the purpose of the invention is as follows. A fully automatic lightning arrester detection and tracking method comprises the following steps:
step 1, collect lightning arrester images to build a data set, and train an improved YOLOv3 network M₁ for detecting the lightning arrester;
step 2, train the SiamMask-based general video target tracking network M₂ using public data sets;
step 3, collect a real-time image data stream and select one frame, image A, as the initial frame;
step 4, apply Gaussian smoothing to image A to eliminate noise in local regions;
step 5, use network M₁ to predict on image A, obtaining the coordinates (x, y) of the target center point in the image, the width w and height h of the target frame, and the confidence acc;
step 6, judge whether the confidence is higher than the threshold; if so, go to step 7, otherwise return to step 3;
step 7, tighten the prediction output of network M₁ and use it as the input of network M₂; predict each subsequent frame and record the trajectory of the predicted center point.
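The control flow of steps 3 to 7 can be sketched as follows. This is an illustrative sketch, not the patent's code: `detect` and `track` are hypothetical stubs standing in for the trained networks M₁ (improved YOLOv3) and M₂ (SiamMask), so only the thresholding, box-tightening, and detector-to-tracker hand-off logic is shown.

```python
# Sketch of the detect-then-track loop (steps 3-7); detect() and track()
# are placeholder stubs for the trained M1 and M2 networks.

CONF_THRESHOLD = 0.7   # confidence threshold from step 6
TIGHTEN = 0.9          # shrink factor applied before tracker hand-off

def detect(frame):
    # Placeholder for M1: returns (x, y, w, h, confidence).
    return (640.0, 360.0, 200.0, 400.0, 0.85)

def track(frame, box):
    # Placeholder for M2: returns the tracked box's center for this frame.
    x, y, w, h = box
    return (x, y)

def tighten(x, y, w, h, factor=TIGHTEN):
    # Keep the center, shrink width/height so the box hugs the target.
    return (x, y, factor * w, factor * h)

def run(frames):
    trajectory = []
    box = None
    for frame in frames:
        if box is None:  # still searching for a confident initial detection
            x, y, w, h, conf = detect(frame)
            if conf > CONF_THRESHOLD:
                box = tighten(x, y, w, h)
        else:            # hand-off complete: tracker predicts every frame
            trajectory.append(track(frame, box))
    return trajectory
```

In a real system the detector would run on each incoming frame until a confident detection appears, after which the tracker takes over, matching the loop formed by steps 3, 6, and 7.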
Compared with the prior art, the invention has the following notable advantages: 1) detection and tracking are completely autonomous, without manual intervention such as framing the target in the first frame, so the invention can be applied in unmanned environments and in scenes where manual work would be repetitive; 2) the detection network is suitable for small-target detection tasks; 3) the invention can be applied in complex environments, keeps tracking when the target object is occluded, deformed, or changes scale, and automatically re-captures the target position when the target disappears from the image and reappears, giving high precision, speed, and robustness.
The present invention is described in further detail below with reference to the attached drawing figures.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is the expanded structure of the convolution attention model added to the improved YOLOv3 network M₁ of the present invention.
FIG. 3 is the model structure of the video target tracking network M₂ (SiamMask) used in the present invention.
FIG. 4 is a sample from the data set used to train the improved YOLOv3 network M₁ in the present invention.
Fig. 5 is a diagram illustrating the effect of Gaussian smoothing on the initial frame in the present invention.
FIG. 6 is a diagram illustrating the effect of tightening the frame output by the detection network in the present invention.
FIG. 7 is a diagram illustrating the effect of target detection and tracking in the present invention.
Detailed Description
A fully automatic lightning arrester detection and tracking method comprises the following steps:
step 1, collect lightning arrester images to build a data set, and train an improved YOLOv3 network M₁ for detecting the lightning arrester, which outputs the position information of the lightning arrester in the image. To improve the generalization ability of the detection network, the data set is expanded by horizontal flipping, noise addition, brightness adjustment, image expansion, and similar operations, and its richness is further improved by sample synthesis. Specifically:
step 1-1, collect lightning arrester images to build the data set;
step 1-2, train the improved YOLOv3 network. The feature extraction layer of the improved YOLOv3 network M₁ uses the DarkNet-53 convolutional layers, with a CBAM attention model added to enhance target information; a prediction layer is added so that the number of prior frames of the YOLOv3 network M₁ is increased to 12, adapting it to target detection at more scales; the 12 prior frame sizes used to frame the target are calculated with the K-means clustering algorithm;
step 1-3, take the lightning arrester image as input and predict the position information of the lightning arrester in the image;
step 1-4, calculate the error between the prediction of the YOLOv3 network M₁ and the ground truth according to the GIoU loss function, then adjust the YOLOv3 network M₁ parameters W_i. Specifically:
IoU = I / U
GIoU = 1 - IoU + (A_C - U) / A_C
α = 0.95^i · α₀
where I is the intersection area of the predicted target frame and the ground-truth frame, U is their union area, A_C is the area of the minimum enclosing box, IoU is the intersection-over-union ratio, α is the learning rate at iteration i, and the initial learning rate is α₀ = 0.01. By adjusting the YOLOv3 network M₁ parameters W_i, the value of the GIoU loss function is reduced and the training accuracy is improved.
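For reference, the GIoU terms can be written out as a short pure-Python sketch for axis-aligned boxes given as (x1, y1, x2, y2). The function names are ours; note the text uses "GIoU" for the loss being minimized, so below `giou` is the similarity score and `giou_loss = 1 - giou` is the quantity reduced in training. A deep-learning framework would normally compute this on tensors instead.

```python
def giou(box_a, box_b):
    """GIoU similarity of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection area I
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    # Union area U
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    # Area A_C of the minimum enclosing box
    a_c = ((max(ax2, bx2) - min(ax1, bx1))
           * (max(ay2, by2) - min(ay1, by1)))
    iou = inter / union
    return iou - (a_c - union) / a_c

def giou_loss(box_a, box_b):
    # The loss reduced during training: 1 - GIoU.
    return 1.0 - giou(box_a, box_b)

def learning_rate(i, alpha_0=0.01):
    # Exponentially decayed learning rate: alpha = 0.95**i * alpha_0.
    return 0.95 ** i * alpha_0
```

Identical boxes give GIoU = 1 (loss 0); disjoint boxes give a negative GIoU, which penalizes predictions far from the target even when the plain IoU is zero.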
Step 2, train the SiamMask-based general video target tracking network M₂ using public data sets. The core idea of the SiamMask target tracking network is that the target object to be tracked is framed in an initial frame and used as a template, which serves as the retrieval basis for subsequent frames. The template and each subsequent frame are then fed into the Siamese twin network to obtain two feature maps, and a cross-correlation between them yields the feature map of candidate frames. Convolutions of size 1 × 1 on this basis produce the outputs for the different tasks, and finally the mask of the target is generated.
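The cross-correlation step can be illustrated in isolation. The sketch below is a plain-Python stand-in operating on tiny single-channel "feature maps" (lists of lists); the real SiamMask performs a depth-wise correlation on deep feature tensors, but the sliding-window idea is the same: the response is largest where the template best matches the search region.

```python
def xcorr2d(search, template):
    """Valid 2-D cross-correlation of two single-channel feature maps
    (lists of lists). Returns the response map."""
    sh, sw = len(search), len(search[0])
    th, tw = len(template), len(template[0])
    out = []
    for i in range(sh - th + 1):
        row = []
        for j in range(sw - tw + 1):
            # Slide the template and accumulate element-wise products.
            row.append(sum(search[i + u][j + v] * template[u][v]
                           for u in range(th) for v in range(tw)))
        out.append(row)
    return out
```

The peak of the response map marks the candidate location of the template inside the search region, which is the retrieval basis described above.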
Step 3, collect the real-time image data stream and select one frame, image A, as the initial frame. The image size is 1280 × 720 pixels, the acquisition frame rate is higher than 30 fps, and the target in the initial frame occupies more than 10% of the whole image.
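The acceptance condition on the initial frame can be expressed as a small helper. The 1280 × 720 resolution and the 10% area ratio come from the text; the function name and interface are ours, for illustration only.

```python
def target_large_enough(w, h, img_w=1280, img_h=720, min_ratio=0.10):
    """True if a w-by-h detection covers more than min_ratio of the frame."""
    return (w * h) / (img_w * img_h) > min_ratio
```

If the condition fails, the system would keep acquiring frames (returning to step 3) until the arrester appears large enough to lock on to.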
Step 4, apply Gaussian smoothing to image A to eliminate noise in local regions, with Gaussian coefficient δ = 0.8. Gaussian smoothing is a common image denoising and enhancement method; its core idea is to slide a template over the image and perform a convolution operation, eliminating noise in local regions and making image features more distinct.
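As a sketch of the template being slid over the image: the kernel below is a normalized Gaussian with δ = 0.8 as chosen above. (With OpenCV this whole step would typically be a single call along the lines of `cv2.GaussianBlur(img, (3, 3), 0.8)`; the pure-Python version here only builds the kernel weights.)

```python
import math

def gaussian_kernel(size=3, delta=0.8):
    """Normalized size-by-size Gaussian kernel with standard deviation delta."""
    half = size // 2
    k = [[math.exp(-(x * x + y * y) / (2.0 * delta * delta))
          for x in range(-half, half + 1)]
         for y in range(-half, half + 1)]
    total = sum(map(sum, k))
    return [[v / total for v in row] for row in k]  # weights sum to 1
```

Convolving the image with this kernel replaces each pixel by a weighted average of its neighborhood, which suppresses isolated noise points while preserving larger structures.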
Step 5, use network M₁ to predict on image A, obtaining the coordinates (x, y) of the target center point in the image, the width w and height h of the target frame, and the confidence acc, where (x, y) lies within a 960 × 540 pixel area centered on the image center.
Step 6, judge whether the confidence is higher than the threshold; if so, go to step 7, otherwise return to step 3. The confidence threshold is set to 0.7.
Step 7. The prediction of the improved YOLOv3 detection network generally carries a certain size margin. If this result were fed directly into the SiamMask network M₂, the mask might cover environment information, which in turn makes the target frame too large; then, once the target is occluded and lost, the tracker can hardly capture it again. Therefore the output of the improved YOLOv3 network is first tightened so that the target frame fits the target image as closely as possible, reducing the error of the mask.
The tightened output of network M₁ is used as the input of network M₂; each subsequent frame is predicted and the trajectory of the predicted center point is recorded. Specifically:
step 7-1, select a tightening factor of 0.9;
step 7-2, convert the predicted target frame, given by the predicted center coordinates (x, y) and the width w and height h, into the tightened target frame with center point (x', y'), width w', and height h'. Specifically:
x' = x, y' = y
w' = 0.9w, h' = 0.9h
step 7-3, input the new target frame information into the SiamMask network M₂, predict the subsequent frames, and record the trajectory of the predicted center point.
A fully automatic lightning arrester detection and tracking system comprises the following modules:
a network training module, which collects lightning arrester images to build a data set, trains an improved YOLOv3 network M₁ for detecting the lightning arrester, and trains a SiamMask-based general video target tracking network M₂ using public data;
an image acquisition module, for acquiring the real-time image data stream;
an image processing module, for applying Gaussian smoothing to the acquired image to eliminate noise in local regions;
a lightning arrester detection module, which uses network M₁ to predict on the processed image, detects the lightning arrester in the image, obtains its coordinates, width, height, and confidence, and finally judges whether the confidence meets the requirement;
a lightning arrester tracking module, which tightens the prediction output of network M₁, then uses network M₂ to track the lightning arrester and record its trajectory information.
A computer device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the following steps:
step 1, collect lightning arrester images to build a data set, and train an improved YOLOv3 network M₁ for detecting the lightning arrester;
step 2, train the SiamMask-based general video target tracking network M₂ using public data sets;
step 3, collect a real-time image data stream and select one frame, image A, as the initial frame;
step 4, apply Gaussian smoothing to image A to eliminate noise in local regions;
step 5, use network M₁ to predict on image A, obtaining the coordinates (x, y) of the target center point in the image, the width w and height h of the target frame, and the confidence acc;
step 6, judge whether the confidence is higher than the threshold; if so, go to step 7, otherwise return to step 3;
step 7, tighten the prediction output of network M₁ and use it as the input of network M₂; predict each subsequent frame and record the trajectory of the predicted center point.
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the following steps:
step 1, collect lightning arrester images to build a data set, and train an improved YOLOv3 network M₁ for detecting the lightning arrester;
step 2, train the SiamMask-based general video target tracking network M₂ using public data sets;
step 3, collect a real-time image data stream and select one frame, image A, as the initial frame;
step 4, apply Gaussian smoothing to image A to eliminate noise in local regions;
step 5, use network M₁ to predict on image A, obtaining the coordinates (x, y) of the target center point in the image, the width w and height h of the target frame, and the confidence acc;
step 6, judge whether the confidence is higher than the threshold; if so, go to step 7, otherwise return to step 3;
step 7, tighten the prediction output of network M₁ and use it as the input of network M₂; predict each subsequent frame and record the trajectory of the predicted center point.
The invention is described in detail below with reference to the drawings and examples.
The embodiment is as follows:
with reference to fig. 1, the method for detecting and tracking a full-automatic lightning arrester provided by the invention comprises the following steps:
step 1, collect lightning arrester images to build a data set, and train an improved YOLOv3 network M₁ for detecting the lightning arrester, which outputs the position information of the lightning arrester in the image. To improve the generalization ability of the detection network, the data set is expanded by horizontal flipping, noise addition, brightness adjustment, image expansion, and similar operations, and its richness is further improved by sample synthesis, as shown in fig. 4. Specifically:
step 1-1, collect lightning arrester images to build the data set;
step 1-2, with reference to FIG. 2, train the improved YOLOv3 network. The feature extraction layer of the improved YOLOv3 network M₁ uses the DarkNet-53 convolutional layers, with a CBAM attention model added to enhance target information; a prediction layer is added so that the number of prior frames of the YOLOv3 network M₁ is increased to 12, adapting it to target detection at more scales; the 12 prior frame sizes used to frame the target are calculated with the K-means clustering algorithm. The improved YOLOv3 network structure parameters are shown in the following table:
[Table: improved YOLOv3 network structure parameters (not reproduced in this text version)]
step 1-3: taking the image of the lightning arrester as input, and predicting the position information of the lightning arrester in the image;
step 1-4, calculate the error between the prediction of the YOLOv3 network M₁ and the ground truth according to the GIoU loss function, then adjust the YOLOv3 network M₁ parameters W_i. Specifically:
IoU = I / U
GIoU = 1 - IoU + (A_C - U) / A_C
α = 0.95^i · α₀
where I is the intersection area of the predicted target frame and the ground-truth frame, U is their union area, A_C is the area of the minimum enclosing box, IoU is the intersection-over-union ratio, α is the learning rate at iteration i, and the initial learning rate is α₀ = 0.01. By adjusting the YOLOv3 network M₁ parameters W_i, the value of the GIoU loss function is reduced and the training accuracy is improved.
Step 2, train the SiamMask-based general video target tracking network M₂ using public data sets. The structure of the SiamMask target tracking network is shown in FIG. 3. Its core idea is that the target object to be tracked is framed in an initial frame and used as a template, which serves as the retrieval basis for subsequent frames. The template and each subsequent frame are then fed into the Siamese twin network to obtain two feature maps, and a cross-correlation between them yields the feature map of candidate frames. Convolutions of size 1 × 1 on this basis produce the outputs for the different tasks, and finally the mask of the target is generated.
Step 3, collect the real-time image data stream and select one frame, image A, as the initial frame. The image size is 1280 × 720 pixels, the acquisition frame rate is higher than 30 fps, and the target in the initial frame occupies more than 10% of the whole image.
Step 4, apply Gaussian smoothing to image A to eliminate noise in local regions, with Gaussian coefficient δ = 0.8. Gaussian smoothing is a common image denoising and enhancement method; its core idea is to slide a template over the image and perform a convolution operation, eliminating noise in local regions and making image features more distinct, as shown in fig. 5, where part (a) is the original image and part (b) is the image after Gaussian smoothing.
Step 5, use network M₁ to predict on image A, obtaining the coordinates (x, y) of the target center point in the image, the width w and height h of the target frame, and the confidence acc, where (x, y) lies within a 960 × 540 pixel area centered on the image center.
Step 6, judge whether the confidence is higher than the threshold; if so, go to step 7, otherwise return to step 3. The confidence threshold is set to 0.7.
Step 7. The prediction of the improved YOLOv3 detection network generally carries a certain size margin. If this result were fed directly into the SiamMask network M₂, the mask might cover environment information, making the target frame too large; then, once the target is occluded and lost, the tracker can hardly capture it again. Therefore the output of the improved YOLOv3 network is first tightened so that the target frame fits the target image as closely as possible, reducing the error of the mask.
The tightened output of network M₁ is used as the input of network M₂; each subsequent frame is predicted and the trajectory of the predicted center point is recorded. The effect of the tightening is shown in fig. 6. Specifically:
step 7-1, select a tightening factor of 0.9;
step 7-2, convert the predicted target frame, given by the predicted center coordinates (x, y) and the width w and height h, into the tightened target frame with center point (x', y'), width w', and height h'. Specifically:
x' = x, y' = y
w' = 0.9w, h' = 0.9h
step 7-3, input the new target frame information into the SiamMask network M₂, predict the subsequent frames, and record the trajectory of the predicted center point.
Fig. 7 is a diagram of the effect of target detection and tracking in the present invention, in which part (a) in fig. 7 is a case of an initial frame, part (b) in fig. 7 is a tracking effect after the attitude of the arrester changes, part (c) in fig. 7 is a tracking effect when the size of the arrester becomes smaller as the lens is zoomed out, and part (d) in fig. 7 is a tracking effect when the arrester is partially blocked by other objects in the environment.
The invention can autonomously and quickly detect and track a lightning arrester target in video, achieves real-time tracking, and copes with target scale change, attitude change, partial occlusion, and complete loss. The live working robot can therefore position the arrester stably, accurately, and quickly during arrester replacement experiments, and maintains tracking performance in an aerial working environment with a complex, shaking background. This provides an important information basis for subsequent attitude measurement and for manipulator grabbing and disassembly work, and has good application prospects and value.

Claims (9)

1. A fully automatic lightning arrester detection and tracking method, characterized by comprising the following steps:
step 1, collect lightning arrester images to build a data set, and train an improved YOLOv3 network M₁ for detecting the lightning arrester, specifically:
step 1-1, collect lightning arrester images to build the data set;
step 1-2, train the improved YOLOv3 network. The feature extraction layer of the improved YOLOv3 network M₁ uses the DarkNet-53 convolutional layers, with a CBAM attention model added to enhance target information; a prediction layer is added so that the number of prior frames of the YOLOv3 network M₁ is increased to 12, adapting it to target detection at more scales; the 12 prior frame sizes used to frame the target are calculated with the K-means clustering algorithm;
step 1-3, take the lightning arrester image as input and predict the position information of the lightning arrester in the image;
step 1-4, calculate the error between the prediction of the YOLOv3 network M₁ and the ground truth according to the GIoU loss function, then adjust the YOLOv3 network M₁ parameters W_i, specifically:
IoU = I / U
GIoU = 1 - IoU + (A_C - U) / A_C
α = 0.95^i · α₀
where I is the intersection area of the predicted target frame and the ground-truth frame, U is their union area, A_C is the area of the minimum enclosing box, IoU is the intersection-over-union ratio, α is the learning rate at iteration i, and the initial learning rate is α₀ = 0.01; by adjusting the YOLOv3 network M₁ parameters W_i, the value of the GIoU loss function is reduced and the training accuracy is improved;
step 2, train the SiamMask-based general video target tracking network M₂ using public data sets;
step 3, collect a real-time image data stream and select one frame, image A, as the initial frame;
step 4, apply Gaussian smoothing to image A to eliminate noise in local regions;
step 5, use network M₁ to predict on image A, obtaining the coordinates (x, y) of the target center point in the image, the width w and height h of the target frame, and the confidence acc;
step 6, judge whether the confidence is higher than the threshold; if so, go to step 7, otherwise return to step 3;
step 7, tighten the prediction output of network M₁ and use it as the input of network M₂; predict each subsequent frame and record the trajectory of the predicted center point.
2. The fully automatic lightning arrester detection and tracking method according to claim 1, wherein the image size in step 3 is 1280 × 720 pixels, the acquisition frame rate is higher than 30 fps, and the target in the initial frame occupies more than 10% of the whole image.
3. The fully automatic lightning arrester detection and tracking method according to claim 1, characterized in that the Gaussian coefficient in the Gaussian smoothing of the image in step 4 is δ = 0.8.
4. The fully automatic lightning arrester detecting and tracking method according to claim 1, characterized in that (x, y) in step 5 is within an area of 960 x 540 pixels centered on the center of the image.
5. The fully automatic arrester detection and tracking method according to claim 1, characterized in that the confidence threshold in step 6 is 0.7.
6. The fully automatic lightning arrester detection and tracking method according to claim 1, wherein predicting each subsequent frame and recording the trajectory of the predicted center point in step 7 specifically comprises:
step 7-1, select a tightening factor of 0.9;
step 7-2, convert the predicted target frame, given by the predicted center coordinates (x, y) and the width w and height h, into the tightened target frame with center point (x', y'), width w', and height h', specifically:
x' = x, y' = y
w' = 0.9w, h' = 0.9h
step 7-3, input the new target frame information into the SiamMask network M₂, predict the subsequent frames, and record the trajectory of the predicted center point.
7. A fully automatic lightning arrester detection and tracking system, characterized by comprising the following modules:
a network training module, which collects lightning arrester images to build a data set, trains an improved YOLOv3 network M₁ for detecting the lightning arrester, and trains a SiamMask-based general video target tracking network M₂ using public data;
an image acquisition module, for acquiring the real-time image data stream;
an image processing module, for applying Gaussian smoothing to the acquired image to eliminate noise in local regions;
a lightning arrester detection module, which uses network M₁ to predict on the processed image, detects the lightning arrester in the image, obtains its coordinates, width, height, and confidence, and finally judges whether the confidence meets the requirement;
a lightning arrester tracking module, which tightens the prediction output of network M₁, then uses network M₂ to track the lightning arrester and record its trajectory information.
8. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1-6 when executing the computer program.
9. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, carries out the steps of the method of any one of claims 1-6.
CN202011386930.7A 2020-12-02 2020-12-02 Full-automatic lightning arrester detection and tracking method Active CN112581386B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011386930.7A CN112581386B (en) 2020-12-02 2020-12-02 Full-automatic lightning arrester detection and tracking method

Publications (2)

Publication Number Publication Date
CN112581386A CN112581386A (en) 2021-03-30
CN112581386B true CN112581386B (en) 2022-10-21

Family

ID=75128112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011386930.7A Active CN112581386B (en) 2020-12-02 2020-12-02 Full-automatic lightning arrester detection and tracking method

Country Status (1)

Country Link
CN (1) CN112581386B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113085888A (en) * 2021-04-21 2021-07-09 金陵科技学院 Intelligent networked automobile driving-assisting safety information detection system
CN113902044B (en) * 2021-12-09 2022-03-01 江苏游隼微电子有限公司 Image target extraction method based on lightweight YOLOV3
CN117315508B (en) * 2023-08-24 2024-05-14 北京智盟信通科技有限公司 Power grid equipment monitoring method and system based on data processing

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106771805A (en) * 2016-12-09 2017-05-31 南京理工大学 Hot-line robot detection method for MOA
CN109525781A (en) * 2018-12-24 2019-03-26 国网山西省电力公司检修分公司 Image capturing method, device, equipment and storage medium for wire connection points
CN110706266A (en) * 2019-12-11 2020-01-17 北京中星时代科技有限公司 Aerial target tracking method based on YOLOv3
CN111932583A (en) * 2020-06-05 2020-11-13 西安羚控电子科技有限公司 Space-time information integrated intelligent tracking method based on complex background

Also Published As

Publication number Publication date
CN112581386A (en) 2021-03-30

Similar Documents

Publication Publication Date Title
CN112581386B (en) Full-automatic lightning arrester detection and tracking method
CN109685066B (en) Mine target detection and identification method based on deep convolutional neural network
CN110340891B (en) Mechanical arm positioning and grabbing system and method based on point cloud template matching technology
WO2018028103A1 (en) Unmanned aerial vehicle power line inspection method based on characteristics of human vision
CN109903331B (en) Convolutional neural network target detection method based on RGB-D camera
CN111553949B (en) Positioning and grabbing method for irregular workpiece based on single-frame RGB-D image deep learning
CN111199556B (en) Indoor pedestrian detection and tracking method based on camera
US11367195B2 (en) Image segmentation method, image segmentation apparatus, image segmentation device
CN111553950B (en) Steel coil centering judgment method, system, medium and electronic terminal
KR102470873B1 (en) Crop growth measurement device using image processing and method thereof
CN110992378B (en) Dynamic updating vision tracking aerial photographing method and system based on rotor flying robot
CN110555377A (en) pedestrian detection and tracking method based on fisheye camera overlook shooting
CN111368637B (en) Transfer robot target identification method based on multi-mask convolutional neural network
CN112785626A (en) Twin network small target tracking method based on multi-scale feature fusion
Chen et al. Stingray detection of aerial images with region-based convolution neural network
CN111767826A (en) Timing fixed-point scene abnormity detection method
CN111127355A (en) Method for finely complementing defective light flow graph and application thereof
CN110889460A (en) Mechanical arm specified object grabbing method based on cooperative attention mechanism
CN114022520B (en) Robot target tracking method based on Kalman filtering and twin network
CN116129039A (en) Three-dimensional point cloud generation method and device for power transmission line and storage medium
Li et al. Low-cost 3D building modeling via image processing
CN115457001A (en) Photovoltaic panel foreign matter detection method, system, device and medium based on VGG network
CN109934853B (en) Correlation filtering tracking method based on response image confidence region adaptive feature fusion
CN113932712A (en) Melon and fruit vegetable size measuring method based on depth camera and key points
CN114120444A (en) 3D convolution neural network unsafe behavior detection system based on human skeleton characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant