CN116993681A - Substation inspection defect detection method and system - Google Patents


Info

Publication number
CN116993681A
CN116993681A (application number CN202310825936.7A)
Authority
CN
China
Prior art keywords
defect
substation equipment
loss
inspection
attention
Prior art date
Legal status (assumption, not a legal conclusion)
Pending
Application number
CN202310825936.7A
Other languages
Chinese (zh)
Inventor
王一博
刘士峰
王科
刘俊
塔晓龙
金洋
高寅
冶金顺
秦之武
赵中奇
秦贵邦
董发福
李嘉荣
李新龙
周先
李强
张鸿远
Current Assignee (the listed assignees may be inaccurate)
Haibei Power Supply Company State Grid Qinghai Electric Power Co ltd
State Grid Corp of China SGCC
State Grid Qinghai Electric Power Co Ltd
Original Assignee
Haibei Power Supply Company State Grid Qinghai Electric Power Co ltd
State Grid Corp of China SGCC
State Grid Qinghai Electric Power Co Ltd
Priority date (assumption, not a legal conclusion)
Application filed by Haibei Power Supply Company State Grid Qinghai Electric Power Co ltd, State Grid Corp of China SGCC, and State Grid Qinghai Electric Power Co Ltd
Priority to CN202310825936.7A
Publication of CN116993681A
Legal status: Pending

Classifications

    • B25J11/00 Manipulators not otherwise provided for
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators; motion, path, trajectory planning
    • B25J9/1697 Vision controlled systems
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/08 Learning methods
    • G06T7/0004 Industrial image inspection
    • G06V10/762 Image or video recognition using clustering, e.g. of similar faces in social networks
    • G06V10/764 Image or video recognition using classification, e.g. of video objects
    • G06V10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Image or video recognition using neural networks
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • Y04S10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Abstract

The invention discloses a substation inspection defect detection method, belonging to the technical field of power equipment inspection. The method comprises the following steps: acquiring defect images of substation equipment; constructing a substation equipment defect detection model from the defect images; acquiring a defect image of the substation equipment to be identified; inputting that image into the defect detection model to perform defect identification and obtain a defect identification result; and visually displaying the result. The invention also discloses a substation inspection defect detection system. The invention uses a multi-head attention mechanism to extract rich spatio-temporal feature information, adopts sliding windows to strengthen the connections between windows and the interaction between local and global features, and builds a fusion network from FPN feature fusion and the detection head of YOLOv5, ensuring a high recognition rate for defect detection against complex backgrounds while detecting efficiently in real time.

Description

Substation inspection defect detection method and system
Technical Field
The invention relates to the technical field of power equipment inspection, in particular to a substation inspection defect detection method and system.
Background
A transformer substation is an important component of the power system; to ensure the normal, stable and safe operation of the power system, the substation must be inspected and maintained periodically. However, routine inspection of traditional substations depends entirely on manual work, which suffers from low efficiency and inspection results that fail to accurately reflect equipment states. With the rapid development of the power grid in recent years, the contradiction between the shortage of production personnel and the growing inspection workload has become increasingly prominent, and work such as operation-and-maintenance integration has expanded the scope of service while placing higher demands on substation operation and maintenance. Some substations are now equipped with inspection robots, but their inspection accuracy is limited, and they struggle to detect small targets such as fine defects on equipment.
The specific disadvantages of the prior art are:
1. Existing substation defect detection methods are easily affected by complex backgrounds, leading to inaccurate identification results.
2. Existing substation defect detection methods have a limited identification range and easily miss small targets such as fine defects on substation equipment.
3. Most existing methods focus on global feature extraction and cannot relate local features to global features.
Disclosure of Invention
The invention aims to provide a substation inspection defect detection method and system that obtain equipment state data on small defects through a secondary detection network during inspection and, combined with artificial-intelligence data analysis and processing technology, achieve comprehensive and accurate detection of equipment defects, thereby improving inspection efficiency and accuracy.
In order to solve the technical problems, the invention provides a substation inspection defect detection method, which comprises the following steps:
acquiring a defect image of substation equipment;
constructing a substation equipment defect detection model according to the substation equipment defect image;
acquiring a defect image of substation equipment to be identified;
inputting the defect image of the substation equipment to be identified into the substation equipment defect detection model to perform defect identification and obtain a defect identification result;
and visually displaying the defect identification result.
Preferably, a substation equipment defect detection model is constructed according to the substation equipment defect image, and the method specifically comprises the following steps:
performing data augmentation on the substation equipment defect images to obtain an image dataset;
dividing the image dataset into a training set and a testing set;
marking the defect images of the substation equipment in the training set and the testing set;
extracting the characteristics of the defect images of the substation equipment in the training set through a sliding window multi-head self-attention network;
feature fusion is carried out on the features through the FPN structure, and target features are obtained;
performing cluster analysis on the training set with an improved K-means++ clustering algorithm, and determining the prior boxes for target detection;
performing network structure adjustment on the YOLOv5 fusion network, and introducing Focal Loss into a Loss function to adjust the proportion of positive and negative samples in the YOLOv5 fusion network;
inputting the training set after cluster analysis into a YOLOv5 network model for training to obtain a substation equipment defect detection model;
and testing the defect detection model of the substation equipment according to the test set after the cluster analysis.
Preferably, feature extraction is performed on the defect images of the substation equipment in the training set through a sliding-window multi-head self-attention network, and the method specifically comprises the following steps:
inputting the defect images of the substation equipment in the training set into a Patch Partition module for blocking, where every 4×4 block of adjacent pixels forms a patch, which is then flattened along the channel direction; the defect images of the substation equipment in the training set are RGB three-channel pictures;
small feature maps of different sizes are constructed through four stages; the first stage is a combination of a linear embedding layer, a window multi-head self-attention module and a sliding-window multi-head self-attention module; the second to fourth stages are each a combination of Patch Merging downsampling, a window multi-head self-attention module and a sliding-window multi-head self-attention module, with the third stage repeating the window/sliding-window combination 3 times;
in Patch Merging downsampling, the four sub-sampled small feature maps are spliced in the depth direction; a normalization layer and a fully connected layer then perform the downsampling, halving the length and width of the feature map while doubling the number of channels.
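As an illustrative sketch (not from the patent), the patch partition and patch-merging operations described above can be written in NumPy as follows; the 224×224 RGB input and 4×4 patch size are assumptions for the example, and the linear layers that would follow are omitted:

```python
import numpy as np

def patch_partition(img, patch=4):
    """Split an H x W x C image into (H/patch) x (W/patch) patches and
    flatten each patch along the channel direction (C -> patch*patch*C)."""
    h, w, c = img.shape
    x = img.reshape(h // patch, patch, w // patch, patch, c)
    x = x.transpose(0, 2, 1, 3, 4)            # (H/4, W/4, 4, 4, C)
    return x.reshape(h // patch, w // patch, patch * patch * c)

def patch_merging(x):
    """Downsample: halve the spatial size and splice the four sub-sampled
    maps in depth (channels x4); the follow-up linear layer would then
    reduce the channels to x2, as the text describes."""
    h, w, c = x.shape
    parts = [x[i::2, j::2, :] for i in (0, 1) for j in (0, 1)]
    return np.concatenate(parts, axis=-1)     # (H/2, W/2, 4C)

img = np.zeros((224, 224, 3))
tokens = patch_partition(img)                 # (56, 56, 48)
merged = patch_merging(tokens)                # (28, 28, 192)
```

The shapes follow the common configuration of this architecture: a 224×224×3 image becomes 56×56 tokens of 48 channels, and each merging step trades spatial resolution for depth.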
Preferably, in the first stage, the linear embedding layer performs linear transformation on the channel data of each pixel, and converts the channel number into a set value;
the window multi-head self-attention module divides the whole feature map into windows of equal size, and then applies the multi-head attention mechanism within each window;
The multi-head self-attention module of the sliding window realizes information transfer among different windows by shifting the windows.
Preferably, the window multi-head self-attention module is composed of a normalization layer based on residual connection, a window multi-head self-attention layer and a multi-layer perceptron, wherein the window multi-head self-attention layer is based on a multi-head self-attention mechanism;
the calculation formula of the multi-head self-attention mechanism is as follows:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d}}\right) V$$

wherein: Q, K and V are the query vector sequence, the key vector sequence and the value vector sequence respectively, and d is the dimension of the query and key vectors;
the window multi-head self-attention layer divides the overall feature map processed by the multi-head attention mechanism into $\lceil h/M \rceil \times \lceil w/M \rceil$ windows whose width and height are both of size M, where w and h are the width and height of the feature map, and then applies the multi-head attention calculation within each window.
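The windowed self-attention described above can be sketched in NumPy as follows (a single-head, single-image illustration; the window size M = 7 and the channel count are assumed for the example, not taken from the patent):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    return softmax(Q @ K.swapaxes(-1, -2) / np.sqrt(d)) @ V

def window_partition(feat, M):
    """Split an h x w x c feature map into (h/M)*(w/M) windows of M x M
    tokens; attention is then computed inside each window independently."""
    h, w, c = feat.shape
    x = feat.reshape(h // M, M, w // M, M, c).transpose(0, 2, 1, 3, 4)
    return x.reshape(-1, M * M, c)            # (num_windows, M*M, c)

feat = np.random.rand(56, 56, 96)
wins = window_partition(feat, M=7)            # (64, 49, 96)
out = attention(wins, wins, wins)             # self-attention per window
```

Because the attention matrix is computed per M×M window rather than over all h×w tokens, the cost grows linearly with the number of windows instead of quadratically with the image size.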
Preferably, the sliding window multi-head self-attention module realizes information transfer between different windows by shifting the windows, and specifically comprises the following steps:
the M-sized windows partitioned by the window multi-head self-attention layer are each shifted to the right and downward by ⌊M/2⌋ (M/2 rounded down) starting from the upper-left corner, so that information can be transferred between windows;
a cyclic shift of ⌊M/2⌋ is then performed once, and the cut-off blocks are spliced back into windows of size M;
the spliced M-sized windows are computed with a masked multi-head self-attention mechanism, where the mask suppresses attention between unrelated positions;
finally, the cyclic shift of the masked M-sized windows is reversed, restoring the original layout.
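A minimal illustration of the cyclic shift and its restoration, using NumPy's `np.roll` on a toy feature map (the window size M = 4 here is an assumption for the example):

```python
import numpy as np

M = 4                                    # window size (illustrative)
feat = np.arange(8 * 8).reshape(8, 8)    # toy 8x8 feature map

# Shifting the window grid right/down by floor(M/2) is equivalent to a
# cyclic roll of the feature map up/left; the wrapped-around blocks are
# exactly the "unrelated positions" that need the attention mask.
shifted = np.roll(feat, shift=(-(M // 2), -(M // 2)), axis=(0, 1))

# After the masked windowed attention, the shift is undone by the
# inverse roll, restoring the original layout.
restored = np.roll(shifted, shift=(M // 2, M // 2), axis=(0, 1))
assert (restored == feat).all()
```

The cyclic roll keeps the number of windows constant, so the same windowed-attention kernel can be reused on the shifted map without padding.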
Preferably, the training set is subjected to cluster analysis by using a modified K-means++ clustering algorithm, and a priori frame for target detection is determined, and the method specifically comprises the following steps of:
randomly selecting a sample labeling frame in a training set to be regarded as an initial clustering center point;
calculating the shortest distance D(x) between each sample and its nearest current cluster center, calculating the probability that each sample is selected as the next cluster center, and selecting the sample point with the maximum probability value as the next cluster center, until k cluster center points have been selected;
taking k clustering center points as initial clustering centers, and finally clustering k priori frames;
the probability that a sample is selected as the next cluster center is calculated as:

$$P(x) = \frac{D(x)^{2}}{\sum_{x \in X} D(x)^{2}}$$

wherein: P(x) is the probability that sample x is selected as the next cluster center, X is the dataset, and D(x) is the shortest distance from sample x to the current cluster centers.
Preferably, focal Loss is introduced into the Loss function to adjust the positive and negative sample ratio in the YOLOv5 fusion network, and the method specifically comprises the following steps:
the loss function loss of the YOLOv5 fusion network consists of a regression box loss bbox_loss, a confidence loss conf_loss and a classification loss prob_loss; the loss function loss is formulated as follows:
loss=bbox_loss+conf_loss+prob_loss
the YOLOv5 fusion network adopts the CIoU loss to calculate the regression box loss, which is computed only for positive samples:

$$\mathrm{bbox\_loss} = 1 - \mathrm{IoU} + \frac{\rho^{2}}{c^{2}} + \alpha v, \qquad v = \frac{4}{\pi^{2}}\left(\arctan\frac{w_{l}}{h_{l}} - \arctan\frac{w_{p}}{h_{p}}\right)^{2}, \qquad \alpha = \frac{v}{(1 - \mathrm{IoU}) + v}$$

In the above formula, ρ is the distance between the center points of the label box and the prediction box, c is the diagonal length of the minimum enclosing rectangle of the two boxes, v is the aspect-ratio similarity between the label box and the prediction box, and α is the influence factor of v. The top-left and bottom-right corners of the prediction box are $(x_{p1}, y_{p1})$ and $(x_{p2}, y_{p2})$, and those of the label box are $(x_{l1}, y_{l1})$ and $(x_{l2}, y_{l2})$, so that $w_{p} = x_{p2} - x_{p1}$, $h_{p} = y_{p2} - y_{p1}$, $w_{l} = x_{l2} - x_{l1}$ and $h_{l} = y_{l2} - y_{l1}$;
The YOLOv5 fusion network adopts the binary cross-entropy loss (Binary Cross Entropy Loss) to calculate the confidence loss and the classification loss, and introduces modulation coefficients to address the imbalance of positive and negative samples and to adjust the attention paid to hard samples during network training; the resulting loss is the Focal Loss function, shown below:

$$FL(p) = \begin{cases} -\alpha (1 - p)^{\gamma} \log(p), & y = 1 \\ -(1 - \alpha)\, p^{\gamma} \log(1 - p), & y = 0 \end{cases}$$

wherein p ∈ [0, 1] is the probability output by the model for the target class, y ∈ {0, 1} indicates whether the target belongs to that class, and α and γ are the modulation coefficients: α sets the balance between positive and negative samples, and γ sets the degree of attention paid to hard samples.
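A minimal NumPy sketch of the Focal Loss described above (the values α = 0.25 and γ = 2 are commonly used defaults, not values specified in the patent):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-9):
    """Focal loss: binary cross-entropy with a modulating factor that
    down-weights easy examples (alpha balances positives/negatives,
    gamma focuses training on hard samples)."""
    p = np.clip(p, eps, 1.0 - eps)
    pos = -alpha * (1.0 - p) ** gamma * np.log(p)        # y = 1 term
    neg = -(1.0 - alpha) * p ** gamma * np.log(1.0 - p)  # y = 0 term
    return np.where(y == 1, pos, neg)

p = np.array([0.9, 0.1, 0.6])   # model confidences
y = np.array([1, 0, 1])         # ground-truth labels
losses = focal_loss(p, y)
# An easy positive (p = 0.9) contributes far less than a hard one (p = 0.6).
```

Setting γ = 0 and α = 0.5 recovers (half of) the plain binary cross-entropy, which makes the role of the modulation coefficients easy to check.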
Preferably, the method for acquiring the defect image of the substation equipment to be identified specifically comprises the following steps:
setting inspection parameters of the inspection robot; the inspection parameters comprise an inspection line and an inspection speed;
operating the inspection robot according to the inspection line and the inspection speed;
acquiring position information and obstacle information while the inspection robot is running;
adjusting the inspection line for obstacle avoidance according to the position information and obstacle information acquired during operation;
and shooting the substation equipment through the inspection robot to obtain a defect image of the substation equipment to be identified.
The invention also provides a substation inspection defect detection system, which comprises:
the acquisition module is used for acquiring a defect image of the substation equipment;
the construction module is used for constructing a substation equipment defect detection model according to the substation equipment defect image;
the acquisition module is used for acquiring a defect image of the transformer substation equipment to be identified;
the computing and identifying module is used for inputting the defect image of the transformer substation equipment to be identified into a transformer substation equipment defect detection model to conduct defect identification, and obtaining a defect identification result;
and the visualization module is used for carrying out visual display on the defect identification result.
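As a hypothetical sketch of how the modules above could be composed (the class and interface names are illustrative, not taken from the patent):

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class DefectInspectionSystem:
    """Wires the acquisition, detection-model and visualization modules
    into one inspection pass; each module is injected as a callable."""
    acquire_image: Callable[[], Any]        # acquisition module
    detect: Callable[[Any], List[str]]      # model from the construction module
    visualize: Callable[[List[str]], None]  # visualization module

    def run_once(self) -> List[str]:
        image = self.acquire_image()        # image to be identified
        defects = self.detect(image)        # defect identification
        self.visualize(defects)             # display the result
        return defects

log = []
system = DefectInspectionSystem(
    acquire_image=lambda: "frame-001",
    detect=lambda img: ["rust", "oil leak"],   # stand-in for the trained model
    visualize=log.append,
)
result = system.run_once()
```

Injecting the modules as callables keeps the pipeline testable: the trained YOLOv5-based model can be swapped in for the stand-in without changing the system class.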
Compared with the prior art, the invention has the beneficial effects that:
(1) Based on a sliding-window multi-head self-attention network and a YOLOv5 fusion network, the substation robot inspection method uses a multi-head attention mechanism to extract rich spatio-temporal feature information, adopts sliding windows to strengthen the connections between windows and enhance the interaction between local and global features, and builds the fusion network from FPN feature fusion and the YOLOv5 detection head, ensuring a high recognition rate for defect detection against complex backgrounds while detecting efficiently in real time;
(2) The invention designs an improved prior-box selection strategy (the K-means++ clustering algorithm) and an improved Focal Loss function, which effectively raise the detection precision of the sliding-window multi-head self-attention network and YOLOv5 fusion network algorithm;
(3) The two-stage cascaded target detection strategy effectively improves the fine-grained detection of small targets.
Drawings
The following describes the embodiments of the present invention in further detail with reference to the accompanying drawings.
Fig. 1 is a block diagram of a multi-headed self-attention network based on a sliding window and YOLOv5 converged network.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The present invention may be embodied in many other forms than those herein described, and those skilled in the art will readily appreciate that the present invention may be similarly embodied without departing from the spirit or essential characteristics thereof, and therefore the present invention is not limited to the specific embodiments disclosed below.
The terminology used in the one or more embodiments of the specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the specification. As used in this specification, one or more embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments of this specification to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first may also be referred to as a second, and similarly, a second may also be referred to as a first, without departing from the scope of one or more embodiments of the present description. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
The invention is described in further detail below with reference to the attached drawing figures:
the invention provides a substation inspection defect detection method, which comprises the following steps:
acquiring a defect image of substation equipment;
constructing a substation equipment defect detection model according to the substation equipment defect image;
acquiring a defect image of substation equipment to be identified;
inputting the defect image of the substation equipment to be identified into the substation equipment defect detection model to perform defect identification and obtain a defect identification result;
and visually displaying the defect identification result.
Preferably, a substation equipment defect detection model is constructed according to the substation equipment defect image, and the method specifically comprises the following steps:
performing data augmentation on the substation equipment defect images to obtain an image dataset;
dividing the image dataset into a training set and a testing set;
marking the defect images of the substation equipment in the training set and the testing set;
extracting the characteristics of the defect images of the substation equipment in the training set through a sliding window multi-head self-attention network;
feature fusion is carried out on the features through the FPN structure, and target features are obtained;
performing cluster analysis on the training set with an improved K-means++ clustering algorithm, and determining the prior boxes for target detection;
performing network structure adjustment on the YOLOv5 fusion network, and introducing Focal Loss into a Loss function to adjust the proportion of positive and negative samples in the YOLOv5 fusion network;
inputting the training set after cluster analysis into a YOLOv5 network model for training to obtain a substation equipment defect detection model;
and testing the defect detection model of the substation equipment according to the test set after the cluster analysis.
Preferably, feature extraction is performed on the defect images of the substation equipment in the training set through a sliding-window multi-head self-attention network, and the method specifically comprises the following steps:
inputting the defect images of the substation equipment in the training set into a Patch Partition module for blocking, where every 4×4 block of adjacent pixels forms a patch, which is then flattened along the channel direction; the defect images of the substation equipment in the training set are RGB three-channel pictures;
small feature maps of different sizes are constructed through four stages; the first stage is a combination of a linear embedding layer, a window multi-head self-attention module and a sliding-window multi-head self-attention module; the second to fourth stages are each a combination of Patch Merging downsampling, a window multi-head self-attention module and a sliding-window multi-head self-attention module, with the third stage repeating the window/sliding-window combination 3 times;
in Patch Merging downsampling, the four sub-sampled small feature maps are spliced in the depth direction; a normalization layer and a fully connected layer then perform the downsampling, halving the length and width of the feature map while doubling the number of channels.
Preferably, in the first stage, the linear embedding layer performs linear transformation on the channel data of each pixel, and converts the channel number into a set value;
the window multi-head self-attention module divides the whole feature map into windows of equal size, and then applies the multi-head attention mechanism within each window;
The multi-head self-attention module of the sliding window realizes information transfer among different windows by shifting the windows.
Preferably, the window multi-head self-attention module is composed of a residual-connected normalization layer, a window multi-head self-attention layer and a multi-layer perceptron, wherein the window multi-head self-attention layer is based on the multi-head self-attention mechanism:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d}}\right) V$$

wherein: Q, K and V are the query vector sequence, the key vector sequence and the value vector sequence respectively, and d is the dimension of the query and key vectors;
the window multi-head self-attention layer divides the overall feature map processed by the multi-head attention mechanism into $\lceil h/M \rceil \times \lceil w/M \rceil$ windows whose width and height are both of size M, where w and h are the width and height of the feature map, and then applies the multi-head attention calculation within each window.
Preferably, the sliding window multi-head self-attention module realizes information transfer between different windows by shifting the windows, and specifically comprises the following steps:
the M-sized windows partitioned by the window multi-head self-attention layer are each shifted to the right and downward by ⌊M/2⌋ (M/2 rounded down) starting from the upper-left corner, so that information can be transferred between windows;
a cyclic shift of ⌊M/2⌋ is then performed once, and the cut-off blocks are spliced back into windows of size M;
the spliced M-sized windows are computed with a masked multi-head self-attention mechanism, where the mask suppresses attention between unrelated positions;
finally, the cyclic shift of the masked M-sized windows is reversed, restoring the original layout.
Preferably, the training set is subjected to cluster analysis by using a modified K-means++ clustering algorithm, and a priori frame for target detection is determined, and the method specifically comprises the following steps of:
randomly selecting a sample labeling frame in a training set to be regarded as an initial clustering center point;
calculating the shortest distance D(x) between each sample and its nearest current cluster center, calculating the probability that each sample is selected as the next cluster center, and selecting the sample point with the maximum probability value as the next cluster center, until k cluster center points have been selected;
the probability that a sample is selected as the next cluster center is calculated as:

$$P(x) = \frac{D(x)^{2}}{\sum_{x \in X} D(x)^{2}}$$

wherein: P(x) is the probability that sample x is selected as the next cluster center, X is the dataset, and D(x) is the shortest distance from sample x to the current cluster centers;
and taking the k clustering center points as initial clustering centers, and finally clustering k priori frames.
Preferably, focal Loss is introduced into the Loss function to adjust the positive and negative sample ratio in the YOLOv5 fusion network, and the method specifically comprises the following steps:
the loss function loss of the YOLOv5 fusion network consists of a regression box loss bbox_loss, a confidence loss conf_loss and a classification loss prob_loss; the loss function loss is formulated as follows:
loss=bbox_loss+conf_loss+prob_loss
the YOLOv5 fusion network adopts the CIoU loss to calculate the regression box loss, which is computed only for positive samples:

$$\mathrm{bbox\_loss} = 1 - \mathrm{IoU} + \frac{\rho^{2}}{c^{2}} + \alpha v, \qquad v = \frac{4}{\pi^{2}}\left(\arctan\frac{w_{l}}{h_{l}} - \arctan\frac{w_{p}}{h_{p}}\right)^{2}, \qquad \alpha = \frac{v}{(1 - \mathrm{IoU}) + v}$$

In the above formula, ρ is the distance between the center points of the label box and the prediction box, c is the diagonal length of the minimum enclosing rectangle of the two boxes, v is the aspect-ratio similarity between the label box and the prediction box, and α is the influence factor of v. The top-left and bottom-right corners of the prediction box are $(x_{p1}, y_{p1})$ and $(x_{p2}, y_{p2})$, and those of the label box are $(x_{l1}, y_{l1})$ and $(x_{l2}, y_{l2})$, so that $w_{p} = x_{p2} - x_{p1}$, $h_{p} = y_{p2} - y_{p1}$, $w_{l} = x_{l2} - x_{l1}$ and $h_{l} = y_{l2} - y_{l1}$;
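The CIoU regression loss described above can be sketched as follows (a minimal NumPy version for one prediction/label box pair in corner format; the small ε terms guard against division by zero and are an implementation assumption):

```python
import numpy as np

def ciou_loss(pred, label, eps=1e-9):
    """CIoU loss = 1 - IoU + rho^2/c^2 + alpha*v for (x1, y1, x2, y2) boxes."""
    # Intersection / union
    xi1, yi1 = max(pred[0], label[0]), max(pred[1], label[1])
    xi2, yi2 = min(pred[2], label[2]), min(pred[3], label[3])
    inter = max(0.0, xi2 - xi1) * max(0.0, yi2 - yi1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_l = (label[2] - label[0]) * (label[3] - label[1])
    iou = inter / (area_p + area_l - inter + eps)
    # rho^2: squared distance between box centers
    rho2 = ((pred[0] + pred[2]) - (label[0] + label[2])) ** 2 / 4 + \
           ((pred[1] + pred[3]) - (label[1] + label[3])) ** 2 / 4
    # c^2: squared diagonal of the minimum enclosing rectangle
    xc1, yc1 = min(pred[0], label[0]), min(pred[1], label[1])
    xc2, yc2 = max(pred[2], label[2]), max(pred[3], label[3])
    c2 = (xc2 - xc1) ** 2 + (yc2 - yc1) ** 2 + eps
    # v: aspect-ratio similarity, alpha: its influence factor
    wp, hp = pred[2] - pred[0], pred[3] - pred[1]
    wl, hl = label[2] - label[0], label[3] - label[1]
    v = (4 / np.pi ** 2) * (np.arctan(wl / hl) - np.arctan(wp / hp)) ** 2
    alpha = v / ((1 - iou) + v + eps)
    return 1 - iou + rho2 / c2 + alpha * v

loss_same = ciou_loss((0, 0, 10, 10), (0, 0, 10, 10))   # near zero
```

Unlike plain IoU loss, the center-distance and aspect-ratio terms keep the gradient informative even when the boxes do not overlap at all.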
The YOLOv5 fusion network calculates the confidence loss and the classification loss with the binary cross-entropy loss function (Binary Cross Entropy Loss); to address the positive/negative sample imbalance and to regulate the attention paid to hard samples during training, modulation coefficients are introduced, giving the Focal Loss function:

FL(p, y) = -α·y·(1 - p)^γ·log(p) - (1 - α)·(1 - y)·p^γ·log(1 - p)

wherein p ∈ [0, 1] is the probability output by the model for the target class, y indicates by 0 or 1 whether the target belongs to that class, and α and γ are modulation coefficients: α controls the balance between positive and negative samples, and γ controls the attention paid to hard samples.
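The two loss terms described above can be sketched as below. Function and argument names are illustrative, and the defaults α = 0.25, γ = 2 are the common values from the original Focal Loss work, not given in this text.

```python
import math

def ciou_loss(pred, label):
    """CIoU regression-box loss for one (x1, y1, x2, y2) box pair."""
    xp1, yp1, xp2, yp2 = pred
    xl1, yl1, xl2, yl2 = label
    iw = max(0.0, min(xp2, xl2) - max(xp1, xl1))       # intersection width
    ih = max(0.0, min(yp2, yl2) - max(yp1, yl1))       # intersection height
    inter = iw * ih
    union = (xp2 - xp1) * (yp2 - yp1) + (xl2 - xl1) * (yl2 - yl1) - inter
    iou = inter / union
    # rho^2: squared distance between box centers;
    # c^2: squared diagonal of the minimum enclosing rectangle
    rho2 = ((xp1 + xp2 - xl1 - xl2) ** 2 + (yp1 + yp2 - yl1 - yl2) ** 2) / 4.0
    c2 = (max(xp2, xl2) - min(xp1, xl1)) ** 2 + (max(yp2, yl2) - min(yp1, yl1)) ** 2
    v = (4 / math.pi ** 2) * (math.atan((xl2 - xl1) / (yl2 - yl1))
                              - math.atan((xp2 - xp1) / (yp2 - yp1))) ** 2
    alpha = v / ((1.0 - iou) + v + 1e-9)               # influence factor of v
    return 1.0 - iou + rho2 / c2 + alpha * v

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary Focal Loss: alpha balances positives vs negatives,
    gamma down-weights easy samples."""
    p = min(max(p, 1e-7), 1.0 - 1e-7)                  # numerical safety
    if y == 1:
        return -alpha * (1.0 - p) ** gamma * math.log(p)
    return -(1.0 - alpha) * p ** gamma * math.log(1.0 - p)
```

With γ = 0 and α = 0.5 the focal loss reduces to (half of) plain binary cross entropy, which is a quick sanity check on the modulation terms.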
Preferably, the method for acquiring the defect image of the substation equipment to be identified specifically comprises the following steps:
setting inspection parameters of the inspection robot; the inspection parameters comprise an inspection line and an inspection speed;
operating the inspection robot according to the inspection line and the inspection speed;
acquiring position information and barrier information when the inspection robot runs;
according to the position information and the obstacle information of the inspection robot during operation, avoiding adjustment is carried out on the inspection line;
and shooting the substation equipment through the inspection robot to obtain a defect image of the substation equipment to be identified.
The invention also provides a substation inspection defect detection system, which comprises:
the acquisition module is used for acquiring a defect image of the substation equipment;
the construction module is used for constructing a substation equipment defect detection model according to the substation equipment defect image;
the acquisition module is used for acquiring a defect image of the transformer substation equipment to be identified;
the computing and identifying module is used for inputting the defect image of the transformer substation equipment to be identified into a transformer substation equipment defect detection model to conduct defect identification, and obtaining a defect identification result;
and the visualization module is used for carrying out visual display on the defect identification result.
In order to better illustrate the technical effects of the present invention, the present invention provides the following specific embodiments to illustrate the above technical flow:
embodiment 1: a substation inspection defect detection system for substation inspection robot defect detection, comprising the following parts:
m1: acquisition module: visible-light cameras with which the inspection robot acquires images containing substation equipment and their defects at different viewing angles;
m2: positioning module: used for determining the position of the inspection robot;
M3: and the control module is used for: the path planning instruction for the inspection robot is issued to control the front, back, left and right movements (including inspection speed) of the inspection robot and also takes charge of obstacle avoidance of the inspection robot;
m4: and a communication module: the system comprises a control module, a calculation and identification module, a visual module and a positioning module, wherein the control module is used for transmitting the position information of the inspection robot of the positioning module to the control module, transmitting the image information of the acquisition module to the calculation and identification module and transmitting the result processed by the calculation and identification module to the visual module;
m5: and a storage module: the device is used for storing visible light videos and images shot by the inspection robot, and is convenient for later backtracking;
m6: computing and identifying module: used for intelligently detecting substation equipment and the corresponding defects (meter damage, insulator damage, oil leakage, respirator damage, cover plate damage or loss, bird-nest foreign matter and the like), and for marking and early-warning the detected defects;
m7: and a visualization module: the method is used for displaying the inspection result and the fault alarm prompt of the inspection robot.
The substation inspection defect detection method applying the detection system comprises the following steps of:
s1: issuing a planned inspection line in a control module, setting the inspection speed of the inspection robot, taking the inspection line and the inspection speed as inspection parameters, and starting the inspection robot to work;
s2: the positioning module reports the robot's position to the control module in real time via GPS during inspection; if the robot encounters a shelter or an obstacle, the control module avoids it within a reasonable path-planning range by controlling the robot's forward, backward, left and right movements;
s3: the acquisition module acquires visible light images containing the substation equipment and its defects (substation equipment defect images) at different viewing angles during inspection and stores them in the storage module; meanwhile, the acquisition module transmits the captured substation equipment defect images to the computing and identifying module through the communication module;
s4: the computing and identifying module connects two network models in series: the first stage is a substation equipment identification model and the second stage is a substation equipment defect identification model. After receiving a substation equipment defect image from the acquisition module, the module first runs the first-stage substation equipment identification model; if no target is detected, the second-stage substation equipment defect identification model is not invoked and no further processing is performed; if a target is detected, the target is cropped and enlarged, and the enlarged target image enters the second-stage substation equipment defect identification model for defect identification; if a defect is detected, it is marked and a defect early warning is issued; if no defect is detected, no further processing is performed;
s5: and the result calculated by the calculation and identification module is transmitted to a visualization module through a communication module, and the visualization module displays the inspection path, defects and corresponding types of substation equipment inspected in the inspection process, and broadcasts an alarm to prompt workers to process.
Further, in the step S4, a network model based on a sliding window multi-head self-attention mechanism and YOLOv5 hybrid detection is adopted; taking the substation equipment defect identification model as an example, the specific modeling flow is as follows:
s41: acquiring abundant substation equipment defect images through the inspection robot;
s42: performing data amplification on the obtained image by adopting methods such as horizontal mirroring, rotation transformation, image miscut, color transformation and the like, establishing an image data set, and generating a training set and a testing set according to random proportion distribution;
s43: labeling the defects of the substation equipment in the image data set by using Labelme software;
s44: the method comprises the steps of performing feature extraction of images with different backgrounds and different granularities in an image dataset containing defects of substation equipment by utilizing a sliding window multi-head self-attention network, and performing feature fusion through an FPN structure to obtain target features;
s45: and (3) selecting a design network prior frame: performing prior frame clustering by using an improved K-means++ clustering algorithm to generate a prior frame with more data set pertinence;
s46: designing a network model loss function: the positive and negative sample proportion in the YOLOv5 is adjusted by introducing Focal Loss in a Loss function, and the detection capability of the sample with difficult classification is improved;
s47: detecting a substation equipment defect image based on a sliding window multi-head self-attention network and a YOLOv5 fusion network;
s48: substation equipment defect model training is performed on the basis of a sliding window multi-head self-attention network and a YOLOv5 fusion network: training a sliding window multi-head self-attention network and a YOLOv5 fusion network by utilizing a training set of the marked image data set in the S43 to obtain a substation equipment defect detection model, as shown in fig. 1;
s49: and (3) applying the detection model obtained in the step (S48) to inspection of the substation inspection robot, inputting a test set of the image dataset into the model obtained by training for testing, and outputting inspection results of the substation robot.
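The data amplification of step S42 can be sketched as below. The exact transforms and parameters (rotation angle, jitter range) are illustrative assumptions, and the shear ("miscut") transform is omitted for brevity.

```python
import numpy as np

def augment(img, rng=None):
    """Return simple augmented variants of an (H, W, 3) uint8 image:
    the original, a horizontal mirror, a 90-degree rotation and a
    color-jittered copy."""
    rng = rng or np.random.default_rng(0)
    out = [img, img[:, ::-1]]                          # horizontal mirror
    out.append(np.rot90(img))                          # rotation transform
    jitter = img.astype(int) + rng.integers(-20, 21, size=3)
    out.append(np.clip(jitter, 0, 255).astype(np.uint8))  # color transform
    return out
```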
Further, in the step S44, the sliding window multi-head self-attention network feature extraction includes the following steps:
s44_1: firstly, the input 3-channel RGB image is passed to the Patch Partition module for partitioning, where every 4x4 block of adjacent pixels forms one patch, which is then flattened along the channel direction;
s44_2: feature maps of different sizes are then constructed through four stages, wherein the first stage is the combination of a linear embedding layer, a window multi-head self-attention module and a sliding window multi-head self-attention module;
s44_3: the linear embedding layer carries out linear transformation on the channel data of each pixel, and the channel number is converted into a set numerical value;
s44_4: the window multi-head self-attention module divides the whole feature map into equal-sized windows, and then applies the multi-head attention mechanism independently within each window;
s44_5: the multi-head self-attention module of the sliding window transmits information between the windows by shifting the windows;
s44_6: the Patch Merging downsampling layer divides the feature map into 4 small feature maps with equal size according to the space position, then the four small feature maps are spliced in the depth direction, and then the length and width of the feature map are reduced by half through the normalization layer and the full-connection layer, downsampling processing is carried out, and meanwhile the channel number is doubled.
S44_7: the second stage to the fourth stage are all combinations of a Patch Merging downsampling, a window multi-head self-attention module and a sliding window multi-head self-attention module, wherein in the third stage, the combination of the window multi-head self-attention module and the sliding window multi-head self-attention module is used for 3 times;
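The Patch Merging step (S44_6) can be sketched as below. The linear layer that maps the 4C concatenated channels down to 2C is omitted, so this sketch quadruples rather than doubles the channel count.

```python
import numpy as np

def patch_merging(x):
    """Split an (H, W, C) feature map into four spatially interleaved
    (H/2, W/2, C) maps and concatenate them along depth, halving the
    spatial size of the feature map."""
    h, w, _ = x.shape
    assert h % 2 == 0 and w % 2 == 0
    parts = [x[0::2, 0::2], x[1::2, 0::2], x[0::2, 1::2], x[1::2, 1::2]]
    return np.concatenate(parts, axis=-1)              # (H/2, W/2, 4C)
```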
further, in the step s44_4, the window multi-head self-attention module is composed of a residual-connected normalization layer, a window multi-head self-attention layer and a multi-layer perceptron, wherein the window multi-head self-attention layer is based on the multi-head self-attention mechanism:

Attention(Q, K, V) = softmax(QK^T / √d) · V

wherein Q, K, V are respectively the query vector sequence, the key vector sequence and the value vector sequence, and d is the dimension of the query and key vectors; the window multi-head self-attention layer divides the whole feature map, of width w and height h, into ⌈h/M⌉ × ⌈w/M⌉ windows of size M × M, and then applies multi-head attention within each window, which reduces the overall computational complexity;
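The attention computation named above can be sketched as follows (single head for clarity; the multi-head version runs this on several projected subspaces in parallel and concatenates the results):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention:
    Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V,
    with d the dimension of the query/key vectors."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)       # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ V
```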
further, in the step s44_5, the sliding window multi-head self-attention module is composed of a normalization layer based on residual connection, a sliding window multi-head self-attention layer and a multi-layer perceptron, wherein the sliding window multi-head self-attention layer is further divided into the following operation steps:
s44_51: firstly, the equal M-sized windows produced by the window multi-head self-attention layer are shifted to the right and downward from the upper-left corner by ⌊M/2⌋ (M/2 rounded down), so that information can be transferred between windows;
s44_52: then, performing cyclic shift on the basis, and splicing the segmented blocks into windows with the size of M;
s44_53: the spliced M-sized windows are processed with a masked multi-head self-attention mechanism, where a mask is applied when attention is computed between unrelated positions, i.e. -100 is added to the scores of positions to be masked while the other positions are unchanged, preventing information from different regions being mixed;
s44_54: and finally, restoring the cyclic shift, so as to ensure the original semantic information of the picture.
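The shift-partition-restore cycle of S44_51 to S44_54 can be sketched with `np.roll`; the masking step is omitted here, and rolling the feature map up and to the left by ⌊M/2⌋ is the usual way to realize the window shift described (the window boundaries thereby move right and down).

```python
import numpy as np

def shifted_window_partition(x, M):
    """Cyclically shift an (H, W) map by -floor(M/2) along both axes,
    partition it into M x M windows, and return the windows plus a
    function that undoes the shift (step S44_54)."""
    s = M // 2
    shifted = np.roll(x, shift=(-s, -s), axis=(0, 1))  # cyclic shift
    h, w = shifted.shape
    wins = [shifted[i:i + M, j:j + M]
            for i in range(0, h, M) for j in range(0, w, M)]
    restore = lambda y: np.roll(y, shift=(s, s), axis=(0, 1))
    return wins, restore
```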
Further, in the step S45, the network prior frame selection method is designed, adopting the improved K-means++ clustering algorithm, which specifically comprises the following steps:
s45_1: randomly selecting a sample annotation frame in the data set to be regarded as an initial clustering center point;
s45_2: calculating the shortest distance D(x) between each sample and the known cluster centers, calculating the probability that each sample is selected as the next cluster center,

P(x) = D(x)^2 / Σ_{x∈X} D(x)^2

and finally selecting the sample point corresponding to the maximum probability value as the next cluster center, where P(x) is the probability that sample x is selected as a cluster center, X is the data set and x is a sample.
S45_3: repeating the step S45_2, and entering the next step when k clustering center points are selected;
s45_4: and taking the k clustering center points as initial clustering centers, and finally clustering k priori frames.
Further, the network model loss function designed in the step S46 is specifically expressed as:
(1) The YOLOv5 network loss function consists of a regression box loss bbox_loss, a confidence loss conf_loss, and a classification loss prob_loss, as in equation (3),
loss=bbox_loss+conf_loss+prob_loss (3)
(2) The regression box loss is calculated with the CIoU loss in the YOLOv5 network, and only the positioning loss of positive samples is computed, as in formulas (4) - (7),

IoU = |B_p ∩ B_l| / |B_p ∪ B_l|   (4)
bbox_loss = 1 - IoU + ρ²/c² + αv   (5)
v = (4/π²) · (arctan(w_l/h_l) - arctan(w_p/h_p))²   (6)
α = v / ((1 - IoU) + v)   (7)

in the above formulas, ρ is the distance between the center points of the label box and the prediction box, c is the diagonal length of the minimum enclosing rectangle of the label box and the prediction box, v measures the aspect-ratio similarity between the label box and the prediction box, and α is the influence factor of v; the top-left and bottom-right coordinates of the prediction box are (x_p1, y_p1) and (x_p2, y_p2), so w_p = x_p2 - x_p1 and h_p = y_p2 - y_p1, and the top-left and bottom-right coordinates of the label box are (x_l1, y_l1) and (x_l2, y_l2), so w_l = x_l2 - x_l1 and h_l = y_l2 - y_l1.
(3) The confidence loss and the classification loss are calculated with the binary cross-entropy loss function (Binary Cross Entropy Loss) in the YOLOv5 network; to address the positive/negative sample imbalance and regulate the attention paid to hard samples during training, modulation coefficients are introduced, giving the Focal Loss function shown in formula (8),

FL(p, y) = -α·y·(1 - p)^γ·log(p) - (1 - α)·(1 - y)·p^γ·log(1 - p)   (8)

wherein p ∈ [0, 1] is the probability output by the model for the target class, y indicates by 0 or 1 whether the target belongs to that class, and α and γ are modulation coefficients: α controls the balance between positive and negative samples, and γ controls the attention paid to hard samples.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and the division of modules, or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units, modules, or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed.
The units may or may not be physically separate, and the components shown as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network via a communication portion, and/or installed from a removable medium. The above-described functions defined in the method of the present invention are performed when the computer program is executed by a Central Processing Unit (CPU). The computer readable medium of the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing is merely illustrative of specific embodiments of the present invention, and the scope of the present invention is not limited thereto, but any changes or substitutions within the technical scope of the present invention should be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. The substation inspection defect detection method is characterized by comprising the following steps of:
acquiring a defect image of substation equipment;
constructing a substation equipment defect detection model according to the substation equipment defect image;
acquiring a defect image of substation equipment to be identified;
inputting a to-be-identified substation equipment defect image into a substation equipment defect detection model to perform defect identification, and obtaining a defect identification result;
and visually displaying the defect identification result.
2. The substation inspection defect detection method according to claim 1, wherein the substation equipment defect detection model is constructed according to the substation equipment defect image, specifically comprising the following steps:
carrying out data amplification on the defect image of the transformer substation equipment to obtain an image dataset;
dividing the image dataset into a training set and a testing set;
marking the defect images of the substation equipment in the training set and the testing set;
extracting the characteristics of the defect images of the substation equipment in the training set through a sliding window multi-head self-attention network;
feature fusion is carried out on the features through the FPN structure, and target features are obtained;
performing cluster analysis on the training set by using an improved K-means++ clustering algorithm, and determining a priori frame for target detection;
performing network structure adjustment on the YOLOv5 fusion network, and introducing Focal Loss into a Loss function to adjust the proportion of positive and negative samples in the YOLOv5 fusion network;
inputting the training set after cluster analysis into a YOLOv5 network model for training to obtain a substation equipment defect detection model;
and testing the defect detection model of the substation equipment according to the test set after the cluster analysis.
3. The substation inspection defect detection method according to claim 2, wherein the feature extraction is performed on the substation equipment defect image in the training set through a sliding window multi-head self-attention network, and specifically comprises the following steps:
inputting the defect images of the substation equipment in the training set into a Patch Partition module for blocking, wherein each 4x4 adjacent pixel is a Patch, and then flattening along the channel direction; the defect images of the substation equipment in the training set are RGB three-channel pictures;
feature maps of different sizes are constructed through four stages; the first stage is the combination of a linear embedding layer, a window multi-head self-attention module and a sliding window multi-head self-attention module; the second stage to the fourth stage are each a combination of a Patch Merging downsampling layer, a window multi-head self-attention module and a sliding window multi-head self-attention module, wherein in the third stage, the combination of the window multi-head self-attention module and the sliding window multi-head self-attention module is used 3 times;
the Patch Merging downsampling layer divides the feature map into 4 equal-sized small feature maps according to spatial position, splices the four small feature maps in the depth direction, and then, through the normalization layer and the fully connected layer, halves the length and width of the feature map for downsampling while doubling the number of channels.
4. A substation inspection defect detection method according to claim 3, characterized in that:
in the first stage, the linear embedding layer performs linear transformation on the channel data of each pixel, and the channel number is converted into a set value;
the window multi-head self-attention module divides the whole feature map into equal-sized windows, and then carries out multi-head attention mechanism on the interior of each window
The multi-head self-attention module of the sliding window realizes information transfer among different windows by shifting the windows.
5. The substation inspection defect detection method according to claim 4, wherein:
the window multi-head self-attention module consists of a normalization layer based on residual connection, a window multi-head self-attention layer and a multi-layer perceptron, wherein the window multi-head self-attention layer is based on a multi-head self-attention mechanism;
the calculation formula of the multi-head self-attention mechanism is as follows:

Attention(Q, K, V) = softmax(QK^T / √d) · V

wherein: Q, K, V are the query vector sequence, the key vector sequence and the value vector sequence respectively, and d is the dimension of the query and key vectors;
the window multi-head self-attention layer divides the whole feature map, of width w and height h, into ⌈h/M⌉ × ⌈w/M⌉ windows of size M × M, and then applies multi-head attention within each window.
6. The substation inspection defect detection method according to claim 5, wherein the sliding window multi-head self-attention module realizes information transfer between different windows by shifting the windows, and specifically comprises the following steps:
the equal M-sized windows divided by the window multi-head self-attention layer are shifted to the right and downward from the upper-left corner by ⌊M/2⌋ (M/2 rounded down), so that information can be transferred between windows;
performing one cyclic shift on the shifted feature map, and splicing the segmented blocks into windows of size M;
calculating the spliced window with M size by masking the multi-head self-attention mechanism, wherein masking is added when attention mechanism calculation is carried out on irrelevant positions;
the cyclic shift of the masked M-sized window is restored.
7. The substation inspection defect detection method according to claim 2, wherein the training set is subjected to cluster analysis by using a modified K-means++ cluster algorithm, and a priori frame for target detection is determined, specifically comprising the following steps:
randomly selecting a sample labeling frame in a training set to be regarded as an initial clustering center point;
calculating the shortest distance D (x) between each sample and the current cluster center point, calculating the probability that each sample is selected as the next cluster center, and finally selecting the sample point corresponding to the maximum probability value as the next cluster center until k cluster center points are selected;
taking k clustering center points as initial clustering centers, and finally clustering k priori frames;
the calculation formula of the probability that a sample is selected as a cluster center is as follows:

P(x) = D(x)^2 / Σ_{x∈X} D(x)^2

wherein: P(x) is the probability of being selected as a cluster center, X is the data set, and x is a sample.
8. The substation inspection defect detection method according to claim 2, wherein a Focal Loss is introduced into a Loss function to adjust the proportion of positive and negative samples in a YOLOv5 fusion network, specifically comprising the following steps:
the loss function loss of the YOLOv5 fusion network consists of a regression box loss bbox_loss, a confidence loss conf_loss and a classification loss prob_loss; the loss function loss is formulated as follows:
loss=bbox_loss+conf_loss+prob_loss
the YOLOv5 fusion network calculates the regression box loss with the CIoU loss, computed only for positive samples, with the formulas:

IoU = |B_p ∩ B_l| / |B_p ∪ B_l|
bbox_loss = 1 - IoU + ρ²/c² + αv
v = (4/π²) · (arctan(w_l/h_l) - arctan(w_p/h_p))²
α = v / ((1 - IoU) + v)

in the above formulas, ρ is the distance between the center points of the label box and the prediction box, c is the diagonal length of the minimum enclosing rectangle of the label box and the prediction box, v measures the aspect-ratio similarity between the label box and the prediction box, and α is the influence factor of v; the top-left and bottom-right coordinates of the prediction box are (x_p1, y_p1) and (x_p2, y_p2), so w_p = x_p2 - x_p1 and h_p = y_p2 - y_p1, and the top-left and bottom-right coordinates of the label box are (x_l1, y_l1) and (x_l2, y_l2), so w_l = x_l2 - x_l1 and h_l = y_l2 - y_l1;
The YOLOv5 fusion network calculates the confidence loss and the classification loss with the binary cross-entropy loss function (Binary Cross Entropy Loss); to address the positive/negative sample imbalance and to regulate the attention paid to hard samples during training, modulation coefficients are introduced, giving the Focal Loss function:

FL(p, y) = -α·y·(1 - p)^γ·log(p) - (1 - α)·(1 - y)·p^γ·log(1 - p)

wherein p ∈ [0, 1] is the probability output by the model for the target class, y indicates by 0 or 1 whether the target belongs to that class, and α and γ are modulation coefficients: α controls the balance between positive and negative samples, and γ controls the attention paid to hard samples.
9. The substation inspection defect detection method according to claim 1, wherein the step of acquiring the defect image of the substation equipment to be identified specifically comprises the following steps:
setting inspection parameters of the inspection robot; the inspection parameters comprise an inspection line and an inspection speed;
operating the inspection robot according to the inspection line and the inspection speed;
acquiring position information and barrier information when the inspection robot runs;
according to the position information and the obstacle information of the inspection robot during operation, avoiding adjustment is carried out on the inspection line;
and shooting the substation equipment through the inspection robot to obtain a defect image of the substation equipment to be identified.
10. A substation inspection defect detection system for implementing the substation inspection defect detection method according to any one of claims 1 to 9, comprising:
the acquisition module is used for acquiring a defect image of the substation equipment;
the construction module is used for constructing a substation equipment defect detection model according to the substation equipment defect image;
the acquisition module is used for acquiring a defect image of the transformer substation equipment to be identified;
the computing and identifying module is used for inputting the defect image of the transformer substation equipment to be identified into a transformer substation equipment defect detection model to conduct defect identification, and obtaining a defect identification result;
and the visualization module is used for carrying out visual display on the defect identification result.
CN202310825936.7A 2023-07-06 2023-07-06 Substation inspection defect detection method and system Pending CN116993681A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310825936.7A CN116993681A (en) 2023-07-06 2023-07-06 Substation inspection defect detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310825936.7A CN116993681A (en) 2023-07-06 2023-07-06 Substation inspection defect detection method and system

Publications (1)

Publication Number Publication Date
CN116993681A true CN116993681A (en) 2023-11-03

Family

ID=88525730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310825936.7A Pending CN116993681A (en) 2023-07-06 2023-07-06 Substation inspection defect detection method and system

Country Status (1)

Country Link
CN (1) CN116993681A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274723A (en) * 2023-11-22 2023-12-22 国网智能科技股份有限公司 Target identification method, system, medium and equipment for power transmission inspection
CN117274723B (en) * 2023-11-22 2024-03-26 国网智能科技股份有限公司 Target identification method, system, medium and equipment for power transmission inspection

Similar Documents

Publication Publication Date Title
CN107808133B (en) Unmanned aerial vehicle line patrol-based oil and gas pipeline safety monitoring method and system and software memory
CN108734143A (en) A kind of transmission line of electricity online test method based on binocular vision of crusing robot
CN110443969A (en) A kind of fire point detecting method, device, electronic equipment and storage medium
CN105930819A (en) System for real-time identifying urban traffic lights based on single eye vision and GPS integrated navigation system
Kang et al. Voxel-based extraction and classification of 3-D pole-like objects from mobile LiDAR point cloud data
CN113592905B (en) Vehicle driving track prediction method based on monocular camera
CN113284144B (en) Tunnel detection method and device based on unmanned aerial vehicle
CN113516664A (en) Visual SLAM method based on semantic segmentation dynamic points
CN116993681A (en) Substation inspection defect detection method and system
CN112613668A (en) Scenic spot dangerous area management and control method based on artificial intelligence
CN112861673A (en) False alarm removal early warning method and system for multi-target detection of surveillance video
Cao et al. MCS-YOLO: A multiscale object detection method for autonomous driving road environment recognition
CN115585731A (en) Space-air-ground integrated hydropower station space state intelligent monitoring management system and method thereof
Elihos et al. Deep learning based segmentation free license plate recognition using roadway surveillance camera images
CN114298163A (en) Online road condition detection system and method based on multi-source information fusion
CN116543603B (en) Flight path completion prediction method and device considering airspace situation and local optimization
CN115083229B (en) Intelligent recognition and warning system of flight training equipment based on AI visual recognition
CN109002746A (en) 3D solid fire identification method and system
CN117115728A (en) Risk identification method and system applied to field operation of transformer substation
CN114550016B (en) Unmanned aerial vehicle positioning method and system based on context information perception
CN111145551A (en) Intersection traffic planning system based on CNN detection follows chapter rate
CN116862832A (en) Three-dimensional live-action model-based operator positioning method
CN110807415A (en) Traffic checkpoint vehicle intelligent retrieval system and method based on annual inspection marks
Song et al. An accurate vehicle counting approach based on block background modeling and updating
Carreaud et al. Automating the underground cadastral survey: a processing chain proposal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination