CN114897778A - Rigid busbar defect detection method and device - Google Patents
Rigid busbar defect detection method and device
- Publication number
- CN114897778A (application CN202210367179.9A)
- Authority
- CN
- China
- Prior art keywords
- bus
- bus bar
- image
- busbar
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007—Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/60—Rotation of whole images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20032—Median filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30164—Workpiece; Machine component
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The invention discloses a method and a device for detecting defects of a rigid busbar, relating to the technical field of image processing and image recognition. The method comprises the steps of: S1, locating the interested bus region image from the acquired bus images by using a deep learning model; S2, filtering noise of the interested bus region image; S3, preprocessing the noise-filtered interested bus region image to obtain a feature map, and extracting bus features from the feature map; and S4, performing bus defect detection according to the extracted bus features, the detection comprising bus bending deformation detection and/or bus scratch detection. The bus bar defect detection method provided by the invention can quickly and accurately detect and judge whether the bus bar is bent and deformed, rapidly identify the scratch type of the bus bar and locate the scratch position, and is suitable for rigid contact networks in the field of rail transit.
Description
Technical Field
The invention relates to the technical field of image processing and image recognition, in particular to a rigid busbar defect detection method and device.
Background
In electrified railways, the overhead rigid contact network is now widely used in the subways of large cities because its contact line carries no tension, it has few parts, a low clearance requirement and a small maintenance workload. The bus bar, an important component of the rigid contact network, fixes the contact line so that the pantograph can slide freely along it to collect power. However, as subway operating time accumulates, various faults on the bus bar are gradually exposed: because the local elasticity of the rigid catenary is poor, abrasion occurs between the pantograph and the bus bar, the bus bar is scratched, the threads of the connecting plate of the intermediate joint in the bus bar wear smooth, and the contact line drops out of the bus bar groove. Because headways during subway operation are short, a serious bus bar fault directly affects the current collection quality of the train; detecting bus bar defects is therefore an urgent problem in the application and popularization of the rigid contact network.
At present, fault detection for rigid busbar abrasion relies mainly on line inspection by maintenance personnel, which is inefficient and prone to missed detections. How to identify the defects of the rigid contact network busbar efficiently and accurately therefore has very important practical significance.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention aims to provide a rigid busbar defect detection method and device that identify abnormal conditions of the bus bar, such as bending deformation and scratches, with high efficiency and high accuracy, which is of great significance for safety and of practical value for detecting potential hazards of the contact network.
The technical scheme of the invention is as follows:
a rigid busbar defect detection method comprises the following steps:
s1, positioning the interested bus area image from the acquired bus images by using a deep learning model;
s2, filtering noise of the interested bus region image;
s3, preprocessing the image of the interesting bus bar region after noise filtering to obtain a characteristic diagram, and extracting bus bar characteristics from the characteristic diagram;
and S4, performing bus defect detection according to the extracted bus characteristics, wherein the bus defect detection comprises bus bending deformation detection and/or bus scratch detection.
Further, the step S1 includes:
s11, preprocessing the acquired bus bar images and carrying out YOLOV4 model training;
s12, verifying the trained YOLOV4 model to determine whether the YOLOV4 model needs to be retrained;
s13, positioning the bus with the trained YOLOV4 model, selecting the interested bus area image, and outputting its coordinates, which identify the position of the bus.
Further, in step S11, the acquired bus bar images are preprocessed and YOLOV4 model training is performed; the detailed steps include:
s111, acquiring bus bar images with a linear array camera or an area array camera;
s112, applying rotation, flipping, scaling and similar transformations to the acquired bus bar images so as to augment the bus bar image data;
s113, labeling the bus bar images with a picture labeling tool, and dividing the labeled bus bar images into a training set and a verification set by K-fold cross-validation (K = 10);
and s114, training on the training set; specifically, a YOLOV4 model is trained with the YOLOV4 target detection algorithm and used for positioning the bus.
Further, in step S12, the trained YOLOV4 model is verified to determine whether the YOLOV4 model needs to be retrained; specifically, the positioning accuracy of the YOLOV4 model obtained by the training of step S11 is verified through the verification set;
if the expected effect is not achieved, step S11 is repeated;
if the expected effect is achieved, the YOLOV4 model training is complete.
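As an illustrative sketch (not part of the patent), the K-fold division of step S113 into a training set and a verification set can be implemented as follows; the file names and the random seed are hypothetical placeholders:

```python
import random

def kfold_split(items, k=10, fold=0, seed=42):
    """Split labeled image paths into a training set and a verification set
    by K-fold cross-validation: one fold is held out for verification."""
    shuffled = items[:]
    random.Random(seed).shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]          # k disjoint folds
    val_set = folds[fold]                               # held-out fold
    train_set = [x for i, f in enumerate(folds) if i != fold for x in f]
    return train_set, val_set

images = [f"busbar_{i:03d}.jpg" for i in range(100)]    # hypothetical file names
train, val = kfold_split(images, k=10, fold=0)
print(len(train), len(val))  # 90 10
```

With K = 10 each fold holds 10% of the labeled images, matching the 9:1 split mentioned later in the description.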
Further, the filtering out the noise of the image of the interesting bus bar region comprises:
processing the interested bus region image through median filtering, and filtering the interested region noise;
wherein the processing of the bus region of interest image by median filtering includes smoothing and filtering.
Further, the smoothing and filtering process includes:
sorting the local pixels of the interested bus region image, calculating the gray value of each pixel point among them, and selecting the median as the gray value of the current pixel. This filters the noise of the interested bus region image while well protecting the bus boundary within it.
Wherein the median filtering calculation formula is:
g(x, y) = median{ f(x − i, y − j) | (i, j) ∈ S }
where f(x − i, y − j) represents the gray value corresponding to each pixel point in the local pixel area S, g(x, y) represents the median filtered value within the current region, and x, y represent the horizontal and vertical coordinates of the pixel points in the area.
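A minimal plain-Python sketch of this median filtering (illustrative only; a production system would typically use a library routine for an n × n median filter):

```python
def median_filter(img, ksize=3):
    """Median filter: sort the gray values in each local window and take
    the median as the output value of the current pixel."""
    h, w = len(img), len(img[0])
    r = ksize // 2
    out = [row[:] for row in img]          # borders left unchanged
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = [img[y + j][x + i] for j in range(-r, r + 1)
                                        for i in range(-r, r + 1)]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out

# A single salt-noise pixel (255) surrounded by 10s is removed,
# while a step edge would be preserved.
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
print(median_filter(noisy)[1][1])  # 10
```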
Further, in step S3, extracting a bus bar feature from the feature map includes:
extracting bending deformation characteristics of the bus bar from the characteristic diagram; wherein the bus bar bending deformation characteristic comprises bus bar profile information.
Further, extracting the bus contour information specifically includes: and calculating the maximum connected region of the input pixels by using a Laplace edge detection algorithm to obtain the edge profile information of the bus.
Further, in step S3, the method for extracting a bus bar feature from the feature map further includes:
and extracting a bus scratch characteristic from the characteristic diagram, wherein the main characteristic is that a bus appears in a region with large variation of gray values.
Further, the extracting of the bus scratch feature from the feature map includes training a target detection model based on the YOLOV4 model combined with an STN spatial domain attention mechanism, and performing secondary processing on the located bus region image to extract the bus scratch feature.
That is, the bus bar features include at least the bus bar bending deformation feature and the bus bar scratch feature.
Further, the bus bar bending deformation detection in the step S4 includes:
s41, calculating the bus bar line from the bus bar outline information; performing a straight-line fitting operation on the bus bar line to obtain a fitted bus bar straight line; and judging whether the bus bar is bent and deformed by comparing the maximum distance from points on the bus bar line to the fitted straight line with a preset distance threshold.
Further, the step S41 includes:
s411, calculating the maximum connected region of the input pixels by using a Laplace edge detection algorithm to obtain the edge profile of the bus.
Further, the second-order partial derivatives of the noise-filtered interested bus region image are taken to obtain the Laplacian, and the bus edge information is calculated through a second-order difference algorithm;
the calculation formula of the Laplace edge detection algorithm is:
∇²f(x, y) = ∂²f/∂x² + ∂²f/∂y²
and the calculation formula of the Laplace operator in its second-order difference form is:
∇²f(x, y) = f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4·f(x, y)
where x and y are the horizontal and vertical coordinates in the image, representing the position information of each pixel value.
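As an illustrative sketch (not from the patent), edge highlighting by convolution with the discrete Laplacian can be written as follows; the 4-neighbour kernel shown is the common choice and is assumed here:

```python
LAPLACIAN = [[0,  1, 0],
             [1, -4, 1],
             [0,  1, 0]]  # discrete 4-neighbour Laplacian kernel

def convolve(img, kernel):
    """2-D convolution over the valid region; uniform areas give zero
    response, so only edges (gray-value jumps) are highlighted."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(h - kh + 1):
        row = []
        for x in range(w - kw + 1):
            s = sum(img[y + j][x + i] * kernel[j][i]
                    for j in range(kh) for i in range(kw))
            row.append(s)
        out.append(row)
    return out

# A perfectly uniform 3x3 region produces a zero Laplacian response.
flat = [[5] * 3 for _ in range(3)]
print(convolve(flat, LAPLACIAN))  # [[0]]
```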
Further, the bus edge information is highlighted through a convolution operation to obtain the bus outline information.
The convolution operation specifically includes: convolving the noise-filtered interested bus region image with the Laplacian kernel, thereby highlighting the bus edge information and obtaining the bus outline information.
s412, extracting bus bar lines from the bus bar outline information by using the Hough line detection algorithm, and filtering out interfering lines according to the length and the angle of the lines to obtain the bus bar line.
The Hough line detection uses the parameterization:
ρ = x·cos θ + y·sin θ
with L = {l1, l2, …, ln} denoting the set of lengths of the extracted lines; a line is kept only when its length exceeds the length threshold and its angle θ falls within the threshold range.
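The length and angle screening applied to the detected segments can be sketched as follows (a simplified illustration; the Hough extraction itself is assumed to have already produced the segments, and the threshold values are hypothetical):

```python
import math

def filter_lines(segments, min_len, angle_range):
    """Keep only segments whose length exceeds min_len and whose angle
    (degrees, relative to the horizontal axis) lies inside angle_range,
    so short or steeply inclined interference lines are discarded."""
    lo, hi = angle_range
    kept = []
    for (x1, y1, x2, y2) in segments:
        length = math.hypot(x2 - x1, y2 - y1)
        angle = abs(math.degrees(math.atan2(y2 - y1, x2 - x1)))
        if length >= min_len and lo <= angle <= hi:
            kept.append((x1, y1, x2, y2))
    return kept

segments = [(0, 0, 100, 2),   # long, nearly horizontal -> bus bar candidate
            (0, 0, 5, 5),     # short interference line -> rejected by length
            (0, 0, 0, 80)]    # long but vertical -> rejected by angle
print(filter_lines(segments, min_len=50, angle_range=(0, 10)))
```

This reflects the idea that a bus bar appears as a long, nearly horizontal line in the image.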
And S413, fitting the bus straight line through a weighted least square method to obtain a bus fitting straight line.
Wherein, the specific steps of fitting the busbar straight line are as follows:
initializing the weights: each point on the busbar line is assigned a weight wᵢ by comparing its distance dᵢ to the current straight line with a threshold τ, where τ is the threshold and dᵢ is the distance of a point on the busbar to the straight line, so that points far from the line are down-weighted;
further, according to the bus-bar fitting straight line, the least square method uses the centered second-order moments of the points on the bus bar line, denoted D_xx, D_xy and D_yy:
D_xx = Σ(xᵢ − x̄)², D_xy = Σ(xᵢ − x̄)(yᵢ − ȳ), D_yy = Σ(yᵢ − ȳ)²
where x̄ and ȳ represent the mean values, xᵢ and yᵢ represent the horizontal and vertical coordinates of the points on the bus bar line, and n indicates the number of points;
further, the fitted linear equation is obtained from these moments by the least square method; specifically, the slope is k = D_xy / D_xx and the intercept is b = ȳ − k·x̄, giving the fitted straight line y = k·x + b.
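A minimal sketch of this least-squares fit from the centered moments D_xx and D_xy (unweighted form for illustration; the patent's weighted variant would additionally scale each term by its weight wᵢ):

```python
def fit_line(points):
    """Ordinary least-squares fit y = k*x + b using the centered moments
    D_xx = sum((x - mean_x)^2) and D_xy = sum((x - mean_x)*(y - mean_y))."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    d_xx = sum((x - mx) ** 2 for x, _ in points)
    d_xy = sum((x - mx) * (y - my) for x, y in points)
    k = d_xy / d_xx              # slope
    b = my - k * mx              # intercept
    return k, b

pts = [(0, 1), (1, 3), (2, 5), (3, 7)]  # exactly y = 2x + 1
print(fit_line(pts))  # (2.0, 1.0)
```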
and S414, calculating the maximum distance from the point on the bus bar line to the bus bar fitting straight line, comparing the maximum distance with a preset distance threshold value, and judging whether the bus bar is bent and deformed.
Further, when the maximum distance from the point on the bus bar line to the bus bar fitting straight line is greater than or equal to a preset distance threshold value, judging that the bus bar is bent and deformed;
otherwise, it is determined that the bus bar is not bent and deformed.
Wherein, the distance criterion is as follows:
d_max = max(dᵢ); the bus bar is judged to be bent and deformed when d_max ≥ T
where T represents the distance threshold, dᵢ indicates the distance of a point on the bus bar line to the fitted straight line, and l indicates the hanger wire length (to which the threshold may be related).
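The bending judgment of step S414 — maximum point-to-line distance versus a preset threshold — can be sketched as follows (the threshold and sample points are illustrative):

```python
import math

def is_bent(points, k, b, dist_threshold):
    """Judge bending deformation: compute the maximum distance from the
    bus bar line points to the fitted line y = k*x + b, using
    d = |k*x - y + b| / sqrt(k^2 + 1), and compare it with the threshold."""
    d_max = max(abs(k * x - y + b) / math.sqrt(k * k + 1) for x, y in points)
    return d_max >= dist_threshold

line_pts = [(0, 0.0), (1, 1.1), (2, 1.9), (3, 3.0)]   # nearly straight
bent_pts = [(0, 0.0), (1, 1.0), (2, 6.0), (3, 3.0)]   # one point far off the line
print(is_bent(line_pts, 1.0, 0.0, 0.5))  # False
print(is_bent(bent_pts, 1.0, 0.0, 0.5))  # True
```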
Further, the bus bar scratch detection in step S4 includes:
s42, performing secondary processing on the located bus bar region image, specifically including: training a target detection model based on the trained YOLOV4 model combined with an STN spatial domain attention mechanism, extracting the bus scratch features, and locating and classifying the bus scratch features through the trained target detection model to complete the detection of the bus scratch defects.
Further, the training of the target detection model based on the YOLOV4 model combined with the STN spatial domain attention mechanism to extract the bus scratch features includes:
s421, performing convolution and pooling on the feature map by using the YOLOV4 model to generate a new bus feature map θ, which extracts the bus features and down-samples the image so that its size is reduced.
Wherein the convolution and pooling processing specifically comprises:
the convolution output-size formula:
V = (U − K) / S + 1
where V represents the image size of the output feature map, U represents the image size of the input feature map, K represents the size of the convolution kernel, and S represents the step size moved during convolution;
the pooling treatment calculation formula is as follows:
wherein the content of the first and second substances,the image size of the output feature map V is indicated,the size of the pooling is indicated by the size of the pool,representing the step size of the pooling movement.
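The output-size arithmetic of these two formulas can be checked with a small helper (the 416-pixel input is illustrative of a YOLOV4-style network and is not stated in the patent; the conv helper also accepts an optional padding term P, giving the more general V = (U − K + 2P)/S + 1):

```python
def conv_out_size(u, k, s, p=0):
    """Spatial output size of a convolution: V = (U - K + 2P) / S + 1."""
    return (u - k + 2 * p) // s + 1

def pool_out_size(u, f, s):
    """Spatial output size of a pooling layer: V = (U - F) / S + 1."""
    return (u - f) // s + 1

# A 3x3 / stride-1 convolution with padding 1 keeps the size;
# a 2x2 / stride-2 pooling halves it.
print(conv_out_size(416, 3, 1, p=1))  # 416
print(pool_out_size(416, 2, 2))       # 208
```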
Further, batch normalization is performed on the gray value data of each pixel of the bus feature map θ so that the pixel values of the image are distributed in the range [0, 1]; a BN operation is added before the feature map U is input into each layer, which accelerates the convergence speed of the model.
Further, using the ReLU activation function makes the YOLOV4 model sparse and reduces the interdependence of the parameters, so as to alleviate the over-fitting problem.
Wherein, the formula of the ReLU activation function is:
f(z) = max(0, z)
that is, when the weight z < 0, the ReLU output is 0.
And S422, flipping and cropping the bus feature map θ by using an affine transformation tool so as to improve the richness of the training data set; other affine transformation tools may also be used.
S423, using a bilinear interpolation algorithm to enable the whole YOLOV4 model to perform backward propagation training end to end, and circularly iterating the weight of the model to obtain an optimal solution; the calculation formula is as follows:
P(x, y) = P(x0, y0)·(x1 − x)·(y1 − y) + P(x1, y0)·(x − x0)·(y1 − y) + P(x0, y1)·(x1 − x)·(y − y0) + P(x1, y1)·(x − x0)·(y − y0)
where x and y represent the coordinate information of the current image area, P(x, y) represents the pixel value of the point after bilinear interpolation, P(x0, y0) represents the pixel value at the lower left corner coordinate of the region, P(x1, y0) the lower right corner, P(x0, y1) the upper left corner, and P(x1, y1) the upper right corner (unit grid spacing assumed).
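The bilinear interpolation used by the STN sampler for end-to-end backpropagation can be sketched on a unit cell (illustrative only; the corner values are arbitrary):

```python
def bilinear(p00, p10, p01, p11, fx, fy):
    """Bilinear interpolation inside a unit cell: p00/p10/p01/p11 are the
    pixel values at the lower-left, lower-right, upper-left and upper-right
    corners; (fx, fy) in [0, 1] is the fractional position in the cell."""
    bottom = p00 * (1 - fx) + p10 * fx   # interpolate along the bottom edge
    top = p01 * (1 - fx) + p11 * fx      # interpolate along the top edge
    return bottom * (1 - fy) + top * fy  # blend the two edges vertically

# Sampling the exact center of the cell averages the four corner values.
print(bilinear(10, 20, 30, 40, 0.5, 0.5))  # 25.0
```

Because this expression is differentiable in fx and fy, gradients can flow through the sampler, which is what allows the whole model to be trained end to end.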
Further, training a target detection model through the labeled training data set; and positioning and classifying the bus scratch characteristics by using a trained target detection model, completing the bus scratch detection, and obtaining the bus scratch defect position.
The invention also provides a rigid busbar defect detection device suitable for the rigid busbar defect detection method, wherein the rigid busbar defect detection device comprises a busbar image acquisition module, a busbar image processing module and a busbar defect detection module; the bus bar defect detection module comprises a bus bar bending deformation detection module and a bus bar scratch detection module;
the bus bar image acquisition module at least comprises a linear array camera and/or an area array camera and is used for acquiring and acquiring bus bar images;
the image processing module is used for processing and detecting bus bar images, positioning the bus bars and obtaining bus bar characteristics;
the bus bar bending deformation detection module is used for calculating and detecting the bus bar bending deformation according to the bus bar characteristics;
the bus bar scratch detection module comprises the Yolov4+ STN target detection model and is used for detecting bus bar scratches according to the positioned bus bar and a bus bar characteristic diagram.
Compared with the prior art, the invention has the beneficial effects that:
the rigid bus bending deformation and scratch detection method provided by the invention provides model training based on a deep learning model YOLOV4, positions the bus and extracts bus characteristics, and reduces the influence of external interference noise; bus noise is further filtered by using median filtering, and a bus image is subjected to smoothing treatment, so that the boundary of the bus can be well protected; the bus bar defect detection method can simultaneously detect the bending deformation and the scratch defect of the bus bar, can quickly and accurately detect and judge whether the bus bar is bent or not, realizes the scratch detection and identification of the bus bar based on YOLOV4 and a spatial domain attention mechanism STN, and judges the type of the scratch defect of the bus bar. The detection precision and the detection efficiency of the defects of the bus bar are effectively improved, the bus bar fault prompt can be provided for the safe operation of the train, and the technical reference is provided for the bus bar fault maintenance.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a flow chart of the YOLOV4 model training of the present invention.
FIG. 3 is a schematic diagram of bus extraction.
FIG. 4 is a schematic diagram of a bus before being filtered.
FIG. 5 is a schematic diagram of a bus bar after being filtered.
Fig. 6 is a spatial domain attention mechanism STN based on YOLOV 4.
Fig. 7 is a schematic diagram of an implementation process based on a bilinear interpolation algorithm.
Detailed Description
The idea, implementation and technical effects of the present invention are described clearly and completely below in conjunction with the embodiments and the accompanying drawings, so that the objects, features and effects of the invention can be fully understood.
Example 1
A rigid busbar defect detection method, as shown in fig. 1, comprising the steps of:
s1, positioning the interested bus area image from the acquired bus images by using a deep learning model;
s2, filtering noise of the interested bus region image;
further, filtering noise of the interested bus region, including image smoothing filtering, and filtering interference noise by performing smoothing filtering on an image with poor image quality;
s3, preprocessing the image of the interesting bus bar region after noise filtering to obtain a characteristic diagram, and extracting bus bar characteristics from the characteristic diagram; wherein the busbar characteristics include busbar bending deformation characteristics and busbar gouging characteristics;
and S4, performing bus defect detection according to the bus bending deformation characteristic and the bus scratch characteristic respectively, wherein the bus defect detection comprises bus bending deformation detection and/or bus scratch detection.
The busbar bending deformation detection includes: preprocessing the noise-filtered interested bus region image, extracting the related bus bar line, fitting a straight line, calculating the maximum distance from points on the bus bar line to the fitted line, and judging whether the bus bar is bent and deformed.
The bus bar scratch detection includes: detecting the bus scratch defects with the combined YOLOV4 + STN spatial attention model.
Further, a target detection model is trained based on the YOLOV4 model combined with the STN spatial domain attention mechanism to extract the bus scratch features;
and the bus scratch features are located and classified through the trained target detection model to complete the detection of the bus scratch defects.
Example 2
The present embodiment, on the basis of embodiment 1, locates the interested bus region image from the acquired bus image by using the deep learning model. As shown in fig. 2, the specific processing steps are as follows:
s11, preprocessing the acquired bus bar images and carrying out YOLOV4 model training;
s12, verifying the trained YOLOV4 model to determine whether the YOLOV4 model needs to be retrained;
s13, positioning the bus with the trained YOLOV4 model, selecting the interested bus area image, and outputting its coordinates, which identify the position of the bus.
Further, the detailed steps in step S11 are:
s111, acquiring a bus bar image by using a linear array camera or an area array camera arranged on the roof of the vehicle;
s112, processing such as rotating, overturning and amplifying the acquired bus bar image so as to enhance the bus bar image data;
s113, labeling the busbar image by using a picture labeling tool, and dividing the labeled busbar image into a training set and a verification set by using a K (K = 10) fold cross verification method;
the picture labeling tool adopts the LabelImg labeling tool; other labeling tools may also be adopted and are not described in detail herein. The ratio of the training set to the verification set is 9:1, or alternatively 8:2;
s114, training the training set, wherein the specific training mode is that a YOLOV4 model is trained by using a YOLOV4 target detection algorithm.
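The K (K = 10) fold split of step S113 and the 9:1 training/verification ratio might be sketched as follows; the file names, fold index, and random seed here are illustrative assumptions, not part of the disclosure:

```python
import random

def kfold_split(image_paths, k=10, val_fold=0, seed=42):
    """Split labeled busbar images into a training set and a verification
    set with K-fold cross-validation; K = 10 yields the 9:1 ratio."""
    paths = sorted(image_paths)
    random.Random(seed).shuffle(paths)
    folds = [paths[i::k] for i in range(k)]          # K roughly equal folds
    val_set = folds[val_fold]                        # one fold for verification
    train_set = [p for i, f in enumerate(folds) if i != val_fold for p in f]
    return train_set, val_set

train, val = kfold_split([f"busbar_{i:04d}.jpg" for i in range(100)], k=10)
```

With 100 labeled images and K = 10, each fold holds 10 images, giving the 90:10 split described above.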
Example 3
On the basis of embodiment 2, smoothing and filtering the busbar region of interest image output by the trained YOLOV4 model to filter out the region of interest noise. As shown in fig. 3, the specific steps are as follows:
processing the interested bus region image through median filtering, and filtering the interested region noise;
wherein the processing of the bus region of interest image by median filtering includes smoothing and filtering.
Further, the smoothing and filtering process includes:
and sorting the local pixels of the bus region-of-interest image, calculating the gray value of each pixel point among the local pixels, and selecting the median as the gray value of the current output pixel. The noise of the bus region-of-interest image is filtered while the bus boundary in the image is well preserved.
Wherein the median filtering calculation formula is:

$$g(x, y) = \operatorname{med}\{\, f(i, j) \mid (i, j) \in S_{xy} \,\}$$

wherein $f(i, j)$ represents the gray value corresponding to each pixel point in the local region $S_{xy}$, $g(x, y)$ represents the median-filtered value within the current region, and $(i, j)$, $(x, y)$ are the horizontal and vertical coordinates of pixel points in the region.
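The median filtering formula above can be sketched in code; this is a minimal NumPy illustration (the window size and test image are assumptions), not the patent's implementation:

```python
import numpy as np

def median_filter(img, ksize=3):
    """Sort the gray values in each ksize x ksize neighborhood and take
    the median as the output pixel: removes impulse noise, keeps edges."""
    pad = ksize // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + ksize, x:x + ksize])
    return out

# A single salt-noise pixel in a flat region is removed.
noisy = np.full((5, 5), 100, dtype=np.uint8)
noisy[2, 2] = 255
clean = median_filter(noisy)
```

Because the median of the 3×3 window around the noise pixel is the background gray value, the spike disappears while step edges remain unblurred.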
Example 4
The present embodiment is based on embodiment 3, and the extracting of the bus bar feature from the feature map includes:
extracting bending deformation characteristics of the bus bar from the characteristic diagram; wherein the bus bar bending deformation characteristics include bus bar profile information;
further, extracting the bus contour information specifically includes: and calculating the maximum connected region of the input pixels by using a Laplace edge detection algorithm to obtain the edge profile information of the bus.
Further, in step S3, the method for extracting a bus bar feature from the feature map further includes:
extracting bus scratch features from the feature map, wherein the main characteristic of a bus scratch is a region of the bus with large gray-value variation;
further, the extracting of the bus scratch features from the feature map includes training a target detection model by taking the YOLOV4 model as a basis and combining an STN spatial domain attention mechanism, and performing secondary processing on the located bus region image to extract the bus scratch features.
That is, the bus bar features include at least the bus bar bending deformation feature and the bus bar scratch feature.
Example 5
In this embodiment, on the basis of embodiment 4, a method for detecting bending deformation of a rigid bus bar is provided, which specifically includes the following steps:
s41, calculating to obtain a bus bar related line according to the bus bar outline information; performing linear fitting operation on the busbar to obtain a linear equation of the busbar; and judging whether the bus bar is bent and deformed or not by extracting the maximum distance between the point on the bus bar line and the bus bar linear equation and comparing the maximum distance with a preset distance threshold.
Further, the step S41 includes:
s411, calculating the maximum connected region of the pixels of the input image by using a Laplace edge detection algorithm to obtain a bus edge profile;
further, performing second-order partial derivative on the image of the interesting bus region after the noise is filtered to obtain a Laplacian, and calculating to obtain bus edge information through a second-order difference algorithm;
the calculation formula of the Laplace edge detection algorithm is as follows:

$$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$$

The calculation formula of the Laplace operator (second-order difference form) is as follows:

$$\nabla^2 f(x, y) = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4 f(x, y)$$

wherein $x, y$ are respectively the horizontal and vertical coordinates in the image, used for representing the position information of each pixel value.
Further, through a convolution operation, the bus edge information is highlighted and the bus contour information is obtained;
the convolution operation specifically includes: convolving the noise-filtered bus region-of-interest image with the Laplacian kernel to highlight the bus edge information, thereby obtaining the bus contour information.
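The second-order difference Laplacian and the subsequent convolution step might be sketched as follows; this is a minimal NumPy illustration with a synthetic step-edge image (border handling and the test image are assumptions):

```python
import numpy as np

def laplace_edges(img):
    """Apply the discrete Laplacian f(x+1,y)+f(x-1,y)+f(x,y+1)+f(x,y-1)-4f(x,y)
    to highlight busbar edges (borders replicated)."""
    f = img.astype(float)
    p = np.pad(f, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1]      # vertical neighbors
            + p[1:-1, :-2] + p[1:-1, 2:]    # horizontal neighbors
            - 4.0 * f)

# A vertical step edge responds only at the boundary columns;
# flat regions give zero response.
img = np.zeros((5, 6))
img[:, 3:] = 100.0
resp = laplace_edges(img)
```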
S412, extracting bus bar lines from the bus bar contour information by using the Hough line detection algorithm;
and screening out interference lines according to the length and the angle of the bus bar lines to obtain bus bar straight lines, as shown in the schematic diagram of fig. 4 before bus bar straight-line filtering and the schematic diagram of fig. 5 after bus bar straight-line filtering.
The Hough line detection algorithm has the following calculation formula:

$$\rho = x \cos\theta + y \sin\theta$$

wherein $L = \{\, l_1, l_2, \dots \,\}$ represents the set of lengths of all extracted lines; the straight lines are filtered by the angle threshold $\theta_{\min} \le \theta \le \theta_{\max}$, where $[\theta_{\min}, \theta_{\max}]$ represents the threshold range.
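The length-and-angle screening of interference lines can be sketched as below; in practice the candidate segments would come from a Hough transform such as OpenCV's `HoughLinesP`, and the thresholds here are illustrative assumptions rather than values from the patent:

```python
import math

def filter_lines(segments, min_len=50.0, angle_range=(-10.0, 10.0)):
    """Keep only segments whose length and angle from the horizontal
    (in degrees) fall inside the busbar's expected range."""
    kept = []
    for (x1, y1), (x2, y2) in segments:
        length = math.hypot(x2 - x1, y2 - y1)
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
        if length >= min_len and angle_range[0] <= angle <= angle_range[1]:
            kept.append(((x1, y1), (x2, y2)))
    return kept

# A long, near-horizontal busbar line passes; a short, steep one is rejected.
candidates = [((0, 10), (200, 12)), ((5, 5), (10, 40))]
busbar_lines = filter_lines(candidates)
```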
And S413, fitting the bus straight line by using a weighted least square method to obtain a bus fitting straight line.
Wherein, the step of fitting the bus bar straight line includes screening the points on the bus bar line by the condition:

$$d_i < T$$

wherein $T$ is a threshold value and $d_i$ indicates the distance of a point on the bus bar to the straight line.
Further, according to the bus-bar fitting straight line, the offset sums of the points on the bus bar with respect to the x and y directions are calculated by the least square method, represented by $D_{xx}$, $D_{xy}$, $D_{yy}$; the specific formulas are as follows:

$$D_{xx} = \sum_{i=1}^{n}(x_i - \bar{x})^2, \quad D_{xy} = \sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y}), \quad D_{yy} = \sum_{i=1}^{n}(y_i - \bar{y})^2$$

wherein $\bar{x}, \bar{y}$ represent the average values of $x_i, y_i$; $x_i, y_i$ represent the horizontal and vertical coordinates of the points on the bus bar line, and $n$ indicates the number of points.

Further, the offsets are used by the least square method to obtain the fitted straight-line equation $y = kx + b$; the specific calculation formulas are as follows:

$$k = \frac{D_{xy}}{D_{xx}}, \quad b = \bar{y} - k \bar{x}$$
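The least-squares fit via the offset sums $D_{xx}$ and $D_{xy}$ might be sketched as follows (a minimal illustration; the sample points are an assumption):

```python
import numpy as np

def fit_line(xs, ys):
    """Least-squares fit of y = k*x + b via the offset sums
    D_xx = sum((x - x_bar)^2) and D_xy = sum((x - x_bar)(y - y_bar))."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    x_bar, y_bar = xs.mean(), ys.mean()
    d_xx = np.sum((xs - x_bar) ** 2)
    d_xy = np.sum((xs - x_bar) * (ys - y_bar))
    k = d_xy / d_xx          # slope
    b = y_bar - k * x_bar    # intercept through the centroid
    return k, b

k, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # points on y = 2x + 1
```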
and S414, calculating the maximum distance from the extracted points on the bus bar line to the bus bar fitting straight line, and comparing the maximum distance with a preset distance threshold value to judge whether the bus bar is bent and deformed.
Further, when the maximum distance from the point on the bus bar line to the bus bar fitting straight line is greater than or equal to a preset distance threshold value, judging that the bus bar is bent and deformed;
otherwise, it is determined that the bus bar is not bent and deformed.
Further, the distance between a point on the bus bar line and the bus bar fitting straight line is calculated according to the following formula:

$$d_i = \frac{|k x_i - y_i + b|}{\sqrt{k^2 + 1}}$$

wherein $T$ indicates the distance threshold, $d_i$ indicates the distance of a point on the bus bar line to the fitted straight line, and the threshold $T$ is set with reference to the hanger wire length.
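Step S414's comparison of the maximum point-to-line distance with the preset threshold can be sketched as follows (the sample points and threshold value are assumptions for illustration):

```python
import math

def is_deformed(points, k, b, threshold):
    """Judge bending: the busbar is deformed when the maximum distance
    d = |k*x - y + b| / sqrt(k^2 + 1) from its points to the fitted
    line reaches the preset threshold."""
    d_max = max(abs(k * x - y + b) / math.sqrt(k * k + 1) for x, y in points)
    return d_max >= threshold, d_max

# Three collinear points plus one outlier beyond the threshold.
bent, d_max = is_deformed([(0, 0), (1, 1), (2, 2), (3, 5)],
                          k=1.0, b=0.0, threshold=1.0)
```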
Example 6
The embodiment provides a method for detecting scratches of a rigid bus bar based on embodiment 4, which specifically includes the following steps:
s42, performing a secondary process on the located bus bar region image, as shown in the schematic diagram of fig. 6, specifically including:
training a target detection model by taking a YOLOV4 model as a basis and combining an STN spatial domain attention mechanism, and extracting bus scratch characteristics;
and positioning and classifying the bus bar scratch characteristics through the trained target detection model to finish the detection of the bus bar scratch defects.
Further, the training of the target detection model based on the YOLOV4 model and combined with the STN spatial domain attention mechanism to extract the bus scratch features includes:
and S421, performing convolution and pooling on the characteristic diagram by using the YOLOV4 model to generate a new bus characteristic diagram theta. The bus characteristic image is extracted, and the image is subjected to down-sampling, so that the size of the image is reduced.
The convolution processing has the calculation formula as follows:

$$V = \frac{U - K}{S} + 1$$

wherein $V$ denotes the image size of the output feature map, $U$ the image size of the input feature map, $K$ the size of the convolution kernel, and $S$ the step size moved during the convolution process.

The pooling processing has the calculation formula as follows:

$$V = \frac{U - K_p}{S_p} + 1$$

wherein $V$ denotes the image size of the output feature map, $K_p$ the pooling size, and $S_p$ the step size of the pooling movement.
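The output-size formulas above can be checked with a small sketch; the 416-pixel input is an assumption (a resolution commonly used with YOLOv4), and padding is omitted to match the formulas as stated:

```python
def conv_out(u, k, s):
    """Output feature-map size of a convolution: V = (U - K) / S + 1."""
    return (u - k) // s + 1

def pool_out(u, k, s):
    """Output feature-map size of pooling: V = (U - K) / S + 1."""
    return (u - k) // s + 1

# A 416-pixel input through a 3x3 convolution with stride 1,
# then a 2x2 pooling with stride 2:
v1 = conv_out(416, 3, 1)
v2 = pool_out(v1, 2, 2)
```

With "same" padding the convolution formula gains a $+2P$ term in the numerator, but that variant is not written in the formulas above.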
Further, batch normalization processing is carried out on the gray value data of each pixel of the bus characteristic diagram theta, the pixel values of the image are distributed in the range of [0,1], and BN operation is added before the characteristic diagram U is input into each layer, so that the convergence speed of the model is accelerated.
Further, by activating the function RELU, the YOLOV4 model becomes sparse, and the interdependencies of the parameters are reduced to alleviate the over-fitting problem.
Wherein, the specific formula of the activation function ReLU is as follows:

$$\mathrm{ReLU}(z) = \max(0, z)$$

that is, when the weight $z < 0$, the activation function ReLU outputs 0.
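A minimal sketch of the batch normalization to [0, 1] and the ReLU activation described above; the min-max squashing is a simplified stand-in for learned BN parameters, and the sample arrays are assumptions:

```python
import numpy as np

def batch_norm_01(x, eps=1e-5):
    """Standardize gray values, then min-max squash into [0, 1]; a
    simplified stand-in for learned batch-normalization parameters."""
    z = (x - x.mean()) / np.sqrt(x.var() + eps)
    return (z - z.min()) / (z.max() - z.min())

def relu(z):
    """ReLU outputs 0 for z < 0 and z otherwise, sparsifying the model."""
    return np.maximum(0.0, z)

activated = relu(np.array([-2.0, 0.0, 3.0]))
normed = batch_norm_01(np.array([1.0, 2.0, 3.0, 4.0]))
```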
And S422, performing rotation, flipping and cropping on the bus feature map θ by using an affine transformation tool so as to improve the richness of the training data set. Wherein the affine transformation tool is a generator; other affine transformation tools are also possible.
S423, a bilinear interpolation algorithm is used so that the entire YOLOV4 model can be trained end to end by back propagation, iterating the model weights in a loop to obtain the optimal solution, as shown in fig. 7; the specific formula is:

$$P(x, y) = P(x_0, y_0)(x_1 - x)(y_1 - y) + P(x_1, y_0)(x - x_0)(y_1 - y) + P(x_0, y_1)(x_1 - x)(y - y_0) + P(x_1, y_1)(x - x_0)(y - y_0)$$

wherein $(x_0, y_0)$, $(x_1, y_0)$, $(x_0, y_1)$, $(x_1, y_1)$ are the coordinate information of the current image area; $P(x, y)$ represents the pixel value of the point after bilinear interpolation, $P(x_0, y_0)$ represents the pixel value at the lower-left corner coordinate of the region, $P(x_1, y_0)$ the lower-right corner, $P(x_0, y_1)$ the upper-left corner, and $P(x_1, y_1)$ the upper-right corner.
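The bilinear interpolation formula, assuming a unit cell between the four neighboring pixels (so the normalizing denominator is 1), can be sketched as:

```python
def bilinear(p00, p10, p01, p11, x, y):
    """Bilinear interpolation on the unit cell: p00, p10, p01, p11 are the
    pixel values at the lower-left, lower-right, upper-left and upper-right
    corners; (x, y) in [0, 1]^2 is the sampling point."""
    return (p00 * (1 - x) * (1 - y) + p10 * x * (1 - y)
            + p01 * (1 - x) * y + p11 * x * y)

# At the cell centre the interpolant is the average of the four corners;
# at a corner it reproduces that corner's pixel value exactly.
centre = bilinear(0.0, 100.0, 100.0, 200.0, 0.5, 0.5)
```

Because the weights are differentiable in x and y, gradients can flow through this sampling step, which is what lets the STN and the rest of the model train end to end.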
Further, training a target detection model through the labeled training data set; and positioning and classifying the bus scratch characteristics by using the trained target detection model, completing the bus scratch detection, and obtaining the bus scratch defect position.
Example 7
The embodiment provides a rigid busbar defect detection device suitable for the rigid busbar defect detection method, and the rigid busbar defect detection device comprises a busbar image acquisition module, a busbar image processing module and a busbar defect detection module; the bus bar defect detection module comprises a bus bar bending deformation detection module and a bus bar scratch detection module;
the bus bar image acquisition module at least comprises a linear array camera and/or an area array camera and is used for acquiring bus bar images;
the image processing module is used for processing and detecting bus bar images, positioning the bus bars and obtaining bus bar characteristics;
the bus bar bending deformation detection module is used for calculating and detecting the bus bar bending deformation according to the bus bar characteristics;
the bus bar scratch detection module comprises the Yolov4+ STN target detection model and is used for detecting bus bar scratches according to the positioned bus bar and a bus bar characteristic diagram.
The rigid bus bending deformation and scratch detection method provided by the invention performs model training based on the deep learning model YOLOV4, positions the bus and extracts bus features, and reduces the influence of external interference noise. Bus noise is further filtered by median filtering, and the bus image is smoothed so that the bus boundary is well preserved. The method can simultaneously detect the bending deformation and the scratch defects of the bus: it can quickly and accurately judge whether the bus is bent, and it realizes bus scratch detection and identification based on YOLOV4 and the spatial domain attention mechanism STN, judging the type of the bus scratch defect. The detection precision and efficiency for bus defects are effectively improved, bus fault prompts can be provided for the safe operation of trains, and a technical reference is provided for bus fault maintenance. The method therefore has great safety significance and practical application value.
The embodiments of the present invention have been described in detail, but the present invention is not limited to the embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present invention, and the equivalents or substitutions are included in the scope of the present invention defined by the claims.
Claims (13)
1. A rigid busbar defect detection method is characterized by comprising the following steps:
s1, positioning the interested bus area image from the acquired bus images by using a deep learning model;
s2, smoothing and filtering the interested bus region image through median filtering, and filtering the interested bus region image noise;
s3, preprocessing the image of the interest bus region after noise filtering to obtain a feature map, and extracting bus features from the feature map;
and S4, performing bus defect detection according to the extracted bus characteristics, wherein the bus defect detection comprises bus bending deformation detection and/or bus scratch detection.
2. The rigid busbar defect detection method of claim 1, wherein: in step S1, locating the image of the bus region of interest from the acquired bus image by using the deep learning model specifically includes:
s11, preprocessing the acquired bus bar image, and carrying out Yolov4 model training;
s12, verifying the trained Yolov4 model to determine whether the Yolov4 model needs to be retrained;
s13, positioning the bus by using the trained YOLOV4 model, selecting an interested bus area image, and outputting the coordinates of the interested bus area image.
3. The rigid busbar defect detecting method according to claim 2, wherein the step S11 is performed by preprocessing the acquired busbar image and performing YOLOV4 model training, and comprises:
s111, acquiring a bus bar image by using a linear array camera or an area array camera arranged on the roof of the vehicle;
s112, rotating, overturning and amplifying the acquired bus bar image to enhance the bus bar image data;
s113, marking the busbar image by using a picture marking tool, and dividing the marked busbar image into a training set and a verification set by using a K-fold cross verification method;
s114, training the training set to obtain a trained Yolov4 model.
4. The rigid busbar defect detecting method according to claim 3, wherein: the step S12 of verifying the trained YOLOV4 model to determine whether the YOLOV4 model needs to be retrained includes:
verifying the positioning accuracy of the trained Yolov4 model through the verification set;
if the expected effect is not achieved, repeating the step S11;
if the expected effect is achieved, the YOLOV4 model training is complete.
5. The rigid busbar defect detection method of claim 1, wherein the smoothing and filtering process comprises:
and sequencing local pixels of the interested bus region image, calculating the gray value of each pixel point in the local pixels, selecting a median as the gray value of the current input pixel, and removing noise interference.
6. The rigid busbar defect inspection method of claim 5, wherein the busbar characteristics include at least busbar bending deformation characteristics and busbar scratching characteristics.
7. The rigid busbar defect detecting method according to claim 6,
the busbar bending deformation feature comprising: bus profile information;
the bus bar bending deformation detection includes: calculating to obtain a bus bar related line by using the bus bar outline information; fitting the busbar line to obtain a busbar fitting straight line; and comparing the maximum distance from the point on the bus bar line to the bus bar fitting straight line with a preset distance threshold value, and judging whether the bus bar is bent and deformed.
8. The rigid bus bar defect detection method of claim 7, wherein the bus bar edge contour information is obtained by calculating a maximum connected region of input pixels using a laplacian edge detection algorithm.
9. The rigid bus bar defect detection method according to claim 8, wherein a Hough line detection algorithm is used for extracting bus bar lines from the bus bar contour information, and interference lines are screened out according to the length and the angle of the bus bar lines to obtain bus bar straight lines; and the bus bar straight lines are fitted by a weighted least square method to obtain a bus bar fitting straight line.
10. The rigid busbar defect detecting method according to claim 8, wherein when a maximum distance from a point on the busbar line to the busbar fitting straight line is greater than or equal to a preset distance threshold, it is determined that the busbar is bent.
11. The rigid busbar defect detection method of claim 6, wherein the busbar scratch detection comprises:
based on the trained YOLOV4 model, extracting and obtaining the bus scratch characteristics by combining an STN spatial domain attention mechanism; training a target detection model through the labeled training samples; and positioning and classifying the bus scratch characteristics through a trained target detection model.
12. The rigid busbar defect detection method according to claim 11, wherein the extracting of busbar scratch features through the trained YOLOV4 model and the STN spatial domain attention mechanism comprises:
performing convolution and pooling on the characteristic diagram by using the YOLOV4 model to generate a new bus characteristic diagram theta;
carrying out batch normalization processing on the gray value data of each pixel of the bus feature map θ so that the pixel values of the image are distributed in the range of [0, 1];
activating the function RELU to make the YOLOV4 model sparse;
carrying out rotation, flipping and cropping processing on the bus feature map θ by using an affine transformation tool so as to improve the richness of the training data set;
by using a bilinear interpolation algorithm, the whole Yolov4 model can be subjected to back propagation training end to end, and the model weight is continuously updated in an iterative manner to obtain the optimal solution of the weight.
13. A rigid busbar defect detecting device suitable for the rigid busbar defect detecting method according to any one of claims 1 to 12, comprising an image acquisition module, an image processing module, and a busbar defect detecting module; the bus bar defect detection module comprises a bus bar bending deformation detection module and a bus bar scratch detection module;
the image acquisition module at least comprises a linear array camera and/or an area array camera and is used for acquiring bus bar images;
the image processing module is used for processing the bus bar image acquired by the image acquisition module so as to position the bus bar and obtain bus bar characteristics;
the bus bar bending deformation detection module is used for calculating and detecting the bus bar bending deformation according to the bus bar characteristics;
the bus scratch detection module comprises a YOLOV4+ STN target detection model and is used for detecting bus scratches according to the positioned bus and a bus characteristic diagram.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210367179.9A CN114897778A (en) | 2022-04-08 | 2022-04-08 | Rigid busbar defect detection method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114897778A true CN114897778A (en) | 2022-08-12 |
Family
ID=82715430
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210367179.9A Pending CN114897778A (en) | 2022-04-08 | 2022-04-08 | Rigid busbar defect detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114897778A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116703891A (en) * | 2023-07-31 | 2023-09-05 | 苏州精控能源科技有限公司 | Welding detection method and device for cylindrical lithium battery busbar |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||