CN114897778A - Rigid busbar defect detection method and device - Google Patents

Rigid busbar defect detection method and device

Info

Publication number
CN114897778A
CN114897778A
Authority
CN
China
Prior art keywords
bus
bus bar
image
busbar
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210367179.9A
Other languages
Chinese (zh)
Inventor
占栋
王瑞峰
周蕾
梁四平
王云龙
赵杰超
张金鑫
董建明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Tangyuan Electric Co Ltd
Original Assignee
Chengdu Tangyuan Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Tangyuan Electric Co Ltd filed Critical Chengdu Tangyuan Electric Co Ltd
Priority: CN202210367179.9A
Publication: CN114897778A
Legal status: Pending

Classifications

    • G06T 7/0004 - Industrial image inspection
    • G06N 3/045 - Combinations of neural networks
    • G06N 3/084 - Learning methods; backpropagation, e.g. using gradient descent
    • G06T 3/4007 - Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • G06T 3/60 - Rotation of whole images or parts thereof
    • G06T 5/20 - Image enhancement or restoration using local operators
    • G06T 5/70 - Denoising; smoothing
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/44 - Local feature extraction, e.g. edges, contours, corners; connectivity analysis
    • G06V 10/764 - Recognition using classification, e.g. of video objects
    • G06V 10/774 - Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82 - Recognition using neural networks
    • G06T 2207/20032 - Median filtering
    • G06T 2207/20081 - Training; learning
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G06T 2207/20104 - Interactive definition of region of interest [ROI]
    • G06T 2207/30108 - Industrial image inspection
    • G06T 2207/30164 - Workpiece; machine component
    • G06V 2201/07 - Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for detecting defects of a rigid busbar, relating to the technical field of image processing and image recognition. The method comprises: S1, locating the busbar region-of-interest image within the acquired busbar images using a deep learning model; S2, filtering noise from the busbar region-of-interest image; S3, preprocessing the noise-filtered region-of-interest image to obtain a feature map, and extracting busbar features from the feature map; and S4, performing busbar defect detection according to the extracted busbar features, the detection comprising busbar bending-deformation detection and/or busbar scratch detection. The busbar defect detection method provided by the invention can rapidly and accurately determine whether the busbar is bent or deformed, rapidly identify the type of busbar scratch, and locate the scratch position, and is suitable for rigid catenary systems in the rail transit field.

Description

Rigid busbar defect detection method and device
Technical Field
The invention relates to the technical field of image processing and image recognition, in particular to a rigid busbar defect detection method and device.
Background
In electrified railways, overhead rigid catenary systems are now widely used in subways of large cities because the contact wire carries no tension and the system has few parts, low clearance requirements, and little maintenance. The busbar, an important component of the rigid catenary, holds the contact wire in place so that the pantograph can slide freely along the contact wire and collect power. However, as subway operating time accumulates, various busbar faults are gradually exposed: because the rigid catenary has poor local elasticity, abrasion between the pantograph and the busbar scratches the busbar, the threads of the intermediate-joint connecting plate in the busbar wear smooth, and the contact wire can drop out of the busbar groove. Since headways in subway operation are short, a serious busbar fault directly degrades the train's current-collection quality; detecting busbar defects is therefore an urgent problem for the application and popularization of rigid catenary systems.
At present, detection of rigid-busbar abrasion faults relies mainly on manual line inspection by maintenance personnel, which is inefficient and prone to missed detections. Efficiently and accurately identifying defects of the rigid-catenary busbar therefore has very important practical significance.
Disclosure of Invention
To overcome the above shortcomings of the prior art, the invention aims to provide a rigid busbar defect detection method and device that identify abnormal conditions such as busbar bending deformation and scratches and detect potential safety hazards of the catenary with high efficiency and high accuracy, which is of great significance for safety and of practical application value.
The technical scheme of the invention is as follows:
A rigid busbar defect detection method comprises the following steps:
S1, locating the busbar region-of-interest image within the acquired busbar images using a deep learning model;
S2, filtering noise from the busbar region-of-interest image;
S3, preprocessing the noise-filtered busbar region-of-interest image to obtain a feature map, and extracting busbar features from the feature map;
and S4, performing busbar defect detection according to the extracted busbar features, the detection comprising busbar bending-deformation detection and/or busbar scratch detection.
Further, step S1 includes:
S11, preprocessing the acquired busbar images and training the YOLOV4 model;
S12, validating the trained YOLOV4 model to determine whether it needs retraining;
S13, locating the busbar with the trained YOLOV4 model, selecting the busbar region-of-interest image, and outputting its coordinates; the coordinates identify the position of the busbar.
Further, in step S11, the acquired busbar images are preprocessed and the YOLOV4 model is trained; the detailed steps are:
S111, acquiring busbar images with a line-scan or area-scan camera;
S112, augmenting the acquired busbar images by rotation, flipping, scaling, and similar operations;
S113, labeling the busbar images with a picture-labeling tool, and dividing the labeled images into a training set and a validation set by K-fold cross-validation (K = 10);
S114, training on the training set; specifically, the YOLOV4 model is trained with the YOLOV4 target detection algorithm and used for locating the busbar.
Further, in step S12, the trained YOLOV4 model is validated to determine whether it needs retraining, specifically: the localization accuracy of the YOLOV4 model trained in step S11 is verified on the validation set;
if the expected performance is not reached, step S11 is repeated;
if it is reached, the YOLOV4 model training is complete.
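The K-fold split of step S113 can be sketched as follows; this is an illustrative sketch only, and the file names and the `kfold_split` helper are hypothetical, not the patent's actual tooling.

```python
# Hypothetical sketch of the K = 10 fold split described in step S113:
# shuffle the labeled images and hold out one fold (1/K) for validation.
import random

def kfold_split(items, k=10, fold=0, seed=42):
    """Shuffle items deterministically and hold out fold `fold` for validation."""
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    folds = [items[i::k] for i in range(k)]
    val = folds[fold]
    train = [x for i, f in enumerate(folds) if i != fold for x in f]
    return train, val

# Hypothetical labeled-image names; 100 images give a 90:10 split, i.e. 9:1.
images = [f"busbar_{i:04d}.jpg" for i in range(100)]
train, val = kfold_split(images, k=10, fold=0)
```

Holding out a different `fold` index on each pass yields the K rotations of cross-validation used to check the localization accuracy in step S12.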
Further, filtering the noise of the busbar region-of-interest image comprises:
processing the region-of-interest image with a median filter to remove the region-of-interest noise;
wherein the median filtering of the region-of-interest image includes smoothing and filtering.
Further, the smoothing and filtering comprise:
sorting the local pixels of the region-of-interest image, computing the gray value of each pixel point in the local window, and selecting the median as the gray value of the current pixel. This filters the noise of the region-of-interest image while preserving the busbar boundary well.
Wherein the median filtering formula is:

    g(x, y) = median{ f(i, j) : (i, j) ∈ S(x, y) },

where f(i, j) is the gray value corresponding to each pixel point in the local window S(x, y), g(x, y) is the median-filtered value within the current region, and (i, j) and (x, y) are the horizontal and vertical coordinates of pixel points in the region.
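The median filtering above can be sketched as follows; this is a minimal numpy-only stand-in for illustration (a production pipeline would more likely call a library routine such as OpenCV's medianBlur), and the `median_filter` name is hypothetical.

```python
# Minimal 3x3 median filter sketch: for each pixel, replace its gray value
# with the median of the surrounding window, as in g(x, y) = median{f(i, j)}.
import numpy as np

def median_filter(img, ksize=3):
    pad = ksize // 2
    padded = np.pad(img, pad, mode="edge")  # replicate borders
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + ksize, x:x + ksize]
            out[y, x] = np.median(window)
    return out

# An isolated salt-noise pixel is removed while the flat background is kept.
noisy = np.full((5, 5), 100, dtype=np.uint8)
noisy[2, 2] = 255
clean = median_filter(noisy)
```

Because the median (rather than a mean) is taken, a sharp busbar boundary is not blurred: a step edge contributes whole pixel values to the window, so the output stays on one side of the edge.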
Further, in step S3, extracting the busbar features from the feature map includes:
extracting the busbar bending-deformation feature from the feature map, wherein the bending-deformation feature comprises the busbar contour information.
Further, extracting the busbar contour information specifically includes: computing the maximum connected region of the input pixels with the Laplace edge detection algorithm to obtain the busbar edge contour information.
Further, in step S3, extracting the busbar features from the feature map also includes:
extracting the busbar scratch feature from the feature map, whose main characteristic is a region of large gray-value variation on the busbar.
Further, extracting the busbar scratch feature from the feature map includes training a target detection model based on the YOLOV4 model combined with an STN spatial-domain attention mechanism, and performing secondary processing on the located busbar region image to extract the scratch feature.
That is, the busbar features include at least the busbar bending-deformation feature and the busbar scratch feature.
Further, the busbar bending-deformation detection in step S4 includes:
S41, computing the busbar lines from the busbar contour information; performing a line-fitting operation on the busbar lines to obtain the busbar fitted line; and judging whether the busbar is bent by comparing the maximum distance from points on the busbar line to the fitted line against a preset distance threshold.
Further, step S41 includes:
S411, computing the maximum connected region of the input pixels with the Laplace edge detection algorithm to obtain the busbar edge contour.
Further, taking the second-order partial derivatives of the noise-filtered busbar region-of-interest image gives the Laplacian, and the busbar edge information is computed with a second-order difference algorithm;
the Laplace edge detection formula is:

    ∇²f(x, y) = ∂²f/∂x² + ∂²f/∂y²,

and its second-order difference (discrete Laplace operator) form is:

    ∇²f(x, y) = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4·f(x, y),

where x and y are the horizontal and vertical coordinates in the image, representing the position of each pixel value.
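The second-order difference form of the Laplacian can be sketched as follows; the sketch is numpy-only for illustration, and the border pixels are simply left at zero.

```python
# Discrete Laplacian of the interior pixels:
# f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4*f(x,y).
# The response is zero on flat regions and large at gray-value steps (edges).
import numpy as np

def laplacian(img):
    f = img.astype(float)
    out = np.zeros_like(f)
    out[1:-1, 1:-1] = (f[2:, 1:-1] + f[:-2, 1:-1]
                       + f[1:-1, 2:] + f[1:-1, :-2]
                       - 4.0 * f[1:-1, 1:-1])
    return out

# A vertical step edge: left half dark (0), right half bright (100).
img = np.zeros((5, 6))
img[:, 3:] = 100.0
edges = laplacian(img)
```

The response changes sign across the step (positive on the dark side, negative on the bright side), which is why zero crossings of the Laplacian mark edge positions.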
Further, the busbar edge information is emphasized by a convolution operation to obtain the busbar contour information.
The convolution operation specifically comprises: convolving the noise-filtered busbar region-of-interest image with the Laplacian kernel to highlight the busbar edge information, thereby obtaining the busbar contour information.
S412, extracting busbar line segments from the busbar contour information with the Hough line detection algorithm, and screening out interfering lines by segment length and angle to obtain the busbar lines.
The Hough line detection parameterizes each line as:

    ρ = x·cos θ + y·sin θ,

and the extracted segments are then screened as:

    L = { l_i : len(l_i) ≥ l_min and θ_min ≤ θ_i ≤ θ_max },

where L is the set of extracted lines of all lengths, the condition on θ_i filters each line by the angle threshold, and [θ_min, θ_max] together with l_min is the threshold range.
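The length-and-angle screening of detected segments can be sketched as follows; the segment format (endpoint tuples, as returned by typical probabilistic Hough implementations) and the threshold values are illustrative assumptions.

```python
# Keep only detected line segments whose length is at least min_len and whose
# angle (degrees, folded into [0, 180)) lies in the given range, mirroring the
# screening of interfering lines in step S412.
import math

def filter_lines(lines, min_len, theta_range):
    lo, hi = theta_range
    kept = []
    for (x1, y1, x2, y2) in lines:
        length = math.hypot(x2 - x1, y2 - y1)
        theta = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180
        if length >= min_len and lo <= theta <= hi:
            kept.append((x1, y1, x2, y2))
    return kept

# A near-horizontal long segment (busbar-like), a short stub, a vertical line.
lines = [(0, 0, 100, 2), (0, 0, 5, 5), (0, 0, 0, 80)]
busbar = filter_lines(lines, min_len=50, theta_range=(0, 10))
```

Only the long, near-horizontal segment survives, which matches the intent of rejecting short and steeply angled interference lines.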
And S413, fitting the busbar lines by the weighted least squares method to obtain the busbar fitted line.
Wherein the specific steps of fitting the busbar line are:
let the line equation be:

    y = k·x + b,

where k and b are coefficients;
initialize the weight of each point, with the formula:

    w_i = 1,        d_i ≤ τ,
    w_i = τ / d_i,  d_i > τ,

where τ is a threshold value and d_i is the distance of a point on the busbar to the line.
Further, according to the busbar fitted line, the least squares method computes the offsets of the busbar points in the xx, xy, and yy directions, denoted D_xx, D_xy, D_yy:

    D_xx = Σ_i (x_i − x̄)²,
    D_xy = Σ_i (x_i − x̄)·(y_i − ȳ),
    D_yy = Σ_i (y_i − ȳ)²,

where x̄ and ȳ denote the mean values of x_i and y_i, (x_i, y_i) are the horizontal and vertical coordinates of the points on the busbar line, and n is the number of points.
Further, the fitted line equation is obtained from the offsets by least squares; the specific formulas are:

    k = D_xy / D_xx,
    b = ȳ − k·x̄,

giving the fitted line y = k·x + b.
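The least-squares fit from the D_xx and D_xy sums can be sketched as follows; the `fit_line` name is a hypothetical illustration of the (unweighted) base case.

```python
# Ordinary least-squares line fit y = k*x + b from the centered sums:
# D_xx = sum((x - x_mean)^2), D_xy = sum((x - x_mean)*(y - y_mean)),
# then k = D_xy / D_xx and b = y_mean - k * x_mean.
import numpy as np

def fit_line(xs, ys):
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    xm, ym = xs.mean(), ys.mean()
    dxx = np.sum((xs - xm) ** 2)
    dxy = np.sum((xs - xm) * (ys - ym))
    k = dxy / dxx
    b = ym - k * xm
    return k, b

# Points lying exactly on y = 2x + 1 recover k = 2, b = 1.
k, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

In the weighted variant of step S413, each squared term would simply be multiplied by its point weight w_i before summing.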
and S414, calculating the maximum distance from the point on the bus bar line to the bus bar fitting straight line, comparing the maximum distance with a preset distance threshold value, and judging whether the bus bar is bent and deformed.
Further, when the maximum distance from the point on the bus bar line to the bus bar fitting straight line is greater than or equal to a preset distance threshold value, judging that the bus bar is bent and deformed;
otherwise, it is determined that the bus bar is not bent and deformed.
Wherein the distance criterion is:

    d_max = max_i d_i,   and the busbar is judged bent when d_max ≥ T,

where T denotes the distance threshold, d_i denotes the distance of a point on the busbar line to the fitted line, (x_i, y_i) are the horizontal and vertical coordinates of the point, and L denotes the hanger wire length.
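The bending judgment of step S414 can be sketched as follows; the threshold value is an illustrative assumption, and the helpers are hypothetical names, not the patent's implementation.

```python
# Maximum perpendicular distance from sampled busbar points to the fitted
# line y = k*x + b, compared against a preset threshold (step S414).
import math

def max_deviation(points, k, b):
    denom = math.hypot(k, 1.0)  # norm of the line normal (k, -1)
    return max(abs(k * x - y + b) / denom for x, y in points)

def is_bent(points, k, b, threshold):
    return max_deviation(points, k, b) >= threshold

# The last point bulges off the line y = x, signalling bending deformation.
pts = [(0, 0), (1, 1), (2, 2), (3, 5)]
```

With a threshold of 1.0 pixel the bulging point triggers the bent verdict, while points lying on the line do not.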
Further, the busbar scratch detection in step S4 includes:
S42, performing secondary processing on the located busbar region image, specifically: training a target detection model based on the trained YOLOV4 model combined with an STN spatial-domain attention mechanism, extracting the busbar scratch features, and locating and classifying the scratch features with the trained target detection model to complete the scratch defect detection.
Further, training the target detection model based on the YOLOV4 model combined with the STN spatial-domain attention mechanism to extract the busbar scratch features includes:
S421, convolving and pooling the feature map with the YOLOV4 model to generate a new busbar feature map θ; this extracts the busbar features and down-samples the image, reducing its size.
Wherein the convolution and pooling processing are specifically as follows:
the convolution output size is:

    V = (U − K) / S + 1,

where V denotes the image size of the output feature map V, U denotes the image size of the input feature map U, K denotes the convolution kernel size, and S denotes the step size moved during convolution;
the pooling output size is:

    V = (U − P) / S + 1,

where V denotes the image size of the output feature map V, P denotes the pooling size, and S denotes the step size of the pooling movement.
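The output-size formulas above can be checked with a small sketch; the 416-pixel input is an illustrative YOLO-style size, not a value taken from the patent, and a padding parameter is added as a common generalization.

```python
# Feature-map output sizes for convolution and pooling layers.
# Convolution (with optional zero padding P): V = (U - K + 2P) / S + 1.
# Pooling:                                    V = (U - P) / S + 1.
def conv_out(size, kernel, stride=1, pad=0):
    return (size - kernel + 2 * pad) // stride + 1

def pool_out(size, pool, stride):
    return (size - pool) // stride + 1
```

For example, a 3x3 convolution with stride 1 and padding 1 preserves a 416-pixel side, while a 2x2 stride-2 pooling halves it to 208, which is the down-sampling described in step S421.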
Further, batch normalization is applied to the gray-value data of each pixel of the busbar feature map θ so that the pixel values of the image are distributed in the range [0, 1]; a BN operation is added before the feature map U enters each layer, accelerating the convergence of the model.
Further, the activation function RELU makes the YOLOV4 model sparse and reduces the interdependence of the parameters, alleviating the over-fitting problem.
Wherein the formula of the activation function RELU is:

    RELU(z) = max(0, z),

so that when the weight z < 0, the RELU output is 0.
And S422, flipping and cropping the busbar feature map θ again with an affine transformation tool to improve the richness of the training data set; the affine transformation is produced by an affine transformation generator, and other affine transformation tools may also be used.
S423, using a bilinear interpolation algorithm so that the whole YOLOV4 model can be trained end-to-end by back-propagation, iterating the model weights to the optimal solution; the calculation formula (for a unit grid cell) is:

    P(x, y) = P(x0, y0)·(x1 − x)·(y1 − y) + P(x1, y0)·(x − x0)·(y1 − y)
            + P(x0, y1)·(x1 − x)·(y − y0) + P(x1, y1)·(x − x0)·(y − y0),

where (x, y) is the coordinate in the current image region, P(x, y) is the pixel value of the point after bilinear interpolation, P(x0, y0) is the pixel value at the lower-left corner of the region, P(x1, y0) the lower-right corner, P(x0, y1) the upper-left corner, and P(x1, y1) the upper-right corner.
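The bilinear interpolation formula can be sketched for a unit grid cell as follows; the corner-naming follows the description above and the function name is illustrative.

```python
# Bilinear interpolation inside a unit cell, (x, y) in [0, 1]^2.
# p00 = lower-left, p10 = lower-right, p01 = upper-left, p11 = upper-right.
def bilinear(p00, p10, p01, p11, x, y):
    return (p00 * (1 - x) * (1 - y) + p10 * x * (1 - y)
            + p01 * (1 - x) * y + p11 * x * y)
```

Because the expression is differentiable almost everywhere in x and y, gradients can flow through the sampling step, which is what lets the STN-augmented model train end-to-end by back-propagation.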
Further, the target detection model is trained with the labeled training data set; the trained target detection model then locates and classifies the busbar scratch features, completing the busbar scratch detection and giving the position of the scratch defect.
The invention also provides a rigid busbar defect detection device implementing the above rigid busbar defect detection method. The device comprises a busbar image acquisition module, a busbar image processing module, and a busbar defect detection module; the busbar defect detection module comprises a busbar bending-deformation detection module and a busbar scratch detection module.
The busbar image acquisition module comprises at least a line-scan camera and/or an area-scan camera, for acquiring busbar images;
the image processing module processes and detects the busbar images, locates the busbar, and obtains the busbar features;
the bending-deformation detection module computes and detects busbar bending deformation from the busbar features;
the scratch detection module contains the YOLOV4+STN target detection model and detects busbar scratches from the located busbar and the busbar feature map.
Compared with the prior art, the invention has the beneficial effects that:
the rigid bus bending deformation and scratch detection method provided by the invention provides model training based on a deep learning model YOLOV4, positions the bus and extracts bus characteristics, and reduces the influence of external interference noise; bus noise is further filtered by using median filtering, and a bus image is subjected to smoothing treatment, so that the boundary of the bus can be well protected; the bus bar defect detection method can simultaneously detect the bending deformation and the scratch defect of the bus bar, can quickly and accurately detect and judge whether the bus bar is bent or not, realizes the scratch detection and identification of the bus bar based on YOLOV4 and a spatial domain attention mechanism STN, and judges the type of the scratch defect of the bus bar. The detection precision and the detection efficiency of the defects of the bus bar are effectively improved, the bus bar fault prompt can be provided for the safe operation of the train, and the technical reference is provided for the bus bar fault maintenance.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a flow chart of the YOLOV4 model training of the present invention.
FIG. 3 is a schematic diagram of bus extraction.
FIG. 4 is a schematic diagram of a bus before being filtered.
FIG. 5 is a schematic diagram of a bus bar after being filtered.
Fig. 6 is a spatial domain attention mechanism STN based on YOLOV 4.
Fig. 7 is a schematic diagram of an implementation process based on a bilinear interpolation algorithm.
Detailed Description
The idea, embodiments, and technical effects of the present invention are described clearly and completely below with reference to the embodiments and the accompanying drawings, so that the objects, features, and effects of the invention can be fully understood.
Example 1
A rigid busbar defect detection method, as shown in fig. 1, comprises the following steps:
S1, locating the busbar region-of-interest image within the acquired busbar images using a deep learning model;
S2, filtering noise from the busbar region-of-interest image;
further, filtering the region-of-interest noise includes image smoothing filtering: smoothing images of poor quality filters out the interference noise;
S3, preprocessing the noise-filtered region-of-interest image to obtain a feature map, and extracting busbar features from the feature map, wherein the busbar features include the bending-deformation feature and the scratch feature;
and S4, performing busbar defect detection from the bending-deformation feature and the scratch feature respectively, comprising busbar bending-deformation detection and/or busbar scratch detection.
The busbar bending-deformation detection includes: preprocessing the noise-filtered region-of-interest image, extracting the relevant busbar lines, fitting a line, computing the maximum point-to-line distance, and judging whether the busbar is bent.
The busbar scratch detection includes: detecting busbar scratch defects with the combined YOLOV4+STN spatial-attention model.
Further, a target detection model is trained on the basis of the YOLOV4 model combined with the STN spatial-domain attention mechanism, and the busbar scratch features are extracted;
the trained target detection model then locates and classifies the scratch features, completing the busbar scratch defect detection.
Example 2
On the basis of embodiment 1, this embodiment locates the busbar region-of-interest image within the acquired busbar images using the deep learning model. As shown in fig. 2, the specific processing steps are:
S11, preprocessing the acquired busbar images and training the YOLOV4 model;
S12, validating the trained YOLOV4 model to determine whether it needs retraining;
S13, locating the busbar with the trained YOLOV4 model, selecting the busbar region-of-interest image, and outputting its coordinates; the coordinates identify the position of the busbar.
Further, the detailed steps in step S11 are:
S111, acquiring bus images by using a linear array camera or an area array camera arranged on the roof of the vehicle;
S112, rotating, flipping and scaling the acquired bus images so as to augment the bus image data;
S113, labeling the bus images by using a picture labeling tool, and dividing the labeled bus images into a training set and a verification set by a K-fold cross-validation method (K = 10);
the picture labeling tool adopts the LabelImg labeling tool; other labeling tools can also be adopted, which are not described in detail herein; the ratio of the training set to the verification set is 9:1, or 8:2;
S114, training the YOLOV4 model on the training set by using the YOLOV4 target detection algorithm.
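The K-fold split of step S113 can be sketched as follows (a minimal Python sketch, not part of the patent; the annotation file names are hypothetical examples):

```python
import random

def split_dataset(samples, k=10, fold=0, seed=42):
    """Split labeled samples into a training and a verification set
    using a K-fold scheme; K = 10 gives the 9:1 ratio from the text."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    val = shuffled[fold::k]                       # every k-th sample forms the verification fold
    train = [s for s in shuffled if s not in val]
    return train, val

# Hypothetical annotation files (e.g. produced with the LabelImg tool)
samples = [f"busbar_{i:03d}.xml" for i in range(100)]
train, val = split_dataset(samples, k=10, fold=0)
print(len(train), len(val))  # 90 10
```

Rotating `fold` from 0 to k−1 yields the K folds of the cross-validation.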
Example 3
On the basis of embodiment 2, smoothing and filtering the busbar region of interest image output by the trained YOLOV4 model to filter out the region of interest noise. As shown in fig. 3, the specific steps are as follows:
processing the interested bus region image through median filtering, and filtering the interested region noise;
wherein the processing of the bus region of interest image by median filtering includes smoothing and filtering.
Further, the smoothing and filtering process includes:
Sorting the local pixels of the bus region-of-interest image, calculating the gray value of each pixel point in the local window, and selecting the median as the gray value of the current output pixel; the noise of the bus region-of-interest image is thus filtered while the bus boundary in the image is well preserved.
Wherein the median filtering calculation formula is:

g(x, y) = median{ f(i, j) | (i, j) ∈ S(x, y) }

wherein f(i, j) represents the gray value corresponding to each pixel point in the local window S(x, y), g(x, y) represents the median-filtered value within the current region, and (i, j) represent the horizontal and vertical coordinates of the pixel points in the area.
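The median filtering step can be sketched as follows (a minimal NumPy sketch, not the patent's implementation; in practice OpenCV's `cv2.medianBlur` performs the same operation):

```python
import numpy as np

def median_filter(img, ksize=3):
    """Sort the pixels in each local window and take the median as the
    output gray value -- removes impulse noise while preserving edges."""
    pad = ksize // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + ksize, x:x + ksize]
            out[y, x] = np.median(window)
    return out

# A flat region with one salt-noise pixel: the median removes it
img = np.full((5, 5), 10, dtype=np.uint8)
img[2, 2] = 255
filtered = median_filter(img)
print(filtered[2, 2])  # 10
```

Unlike mean filtering, the bus boundary is preserved because the median of a window straddling an edge stays on one side of the edge.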
Example 4
The present embodiment is based on embodiment 3, and the extracting of the bus bar feature from the feature map includes:
extracting bending deformation characteristics of the bus bar from the characteristic diagram; wherein the bus bar bending deformation characteristics include bus bar profile information;
further, extracting the bus contour information specifically includes: and calculating the maximum connected region of the input pixels by using a Laplace edge detection algorithm to obtain the edge profile information of the bus.
Further, in step S3, the method for extracting a bus bar feature from the feature map further includes:
extracting the bus scratch feature from the feature map, wherein the main characteristic of a bus scratch is a region of the bus with large variation of gray values;
Further, the extracting of the bus scratch feature from the feature map includes training a target detection model based on the YOLOV4 model combined with an STN spatial-domain attention mechanism, and performing secondary processing on the located bus region image to extract the bus scratch feature.
That is, the bus features include at least the bus bending deformation feature and the bus scratch feature.
Example 5
In this embodiment, on the basis of embodiment 4, a method for detecting bending deformation of a rigid bus bar is provided, which specifically includes the following steps:
S41, calculating the bus-related line according to the bus contour information; performing a straight-line fitting operation on the bus line to obtain the bus straight-line equation; and judging whether the bus is bent and deformed by extracting the maximum distance between points on the bus line and the bus fitted straight line and comparing it with a preset distance threshold.
Further, the step S41 includes:
S411, calculating the maximum connected region of the pixels of the input image by using the Laplace edge detection algorithm to obtain the bus edge profile;
further, taking the second-order partial derivatives of the noise-filtered bus region-of-interest image to obtain the Laplacian, and calculating the bus edge information through a second-order difference algorithm;
The calculation formula of the Laplace edge detection algorithm is:

∇²f(x, y) = ∂²f/∂x² + ∂²f/∂y²

and the Laplace operator is realized as the second-order difference convolution kernel:

[ 0   1   0 ]
[ 1  −4   1 ]
[ 0   1   0 ]

wherein x and y are respectively the horizontal and vertical coordinates in the image, and are used for representing the position information of each pixel value.
Further, the bus edge information is highlighted through a convolution operation, and the bus contour information is obtained;
the convolution operation specifically includes: convolving the noise-filtered bus region-of-interest image with the Laplace operator kernel to highlight the bus edge information, thereby obtaining the bus contour information.
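Step S411 can be sketched as follows (a minimal sketch; the use of `scipy.ndimage` and the synthetic test image are assumptions for illustration, not the patent's implementation):

```python
import numpy as np
from scipy.ndimage import convolve, label

# Discrete Laplace kernel (second-order difference in x and y)
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]])

def busbar_edges(img):
    """Convolve the noise-filtered region-of-interest image with the
    Laplace kernel and keep only the largest connected edge region."""
    edges = np.abs(convolve(img.astype(float), LAPLACIAN, mode="nearest"))
    mask = edges > 0
    labels, n = label(mask)
    if n == 0:
        return mask
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                      # ignore the background label
    return labels == sizes.argmax()   # maximum connected region = bus contour

# Synthetic image: a bright horizontal bar on a dark background
img = np.zeros((10, 20))
img[4:6, :] = 100
contour = busbar_edges(img)
print(contour.any())  # True
```

The Laplacian responds only where the gray value changes, so the flat interior of the bar and the background are suppressed and the bar's boundary survives as the largest connected component.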
S412, extracting bus lines from the bus contour information by using the Hough line detection algorithm;
and screening out interference lines according to the length and the angle of the bus lines to obtain bus straight lines, as shown in fig. 4 (before bus straight-line filtering) and fig. 5 (after bus straight-line filtering).
The Hough line detection algorithm has the following calculation formula:

ρ = x·cos θ + y·sin θ

wherein (ρ, θ) parameterize a candidate straight line; L = {l_1, l_2, …, l_n} represents the set of all lengths of the extracted lines, the straight lines are filtered by an angle threshold θ ∈ [θ_min, θ_max], and [θ_min, θ_max] represents the threshold range.
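The Hough voting scheme can be sketched as follows (a minimal NumPy sketch for a synthetic edge mask; in practice OpenCV's `cv2.HoughLinesP` with length and angle filtering would be used):

```python
import numpy as np

def hough_lines(mask, n_theta=180):
    """Minimal Hough transform: each edge pixel votes for (rho, theta)
    pairs via rho = x*cos(theta) + y*sin(theta); peaks are line candidates."""
    ys, xs = np.nonzero(mask)
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    diag = int(np.hypot(*mask.shape)) + 1            # max possible |rho|
    acc = np.zeros((2 * diag, n_theta), dtype=int)   # (rho, theta) accumulator
    for x, y in zip(xs, ys):
        rhos = (x * np.cos(thetas) + y * np.sin(thetas)).round().astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas, diag

# A horizontal edge row y = 4 should peak at theta = 90 deg, rho = 4
mask = np.zeros((10, 200), dtype=bool)
mask[4, :] = True
acc, thetas, diag = hough_lines(mask)
r, t = np.unravel_index(acc.argmax(), acc.shape)
print(round(np.degrees(thetas[t])), r - diag)  # 90 4
```

Each collinear pixel votes for the same (ρ, θ) bin, so the vote count of a peak approximates the line's length, which is what the length/angle screening operates on.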
And S413, fitting the bus straight line by using a weighted least square method to obtain a bus fitting straight line.
Wherein, the step of fitting the busbar straight line comprises the following steps:
Let the equation of the straight line be:

y = k·x + b

wherein k and b are coefficients;

initializing the weight, wherein the formula is:

w_i = 1, if d_i ≤ δ;  w_i = δ / d_i, if d_i > δ

wherein δ is a threshold value, and d_i indicates the distance of a point on the bus to the straight line.
Further, according to the bus fitting straight line, the second-order central moments of the points on the bus in the x and y directions are calculated by the least square method, denoted D_xx, D_xy and D_yy; the concrete formulas are:

D_xx = Σ_i w_i (x_i − x̄)²
D_xy = Σ_i w_i (x_i − x̄)(y_i − ȳ)
D_yy = Σ_i w_i (y_i − ȳ)²

wherein x̄ and ȳ represent the average values of x_i and y_i, (x_i, y_i) represent the horizontal and vertical coordinates of the points on the bus line, and n indicates the number of points.
Further, the fitted linear equation y = k·x + b is obtained by the least square method; the specific calculation formulas are:

x̄ = (Σ_i w_i x_i) / (Σ_i w_i)
ȳ = (Σ_i w_i y_i) / (Σ_i w_i)
k = D_xy / D_xx
b = ȳ − k·x̄
and S414, calculating the maximum distance from the extracted points on the bus bar line to the bus bar fitting straight line, and comparing the maximum distance with a preset distance threshold value to judge whether the bus bar is bent and deformed.
Further, when the maximum distance from the point on the bus bar line to the bus bar fitting straight line is greater than or equal to a preset distance threshold value, judging that the bus bar is bent and deformed;
otherwise, it is determined that the bus bar is not bent and deformed.
Further, the distance between a point on the bus line and the bus fitting straight line is calculated according to the following formula:

d_i = |k·x_i − y_i + b| / √(k² + 1)

wherein d_i indicates the distance of the point (x_i, y_i) on the bus line to the fitted straight line; the maximum d_i is compared with the preset distance threshold T, which may be set according to the hanger span length L.
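Steps S413–S414 can be sketched as follows (a minimal NumPy sketch; the re-weighting scheme, threshold values and synthetic data are assumptions, since the patent's exact formulas are given only as figures):

```python
import numpy as np

def fit_line(xs, ys, delta=2.0, iters=3):
    """Weighted least-squares fit of y = k*x + b; points farther than
    `delta` from the current fit are down-weighted on each iteration."""
    w = np.ones_like(xs, dtype=float)
    for _ in range(iters):
        xbar = np.average(xs, weights=w)
        ybar = np.average(ys, weights=w)
        dxx = np.sum(w * (xs - xbar) ** 2)          # D_xx
        dxy = np.sum(w * (xs - xbar) * (ys - ybar)) # D_xy
        k = dxy / dxx
        b = ybar - k * xbar
        # point-to-line distance |k*x - y + b| / sqrt(k^2 + 1)
        d = np.abs(k * xs - ys + b) / np.hypot(k, 1.0)
        w = np.where(d <= delta, 1.0, delta / d)    # down-weight outliers
    return k, b, d

# A straight bus with one bent point
xs = np.arange(20, dtype=float)
ys = 0.5 * xs + 1.0
ys[10] += 6.0                       # simulated bending deformation
k, b, d = fit_line(xs, ys)
THRESHOLD = 3.0                     # preset distance threshold (assumed)
print(d.max() >= THRESHOLD)         # True -> bus judged deformed
```

The down-weighting keeps the fitted line anchored to the straight portion of the bus, so the bent point stands out as a large maximum distance.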
Example 6
The embodiment provides a method for detecting scratches of a rigid bus bar based on embodiment 4, which specifically includes the following steps:
S42, performing secondary processing on the located bus region image, as shown in the schematic diagram of fig. 6, specifically including:
training a target detection model by taking a YOLOV4 model as a basis and combining an STN spatial domain attention mechanism, and extracting bus scratch characteristics;
and positioning and classifying the bus bar scratch characteristics through the trained target detection model to finish the detection of the bus bar scratch defects.
Further, the training of the target detection model based on the YOLOV4 model and combined with the STN spatial domain attention mechanism to extract the bus scratch features includes:
and S421, performing convolution and pooling on the characteristic diagram by using the YOLOV4 model to generate a new bus characteristic diagram theta. The bus characteristic image is extracted, and the image is subjected to down-sampling, so that the size of the image is reduced.
The convolution processing has the calculation formula:

V = (U − F) / S + 1

wherein V represents the image size of the output feature map, U represents the image size of the input feature map, F represents the size of the convolution kernel, and S represents the step size moved during the convolution process.
The pooling processing has the calculation formula:

V = (U − F_p) / S_p + 1

wherein V represents the image size of the output feature map, F_p represents the pooling size, and S_p represents the step size of the pooling movement.
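The output-size formulas can be checked numerically (the 416×416 input size is an assumption, typical for YOLOv4; a padding term P is added for generality):

```python
def conv_out(u, f, s, p=0):
    """Output size of a convolution: V = (U - F + 2P) / S + 1."""
    return (u - f + 2 * p) // s + 1

def pool_out(u, f, s):
    """Output size of a pooling layer: V = (U - F) / S + 1."""
    return (u - f) // s + 1

# 416x416 input, 3x3 conv with stride 1 and padding 1 keeps the size
print(conv_out(416, 3, 1, 1))  # 416
# a 2x2 pool with stride 2 halves the feature map
print(pool_out(416, 2, 2))     # 208
```

Repeated stride-2 stages are what produce the down-sampling described in step S421.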
Further, batch normalization processing is carried out on the gray value data of each pixel of the bus characteristic diagram theta, the pixel values of the image are distributed in the range of [0,1], and BN operation is added before the characteristic diagram U is input into each layer, so that the convergence speed of the model is accelerated.
Further, by using the activation function ReLU, the YOLOV4 model becomes sparse, and the interdependence of the parameters is reduced so as to alleviate the over-fitting problem.
Wherein the formula of the activation function ReLU is:

ReLU(z) = max(0, z)

When the input z < 0, the activation function ReLU outputs 0.
And S422, flipping and cropping the bus feature map theta by using an affine transformation tool, so as to improve the richness of the training data set; wherein the affine transformation tool generates a 2×3 affine matrix

A_θ = [θ11  θ12  θ13; θ21  θ22  θ23]

(as in an STN grid generator); other affine transformation tools are also possible.
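The affine augmentation of S422 can be sketched as follows (a minimal NumPy sketch with nearest-neighbor sampling; the 2×3 matrix stands in for the STN-style affine generator, and the tiny test image is an assumption):

```python
import numpy as np

def affine_transform(img, theta):
    """Apply a 2x3 affine matrix theta to an image by inverse-mapping
    each output pixel back to the source with nearest-neighbor sampling."""
    h, w = img.shape
    A, t = theta[:, :2], theta[:, 2]
    Ainv = np.linalg.inv(A)
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            sx, sy = Ainv @ (np.array([x, y], dtype=float) - t)
            sx, sy = int(round(sx)), int(round(sy))
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = img[sy, sx]
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
# Horizontal flip: x' = (w - 1) - x, y' = y
flip = np.array([[-1.0, 0.0, 3.0],
                 [0.0, 1.0, 0.0]])
print(affine_transform(img, flip)[0].tolist())  # [3.0, 2.0, 1.0, 0.0]
```

Flips, crops, rotations and shears are all special cases of the same 2×3 matrix, which is why one affine tool covers the whole augmentation family.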
And S423, using a bilinear interpolation algorithm so that the entire YOLOV4 model can be trained end to end by back propagation, with the model weights updated iteratively to obtain an optimal solution, as shown in fig. 7; the specific formula is:
P(x, y) = P(x0, y0)(x1 − x)(y1 − y) + P(x1, y0)(x − x0)(y1 − y) + P(x0, y1)(x1 − x)(y − y0) + P(x1, y1)(x − x0)(y − y0)

wherein (x0, y0), (x1, y0), (x0, y1) and (x1, y1) are the coordinate information of the current (unit) image area; P(x, y) represents the pixel value of the point after bilinear interpolation; P(x0, y0) represents the pixel value at the lower left corner coordinate of the area, P(x1, y0) the lower right corner, P(x0, y1) the upper left corner, and P(x1, y1) the upper right corner.
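The bilinear interpolation formula can be evaluated directly (a minimal sketch; the corner naming follows the text, P(x0, y0) at the lower left):

```python
def bilinear(p00, p10, p01, p11, x, y, x0, y0, x1, y1):
    """Bilinear interpolation of the pixel value at (x, y) from the four
    corner values of the cell [x0, x1] x [y0, y1]."""
    fx = (x - x0) / (x1 - x0)
    fy = (y - y0) / (y1 - y0)
    bottom = p00 * (1 - fx) + p10 * fx   # lower edge (y0)
    top = p01 * (1 - fx) + p11 * fx      # upper edge (y1)
    return bottom * (1 - fy) + top * fy

# Center of a unit cell with corner values 0, 10, 20, 30 -> their mean
print(bilinear(0, 10, 20, 30, 0.5, 0.5, 0, 0, 1, 1))  # 15.0
```

Because the weights are continuous in (x, y), the sampling is differentiable, which is what lets gradients flow through the STN grid during end-to-end back propagation.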
Further, training a target detection model through the labeled training data set; and positioning and classifying the bus scratch characteristics by using the trained target detection model, completing the bus scratch detection, and obtaining the bus scratch defect position.
Example 7
The embodiment provides a rigid busbar defect detection device suitable for the rigid busbar defect detection method, and the rigid busbar defect detection device comprises a busbar image acquisition module, a busbar image processing module and a busbar defect detection module; the bus bar defect detection module comprises a bus bar bending deformation detection module and a bus bar scratch detection module;
the bus bar image acquisition module at least comprises a linear array camera and/or an area array camera, and is used for acquiring bus bar images;
the image processing module is used for processing and detecting bus bar images, positioning the bus bars and obtaining bus bar characteristics;
the bus bar bending deformation detection module is used for calculating and detecting the bus bar bending deformation according to the bus bar characteristics;
the bus bar scratch detection module comprises the Yolov4+ STN target detection model and is used for detecting bus bar scratches according to the positioned bus bar and a bus bar characteristic diagram.
The rigid bus bending deformation and scratch detection method provided by the invention trains a model based on the deep learning model YOLOV4, locates the bus and extracts bus features, reducing the influence of external interference noise; bus noise is further filtered by median filtering, and the bus image is smoothed so that the bus boundary is well preserved. The method can simultaneously detect bus bending deformation and scratch defects, can quickly and accurately judge whether the bus is bent, realizes bus scratch detection and identification based on YOLOV4 and the spatial-domain attention mechanism STN, and judges the type of the bus scratch defect. The detection precision and efficiency of bus defect detection are effectively improved, bus fault prompts can be provided for the safe operation of trains, and a technical reference is provided for bus fault maintenance, which has great safety significance and practical application value.
The embodiments of the present invention have been described in detail, but the present invention is not limited to the embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present invention, and the equivalents or substitutions are included in the scope of the present invention defined by the claims.

Claims (13)

1. A rigid busbar defect detection method is characterized by comprising the following steps:
S1, positioning the bus region-of-interest image from the acquired bus images by using a deep learning model;
S2, smoothing and filtering the bus region-of-interest image through median filtering, and filtering out the bus region-of-interest image noise;
S3, preprocessing the noise-filtered bus region-of-interest image to obtain a feature map, and extracting bus features from the feature map;
S4, performing bus defect detection according to the extracted bus features, wherein the bus defect detection comprises bus bending deformation detection and/or bus scratch detection.
2. The rigid busbar defect detection method of claim 1, wherein: in step S1, locating the image of the bus region of interest from the acquired bus image by using the deep learning model specifically includes:
S11, preprocessing the acquired bus image, and carrying out YOLOV4 model training;
S12, verifying the trained YOLOV4 model to determine whether the YOLOV4 model needs to be retrained;
S13, positioning the bus by using the trained YOLOV4 model, selecting the bus region-of-interest image, and outputting the coordinates of the bus region-of-interest image.
3. The rigid busbar defect detecting method according to claim 2, wherein the step S11 is performed by preprocessing the acquired busbar image and performing YOLOV4 model training, and comprises:
S111, acquiring a bus image by using a linear array camera or an area array camera arranged on the roof of the vehicle;
S112, rotating, flipping and scaling the acquired bus image to augment the bus image data;
S113, labeling the bus image by using a picture labeling tool, and dividing the labeled bus image into a training set and a verification set by a K-fold cross-validation method;
S114, training on the training set to obtain a trained YOLOV4 model.
4. The rigid busbar defect detecting method according to claim 3, wherein: the step S12 of verifying the trained YOLOV4 model to determine whether the YOLOV4 model needs to be retrained includes:
verifying the positioning accuracy of the trained Yolov4 model through the verification set;
if the expected effect is not achieved, repeating the step S11;
if the expected effect is achieved, the YOLOV4 model training is complete.
5. The rigid busbar defect detection method of claim 1, wherein the smoothing and filtering process comprises:
sorting the local pixels of the bus region-of-interest image, calculating the gray value of each pixel point in the local window, selecting the median as the gray value of the current pixel, and removing noise interference.
6. The rigid busbar defect inspection method of claim 5, wherein the busbar characteristics include at least busbar bending deformation characteristics and busbar scratching characteristics.
7. The rigid busbar defect detecting method according to claim 6,
the busbar bending deformation feature comprising: bus profile information;
the bus bar bending deformation detection includes: calculating to obtain a bus bar related line by using the bus bar outline information; fitting the busbar line to obtain a busbar fitting straight line; and comparing the maximum distance from the point on the bus bar line to the bus bar fitting straight line with a preset distance threshold value, and judging whether the bus bar is bent and deformed.
8. The rigid bus bar defect detection method of claim 7, wherein the bus bar edge contour information is obtained by calculating a maximum connected region of input pixels using a Laplacian edge detection algorithm.
9. The rigid bus bar defect detection method according to claim 8, wherein a Hough line detection algorithm is used for extracting bus bar lines from the bus bar outline information, and interference lines are screened according to the length and the angle of the bus bar lines to obtain bus bar straight lines; and the bus bar straight lines are fitted by a weighted least square method to obtain a bus bar fitting straight line.
10. The rigid busbar defect detecting method according to claim 8, wherein when a maximum distance from a point on the busbar line to the busbar fitting straight line is greater than or equal to a preset distance threshold, it is determined that the busbar is bent.
11. The rigid busbar defect inspection method of claim 6, wherein the busbar scratch detection comprises:
based on the trained YOLOV4 model, extracting and obtaining the bus scratch characteristics by combining an STN spatial domain attention mechanism; training a target detection model through the labeled training samples; and positioning and classifying the bus scratch characteristics through a trained target detection model.
12. The rigid busbar defect detecting method according to claim 11, wherein the extracting busbar scratching features through the trained YOLOV4 model and STN spatial domain attention mechanism comprises:
performing convolution and pooling on the characteristic diagram by using the YOLOV4 model to generate a new bus characteristic diagram theta;
carrying out batch normalization processing on the gray value data of each pixel of the bus characteristic diagram theta to enable the pixel values of the image to be distributed in the range of [0, 1];
applying the activation function ReLU to make the YOLOV4 model sparse;
carrying out re-turning and cutting processing on the bus characteristic diagram theta by using an affine transformation tool so as to improve the richness of a training data set;
by using a bilinear interpolation algorithm, the whole Yolov4 model can be subjected to back propagation training end to end, and the model weight is continuously updated in an iterative manner to obtain the optimal solution of the weight.
13. A rigid busbar defect detecting device suitable for the rigid busbar defect detecting method according to any one of claims 1 to 12, comprising an image acquisition module, an image processing module, and a busbar defect detecting module; the bus bar defect detection module comprises a bus bar bending deformation detection module and a bus bar scratch detection module;
the image acquisition module at least comprises a linear array camera and/or an area array camera, and is used for acquiring a bus bar image;
the image processing module is used for processing the bus bar image acquired by the image acquisition module so as to position the bus bar and obtain bus bar characteristics;
the bus bar bending deformation detection module is used for calculating and detecting the bus bar bending deformation according to the bus bar characteristics;
the bus scratch detection module comprises a YOLOV4+ STN target detection model and is used for detecting bus scratches according to the positioned bus and a bus characteristic diagram.
CN202210367179.9A 2022-04-08 2022-04-08 Rigid busbar defect detection method and device Pending CN114897778A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210367179.9A CN114897778A (en) 2022-04-08 2022-04-08 Rigid busbar defect detection method and device


Publications (1)

Publication Number Publication Date
CN114897778A true CN114897778A (en) 2022-08-12

Family

ID=82715430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210367179.9A Pending CN114897778A (en) 2022-04-08 2022-04-08 Rigid busbar defect detection method and device

Country Status (1)

Country Link
CN (1) CN114897778A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116703891A (en) * 2023-07-31 2023-09-05 苏州精控能源科技有限公司 Welding detection method and device for cylindrical lithium battery busbar
CN116703891B (en) * 2023-07-31 2023-10-10 苏州精控能源科技有限公司 Welding detection method and device for cylindrical lithium battery busbar


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination