CN111598846A - Rail defect detection method in tunnel based on YOLO - Google Patents

Rail defect detection method in tunnel based on YOLO Download PDF

Info

Publication number
CN111598846A
CN111598846A CN202010340381.3A CN202010340381A CN111598846A CN 111598846 A CN111598846 A CN 111598846A CN 202010340381 A CN202010340381 A CN 202010340381A CN 111598846 A CN111598846 A CN 111598846A
Authority
CN
China
Prior art keywords
image
rail
tunnel
defect
unmanned aerial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010340381.3A
Other languages
Chinese (zh)
Other versions
CN111598846B (en
Inventor
楚红雨
王阳
王亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunshan Kunpeng Uav Technology Co ltd
Suzhou Mingyi Think Tank Information Technology Co ltd
Kunpad Communication Kunshan Co ltd
Original Assignee
Kunshan Kunpeng Uav Technology Co ltd
Suzhou Mingyi Think Tank Information Technology Co ltd
Kunpad Communication Kunshan Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunshan Kunpeng Uav Technology Co ltd, Suzhou Mingyi Think Tank Information Technology Co ltd, Kunpad Communication Kunshan Co ltd filed Critical Kunshan Kunpeng Uav Technology Co ltd
Priority to CN202010340381.3A priority Critical patent/CN111598846B/en
Publication of CN111598846A publication Critical patent/CN111598846A/en
Application granted granted Critical
Publication of CN111598846B publication Critical patent/CN111598846B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/94Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20032Median filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method for detecting surface defects of rails in a tunnel based on YOLO comprises the following steps: 1) the unmanned aerial vehicle with the image acquisition and autonomous positioning functions takes a tunnel inlet as an initial point and a tunnel outlet as an end point; entering a tunnel to inspect the rail; 2) acquiring real-time image information of the surface of the rail, detecting the surface defects of the rail in real time, and entering the step 3 if the surface defects are detected; otherwise, repeating the step 2) until the end position is reached; 3) and storing the current picture, marking the defect position in the picture, and simultaneously recording the defect type and the current unmanned aerial vehicle position information. The invention can improve the detection efficiency and detection precision of the rail defects in the tunnel.

Description

Rail defect detection method in tunnel based on YOLO
Technical Field
The invention relates to the field of defect detection and the field of unmanned aerial vehicles, in particular to a method for detecting surface defects of rails in a tunnel based on YOLO.
Background
Transportation is now becoming a part of our lives, particularly rail transportation. China railway transportation is in a rapid development stage, and the surface of a rail is a weak link in the safe operation of the train at present. Aiming at the surface defect types of the rails, the defects mainly include the hidden troubles that the normal use of the rails is influenced and the safe running of trains is threatened, such as abrasion, cracks, indentation, stripping and the like.
Traditional tunnel rail mode of patrolling and examining is mainly patrolled and examined through the manual work, but this mode has many potential safety hazards, also leads to missing the detection false retrieval condition serious because of light is dim in the tunnel when consuming a large amount of manpowers. Therefore, an effective method is to use an unmanned aerial vehicle to detect the surface defect condition of the rail in the tunnel. The method comprises the steps of establishing a rail surface defect data set, identifying rail surface defects based on a YOLO frame model, detecting in real time through unmanned aerial vehicle inspection, and improving detection precision of small rail defects by using an improved loss function.
Disclosure of Invention
The invention aims to solve the technical problem of providing a method for detecting the surface defects of the rail in the tunnel by the unmanned aerial vehicle, which can improve the detection efficiency and the detection precision.
In order to solve the technical problem, the technical scheme of the invention is a method for detecting the surface defects of the rails in the tunnel based on the YOLO, which comprises the following steps:
1) the unmanned aerial vehicle with the image acquisition and autonomous positioning functions takes a tunnel inlet as an initial point and a tunnel outlet as an end point; entering a tunnel to inspect the rail;
2) acquiring real-time image information of the surface of the rail, detecting the surface defects of the rail in real time, and entering the step 3 if the surface defects are detected; otherwise, repeating the step 2) until the end position is reached;
3) and storing the current picture, marking the defect position in the picture, and simultaneously recording the defect type and the current unmanned aerial vehicle position information.
The unmanned aerial vehicle establishes a tunnel inspection map by using a laser radar in cooperation with a SLAM algorithm, and performs real-time positioning in inspection work.
Carry on 3 controllable array light sources in unmanned aerial vehicle organism below, carry out the light filling to the not enough condition of illumination in the tunnel, light source illumination intensity L regulation formula is:
L=L0+u
Figure BDA0002468230720000021
wherein: the illumination intensity of the light source is controlled as follows:
in natural light conditions, the unmanned aerial vehicle shoots the initial gray mean value of the rail surface
Figure BDA0002468230720000022
Mean gray level h, L of acquired images in tunnel environment0Setting the initial illumination intensity as 100Lux, u ∈ [ -10,10 [ ]],KpAnd KdProportional and differential coefficients, respectively.
The speed v when the unmanned aerial vehicle does not detect the defect is as follows:
Figure BDA0002468230720000023
wherein: v. of0To initial velocity, KvFor adjusting the coefficient, the value range is [0,0.5 ]];
Figure BDA0002468230720000024
Is the average detection frame rate f of the algorithm in ideal operationqThe frame rate is detected in real time in the inspection process.
In step 2), the detection steps for the acquired defect image are as follows:
a1, collected image preprocessing: acquiring a defect image of the rail surface, and preprocessing the defect image, wherein the preprocessing comprises rail positioning, image enhancement and image denoising, target information is protected, interference of a non-rail area and noise is eliminated, and image contrast is enhanced;
a2, obtaining image tensor values: taking the preprocessed defect image as an input image, performing feature extraction in a network to obtain feature maps of three scales, and obtaining a tensor value through up-sampling;
a3, prediction of target bounding box: obtaining label data in the image by conversion, including the central coordinate value bx,byWidth and height values bw,bhAnd the category, predicting the target boundary box;
a4, calculating a loss function, wherein the loss function mainly comprises three parts: a loss of target localization offset, a loss of target confidence, and a loss of target classification.
In the image preprocessing in step a 1: the rail positioning adopts an OTSU algorithm to perform image segmentation to obtain a complete rail surface area image; the image enhancement adopts a local contrast method to realize the contrast enhancement of the image; the image denoising adopts multi-level median filtering denoising, and tiny structures on the rail are protected while noise is suppressed.
The image contrast D (x, y) in image enhancement is calculated as follows:
Figure BDA0002468230720000031
h (x, y) is the gray value of the pixel point (x, y) in the image, hBIs the mean of the gray levels in the neighborhood B of pixel (x, y), where region B is defined as the linear region of 1 × 150 centered on pixel (x, y).
The three feature map scales obtained in step a2 are 13 × 13, 26 × 26, and 52 × 52, respectively, and the feature maps of the three scales are combined to be final output, and three tensor values are obtained, which are (S,3,13/26/52,13/26/52,3 × W + C + L), respectively, where S denotes a feature map grid size, W denotes a central coordinate value and a width-height value of a predicted result, C denotes a confidence of a prediction box, and L denotes a probability of predicting a category at the feature point.
The conversion process in step a3 is: the ratio of the predicted boundary box center to the grid upper left corner coordinate and the grid side length is respectively set as txAnd tyThe activation function adopts a Sigmoid function,
Figure BDA0002468230720000032
the coordinates of the center point of the bounding box (b)x,by) Width bwHigh b ishAnd confidence bcAre respectively as
bx=σ(tx)+cx
by=σ(ty)+cy
Figure BDA0002468230720000041
Figure BDA0002468230720000042
bc=σ(tc)
The final predicted bounding box output is denoted as b ═ b (b)x,by,bw,bh,bc)T(ii) a Wherein c isx,cyFor a predetermined prediction network, pw,phIs a preset anchor value; confidence σ (t)c) By including the probability p of the objectr(class) and bezel accuracy
Figure BDA0002468230720000043
Is composed of two parts, i.e.
Figure BDA0002468230720000044
Wherein
Figure BDA0002468230720000045
Is the intersection ratio of the prediction frame and the real frame; and setting a threshold value, and processing the obtained multiple prediction rectangular frames through a non-maximum suppression algorithm to finally obtain the most reliable rectangular frame.
The derivative form of the loss function used in step a4 is:
Figure BDA0002468230720000046
where σ ∈ (-1, 1). For larger errors, the loss function can be appropriately reduced, so that when the gradient is propagated to the Sigmoid function, the convergence speed at the initial time can be increased, and the influence of gradient disappearance can be reduced. Meanwhile, when the error approaches to 0, the adjustment range of the weight of the output layer can be smaller, and the model can be better converged.
The loss function has better follow-up property to error change, can change along with the change of the error, has the properties of large gradient when the error is large and small gradient when the error is small, and can adjust the weight according to the change of the error so as to ensure that the network model is better in convergence. By using the improved loss function, the detection speed and the detection precision of the network in the process of detecting the small defects of the rails are improved.
Compared with the prior art, the invention has the following excellent effects:
use unmanned aerial vehicle as patrolling and examining the main part, avoided a great deal of drawback of traditional detection method, reduce the human cost when artifical the detection, avoid the security threat to the detection personnel of the fault that probably appears in the tunnel.
The method is based on a YOLO framework, image preprocessing is achieved through a local contrast method and multi-level median filtering, and small target detection precision is improved through improvement of a loss function. Meanwhile, in the inspection process, the flight speed of the unmanned aerial vehicle can be adjusted according to the defect detection rate, so that the optimal detection result is obtained.
Drawings
Fig. 1 is a flowchart of the unmanned aerial vehicle tunnel inspection work of the present invention.
Fig. 2 is a flow chart of rail defect detection according to the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate the understanding of the present invention by those skilled in the art, but it should be understood that the present invention is not limited to the scope of the embodiments, and it will be apparent to those skilled in the art that various changes may be made without departing from the spirit and scope of the invention as defined and defined in the appended claims, and all matters produced by the invention using the inventive concept are protected.
First, a high definition camera and a laser radar are mounted on an unmanned aerial vehicle. Because the illumination effect in the tunnel may be relatively poor, therefore, carry on 3 x 3 controllable array light sources and can carry out the light filling according to the illumination condition in the tunnel in the organism below. Carry out autonomic location to unmanned aerial vehicle through laser radar, high definition digtal camera shoots the rail surface in the tunnel to acquire rail surface condition information.
Before beginning to patrol and examine, set for and patrol and examine initial position and ending position, initial position is tunnel entrance, and ending position is tunnel exit. The unmanned aerial vehicle takes off from the initial point and enters the tunnel environment to start the inspection work. After the unmanned aerial vehicle arrives at the tunnel exit, the inspection work is finished, and the unmanned aerial vehicle lands on the designated landing platform.
The unmanned aerial vehicle is used as a polling carrier of the detection method, a tunnel polling map is constructed by using a laser radar in cooperation with a SLAM algorithm, and the tunnel polling map is positioned in real time during polling work. The position information of the unmanned aerial vehicle is acquired by matching the laser radar with the SLAM algorithm, and the technology is a mature technology in the use of the unmanned aerial vehicle and is not repeated here.
Unmanned aerial vehicle patrols and examines the in-process and uses high definition camera to shoot the rail surface, real-time detection rail surface defect. And when the illuminance in the tunnel is insufficient, supplementing light through the controllable array light source.
Under the condition of natural light, the high-definition camera shoots the rail image and calculates the mean value of the initial gray level of the rail image
Figure BDA0002468230720000061
After entering the tunnel environment, the gray level mean value h of the collected image is calculated in real time to realize the light source lightAnd (5) controlling illumination intensity. The light source illumination intensity L is regulated by the formula:
L=L0+u (1)
Figure BDA0002468230720000062
wherein L is0Setting the initial illumination intensity as 100Lux, u ∈ [ -10,10 [ ]],KpAnd KdProportional and differential coefficients, respectively.
The rail defects are mainly present on the rail surface, and the defect types comprise rail fracture, surface crack, corrosion and the like. Rail surface defects can be largely classified into crack defects and scar defects according to shape. The crack defect has a slender shape and a large difference between length and width, and is similar to a strip. The length and width of the scar defect are relatively close, and are similar to a circular shape or an elliptical shape. The method comprises the steps of collecting various defects on the surface of a rail in advance, classifying according to the defects, manufacturing a rail defect image data set, comparing a shot defect image with the defect image data set during inspection, and determining whether the shot defect image is a defect. When the image is shot and the defect is judged, the unmanned aerial vehicle decelerates and hovers to shoot a clearer defect image. And storing the shot clear defect picture, marking the defect position in the shot picture, and simultaneously recording the defect type and the current unmanned aerial vehicle position information. When the defect is not detected, the unmanned aerial vehicle continuously keeps the inspection flight state.
The unmanned aerial vehicle patrols and examines that flying speed receives the detection rate influence to unmanned aerial vehicle slows down and hovers when detecting rail surface defect. The velocity v at which no defect is detected is:
Figure BDA0002468230720000063
wherein v is0Indicating the initial speed, K, at which the drone starts to operatevFor adjusting the coefficient, the value range is [0,0.5 ]]。
Figure BDA0002468230720000064
Is the average of the algorithm when it runs under ideal conditionsDetection frame rate, fqThe frame rate is detected in real time in the inspection process. When a rail surface defect is detected, the drone hovers to acquire a clearer image of the defect.
Unmanned aerial vehicle high definition digtal camera and horizontal plane contained angle
Figure BDA0002468230720000065
Influenced by the flight speed of the unmanned aerial vehicle, and the range of motion is 30 degrees and 40 degrees]。
Figure BDA0002468230720000071
Figure BDA0002468230720000072
The initial included angle between the camera and the horizontal plane is set to be 40 degrees,
Figure BDA0002468230720000074
is a set constant. The included angle corresponds the change along with unmanned aerial vehicle flying speed change, and when speed increased, the included angle reduces in order to acquire higher look-ahead field of vision.
When the unmanned aerial vehicle patrols and examines the rail surface defect, the detection step of surface defect is as follows:
a1, image acquisition and preprocessing: the high-definition camera is used for collecting the image of the rail defect, and the rail surface defect can be mainly divided into a crack defect and a scar defect according to the shape. The crack defect has a slender shape and a large difference between length and width, and is similar to a strip. The length and width of the scar defect are relatively close, and are similar to a circular shape or an elliptical shape. And preprocessing the collected image, including image enhancement and image denoising, protecting target information, eliminating the interference of a non-rail area and noise, and enhancing the image contrast.
Image preprocessing: the image preprocessing comprises rail positioning, image enhancement and image denoising. And (4) carrying out image segmentation on the rail by adopting an OTSU algorithm to obtain a complete rail surface area image. The image enhancement adopts a local contrast method to carry out contrast enhancement, and the calculation formula of the image contrast D (x, y) is as follows:
Figure BDA0002468230720000073
h (x, y) is the gray value of the pixel point (x, y) in the image, hBIs the mean of the gray levels in a neighborhood B of the pixel (x, y), where the region B is defined as a linear region of 1 × 150 centered on the pixel (x, y) the gray contrast D (x, y) obtained by the algorithm is remapped to [0,255 []And (5) realizing image enhancement in a gray scale interval. And the multistage median filtering is used as an image denoising technology, so that the tiny structures on the rail are protected while noise is suppressed.
A2, image feature extraction: inputting the preprocessed image serving as an input image into a Darknet53 network for feature extraction to obtain feature maps of three scales, wherein the obtained feature map scales are respectively 13 × 13, 26 × 26 and 52 × 52, the feature maps of the three scales are combined to be finally output, and three tensor values are obtained, wherein the three tensor values are respectively (S,3,13/26/52,13/26/52 and 3 (W + C + L)), S represents a feature map grid size, W represents a central coordinate value and a width and height value of a predicted result, C represents a confidence coefficient of a prediction frame, and L represents a probability of predicting a category on the feature point.
A3, prediction of target bounding box: acquiring label data in the image, including a central coordinate value bx,byWidth and height values bw,bhAnd a category for predicting the target bounding box.
The ratio of the predicted boundary box center to the grid upper left corner coordinate and the grid side length is respectively set as txAnd tyThe activation function adopts a Sigmoid function,
Figure BDA0002468230720000081
the coordinates of the center point of the bounding box (b)x,by) Width bwHigh b ishAnd confidence bcAre respectively as
Figure BDA0002468230720000082
The final predicted bounding box output is denoted as b ═ b (b)x,by,bw,bh,bc)T. Wherein c isx,cyFor a predetermined prediction network, pw,phIs a preset anchor value. Confidence σ (t)c) By including the probability p of the objectr(class) and bezel accuracy
Figure BDA0002468230720000083
Is composed of two parts, i.e.
Figure BDA0002468230720000084
Wherein
Figure BDA0002468230720000085
Is the intersection ratio of the prediction box and the real box. And setting a threshold value, and processing the obtained multiple prediction rectangular frames through a non-maximum suppression algorithm to finally obtain the most reliable rectangular frame.
A4, calculating a loss function, wherein the loss function mainly comprises three parts: a loss of target localization offset, a loss of target confidence, and a loss of target classification. Compared with the traditional method, the loss function in the method adopts a new loss function, and the change of the continuous variable can be better calculated.
The derivative of the loss function used is of the form:
Figure BDA0002468230720000086
where σ ∈ (-1,1), for larger errors, the loss function may appropriately reduce it, so that when the gradient propagates to the Sigmoid function, the convergence speed at the beginning can be increased, and the influence of gradient disappearance can be reduced. Meanwhile, when the error approaches to 0, the adjustment range of the weight of the output layer can be smaller, and the model can be better converged.
The loss function has better follow-up property to error change, can change along with the change of the error, has the properties of large gradient when the error is large and small gradient when the error is small, and can adjust the weight according to the change of the error so as to ensure that the network model is better in convergence. By using the improved loss function, the detection speed and the detection precision of the network in the process of detecting the small defects of the rails are improved.
And A5, comparing the processed image with a rail defect image data set acquired in advance, determining the defect type, and marking the defect type.
The rail defect detection method has the advantages that: according to the method for detecting the surface defects of the rails in the tunnel based on the YOLO, the unmanned aerial vehicle is used as a routing inspection main body, so that various defects of a traditional detection method are avoided, the labor cost during manual detection is reduced, and the safety threat of possible faults in the tunnel to detection personnel is avoided.
The method is based on a YOLO frame, image preprocessing is achieved by using a local contrast method and multi-stage median filtering, and small target detection precision is improved by improving a loss function. Meanwhile, in the inspection process, the flight speed of the unmanned aerial vehicle can be adjusted according to the defect detection rate, so that the optimal detection result is obtained.

Claims (10)

1. A method for detecting surface defects of rails in a tunnel based on YOLO comprises the following steps:
1) the unmanned aerial vehicle with the image acquisition and autonomous positioning functions takes a tunnel inlet as an initial point and a tunnel outlet as an end point; entering a tunnel to inspect the rail;
2) acquiring real-time image information of the surface of the rail, detecting the surface defects of the rail in real time, and entering the step 3 if the surface defects are detected; otherwise, repeating the step 2) until the end position is reached;
3) and storing the current picture, marking the defect position in the picture, and simultaneously recording the defect type and the current unmanned aerial vehicle position information.
2. The YOLO-based method for detecting surface defects of rails in tunnels according to claim 1, wherein the unmanned aerial vehicle in step 1) uses a laser radar to cooperate with a SLAM algorithm to construct a tunnel inspection map and performs real-time positioning during inspection work.
3. The YOLO-based method for detecting surface defects of rails in tunnels as claimed in claim 1, wherein a controllable array light source is mounted below the unmanned aerial vehicle body for supplementing light when the illumination in the tunnel is insufficient, and the adjustment formula of the light intensity L of the light source is as follows:
L=L0+u
Figure FDA0002468230710000011
wherein: the illumination intensity of the light source is controlled as follows:
in natural light conditions, the unmanned aerial vehicle shoots the initial gray mean value of the rail surface
Figure FDA0002468230710000012
Mean gray level h, L of acquired images in tunnel environment0Setting the initial illumination intensity as 100Lux, u ∈ [ -10,10 [ ]],KpAnd KdProportional and differential coefficients, respectively.
4. The YOLO-based method for detecting surface defects of rails in tunnels according to claim 1, wherein the speed v when no defect is detected by the drone is:
Figure FDA0002468230710000013
wherein: v. of0To initial velocity, KvFor adjusting the coefficient, the value range is [0,0.5 ]];
Figure FDA0002468230710000021
Is the average detection frame rate f of the algorithm in ideal operationqThe frame rate is detected in real time in the inspection process.
5. The YOLO-based method for detecting surface defects of rails in tunnels as claimed in claim 1, wherein in step 2), the detection steps for the acquired defect images are as follows:
a1, collected image preprocessing: acquiring a defect image of the rail surface, and preprocessing a current video frame image, wherein the preprocessing comprises rail positioning, image enhancement and image denoising, target information is protected, interference of a non-rail area and noise is eliminated, and image contrast is enhanced;
a2, image feature extraction: taking the preprocessed defect image as an input image, performing feature extraction in a network to obtain feature maps of three scales, and obtaining a tensor value through up-sampling;
a3, prediction of target bounding box: obtaining label data in the image by conversion, including the central coordinate value bx,byWidth and height values bw,bhAnd the category, predicting the target boundary box;
a4, loss regression: and (3) calculating a loss function, wherein the loss function mainly comprises three parts: a loss of target localization offset, a loss of target confidence, and a loss of target classification.
6. The method of claim 5, wherein in the pre-processing of the images in step A1: the rail positioning adopts an OTSU algorithm to perform image segmentation to obtain a complete rail surface area image; the image enhancement adopts a local contrast method to realize the contrast enhancement of the image; the image denoising adopts multi-level median filtering denoising, and tiny structures on the rail are protected while noise is suppressed.
7. The YOLO-based in-tunnel rail surface defect detection method of claim 6, wherein the image contrast D (x, y) in the image enhancement is calculated as follows:
Figure FDA0002468230710000022
h (x, y) is the gray value of the pixel point (x, y) in the image, hBIs the mean of the gray levels in the neighborhood B of pixel (x, y), where region B is defined as the linear region of 1 × 150 centered on pixel (x, y).
8. The YOLO-based method for detecting surface defects of railway rails inside tunnels according to claim 5, wherein the three feature maps obtained in step a2 are 13 × 13, 26 × 26 and 52 × 52, and the three feature maps are combined to be the final output and three tensor values are obtained, respectively (S,3,13/26/52,13/26/52,3 × W + C + L), where S represents the grid size of the feature map, W represents the central coordinate value and the width and height values of the predicted prediction results, C represents the confidence of the prediction box, and L represents the probability of predicting the category at the feature point.
9. The method of claim 5, wherein the transformation process in step A3 is as follows: the ratio of the predicted boundary box center to the grid upper left corner coordinate and the grid side length is respectively set as txAnd tyThe activation function adopts a Sigmoid function,
Figure FDA0002468230710000031
the coordinates of the center point of the bounding box (b)x,by) Width bwHigh b ishAnd confidence bcAre respectively as
bx=σ(tx)+cx
by=σ(ty)+cy
Figure FDA0002468230710000032
Figure FDA0002468230710000033
bc=σ(tc)
The final predicted bounding box output is denoted as b ═ b (b)x,by,bw,bh,bc)T(ii) a Wherein c isx,cyFor a predetermined prediction network, pw,phIs a preset anchor value; confidence σ (t)c) By including the probability p of the objectr(class) and bezel accuracy
Figure FDA0002468230710000034
Is composed of two parts, i.e.
Figure FDA0002468230710000035
Wherein
Figure FDA0002468230710000036
Is the intersection ratio of the prediction frame and the real frame; and setting a threshold value, and processing the obtained multiple prediction rectangular frames through a non-maximum suppression algorithm to finally obtain the most reliable rectangular frame.
10. The method of claim 5, wherein the derivative of the loss function used in step A4 is in the form of:
Figure FDA0002468230710000041
where σ ∈ (-1, 1).
CN202010340381.3A 2020-04-26 2020-04-26 Method for detecting rail defects in tunnel based on YOLO Active CN111598846B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010340381.3A CN111598846B (en) 2020-04-26 2020-04-26 Method for detecting rail defects in tunnel based on YOLO

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010340381.3A CN111598846B (en) 2020-04-26 2020-04-26 Method for detecting rail defects in tunnel based on YOLO

Publications (2)

Publication Number Publication Date
CN111598846A true CN111598846A (en) 2020-08-28
CN111598846B CN111598846B (en) 2024-01-05

Family

ID=72190747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010340381.3A Active CN111598846B (en) 2020-04-26 2020-04-26 Method for detecting rail defects in tunnel based on YOLO

Country Status (1)

Country Link
CN (1) CN111598846B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113358665A (en) * 2021-05-25 2021-09-07 同济大学 Unmanned aerial vehicle tunnel defect detection method and system
CN113449617A (en) * 2021-06-17 2021-09-28 广州忘平信息科技有限公司 Track safety detection method, system, device and storage medium
CN113781480A (en) * 2021-11-10 2021-12-10 南京未来网络产业创新有限公司 Steel rail surface detection method and system based on machine vision
CN115963397A (en) * 2022-12-01 2023-04-14 华中科技大学 Rapid online detection method and device for surface defects of inner contour of motor stator

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109084735A (en) * 2018-08-29 2018-12-25 北京航空航天大学 A kind of tunnel monitoring abnormal state method and unmanned plane device
WO2019104767A1 (en) * 2017-11-28 2019-06-06 河海大学常州校区 Fabric defect detection method based on deep convolutional neural network and visual saliency
CN110689531A (en) * 2019-09-23 2020-01-14 云南电网有限责任公司电力科学研究院 Automatic power transmission line machine inspection image defect identification method based on yolo

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019104767A1 (en) * 2017-11-28 2019-06-06 河海大学常州校区 Fabric defect detection method based on deep convolutional neural network and visual saliency
CN109084735A (en) * 2018-08-29 2018-12-25 北京航空航天大学 A kind of tunnel monitoring abnormal state method and unmanned plane device
CN110689531A (en) * 2019-09-23 2020-01-14 云南电网有限责任公司电力科学研究院 Automatic power transmission line machine inspection image defect identification method based on yolo

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
吕铄等: "基于改进损失函数的YOLOv3网络", 计算机系统应用 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113358665A (en) * 2021-05-25 2021-09-07 同济大学 Unmanned aerial vehicle tunnel defect detection method and system
CN113449617A (en) * 2021-06-17 2021-09-28 广州忘平信息科技有限公司 Track safety detection method, system, device and storage medium
CN113781480A (en) * 2021-11-10 2021-12-10 南京未来网络产业创新有限公司 Steel rail surface detection method and system based on machine vision
CN115963397A (en) * 2022-12-01 2023-04-14 华中科技大学 Rapid online detection method and device for surface defects of inner contour of motor stator
CN115963397B (en) * 2022-12-01 2023-07-25 华中科技大学 Rapid online detection method and device for surface defects of inner contour of motor stator

Also Published As

Publication number Publication date
CN111598846B (en) 2024-01-05

Similar Documents

Publication Publication Date Title
CN111598846B (en) Method for detecting rail defects in tunnel based on YOLO
Banić et al. Intelligent machine vision based railway infrastructure inspection and monitoring using UAV
CN110532889B (en) Track foreign matter detection method based on rotor unmanned aerial vehicle and YOLOv3
US10290219B2 (en) Machine vision-based method and system for aircraft docking guidance and aircraft type identification
CN112348034A (en) Crane defect detection system based on unmanned aerial vehicle image recognition and working method
WO2016015546A1 (en) System and method for aircraft docking guidance and aircraft type identification
CN110310255B (en) Point switch notch detection method based on target detection and image processing
CN111080611A (en) Railway wagon bolster spring fracture fault image identification method
CN111080617B (en) Railway wagon brake beam pillar round pin loss fault identification method
CN116757990A (en) Railway fastener defect online detection and identification method based on machine vision
CN112419289B (en) Intelligent detection method for urban subway rail fastener defects
CN102073852A (en) Multiple vehicle segmentation method based on optimum threshold values and random labeling method for multiple vehicles
CN110514133A (en) It is a kind of based on photogrammetric unmanned plane tunnel deformation detection method
Zhao et al. Image-based comprehensive maintenance and inspection method for bridges using deep learning
CN115482195A (en) Train part deformation detection method based on three-dimensional point cloud
CN112613509A (en) Railway wagon carriage number identification snapshot method and system
CN112508911A (en) Rail joint touch net suspension support component crack detection system based on inspection robot and detection method thereof
CN112884753A (en) Track fastener detection and classification method based on convolutional neural network
CN113313107A (en) Intelligent detection and identification method for multiple types of diseases on cable surface of cable-stayed bridge
WO2022247597A1 (en) Papi flight inspection method and system based on unmanned aerial vehicle
CN113066050A (en) Method for resolving course attitude of airdrop cargo bed based on vision
CN111127381B (en) Non-parallel detection method for pantograph slide plate
CN113673614B (en) Metro tunnel foreign matter intrusion detection device and method based on machine vision
CN112508893A (en) Machine vision-based method and system for detecting tiny foreign matters between two railway tracks
CN107341455A (en) A kind of detection method and detection means to the region multiple features of exotic on night airfield runway road surface

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant