CN111598846B - Method for detecting rail defects in tunnel based on YOLO - Google Patents


Info

Publication number
CN111598846B
CN111598846B (application CN202010340381.3A)
Authority
CN
China
Prior art keywords
image
tunnel
rail
defect
unmanned aerial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010340381.3A
Other languages
Chinese (zh)
Other versions
CN111598846A (en)
Inventor
楚红雨
王阳
王亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunshan Kunpeng Uav Technology Co ltd
Suzhou Mingyi Think Tank Information Technology Co ltd
Kunpad Communication Kunshan Co ltd
Original Assignee
Kunshan Kunpeng Uav Technology Co ltd
Suzhou Mingyi Think Tank Information Technology Co ltd
Kunpad Communication Kunshan Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunshan Kunpeng Uav Technology Co ltd, Suzhou Mingyi Think Tank Information Technology Co ltd, Kunpad Communication Kunshan Co ltd filed Critical Kunshan Kunpeng Uav Technology Co ltd
Priority to CN202010340381.3A priority Critical patent/CN111598846B/en
Publication of CN111598846A publication Critical patent/CN111598846A/en
Application granted granted Critical
Publication of CN111598846B publication Critical patent/CN111598846B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415: Classification techniques relating to the classification model, based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06T 5/70
    • G06T 5/94
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20024: Filtering details
    • G06T 2207/20032: Median filtering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30168: Image quality inspection

Abstract

A method for detecting rail surface defects in a tunnel based on YOLO comprises the following steps: 1) an unmanned aerial vehicle with image-acquisition and autonomous-positioning functions takes the tunnel entrance as the start point and the tunnel exit as the end point, and enters the tunnel along the railway track to carry out inspection; 2) real-time image information of the rail surface is collected and rail surface defects are detected in real time; if a surface defect is detected, the method proceeds to step 3), otherwise step 2) is repeated until the end position is reached; 3) the current picture is stored, the defect position is marked in the picture, and the defect type and the current position information of the unmanned aerial vehicle are recorded. The invention improves both the efficiency and the precision of detecting rail defects in tunnels.

Description

Method for detecting rail defects in tunnel based on YOLO
Technical Field
The invention relates to the fields of defect detection and unmanned aerial vehicles, and in particular to a method for detecting rail surface defects in a tunnel based on YOLO.
Background
Transportation, and particularly railway transportation, has become part of daily life. Railway transportation in China is in a stage of rapid development, and the rail surface is currently a weak link in safe train operation. Rail surface defects mainly comprise hidden dangers such as abrasion, cracks, indentation and peeling, which impair normal use of the rail and threaten safe train operation.
Traditional tunnel rail inspection is mainly carried out manually, but this mode involves many safety hazards, consumes a large amount of manpower, and suffers from serious missed and false detections because of the darkness inside the tunnel. An effective alternative is therefore to use an unmanned aerial vehicle to detect rail surface defects in the tunnel: a rail surface defect data set is established, rail surface defects are identified with a YOLO framework model, real-time detection is realized through unmanned aerial vehicle inspection, and the detection precision for small rail defects is improved with an improved loss function.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an unmanned aerial vehicle detection method for rail surface defects in a tunnel that improves detection efficiency and detection precision.
To solve this technical problem, the technical scheme of the invention is a YOLO-based method for detecting rail surface defects in a tunnel, comprising the following steps:
1) An unmanned aerial vehicle with image-acquisition and autonomous-positioning functions takes the tunnel entrance as the start point and the tunnel exit as the end point, and enters the tunnel along the railway track to carry out inspection;
2) Real-time image information of the rail surface is collected and rail surface defects are detected in real time; if a surface defect is detected, proceed to step 3); otherwise repeat step 2) until the end position is reached;
3) The current picture is stored, the defect position is marked in the picture, and the defect type and the current position information of the unmanned aerial vehicle are recorded.
The unmanned aerial vehicle constructs a tunnel inspection map using laser radar combined with a SLAM algorithm and performs real-time positioning during inspection.
A 3×3 controllable array light source is mounted below the unmanned aerial vehicle body to supplement light when illumination in the tunnel is insufficient. The light source illumination intensity L is adjusted as:
L = L0 + u
where the illumination intensity of the light source is controlled as follows: under natural light, the unmanned aerial vehicle photographs the rail surface and computes the initial gray-level mean h̄0; h is the gray-level mean of the image acquired in the tunnel environment; L0 is the initial illumination intensity, set to 100 Lux; u ∈ [−10, 10]; and Kp and Kd are the proportional and differential coefficients, respectively.
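The text names proportional and differential coefficients Kp and Kd but does not reproduce the explicit expression for u; a minimal sketch assuming a standard discrete PD law on the gray-level error, with the stated clamp u ∈ [−10, 10] and L0 = 100 Lux (the gain values are illustrative):

```python
def pd_light_adjust(h_ref, h_now, h_prev, kp=0.5, kd=0.1, l0=100.0):
    """PD correction of light intensity from gray-level error (assumed form).

    h_ref  : initial gray-level mean of the rail under natural light (h̄0)
    h_now  : gray-level mean of the current in-tunnel frame (h)
    h_prev : gray-level mean of the previous frame
    Returns the adjusted intensity L = L0 + u with u clamped to [-10, 10].
    """
    e_now = h_ref - h_now                   # current gray-level error
    e_prev = h_ref - h_prev                 # previous gray-level error
    u = kp * e_now + kd * (e_now - e_prev)  # discrete PD correction term
    u = max(-10.0, min(10.0, u))            # the patent bounds u to [-10, 10]
    return l0 + u
```

With the in-tunnel gray mean matching the natural-light reference, the correction is zero and the intensity stays at L0; a dark frame drives u to its positive clamp.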
The speed v when the unmanned aerial vehicle does not detect a defect is:
where v0 is the initial speed; Kv is the adjusting coefficient, with value range [0, 0.5]; f̄ is the average detection frame rate when the algorithm runs ideally; and fq is the real-time detection frame rate during inspection.
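The explicit formula for v is not reproduced in this text; the sketch below assumes a simple linear form in which the speed drops toward (1 − Kv)·v0 as the real-time frame rate fq falls below the ideal rate f̄. Only the variables and the Kv range come from the text; the functional form is an assumption:

```python
def patrol_speed(v0, f_q, f_bar, k_v=0.3):
    """UAV patrol speed when no defect is detected (assumed linear form).

    v0    : initial speed when the UAV starts working
    f_q   : real-time detection frame rate during inspection
    f_bar : average detection frame rate when the algorithm runs ideally
    k_v   : adjusting coefficient, restricted to [0, 0.5] per the text
    """
    assert 0.0 <= k_v <= 0.5
    ratio = min(f_q / f_bar, 1.0)            # never speed up beyond v0
    return v0 * (1.0 - k_v * (1.0 - ratio))  # slow down when detection lags
```

When detection keeps up with the ideal frame rate the UAV flies at v0; when fq halves, the speed drops by at most half of Kv's share.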
In step 2), the detection steps for the acquired defect image are as follows:
A1. Preprocess the acquired image: collect a defect image of the rail surface and preprocess it; preprocessing comprises rail positioning, image enhancement and image denoising, which protect the target information, eliminate interference from non-rail areas and noise, and enhance image contrast;
A2. Obtain the image tensor values: take the preprocessed defect image as the input image, perform feature extraction in the network to obtain feature maps at three scales, and obtain tensor values through up-sampling;
A3. Predict the target bounding box: obtain the label data in the image by conversion, including the center coordinates (bx, by), the width and height (bw, bh) and the category, and predict the target bounding box;
A4. Compute the loss function, which mainly comprises three parts: target localization offset loss, target confidence loss, and target classification loss.
In the image preprocessing of step A1: rail positioning uses the OTSU algorithm for image segmentation to obtain a complete rail surface region image; image enhancement uses the local contrast method; image denoising uses multistage median filtering, which suppresses noise while protecting fine structures on the rail.
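Rail positioning above relies on OTSU segmentation; a self-contained NumPy sketch of OTSU threshold selection follows (the surrounding rail-region extraction and the multistage median filter are not shown, and the function name is illustrative):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level maximizing between-class variance (OTSU)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # class-0 probability up to t
    mu = np.cumsum(prob * np.arange(256))    # class-0 mean times omega
    mu_t = mu[-1]                            # global mean gray level
    # between-class variance sigma_b^2(t); guard against empty classes
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan
    sigma_b = (mu_t * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b))
```

On a bimodal rail/background image the returned threshold separates the two gray-level clusters, which is the property the rail-positioning step exploits.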
The image contrast D(x, y) in image enhancement is calculated as follows:
where h(x, y) is the gray value of pixel (x, y) in the image and hB is the gray-level mean over the neighborhood B of pixel (x, y); the region B is defined as a 1×150 linear region centered on pixel (x, y).
The three feature map scales obtained in step A2 are 13×13, 26×26 and 52×52. The feature maps at the three scales are combined into the final output, giving three tensor values of shape (S, 3, 13/26/52, 13/26/52, 3×(W+C+L)), where S is the feature-map grid size, W denotes the predicted center coordinates and width/height values, C is the confidence of the prediction box, and L is the probability of the predicted category at the feature point.
The conversion process in step A3 is as follows: let tx and ty be the predicted offsets of the bounding-box center relative to the upper-left corner of its grid cell, expressed as a ratio of the grid side length, and tw and th the predicted width and height offsets; the activation function is the Sigmoid function σ.
The center coordinates (bx, by), width bw, height bh and confidence bc of the bounding box are respectively:
bx = σ(tx) + cx
by = σ(ty) + cy
bw = pw·e^(tw)
bh = ph·e^(th)
bc = σ(tc)
The final predicted bounding box is output as b = (bx, by, bw, bh, bc)^T, where (cx, cy) are the preset grid-cell offsets of the prediction network and (pw, ph) are the preset anchor values. The confidence σ(tc) is composed of two parts: the probability pr(object) that the box contains an object and the box accuracy IOU(pred, truth), i.e. σ(tc) = pr(object) × IOU(pred, truth),
where IOU(pred, truth) is the intersection-over-union ratio of the prediction box and the ground-truth box. A threshold is set, the multiple predicted rectangular boxes are processed with a non-maximum suppression algorithm, and the most reliable rectangular box is finally obtained.
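The decoding and box-selection steps above can be sketched with NumPy. The exponential width/height decode (bw = pw·e^tw) is the standard YOLOv3 form and is an assumption here, since the patent text reproduces the center and confidence equations explicitly but not the width/height ones; the greedy NMS is likewise a generic implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_box(t_x, t_y, t_w, t_h, t_c, c_x, c_y, p_w, p_h):
    """Decode one YOLO prediction into (bx, by, bw, bh, bc)."""
    b_x = sigmoid(t_x) + c_x   # center x: grid offset plus sigmoid of t_x
    b_y = sigmoid(t_y) + c_y   # center y
    b_w = p_w * np.exp(t_w)    # width from anchor (standard YOLO form)
    b_h = p_h * np.exp(t_h)    # height from anchor
    b_c = sigmoid(t_c)         # confidence
    return np.array([b_x, b_y, b_w, b_h, b_c])

def iou(a, b):
    """Intersection-over-union of two boxes given as (cx, cy, w, h, ...)."""
    ax1, ay1, ax2, ay2 = a[0]-a[2]/2, a[1]-a[3]/2, a[0]+a[2]/2, a[1]+a[3]/2
    bx1, by1, bx2, by2 = b[0]-b[2]/2, b[1]-b[3]/2, b[0]+b[2]/2, b[1]+b[3]/2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the most reliable boxes."""
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)  # by confidence
    kept = []
    for b in boxes:
        if all(iou(b, k) < iou_thresh for k in kept):
            kept.append(b)
    return kept
```

For zero offsets the decoded center sits at the cell center plus 0.5 and the box takes exactly the anchor size; overlapping duplicates are suppressed in favor of the higher-confidence box.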
The derivative form of the loss function used in step A4 is:
where σ ∈ (−1, 1). For larger errors, the loss function is appropriately reduced, so that when the gradient propagates back to the Sigmoid function the initial convergence speed is increased and the influence of vanishing gradients is reduced. Meanwhile, as the error approaches 0, the adjustment amplitude of the output-layer weights becomes smaller and the model converges better.
The loss function tracks error changes well: it changes with the error, has a large gradient when the error is large and a small gradient when the error is small, and adjusts the weights according to the error so that the network model converges better. Using this improved loss function improves both the detection speed and the detection precision of the network on small rail defects.
Compared with the prior art, the invention has the following beneficial effects:
Using the unmanned aerial vehicle as the inspection platform avoids the various shortcomings of traditional detection methods, reduces the labor cost of manual detection, and keeps inspection personnel away from potential accidents in the tunnel.
The method is based on the YOLO framework, realizes image preprocessing with the local contrast method and multistage median filtering, and improves small-target detection precision through the improved loss function. During inspection, the flying speed of the unmanned aerial vehicle is adjusted according to the defect detection rate to obtain optimal detection results.
Drawings
Fig. 1 is a flow chart of the unmanned aerial vehicle tunnel inspection work.
FIG. 2 is a flow chart of the rail defect detection method of the present invention.
Detailed Description
The following description of embodiments of the present invention is provided so that those skilled in the art can understand the invention; it should be understood, however, that the invention is not limited to the scope of these embodiments, and all inventions that make use of the inventive concept fall within the spirit and scope of the invention as defined in the appended claims.
First, a high-definition camera and a laser radar are mounted on the unmanned aerial vehicle. Because illumination inside the tunnel may be poor, a 3×3 controllable array light source mounted below the body supplements light according to the illumination conditions in the tunnel. The unmanned aerial vehicle positions itself autonomously with the laser radar, and the high-definition camera photographs the rail surface in the tunnel to acquire rail surface condition information.
Before starting inspection, an initial inspection position and an end position are set: the initial position is the tunnel entrance and the end position is the tunnel exit. The unmanned aerial vehicle takes off from the initial point, enters the tunnel environment and begins inspection. After reaching the tunnel exit, inspection ends and the unmanned aerial vehicle lands on the designated landing platform.
The unmanned aerial vehicle serves as the inspection carrier of the detection method; it builds a tunnel inspection map using laser radar combined with a SLAM algorithm and localizes itself in real time during inspection. Obtaining unmanned aerial vehicle position information from laser radar with SLAM is a mature technique in unmanned aerial vehicle applications and is not repeated here.
And shooting the rail surface by using a high-definition camera in the unmanned aerial vehicle inspection process, and detecting the rail surface defects in real time. When the illuminance in the tunnel is insufficient, light supplementing is performed through the controllable array light source.
Under natural light, the high-definition camera photographs rail images and calculates their initial gray-level mean h̄0. After entering the tunnel environment, the gray-level mean h of the acquired images is calculated in real time to control the light source illumination intensity. The light source illumination intensity L is adjusted as:
L = L0 + u (1)
where L0 is the initial illumination intensity, set to 100 Lux; u ∈ [−10, 10]; and Kp and Kd are the proportional and differential coefficients, respectively.
Rail defects mainly occur on rail surfaces; defect categories include rail fracture, surface cracking, corrosion and the like. By shape, rail surface defects can be broadly divided into crack defects and scar defects. Crack defects are slender, with a large length-width difference, resembling a bar. Scar defects have similar length and width, resembling a circle or ellipse. Various rail surface defects are collected and classified in advance to produce a rail defect image data set; during inspection, a captured image is compared against this data set to determine whether it shows a defect. When the captured image is determined to contain a defect, the drone decelerates and hovers to capture a clearer defect image. The clear defect picture is stored, the defect position is marked in it, and the defect type and current unmanned aerial vehicle position information are recorded. When no defect is detected, the unmanned aerial vehicle maintains its inspection flight state.
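Since cracks are distinguished from scars by elongation, the shape rule above can be sketched as a simple heuristic. The aspect-ratio threshold of 3.0 and the function name are illustrative assumptions, not values from the patent, whose actual classification is done by the YOLO network:

```python
def classify_defect_shape(w, h, elongation_thresh=3.0):
    """Rough crack/scar split by aspect ratio (heuristic sketch only).

    Cracks are slender with a large length-width difference; scars have
    similar length and width. The 3.0 threshold is an assumption.
    """
    long_side, short_side = max(w, h), min(w, h)
    if short_side == 0:
        return "crack"  # degenerate box: treat as maximally elongated
    return "crack" if long_side / short_side >= elongation_thresh else "scar"
```

A 100×10 bounding box reads as a bar-like crack, while a 30×25 box reads as a round scar.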
The inspection flying speed of the unmanned aerial vehicle depends on the detection rate, and the vehicle decelerates and hovers when a rail surface defect is detected. The speed v when no defect is detected is:
where v0 is the initial speed when the unmanned aerial vehicle starts working; Kv is the adjusting coefficient, with value range [0, 0.5]; f̄ is the average detection frame rate when the algorithm runs ideally; and fq is the real-time detection frame rate during inspection. When a rail surface defect is detected, the drone hovers to acquire a clearer image of the defect.
The angle between the unmanned aerial vehicle's high-definition camera and the horizontal plane is influenced by the flying speed, with a range of [30°, 40°].
The initial angle between the camera and the horizontal plane is set to 40°, and the adjustment coefficient in the angle formula is a set constant. The angle changes correspondingly with the flying speed of the unmanned aerial vehicle: when the speed increases, the angle decreases to obtain a farther forward view.
When the unmanned aerial vehicle inspects rail surface defects, the surface defect detection steps are as follows:
A1. Image acquisition and preprocessing: the high-definition camera collects images of rail defects. By shape, rail surface defects can be broadly divided into crack defects and scar defects; crack defects are slender with a large length-width difference, resembling a bar, while scar defects have similar length and width, resembling a circle or ellipse. The acquired image is preprocessed, including image enhancement and image denoising, to protect the target information, eliminate interference from non-rail areas and noise, and enhance image contrast.
Image preprocessing comprises rail positioning, image enhancement and image denoising. Rail positioning uses the OTSU algorithm for image segmentation to obtain a complete rail surface region image. Image enhancement uses the local contrast method: the image contrast D(x, y) is calculated from h(x, y), the gray value of pixel (x, y) in the image, and hB, the gray-level mean over the neighborhood B of pixel (x, y), where the region B is defined as a 1×150 linear region centered on pixel (x, y). The gray contrast value D(x, y) obtained by the algorithm is remapped to the [0, 255] gray interval to realize image enhancement. Multistage median filtering is used for image denoising, suppressing noise while protecting fine structures on the rail.
A2. Image feature extraction: the preprocessed image is input into a Darknet-53 network for feature extraction, producing feature maps at three scales: 13×13, 26×26 and 52×52. The feature maps at the three scales are combined into the final output, giving three tensor values of shape (S, 3, 13/26/52, 13/26/52, 3×(W+C+L)), where S is the feature-map grid size, W denotes the predicted center coordinates and width/height values, C is the confidence of the prediction box, and L is the probability of the predicted category at the feature point.
A3. Target bounding box prediction: obtain the label data in the image, including the center coordinates (bx, by), the width and height (bw, bh) and the category, and predict the target bounding box.
Let tx and ty be the predicted offsets of the bounding-box center relative to the upper-left corner of its grid cell, expressed as a ratio of the grid side length, and tw and th the predicted width and height offsets; the activation function is the Sigmoid function σ.
The center coordinates (bx, by), width bw, height bh and confidence bc of the bounding box are respectively:
bx = σ(tx) + cx
by = σ(ty) + cy
bw = pw·e^(tw)
bh = ph·e^(th)
bc = σ(tc)
The final predicted bounding box is output as b = (bx, by, bw, bh, bc)^T, where (cx, cy) are the preset grid-cell offsets of the prediction network and (pw, ph) are the preset anchor values. The confidence σ(tc) is composed of two parts: the probability pr(object) that the box contains an object and the box accuracy IOU(pred, truth), i.e. σ(tc) = pr(object) × IOU(pred, truth),
where IOU(pred, truth) is the intersection-over-union ratio of the prediction box and the ground-truth box. A threshold is set, the multiple predicted rectangular boxes are processed with a non-maximum suppression algorithm, and the most reliable rectangular box is finally obtained.
A4. Loss function calculation: the loss function mainly comprises three parts: target localization offset loss, target confidence loss, and target classification loss. Compared with the traditional method, this method adopts a new loss function that better handles changes in continuous variables.
The loss function derivative used is in the form of:
where σ ∈ (−1, 1). For larger errors, the loss function is appropriately reduced, so that when the gradient propagates back to the Sigmoid function the initial convergence speed is increased and the influence of vanishing gradients is reduced. Meanwhile, as the error approaches 0, the adjustment amplitude of the output-layer weights becomes smaller and the model converges better.
The loss function tracks error changes well: it changes with the error, has a large gradient when the error is large and a small gradient when the error is small, and adjusts the weights according to the error so that the network model converges better. Using this improved loss function improves both the detection speed and the detection precision of the network on small rail defects.
A5. The processed image is compared with the pre-acquired rail defect image data set to determine the defect type, and the defect type is marked.
Advantages of the rail defect detection method of the invention: the YOLO-based method for detecting rail surface defects in a tunnel uses the unmanned aerial vehicle as the inspection platform, avoiding the various shortcomings of traditional detection methods, reducing the labor cost of manual detection, and protecting inspection personnel from safety threats such as deformation inside the tunnel.
The method is based on the YOLO framework, realizes image preprocessing with the local contrast method and multistage median filtering, and improves small-target detection precision through the improved loss function. During inspection, the flying speed of the unmanned aerial vehicle is adjusted according to the defect detection rate to obtain optimal detection results.

Claims (8)

1. A method for detecting rail surface defects in a tunnel based on YOLO comprises the following steps:
1) An unmanned aerial vehicle with image-acquisition and autonomous-positioning functions takes the tunnel entrance as the start point and the tunnel exit as the end point, and enters the tunnel along the railway track to carry out inspection;
2) Real-time image information of the rail surface is collected and rail surface defects are detected in real time; if a surface defect is detected, proceed to step 3); otherwise repeat step 2) until the end position is reached; the speed v when the unmanned aerial vehicle does not detect a defect is:
where v0 is the initial speed; Kv is the adjusting coefficient, with value range [0, 0.5]; f̄ is the average detection frame rate when the algorithm runs ideally; and fq is the real-time detection frame rate during inspection;
3) The current picture is stored, the defect position is marked in the picture, and the defect type and the current position information of the unmanned aerial vehicle are recorded.
2. The YOLO-based method for detecting surface defects of rails in a tunnel according to claim 1, wherein in the step 1), the unmanned aerial vehicle uses a laser radar in combination with a SLAM algorithm to construct a tunnel inspection map, and performs real-time positioning during inspection work.
3. The YOLO-based method for detecting rail surface defects in a tunnel according to claim 1, wherein a controllable array light source is mounted below the unmanned aerial vehicle body to supplement light when illumination in the tunnel is insufficient, and the light source illumination intensity L is adjusted as:
L = L0 + u
where the illumination intensity of the light source is controlled as follows: under natural light, the unmanned aerial vehicle photographs the rail surface and computes the initial gray-level mean h̄0; h is the gray-level mean of the image acquired in the tunnel environment; L0 is the initial illumination intensity, set to 100 Lux; u ∈ [−10, 10]; and Kp and Kd are the proportional and differential coefficients, respectively.
4. The YOLO-based method for detecting rail surface defects in a tunnel according to claim 1, wherein in step 2) the detection steps for the collected defect image are as follows:
A1. Image preprocessing: collect a defect image of the rail surface and preprocess the current video frame image; preprocessing comprises rail positioning, image enhancement and image denoising, which protect the target information, eliminate interference from non-rail areas and noise, and enhance image contrast;
A2. Image feature extraction: take the preprocessed defect image as the input image, perform feature extraction in the network to obtain feature maps at three scales, and obtain tensor values through up-sampling;
A3. Target bounding box prediction: obtain the label data in the image by conversion, including the center coordinates (bx, by), the width and height (bw, bh) and the category, and predict the target bounding box;
A4. Loss regression: compute the loss function, which mainly comprises three parts: target localization offset loss, target confidence loss, and target classification loss.
5. The YOLO-based method for detecting rail surface defects in a tunnel according to claim 4, wherein in the image preprocessing of step A1: rail positioning uses the OTSU algorithm for image segmentation to obtain a complete rail surface region image; image enhancement uses the local contrast method; image denoising uses multistage median filtering, which suppresses noise while protecting fine structures on the rail.
6. The YOLO-based method for detecting rail surface defects in a tunnel according to claim 5, wherein the image contrast D(x, y) in image enhancement is calculated as:
where h(x, y) is the gray value of pixel (x, y) in the image and hB is the gray-level mean over the neighborhood B of pixel (x, y); the region B is defined as a 1×150 linear region centered on pixel (x, y).
7. The YOLO-based method for detecting rail surface defects in a tunnel according to claim 4, wherein the three feature map scales obtained in step A2 are 13×13, 26×26 and 52×52; the feature maps at the three scales are combined into the final output, giving three tensor values of shape (S, 3, 13/26/52, 13/26/52, 3×(W+C+L)), where S is the feature-map grid size, W denotes the predicted center coordinates and width/height values, C is the confidence of the prediction box, and L is the probability of the predicted category at the feature point.
8. The YOLO-based method for detecting rail surface defects in a tunnel according to claim 4, wherein the derivative form of the loss function used in step A4 is:
where σ ∈ (−1, 1).
CN202010340381.3A 2020-04-26 2020-04-26 Method for detecting rail defects in tunnel based on YOLO Active CN111598846B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010340381.3A CN111598846B (en) 2020-04-26 2020-04-26 Method for detecting rail defects in tunnel based on YOLO

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010340381.3A CN111598846B (en) 2020-04-26 2020-04-26 Method for detecting rail defects in tunnel based on YOLO

Publications (2)

Publication Number Publication Date
CN111598846A CN111598846A (en) 2020-08-28
CN111598846B true CN111598846B (en) 2024-01-05

Family

ID=72190747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010340381.3A Active CN111598846B (en) 2020-04-26 2020-04-26 Method for detecting rail defects in tunnel based on YOLO

Country Status (1)

Country Link
CN (1) CN111598846B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113358665A (en) * 2021-05-25 2021-09-07 同济大学 Unmanned aerial vehicle tunnel defect detection method and system
CN113449617A (en) * 2021-06-17 2021-09-28 广州忘平信息科技有限公司 Track safety detection method, system, device and storage medium
CN113781480B (en) * 2021-11-10 2022-02-15 南京未来网络产业创新有限公司 Steel rail surface detection method and system based on machine vision
CN115963397B (en) * 2022-12-01 2023-07-25 华中科技大学 Rapid online detection method and device for surface defects of inner contour of motor stator

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109084735A * 2018-08-29 2018-12-25 北京航空航天大学 Tunnel abnormal-state monitoring method and unmanned aerial vehicle device
WO2019104767A1 (en) * 2017-11-28 2019-06-06 河海大学常州校区 Fabric defect detection method based on deep convolutional neural network and visual saliency
CN110689531A * 2019-09-23 2020-01-14 云南电网有限责任公司电力科学研究院 YOLO-based automatic defect identification method for power transmission line inspection images

Non-Patent Citations (1)

Title
YOLOv3 Network Based on an Improved Loss Function (基于改进损失函数的YOLOv3网络); Lyu Shuo et al.; Computer Systems & Applications; Vol. 28, No. 2; pp. 2-6 *

Similar Documents

Publication Publication Date Title
CN111598846B (en) Method for detecting rail defects in tunnel based on YOLO
Banić et al. Intelligent machine vision based railway infrastructure inspection and monitoring using UAV
CN110532889B (en) Track foreign matter detection method based on rotor unmanned aerial vehicle and YOLOv3
Liu et al. A review of applications of visual inspection technology based on image processing in the railway industry
CN105373135B Machine-vision-based method and system for aircraft docking guidance and aircraft type recognition
CN112348034A (en) Crane defect detection system based on unmanned aerial vehicle image recognition and working method
CN110246130B (en) Airport pavement crack detection method based on infrared and visible light image data fusion
CN110310255B (en) Point switch notch detection method based on target detection and image processing
CN110211101A Rapid detection system and method for rail surface defects
CN105203552A (en) 360-degree tread image detecting system and method
CN110954968B (en) Airport runway foreign matter detection device and method
CN111080611A (en) Railway wagon bolster spring fracture fault image identification method
CN111080617B (en) Railway wagon brake beam pillar round pin loss fault identification method
CN110298216A Lane departure warning method based on adaptive threshold segmentation of lane line gradient images
CN116757990A (en) Railway fastener defect online detection and identification method based on machine vision
CN103442209A (en) Video monitoring method of electric transmission line
CN106096504A Vehicle type recognition method based on a UAV onboard platform
CN110009633B (en) Steel rail surface defect detection method based on reverse Gaussian difference
CN102073852A (en) Multiple vehicle segmentation method based on optimum threshold values and random labeling method for multiple vehicles
Zhao et al. Image-based comprehensive maintenance and inspection method for bridges using deep learning
CN108163014A Auxiliary lookout early-warning method and device for locomotive depot drivers
CN115482195A (en) Train part deformation detection method based on three-dimensional point cloud
CN113066050A (en) Method for resolving course attitude of airdrop cargo bed based on vision
CN112508911A Inspection-robot-based crack detection system and method for rail transit catenary suspension support components
Wang et al. FarNet: An attention-aggregation network for long-range rail track point cloud segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant