CN112614097B - Method for detecting foreign matter on axle box rotating arm of railway train - Google Patents

Method for detecting foreign matter on axle box rotating arm of railway train

Info

Publication number: CN112614097B
Application number: CN202011490481.0A
Authority: CN (China)
Prior art keywords: convolution kernel, size, output, paths, convolution
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN112614097A
Inventor: 燕天娇 (Yan Tianjiao)
Current Assignee: Harbin Kejia General Mechanical and Electrical Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Harbin Kejia General Mechanical and Electrical Co Ltd
Application filed by Harbin Kejia General Mechanical and Electrical Co Ltd; priority to CN202011490481.0A
Publication of application CN112614097A; application granted; publication of grant CN112614097B

Classifications

    • G06T 7/0008 — Industrial image inspection checking presence/absence (G Physics › G06 Computing › G06T Image data processing or generation › G06T 7/00 Image analysis)
    • G06N 3/045 — Combinations of networks (G06N Computing arrangements based on specific computational models › G06N 3/00 Biological models › G06N 3/02 Neural networks › G06N 3/04 Architecture)
    • G06N 3/08 — Learning methods (G06N 3/02 Neural networks)
    • G06T 7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume (G06T 7/00 Image analysis)
    • G06T 7/70 — Determining position or orientation of objects or cameras (G06T 7/00 Image analysis)
    • G06T 2207/10004 — Still image; photographic image (G06T 2207/10 Image acquisition modality)
    • G06T 2207/30164 — Workpiece; machine component (G06T 2207/30 Subject of image › G06T 2207/30108 Industrial image inspection)


Abstract

A method for detecting foreign matter on the axle box rotating arms of a railway train, belonging to the field of fault image identification. The invention addresses the low accuracy and efficiency of the existing manual fault inspection method. Step one: collect linear-array image data of the whole train. Step two: coarsely position the whole-train linear-array image data collected in step one to obtain an axle box rotating arm position image. Step three: build a sample data set from the obtained axle box rotating arm position images. Step four: select a detection network model. Step five: train the detection network model with the sample data set built in step three to obtain a trained detection network. Step six: judge axle box rotating arm foreign-matter faults with the trained detection network.

Description

Method for detecting foreign matter on axle box rotating arm of railway train
Technical Field
The invention belongs to the field of fault image recognition, and in particular relates to a deep-learning-based method for detecting foreign matter on the axle box rotating arms of railway passenger trains.
Background
For a long time, train inspectors have judged whether foreign matter is lodged in the axle box rotating arm area by manual inspection, i.e. by examining images of the vehicles. This inspection work is critical, but screening large numbers of images quickly fatigues the inspectors, missed and incorrect detections occur easily, and the accuracy and efficiency of detection are hard to guarantee. Automatic identification in passenger train fault detection is therefore necessary, especially now that deep learning technology has matured: it can greatly remedy the poor robustness of traditional image processing techniques used alone, and thereby improve detection efficiency and accuracy. Applying deep learning realizes automatic fault detection and alarming, converts manual inspection into machine inspection, and improves both the quality and the efficiency of the operation.
Disclosure of Invention
The invention aims to provide a method for detecting foreign matters on a rotating arm of a railway train axle box, which aims to solve the problems of low accuracy and efficiency of fault detection in the conventional manual fault detection method.
The technical scheme adopted by the invention to solve the above technical problem is as follows. The method for detecting foreign matter on the axle box rotating arm of a railway train comprises the following steps:
step one, collecting the whole train linear array image data of the train;
step two, carrying out coarse positioning on the full-vehicle linear array image data acquired in the step one to obtain an axle box rotating arm position image;
thirdly, establishing a sample data set by using the obtained axle box rotating arm position image;
step four, selecting a detection network model;
step five, training the detection network model by using the sample data set established in the step three to obtain a trained detection network;
and step six, carrying out axle box rotating arm foreign matter fault judgment by using the trained detection network.
In step one, whole-train linear-array image data of the train is collected; the specific process is as follows:
a camera or video camera is mounted on fixed equipment beside the train track and photographs passenger trains running under different conditions; after a train passes, high-definition grayscale images of the whole train are obtained.
Step two, roughly positioning the full-vehicle linear array image data acquired in the step one to obtain an axle box rotating arm position image; the specific process is as follows:
and intercepting an axle box rotating arm position image from the whole vehicle linear array image data according to the axle distance information of the axle box rotating arm and the position priori knowledge of the axle box rotating arm.
Establishing a sample data set by using the obtained axle box rotating arm position image in the third step; the specific process comprises the following steps:
acquiring a grayscale image set, which is the set of axle box rotating arm position images obtained in step two, containing both images with foreign matter and images without foreign matter;
acquiring a label file data set, which is a set of files in one-to-one correspondence with the images in the grayscale image set, each recording the image size, the foreign-matter category in the image, and the upper-left and lower-right corner coordinates of the foreign-matter position;
and forming a sample data set by utilizing the gray image set and the label file data set.
Forming a sample data set by utilizing the gray image set and the label file data set; the specific process comprises the following steps:
the grayscale images in the grayscale image set are preprocessed into 3-channel grayscale images; of the 3 channels, one channel is enlarged by 2%-5% and then cropped back to the original image size; one channel is left unchanged; and one channel is reduced by 2%-5% and then padded with 0 pixels around the image back to the original image size;
and forming a sample data set by using the 3-channel gray image set and the label file data set.
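The 3-channel preprocessing described above can be sketched as follows. This is an editor's illustrative reading of the text, using a simple nearest-neighbour resize and a hypothetical 3% scale factor chosen from the stated 2%-5% range:

```python
import numpy as np

def nn_resize(img, scale):
    """Nearest-neighbour resize of a 2-D array by a scale factor."""
    h, w = img.shape
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    rows = np.arange(nh) * h // nh
    cols = np.arange(nw) * w // nw
    return img[np.ix_(rows, cols)]

def three_channel_input(gray, scale=0.03):
    """Stack three views of one grayscale image, as described in step three:
    enlarged-and-cropped, unchanged, and shrunk-and-zero-padded."""
    h, w = gray.shape
    # Channel 1: enlarge by `scale`, centre-crop back to the original size.
    big = nn_resize(gray, 1 + scale)
    y0 = (big.shape[0] - h) // 2
    x0 = (big.shape[1] - w) // 2
    ch1 = big[y0:y0 + h, x0:x0 + w]
    # Channel 2: unchanged.
    ch2 = gray
    # Channel 3: shrink by `scale`, pad with zeros back to the original size.
    small = nn_resize(gray, 1 - scale)
    pad_y, pad_x = h - small.shape[0], w - small.shape[1]
    ch3 = np.pad(small, ((pad_y // 2, pad_y - pad_y // 2),
                         (pad_x // 2, pad_x - pad_x // 2)))
    return np.stack([ch1, ch2, ch3], axis=0)

x = three_channel_input(np.full((352, 640), 100, dtype=np.uint8))
```

Packing the three scaled views into one 3-channel input lets a single forward pass see the scale variation, instead of training on separate enlarged and reduced copies.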
Forming a sample data set by utilizing the gray image set and the label file data set; the specific process comprises the following steps:
by reading the label files, it is judged whether the axle box rotating arm position images containing foreign matter reach 50% of the total number of images in the grayscale image set; if not, the foreign-matter images are augmented until their proportion of the total images in the grayscale image set is greater than or equal to 50%.
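A minimal sketch of this balancing step, assuming augmentation is approximated by duplicating randomly chosen positive samples (the actual augmentation operations, e.g. flips or small shifts, are not specified at this point in the text):

```python
import random

def balance_samples(samples, seed=0):
    """Oversample positive (foreign-matter) records until they make up at
    least 50% of the set. `samples` is a list of (image_id, has_foreign_matter)
    pairs standing in for (image, label-file) records."""
    random.seed(seed)
    positives = [s for s in samples if s[1]]
    if not positives:
        return list(samples)
    balanced = list(samples)
    # Duplicate randomly chosen positives (in practice: augmented copies)
    # until the 50% ratio is reached.
    while sum(1 for s in balanced if s[1]) < len(balanced) / 2:
        balanced.append(random.choice(positives))
    return balanced

data = [("img%d" % i, i < 3) for i in range(10)]   # 3 positives, 7 negatives
out = balance_samples(data)
```

Balancing matters because foreign-matter faults are rare in service; without it the detector would be rewarded for predicting "no fault" everywhere.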
Selecting a detection network model in the fourth step, wherein the specific process is as follows:
a Faster R-CNN detection network model with a ResNet-50 backbone is selected; the input layer of the backbone ResNet-50 takes a picture of size 3 × 640 × 352, where 3 is the number of channels and 640 × 352 is the picture size; the connections of the backbone ResNet-50 are as follows:
the input end of the input layer is connected with the input end of the convolution layer 1, and the specific parameters of the convolution layer 1 are as follows: the convolution kernel size is 7 × 7, the number of convolution kernels is 64, and the step size is 2;
the output end of the convolutional layer 1 is connected with the input end of the maximum pooling layer 1, and the specific parameters of the maximum pooling layer 1 are as follows: the convolution kernel size is 3 x 3, the step size is 2;
the output of the maximum pooling layer 1 is divided into two paths: one way connects the network basic block 2; one path is connected with convolution layers 2 with convolution kernel size of 1 x 1, convolution kernel number of 256 and step length of 1, and the two paths are used as output 1 after performing addition operation;
the network basic block2 is a convolution layer 3 with convolution kernel size of 1 × 1, convolution kernel number of 64 and step size of 1, and then is connected with a convolution layer 4 with convolution kernel size of 3 × 3, convolution kernel number of 64 and step size of 1, and then is connected with a convolution layer 5 with convolution kernel size of 1 × 1, convolution kernel number of 256 and step size of 1;
the output 1 is divided into two paths: one way connects the network basic block 2; one path does not do any operation, and the two paths perform addition operation to be used as an output 2;
output 2 is divided into two paths: one way connects the network basic block 2; one path does not do any operation, and the two paths perform addition operation to be used as an output 3;
output 3 is divided into two paths: one path is connected with a network basic block 3-1; one path is connected with convolution layers 6 with convolution kernel size of 1 x 1, convolution kernel number of 512 and step length of 2, and the two paths are used as output 4 after performing addition operation;
the network basic block3-1 is a convolution layer 7 with convolution kernel size of 1 × 1, convolution kernel number of 128 and step size of 1, and then is connected with a convolution layer 8 with convolution kernel size of 3 × 3, convolution kernel number of 128 and step size of 2, and then is connected with a convolution layer 9 with convolution kernel size of 1 × 1, convolution kernel number of 512 and step size of 1;
the output 4 is divided into two paths: one path is connected with a network basic block 3-2; one path does not do any operation, and the two paths perform addition operation to be used as an output 5;
the network basic block3-2 is a convolution layer 10 with convolution kernel size of 1 × 1, convolution kernel number of 128 and step size of 1, and then is connected with a convolution layer 11 with convolution kernel size of 3 × 3, convolution kernel number of 128 and step size of 1, and then is connected with a convolution layer 12 with convolution kernel size of 1 × 1, convolution kernel number of 512 and step size of 1;
the output 5 is divided into two paths: one path is connected with a network basic block 3-2; one path does not do any operation, and the two paths are used as output 6 after performing addition operation;
the output 6 is divided into two paths: one path is connected with a network basic block 3-2; one path does not do any operation, and the two paths are used as an output 7 after performing addition operation;
the output 7 is divided into two paths: one path is connected with a network basic block 4-1; one path is connected with convolution layers 13 with convolution kernel size of 1 x 1, convolution kernel number of 1024 and step length of 2, and the two paths are used as output 8 after performing addition operation;
the network basic block4-1 is a convolution layer 14 with convolution kernel size of 1 × 1, convolution kernel number of 256 and step size of 1, and then is connected with a convolution layer 15 with convolution kernel size of 3 × 3, convolution kernel number of 256 and step size of 2, and then is connected with a convolution layer 16 with convolution kernel size of 1 × 1, convolution kernel number of 1024 and step size of 1;
the output 8 is divided into two paths: one path is connected with a network basic block 4-2; one path does not do any operation, and the two paths perform addition operation to be used as an output 9;
the network basic block4-2 is a convolution layer 17 with convolution kernel size of 1 × 1, convolution kernel number of 256 and step size of 1, and then is connected with a convolution layer 18 with convolution kernel size of 3 × 3, convolution kernel number of 256 and step size of 1, and then is connected with a convolution layer 19 with convolution kernel size of 1 × 1, convolution kernel number of 1024 and step size of 1;
the output 9 is divided into two paths: one path is connected with a network basic block 4-2; one path does not do any operation, and the two paths are used as output 10 after performing addition operation;
the output 10 is divided into two paths: one path is connected with a network basic block 4-2; one path does not do any operation, and the two paths perform addition operation to be used as an output 11;
the output 11 is divided into two paths: one path is connected with a network basic block 4-2; one path does not do any operation, and the two paths are used as an output 12 after performing addition operation;
the output 12 is divided into two paths: one path is connected with a network basic block 4-2; one path does not do any operation, and the two paths are used as an output 13 after performing addition operation;
the output 13 is divided into two paths: one path is connected with a network basic block 5-1; one path is connected with a convolution layer 20 with the convolution kernel size of 1 x 1, the convolution kernel number of 2048 and the step length of 2, and the two paths are used as an output 14 after performing addition operation;
the network basic block5-1 is a convolution layer 21 with convolution kernel size of 1 × 1, convolution kernel number of 512 and step size of 1, and then is connected with a convolution layer 22 with convolution kernel size of 3 × 3, convolution kernel number of 512 and step size of 2, and then is connected with a convolution layer 23 with convolution kernel size of 1 × 1, convolution kernel number of 2048 and step size of 1;
the output 14 is divided into two paths: one path is connected with a network basic block 5-2; one path does not do any operation, and the two paths are used as output 15 after performing addition operation;
the network basic block5-2 is a convolution layer 24 with convolution kernel size of 1 × 1, convolution kernel number of 512 and step size of 1, and then is connected with a convolution layer 25 with convolution kernel size of 3 × 3, convolution kernel number of 512 and step size of 1, and then is connected with a convolution layer 26 with convolution kernel size of 1 × 1, convolution kernel number of 2048 and step size of 1;
the output 15 is divided into two paths: one path is connected with a network basic block 5-2; one path does not do any operation, and the two paths perform addition operation and then serve as an output 16;
the output 16 serves as the input to, in turn, an average pooling layer, a fully connected layer and a Softmax function.
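As a consistency check on the layer parameters above, the spatial size of each stage's feature map can be traced with the standard convolution size formula. The padding values (3 for the 7 × 7 convolution, 1 for the 3 × 3 pooling and stride-2 convolutions) are the usual ResNet-50 choices and are an assumption; the text does not state them:

```python
def conv_out(size, kernel, stride, pad):
    """Spatial output size of a convolution/pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

def trace_resnet50(h=352, w=640):
    """Trace (name, channels, height, width) through the backbone described
    above, following the stride-2 layers that halve the resolution."""
    shapes = []
    h, w = conv_out(h, 7, 2, 3), conv_out(w, 7, 2, 3)      # convolution layer 1
    shapes.append(("conv1", 64, h, w))
    h, w = conv_out(h, 3, 2, 1), conv_out(w, 3, 2, 1)      # max pooling layer 1
    shapes.append(("pool1", 64, h, w))
    shapes.append(("stage2 (outputs 1-3)", 256, h, w))      # stride-1 blocks
    for name, ch in [("stage3 (outputs 4-7)", 512),
                     ("stage4 (outputs 8-13)", 1024),
                     ("stage5 (outputs 14-16)", 2048)]:
        h, w = conv_out(h, 3, 2, 1), conv_out(w, 3, 2, 1)  # stride-2 3x3 conv
        shapes.append((name, ch, h, w))
    return shapes

trace = trace_resnet50()
```

The trace confirms the description is internally consistent: 640 × 352 shrinks to 320 × 176 after conv1, 160 × 88 after pooling, and 20 × 11 with 2048 channels at output 16.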
Step five, training a detection network model by using the sample data set established in the step three to obtain a trained detection network; the specific process is as follows:
the sample data set established in step three is input into the detection network model; training proceeds by continually reducing the value of the loss function, and the optimal weight coefficients found are kept, yielding the trained detection network model.
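The training procedure — iterate, reduce the loss, keep the best weight coefficients — can be illustrated with a toy gradient-descent loop. The quadratic loss here is only a stand-in for the actual Faster R-CNN detection loss:

```python
def train(grad_fn, loss_fn, w0, lr=0.1, steps=100):
    """Minimal gradient-descent loop illustrating step five: repeatedly
    update the weights to reduce the loss, and checkpoint the best
    (lowest-loss) weights seen during training."""
    w, best_w, best_loss = w0, w0, loss_fn(w0)
    for _ in range(steps):
        w = w - lr * grad_fn(w)
        loss = loss_fn(w)
        if loss < best_loss:            # keep the optimal weight coefficients
            best_loss, best_w = loss, w
    return best_w, best_loss

# Toy example: the loss (w - 3)^2 has its optimum at w = 3.
loss = lambda w: (w - 3.0) ** 2
grad = lambda w: 2.0 * (w - 3.0)
w_best, l_best = train(grad, loss, w0=0.0)
```

In real training the gradient comes from backpropagation through the whole network, but the checkpoint-the-best-weights logic is the same.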
In the sixth step, the trained detection network is used for judging the axle box rotating arm foreign matter fault; the specific process is as follows:
step six: acquiring full train linear array image data of a train to be detected;
step six and two: roughly positioning a whole train linear array image of a train to be detected to obtain an axle box rotating arm position image of the train to be detected;
step six and three: predicting the axle box rotating arm position image obtained in the sixth step and the sixth step by using a trained detection network model to obtain the score of the foreign matter category in the image and the upper left corner and the lower right corner of the foreign matter position coordinate, determining a rectangular frame by taking the upper left corner and the lower right corner as diagonal points of the rectangular frame to obtain the length, the width and the area of the rectangular frame;
step six and four: determining whether the train to be tested meets the discrimination standard or not based on the obtained score of the foreign matter category and the length, width and area of the rectangular frame; when the judgment standard is met, foreign matters exist in the axle box rotating arm area of the train to be detected.
The judgment criteria are:
the foreign-matter category score obtained in step six-three is higher than 0.5;
the length of the rectangular frame obtained in step six-three is greater than 1/10 of the length of the axle box rotating arm position image of the train to be tested;
the width of the rectangular frame obtained in step six-three is greater than 1/10 of the width of the axle box rotating arm position image of the train to be tested;
the area of the rectangular frame obtained in step six-three is greater than 5% of the area of the axle box rotating arm position image of the train to be tested.
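Assuming all four criteria must hold simultaneously (the text lists them without stating the combination rule), the judgment can be sketched as:

```python
def has_foreign_matter(score, box, img_w, img_h, score_thr=0.5):
    """Apply the step-six judgment criteria to one detection.

    score : class score of the foreign-matter detection
    box   : (x0, y0, x1, y1) upper-left and lower-right corner coordinates
    Returns True only when all four listed criteria hold (an assumption).
    """
    x0, y0, x1, y1 = box
    w, h = x1 - x0, y1 - y0
    return (score > score_thr
            and w > img_w / 10                  # length > 1/10 of image length
            and h > img_h / 10                  # width  > 1/10 of image width
            and w * h > 0.05 * img_w * img_h)   # area   > 5% of image area

# A 200 x 100 box with score 0.9 in a 640 x 352 image raises an alarm.
alarm = has_foreign_matter(0.9, (100, 100, 300, 200), img_w=640, img_h=352)
```

The size thresholds act as a sanity filter: tiny high-scoring boxes, which are usually noise, do not trigger an alarm.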
The invention has the beneficial effects that:
and carrying imaging equipment around the train track by using the fixing equipment, and acquiring the whole-train linear array image data of the passenger train to be detected. According to the algorithm framework of positioning, a rectangular area containing the axle box boom is obtained from the full vehicle image. Inputting 3-channel images in the input data of the neural network: wherein, one channel is cut to the size of the original image after being amplified by 2 to 5 percent; a channel is unchanged; and after one channel is reduced by 2-5%, 0 pixel is filled around the image to the size of an original image, the image and the label are added with a detection algorithm for training, so that the foreign matter can be effectively identified and positioned, the coverage rate and accuracy of the algorithm are further improved, finally, the picture for confirming the carried foreign matter is output in an alarm mode, the station staff is assisted to quickly detect train parts, and the train operation safety is ensured.
1. Automatic image identification replaces manual detection, unifying the operation standard and removing the influence of personnel skill and diligence; it effectively improves operation quality, raises the stability and precision of detection, and protects the health of the workers.
2. A deep learning algorithm is applied to the automatic identification of foreign matter carried on the axle box rotating arms of railway passenger trains, improving the stability and precision of the whole algorithm; compared with traditional machine-vision detection based on hand-crafted features, the method has higher flexibility, accuracy and robustness.
3. The neural network input is a 3-channel image: one channel is enlarged by 2%-5% and then cropped to the original size; one channel is unchanged; one channel is reduced by 2%-5% and then zero-padded to the original size. Training on these images and their labels not only avoids the longer training time that separate enlargement and reduction augmentation would cause, but also effectively improves the generalization ability of detection in real use.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flow chart of an image amplification pre-processing;
fig. 3 is a network structure diagram of the ResNet-50.
Detailed Description
It should be noted that the embodiments disclosed in the present application may be combined with each other where they do not conflict.
The first embodiment is as follows: referring to fig. 1, the method for detecting a foreign matter on an axle box boom of a railway train according to the present embodiment includes the steps of:
step one, collecting the whole train linear array image data of the train;
step two, carrying out coarse positioning on the full-vehicle linear array image data acquired in the step one to obtain an axle box rotating arm position image;
thirdly, establishing a sample data set by using the obtained axle box rotating arm position image;
step four, selecting a detection network model;
step five, training the detection network model by using the sample data set established in the step three to obtain a trained detection network;
and step six, carrying out axle box rotating arm foreign matter fault judgment by using the trained detection network.
The second embodiment is as follows: this embodiment differs from the first embodiment in that in step one the whole-train linear-array image data of the train is collected; the specific process is as follows:
a camera or video camera is mounted on fixed equipment beside the train track and photographs passenger trains running under different conditions; after a train passes, high-definition grayscale images of the whole train are obtained.
Other steps and parameters are the same as those in the first embodiment.
The third concrete implementation mode: this embodiment differs from the first or second embodiment in that in step two the full-train linear-array image data collected in step one is coarsely positioned to obtain the axle box rotating arm position image; the specific process is as follows:
and intercepting an axle box rotating arm position image from the whole vehicle linear array image data according to the axle distance information of the axle box rotating arm and the position priori knowledge of the axle box rotating arm.
Thereby reducing the amount of calculation and increasing the speed of recognition.
Other steps and parameters are the same as those in the first or second embodiment.
The fourth concrete implementation mode: the difference between this embodiment and the first to third embodiments is that, in step three, a sample data set is established by using the obtained axle box rotating arm position image; the specific process comprises the following steps:
acquiring a grayscale image set, which is the set of axle box rotating arm position images obtained in step two, containing both images with foreign matter and images without foreign matter;
acquiring a label file data set, which is a set of files in one-to-one correspondence with the images in the grayscale image set, each recording the image size, the foreign-matter category in the image, and the upper-left and lower-right corner coordinates of the foreign-matter position;
and forming a sample data set by utilizing the gray image set and the label file data set.
Other steps and parameters are the same as those in one of the first to third embodiments.
The fifth concrete implementation mode: this embodiment differs from one of the first to fourth embodiments in that the sample data set is formed from the grayscale image set and the label file data set; the specific process is as follows:
the grayscale images in the grayscale image set are preprocessed into 3-channel grayscale images; of the 3 channels, one channel is enlarged by 2%-5% and then cropped back to the original image size; one channel is left unchanged; and one channel is reduced by 2%-5% and then padded with 0 pixels around the image back to the original image size;
and forming a sample data set by using the 3-channel gray image set and the label file data set.
Other steps and parameters are the same as in one of the first to fourth embodiments.
The sixth specific implementation mode: this embodiment differs from one of the first to fifth embodiments in that the sample data set is formed from the grayscale image set and the label file data set; the specific process is as follows:
by reading the label files, it is judged whether the axle box rotating arm position images containing foreign matter reach 50% of the total number of images in the grayscale image set; if not, the foreign-matter images are augmented until their proportion of the total images in the grayscale image set is greater than or equal to 50%. As shown in fig. 2.
Other steps and parameters are the same as those in one of the first to fifth embodiments.
The seventh embodiment: the difference between this embodiment and one of the first to sixth embodiments is that the detection network model is selected in the fourth step, and the specific process is as follows:
a Faster R-CNN detection network model with a ResNet-50 backbone is selected; the input layer of the backbone ResNet-50 takes a picture of size 3 × 640 × 352, where 3 is the number of channels and 640 × 352 is the picture size; the connections of the backbone ResNet-50 are as follows:
the input end of the input layer is connected with the input end of the convolution layer 1, and the specific parameters of the convolution layer 1 are as follows: the convolution kernel size is 7 × 7, the number of convolution kernels is 64, and the step size is 2;
the output end of the convolutional layer 1 is connected with the input end of the maximum pooling layer 1, and the specific parameters of the maximum pooling layer 1 are as follows: the convolution kernel size is 3 x 3, the step size is 2;
the output of the maximum pooling layer 1 is divided into two paths: one way connects the network basic block 2; one path is connected with convolution layers 2 with convolution kernel size of 1 x 1, convolution kernel number of 256 and step length of 1, and the two paths are used as output 1 after performing addition operation;
the network basic block2 is a convolution layer 3 with convolution kernel size of 1 × 1, convolution kernel number of 64 and step size of 1, and then is connected with a convolution layer 4 with convolution kernel size of 3 × 3, convolution kernel number of 64 and step size of 1, and then is connected with a convolution layer 5 with convolution kernel size of 1 × 1, convolution kernel number of 256 and step size of 1;
the output 1 is divided into two paths: one way connects the network basic block 2; one path does not do any operation, and the two paths perform addition operation to be used as an output 2;
output 2 is divided into two paths: one way connects the network basic block 2; one path does not do any operation, and the two paths perform addition operation to be used as an output 3;
output 3 is divided into two paths: one path is connected with a network basic block 3-1; one path is connected with convolution layers 6 with convolution kernel size of 1 x 1, convolution kernel number of 512 and step length of 2, and the two paths are used as output 4 after performing addition operation;
the network basic block3-1 is a convolution layer 7 with convolution kernel size of 1 × 1, convolution kernel number of 128 and step size of 1, and then is connected with a convolution layer 8 with convolution kernel size of 3 × 3, convolution kernel number of 128 and step size of 2, and then is connected with a convolution layer 9 with convolution kernel size of 1 × 1, convolution kernel number of 512 and step size of 1;
the output 4 is divided into two paths: one path is connected with a network basic block 3-2; one path does not do any operation, and the two paths perform addition operation to be used as an output 5;
the network basic block3-2 is a convolution layer 10 with convolution kernel size of 1 × 1, convolution kernel number of 128 and step size of 1, and then is connected with a convolution layer 11 with convolution kernel size of 3 × 3, convolution kernel number of 128 and step size of 1, and then is connected with a convolution layer 12 with convolution kernel size of 1 × 1, convolution kernel number of 512 and step size of 1;
the output 5 is divided into two paths: one path is connected with a network basic block 3-2; one path does not do any operation, and the two paths are used as output 6 after performing addition operation;
the output 6 is divided into two paths: one path is connected with a network basic block 3-2; one path does not do any operation, and the two paths are used as an output 7 after performing addition operation;
the output 7 is divided into two paths: one path is connected with the network basic block 4-1; the other path is connected with a convolution layer 13 with convolution kernel size of 1 × 1, convolution kernel number of 1024 and step size of 2, and the two paths are used as output 8 after performing addition operation;
the network basic block4-1 is a convolution layer 14 with convolution kernel size of 1 × 1, convolution kernel number of 256 and step size of 1, and then is connected with a convolution layer 15 with convolution kernel size of 3 × 3, convolution kernel number of 256 and step size of 2, and then is connected with a convolution layer 16 with convolution kernel size of 1 × 1, convolution kernel number of 1024 and step size of 1;
the output 8 is divided into two paths: one path is connected with a network basic block 4-2; one path does not do any operation, and the two paths perform addition operation to be used as an output 9;
the network basic block4-2 is a convolution layer 17 with convolution kernel size of 1 × 1, convolution kernel number of 256 and step size of 1, and then is connected with a convolution layer 18 with convolution kernel size of 3 × 3, convolution kernel number of 256 and step size of 1, and then is connected with a convolution layer 19 with convolution kernel size of 1 × 1, convolution kernel number of 1024 and step size of 1;
the output 9 is divided into two paths: one path is connected with a network basic block 4-2; one path does not do any operation, and the two paths are used as output 10 after performing addition operation;
the output 10 is divided into two paths: one path is connected with a network basic block 4-2; one path does not do any operation, and the two paths perform addition operation to be used as an output 11;
the output 11 is divided into two paths: one path is connected with a network basic block 4-2; one path does not do any operation, and the two paths are used as an output 12 after performing addition operation;
the output 12 is divided into two paths: one path is connected with a network basic block 4-2; one path does not do any operation, and the two paths are used as an output 13 after performing addition operation;
the output 13 is divided into two paths: one path is connected with the network basic block 5-1; the other path is connected with a convolution layer 20 with convolution kernel size of 1 × 1, convolution kernel number of 2048 and step size of 2, and the two paths are used as output 14 after performing addition operation;
the network basic block5-1 is a convolution layer 21 with convolution kernel size of 1 × 1, convolution kernel number of 512 and step size of 1, and then is connected with a convolution layer 22 with convolution kernel size of 3 × 3, convolution kernel number of 512 and step size of 2, and then is connected with a convolution layer 23 with convolution kernel size of 1 × 1, convolution kernel number of 2048 and step size of 1;
the output 14 is divided into two paths: one path is connected with a network basic block 5-2; one path does not do any operation, and the two paths are used as output 15 after performing addition operation;
the network basic block5-2 is a convolution layer 24 with convolution kernel size of 1 × 1, convolution kernel number of 512 and step size of 1, and then is connected with a convolution layer 25 with convolution kernel size of 3 × 3, convolution kernel number of 512 and step size of 1, and then is connected with a convolution layer 26 with convolution kernel size of 1 × 1, convolution kernel number of 2048 and step size of 1;
the output 15 is divided into two paths: one path is connected with a network basic block 5-2; one path does not do any operation, and the two paths perform addition operation and then serve as an output 16;
the output 16 serves as an input, which is connected in turn to an average pooling layer, a fully connected layer and a Softmax function.
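As a sanity check on the backbone just described, the feature-map sizes can be traced stage by stage. The short sketch below is illustrative only and is not part of the claimed method: it assumes "same"-style padding, so each stride-2 layer halves the spatial size (rounding up), as in a standard resnet50; the stage names and channel counts follow the description above.

```python
import math

def trace_resnet50_shapes(h=640, w=352):
    """Trace feature-map (channels, H, W) through the described backbone:
    7x7/2 conv -> 3x3/2 max pool -> four bottleneck stages, where stages
    3-5 each downsample by 2 via their first block's stride-2 layers."""
    shapes = [("input", 3, h, w)]
    h, w = math.ceil(h / 2), math.ceil(w / 2)          # convolution layer 1, stride 2
    shapes.append(("conv1", 64, h, w))
    h, w = math.ceil(h / 2), math.ceil(w / 2)          # maximum pooling layer 1, stride 2
    shapes.append(("pool1", 64, h, w))
    for name, channels, stride in [("stage2", 256, 1), ("stage3", 512, 2),
                                   ("stage4", 1024, 2), ("stage5", 2048, 2)]:
        h, w = math.ceil(h / stride), math.ceil(w / stride)
        shapes.append((name, channels, h, w))
    return shapes

for name, c, h, w in trace_resnet50_shapes():
    print(f"{name:6s} {c:4d} x {h:3d} x {w:3d}")
```

For the 3 × 640 × 352 input this ends at a 2048 × 20 × 11 feature map (an overall stride of 32), which the average pooling layer then reduces before the fully connected layer and the Softmax function.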
Other steps and parameters are the same as those in one of the first to sixth embodiments.
Specific embodiment eight: this embodiment differs from embodiments one to seven in that, in step five, the detection network model is trained with the sample data set established in step three to obtain a trained detection network; the specific process is as follows:
inputting the sample data set established in step three into the detection network model, training continuously with reduction of the loss value of the loss function as the criterion, and finding the optimal weight coefficients to obtain the trained detection network model.
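The training criterion named here, reducing the loss value until the weight coefficients stop improving, can be illustrated with a toy gradient-descent loop. This is a schematic sketch only: the actual model is a Faster-RCNN detection network, and the quadratic loss and single weight below are stand-ins chosen for brevity.

```python
def train(samples, lr=0.1, epochs=200):
    """Illustrative 'reduce the loss value' loop: gradient descent on the
    mean squared error of a one-weight linear model y ~ w * x."""
    w = 0.0                                  # single illustrative weight coefficient
    for _ in range(epochs):
        # gradient of (1/n) * sum((w*x - y)^2) with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
        w -= lr * grad                       # step that reduces the loss value
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # y = 2x, so the optimum is w = 2
print(round(train(data), 3))                 # converges to 2.0
```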
Other steps and parameters are the same as those in one of the first to seventh embodiments.
Specific embodiment nine: this embodiment differs from embodiments one to eight in that, in step six, the trained detection network is used to determine axle box rotating arm foreign matter faults; the specific process is as follows:
step 6.1: acquiring full train linear array image data of the train to be detected;
step 6.2: coarsely positioning the full train linear array image of the train to be detected to obtain an axle box rotating arm position image of the train to be detected;
step 6.3: predicting the axle box rotating arm position image obtained in step 6.2 with the trained detection network model to obtain the score of the foreign matter category in the image and the upper-left and lower-right corner coordinates of the foreign matter position, and determining a rectangular frame with the upper-left and lower-right corners as its diagonal points to obtain the length, width and area of the rectangular frame;
step 6.4: determining, based on the obtained score of the foreign matter category and the length, width and area of the rectangular frame, whether the train to be detected meets the discrimination criteria; when the criteria are met, a foreign matter exists in the axle box rotating arm area of the train to be detected.
Other steps and parameters are the same as those in one to eight of the embodiments.
Specific embodiment ten: this embodiment differs from embodiments one to nine in that the discrimination criteria include:
the score of the foreign matter category obtained in step 6.3 is higher than 0.5;
the length of the rectangular frame obtained in step 6.3 is larger than 1/10 of the length of the axle box rotating arm position image of the train to be detected;
the width of the rectangular frame obtained in step 6.3 is larger than 1/10 of the width of the axle box rotating arm position image of the train to be detected;
the area of the rectangular frame obtained in step 6.3 is larger than 5% of the area of the axle box rotating arm position image of the train to be detected.
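The four criteria above can be combined into one check. The sketch below assumes they are conjunctive (all four must hold before a foreign matter alarm is raised); the text lists the criteria without stating the combination rule explicitly.

```python
def is_foreign_matter(score, box_len, box_w, img_len, img_w):
    """Apply the four discrimination criteria: category score above 0.5,
    frame length and width each above 1/10 of the image's, and frame area
    above 5% of the image area. All four are assumed to be required."""
    return (score > 0.5
            and box_len > img_len / 10
            and box_w > img_w / 10
            and box_len * box_w > 0.05 * img_len * img_w)

# A 200 x 120 frame with score 0.9 on a 640 x 352 image meets every threshold.
print(is_foreign_matter(0.9, 200, 120, 640, 352))   # True
print(is_foreign_matter(0.4, 200, 120, 640, 352))   # False: score too low
```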
Other steps and parameters are the same as those in one of the first to ninth embodiments.
The above examples merely explain the calculation model and calculation flow of the present invention in detail and are not intended to limit its embodiments. Other variations and modifications based on the above description will be apparent to those skilled in the art; the embodiments cannot be enumerated exhaustively, and all obvious variations and modifications derived from the technical solution of the present invention fall within the protection scope of the present invention.

Claims (5)

1. The method for detecting the foreign matter on the rotating arm of the axle box of the railway train is characterized by comprising the following steps of:
step one, collecting the whole train linear array image data of the train;
step two, carrying out coarse positioning on the full-vehicle linear array image data acquired in the step one to obtain an axle box rotating arm position image;
thirdly, establishing a sample data set by using the obtained axle box rotating arm position image;
step four, selecting a detection network model;
step five, training the detection network model by using the sample data set established in the step three to obtain a trained detection network;
sixthly, distinguishing the foreign matter fault of the axle box rotating arm by using the trained detection network;
establishing a sample data set by using the obtained axle box rotating arm position image in the third step; the specific process comprises the following steps:
acquiring a gray image set, wherein the gray image set is the axle box rotating arm position image set acquired in the step two, and the gray image set comprises an axle box rotating arm position image with foreign matters and an axle box rotating arm position image without foreign matters;
acquiring a label file data set, wherein the label file data set is a set of files which correspond to the images in the gray level image set one by one and are used for recording the image size, the foreign matter category in the image and the upper left corner and the lower right corner of the foreign matter position coordinate;
forming a sample data set by utilizing the gray image set and the label file data set;
forming a sample data set by utilizing the gray image set and the label file data set; the specific process comprises the following steps:
preprocessing the gray level images in the gray level image set to obtain 3-channel gray level images, wherein one of the 3 channels is enlarged by 2%-5% and then cropped to the size of the original image; one channel is unchanged; and one channel is reduced by 2%-5% and then padded with 0 pixels around the image to the size of the original image;
forming a sample data set by using the 3-channel gray image set and the label file data set;
selecting a detection network model in the fourth step, wherein the specific process is as follows:
a Faster-RCNN detection network model with resnet50 as the backbone network is selected; the size of the picture input to the input layer of the backbone network resnet50 is 3 × 640 × 352, wherein 3 is the number of channels and 640 × 352 is the picture size; the connection relation of the backbone network resnet50 is as follows:
the input end of the input layer is connected with the input end of the convolution layer 1, and the specific parameters of the convolution layer 1 are as follows: the convolution kernel size is 7 × 7, the number of convolution kernels is 64, and the step size is 2;
the output end of the convolutional layer 1 is connected with the input end of the maximum pooling layer 1, and the specific parameters of the maximum pooling layer 1 are as follows: the pooling kernel size is 3 × 3 and the step size is 2;
the output of the maximum pooling layer 1 is divided into two paths: one path is connected with the network basic block 2; the other path is connected with a convolution layer 2 with convolution kernel size of 1 × 1, convolution kernel number of 256 and step size of 1, and the two paths are used as output 1 after performing addition operation;
the network basic block2 is a convolution layer 3 with convolution kernel size of 1 × 1, convolution kernel number of 64 and step size of 1, and then is connected with a convolution layer 4 with convolution kernel size of 3 × 3, convolution kernel number of 64 and step size of 1, and then is connected with a convolution layer 5 with convolution kernel size of 1 × 1, convolution kernel number of 256 and step size of 1;
the output 1 is divided into two paths: one path is connected with the network basic block 2; the other path does not perform any operation, and the two paths are used as output 2 after performing addition operation;
the output 2 is divided into two paths: one path is connected with the network basic block 2; the other path does not perform any operation, and the two paths are used as output 3 after performing addition operation;
the output 3 is divided into two paths: one path is connected with the network basic block 3-1; the other path is connected with a convolution layer 6 with convolution kernel size of 1 × 1, convolution kernel number of 512 and step size of 2, and the two paths are used as output 4 after performing addition operation;
the network basic block3-1 is a convolution layer 7 with convolution kernel size of 1 × 1, convolution kernel number of 128 and step size of 1, and then is connected with a convolution layer 8 with convolution kernel size of 3 × 3, convolution kernel number of 128 and step size of 2, and then is connected with a convolution layer 9 with convolution kernel size of 1 × 1, convolution kernel number of 512 and step size of 1;
the output 4 is divided into two paths: one path is connected with a network basic block 3-2; one path does not do any operation, and the two paths perform addition operation to be used as an output 5;
the network basic block3-2 is a convolution layer 10 with convolution kernel size of 1 × 1, convolution kernel number of 128 and step size of 1, and then is connected with a convolution layer 11 with convolution kernel size of 3 × 3, convolution kernel number of 128 and step size of 1, and then is connected with a convolution layer 12 with convolution kernel size of 1 × 1, convolution kernel number of 512 and step size of 1;
the output 5 is divided into two paths: one path is connected with a network basic block 3-2; one path does not do any operation, and the two paths are used as output 6 after performing addition operation;
the output 6 is divided into two paths: one path is connected with a network basic block 3-2; one path does not do any operation, and the two paths are used as an output 7 after performing addition operation;
the output 7 is divided into two paths: one path is connected with the network basic block 4-1; the other path is connected with a convolution layer 13 with convolution kernel size of 1 × 1, convolution kernel number of 1024 and step size of 2, and the two paths are used as output 8 after performing addition operation;
the network basic block4-1 is a convolution layer 14 with convolution kernel size of 1 × 1, convolution kernel number of 256 and step size of 1, and then is connected with a convolution layer 15 with convolution kernel size of 3 × 3, convolution kernel number of 256 and step size of 2, and then is connected with a convolution layer 16 with convolution kernel size of 1 × 1, convolution kernel number of 1024 and step size of 1;
the output 8 is divided into two paths: one path is connected with a network basic block 4-2; one path does not do any operation, and the two paths perform addition operation to be used as an output 9;
the network basic block4-2 is a convolution layer 17 with convolution kernel size of 1 × 1, convolution kernel number of 256 and step size of 1, and then is connected with a convolution layer 18 with convolution kernel size of 3 × 3, convolution kernel number of 256 and step size of 1, and then is connected with a convolution layer 19 with convolution kernel size of 1 × 1, convolution kernel number of 1024 and step size of 1;
the output 9 is divided into two paths: one path is connected with a network basic block 4-2; one path does not do any operation, and the two paths are used as output 10 after performing addition operation;
the output 10 is divided into two paths: one path is connected with a network basic block 4-2; one path does not do any operation, and the two paths perform addition operation to be used as an output 11;
the output 11 is divided into two paths: one path is connected with a network basic block 4-2; one path does not do any operation, and the two paths are used as an output 12 after performing addition operation;
the output 12 is divided into two paths: one path is connected with a network basic block 4-2; one path does not do any operation, and the two paths are used as an output 13 after performing addition operation;
the output 13 is divided into two paths: one path is connected with the network basic block 5-1; the other path is connected with a convolution layer 20 with convolution kernel size of 1 × 1, convolution kernel number of 2048 and step size of 2, and the two paths are used as output 14 after performing addition operation;
the network basic block5-1 is a convolution layer 21 with convolution kernel size of 1 × 1, convolution kernel number of 512 and step size of 1, and then is connected with a convolution layer 22 with convolution kernel size of 3 × 3, convolution kernel number of 512 and step size of 2, and then is connected with a convolution layer 23 with convolution kernel size of 1 × 1, convolution kernel number of 2048 and step size of 1;
the output 14 is divided into two paths: one path is connected with a network basic block 5-2; one path does not do any operation, and the two paths are used as output 15 after performing addition operation;
the network basic block5-2 is a convolution layer 24 with convolution kernel size of 1 × 1, convolution kernel number of 512 and step size of 1, and then is connected with a convolution layer 25 with convolution kernel size of 3 × 3, convolution kernel number of 512 and step size of 1, and then is connected with a convolution layer 26 with convolution kernel size of 1 × 1, convolution kernel number of 2048 and step size of 1;
the output 15 is divided into two paths: one path is connected with a network basic block 5-2; one path does not do any operation, and the two paths perform addition operation and then serve as an output 16;
the output 16 is used as an input and is sequentially connected with an average pooling layer, a fully connected layer and a Softmax function;
in the sixth step, the trained detection network is used for judging the axle box rotating arm foreign matter fault; the specific process is as follows:
step 6.1: acquiring full train linear array image data of the train to be detected;
step 6.2: coarsely positioning the full train linear array image of the train to be detected to obtain an axle box rotating arm position image of the train to be detected;
step 6.3: predicting the axle box rotating arm position image obtained in step 6.2 with the trained detection network model to obtain the score of the foreign matter category in the image and the upper-left and lower-right corner coordinates of the foreign matter position, and determining a rectangular frame with the upper-left and lower-right corners as its diagonal points to obtain the length, width and area of the rectangular frame;
step 6.4: determining, based on the obtained score of the foreign matter category and the length, width and area of the rectangular frame, whether the train to be detected meets the discrimination criteria; when the criteria are met, a foreign matter exists in the axle box rotating arm area of the train to be detected;
the discrimination criteria include:
the score of the foreign matter category obtained in step 6.3 is higher than 0.5;
the length of the rectangular frame obtained in step 6.3 is larger than 1/10 of the length of the axle box rotating arm position image of the train to be detected;
the width of the rectangular frame obtained in step 6.3 is larger than 1/10 of the width of the axle box rotating arm position image of the train to be detected;
the area of the rectangular frame obtained in step 6.3 is larger than 5% of the area of the axle box rotating arm position image of the train to be detected.
2. The method for detecting a foreign matter on an axle box boom of a railway train according to claim 1, wherein: acquiring full train linear array image data of the train in the first step; the specific process is as follows:
a camera or video camera is mounted around the train track by means of a fixing device, passenger trains running under different conditions are photographed, and high-definition gray level whole-train images are obtained after the trains pass.
3. The method for detecting a foreign matter on an axle box boom of a railway train according to claim 2, wherein: in the second step, the full-vehicle linear array image data collected in the first step are coarsely positioned to obtain an axle box rotating arm position image; the specific process is as follows:
the axle box rotating arm position image is cut out from the whole train linear array image data according to the wheelbase information and prior knowledge of the axle box rotating arm position.
4. The method for detecting a foreign matter on an axle box boom of a railway train according to claim 3, wherein: forming a sample data set by utilizing the gray image set and the label file data set; the specific process comprises the following steps:
whether the axle box rotating arm position images with foreign matter in the gray level image set reach 50% of the total number of images in the set is judged by reading the label files; if not, the images with foreign matter are augmented so that their proportion of the total images in the gray level image set is greater than or equal to 50%.
5. The method for detecting a foreign matter on an axle box boom of a railway train according to claim 4, wherein: in the fifth step, the sample data set established in the third step is used for training a detection network model to obtain a trained detection network; the specific process is as follows:
inputting the sample data set established in step three into the detection network model, training continuously with reduction of the loss value of the loss function as the criterion, and finding the optimal weight coefficients to obtain the trained detection network model.
CN202011490481.0A 2020-12-16 2020-12-16 Method for detecting foreign matter on axle box rotating arm of railway train Active CN112614097B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011490481.0A CN112614097B (en) 2020-12-16 2020-12-16 Method for detecting foreign matter on axle box rotating arm of railway train


Publications (2)

Publication Number Publication Date
CN112614097A CN112614097A (en) 2021-04-06
CN112614097B true CN112614097B (en) 2022-02-01

Family

ID=75240205

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011490481.0A Active CN112614097B (en) 2020-12-16 2020-12-16 Method for detecting foreign matter on axle box rotating arm of railway train

Country Status (1)

Country Link
CN (1) CN112614097B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109766884A (en) * 2018-12-26 2019-05-17 哈尔滨工程大学 A kind of airfield runway foreign matter detecting method based on Faster-RCNN
CN111080602A (en) * 2019-12-12 2020-04-28 哈尔滨市科佳通用机电股份有限公司 Method for detecting foreign matters in water leakage hole of railway wagon
CN111582072A (en) * 2020-04-23 2020-08-25 浙江大学 Transformer substation picture bird nest detection method combining ResNet50+ FPN + DCN
CN111652211A (en) * 2020-05-21 2020-09-11 哈尔滨市科佳通用机电股份有限公司 Method for detecting foreign matter hanging fault of motor car anti-snaking shock absorber mounting seat

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108572183B (en) * 2017-03-08 2021-11-30 清华大学 Inspection apparatus and method of segmenting vehicle image
CN111145239B (en) * 2019-12-30 2022-02-11 南京航空航天大学 Aircraft fuel tank redundancy automatic detection method based on deep learning
CN112001908B (en) * 2020-08-25 2021-03-09 哈尔滨市科佳通用机电股份有限公司 Railway freight car sleeper beam hole carried foreign matter detection method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Faster R-CNN: towards real-time object detection with region proposal networks; Shaoqing Ren et al; arXiv:1506.01497v3; 2016-01-06; full text *
Railway foreign object detection algorithm based on deep background difference; Du Xingqiang; China Masters' Theses Full-text Database, Engineering Science and Technology I; 2020-01-15; section 4.2 *

Also Published As

Publication number Publication date
CN112614097A (en) 2021-04-06

Similar Documents

Publication Publication Date Title
CN109886298B (en) Weld quality detection method based on convolutional neural network
CN111080598B (en) Bolt and nut missing detection method for coupler yoke key safety crane
CN112434695B (en) Upper pull rod fault detection method based on deep learning
CN108711148B (en) Tire defect intelligent detection method based on deep learning
CN110992349A (en) Underground pipeline abnormity automatic positioning and identification method based on deep learning
CN112528979B (en) Transformer substation inspection robot obstacle distinguishing method and system
CN111862029A (en) Fault detection method for bolt part of vertical shock absorber of railway motor train unit
CN111127448B (en) Method for detecting air spring fault based on isolated forest
CN111080600A (en) Fault identification method for split pin on spring supporting plate of railway wagon
CN113139572B (en) Image-based train air spring fault detection method
CN110186375A (en) Intelligent high-speed rail white body assemble welding feature detection device and detection method
CN108508023B (en) Defect detection system for contact end jacking bolt in railway contact network
CN110728269B (en) High-speed rail contact net support pole number plate identification method based on C2 detection data
CN113788051A (en) Train on-station running state monitoring and analyzing system
CN112508911A (en) Rail joint touch net suspension support component crack detection system based on inspection robot and detection method thereof
CN113870202A (en) Far-end chip defect detection system based on deep learning technology
CN111080621A (en) Method for identifying railway wagon floor damage fault image
CN115424128A (en) Fault image detection method and system for lower link of freight car bogie
CN115527170A (en) Method and system for identifying closing fault of door stopper handle of automatic freight car derailing brake device
CN114549414A (en) Abnormal change detection method and system for track data
CN117252840B (en) Photovoltaic array defect elimination evaluation method and device and computer equipment
CN112330633B (en) Jumper wire adhesive tape damage fault image segmentation method based on self-adaptive band-pass filtering
CN117876329A (en) Real-time road disease detection method based on radar, video and data analysis
CN112614097B (en) Method for detecting foreign matter on axle box rotating arm of railway train
CN109934172B (en) GPS-free full-operation line fault visual detection and positioning method for high-speed train pantograph

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant