CN112966603B - Fault identification method for falling of cab apron of railway wagon - Google Patents

Fault identification method for falling of cab apron of railway wagon

Info

Publication number
CN112966603B
CN112966603B (application CN202110244699.6A; also published as CN112966603A)
Authority
CN
China
Prior art keywords
layer
feature map
convolutional layer
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110244699.6A
Other languages
Chinese (zh)
Other versions
CN112966603A (en)
Inventor
汤岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Kejia General Mechanical and Electrical Co Ltd
Original Assignee
Harbin Kejia General Mechanical and Electrical Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Kejia General Mechanical and Electrical Co Ltd
Priority to CN202110244699.6A
Publication of CN112966603A
Application granted
Publication of CN112966603B

Classifications

    • G06V 20/20: Scenes; scene-specific elements in augmented reality scenes
    • G06N 3/045: Neural network architectures; combinations of networks
    • G06N 3/08: Neural networks; learning methods
    • G06V 10/26: Image preprocessing; segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/44: Extraction of image or video features; local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for identifying the fault in which the cab apron of a railway wagon has fallen. It aims to solve the low accuracy and efficiency of fault detection in the existing method. The process is as follows: first, original image data are collected; second, sample images containing the cab apron region are obtained; third, three classes are annotated: cab apron, vehicle body and triangular base; fourth, a segmentation image data set is obtained; fifth, the target edges are drawn on a mask image; where a target edge lies on the peripheral boundary of the mask image, the mask image at that edge is set to 0; otherwise the mask image at the target edge is set to 255, and the mask image is dilated to obtain an edge image; sixth, an edge image data set is obtained; seventh, an edge detection network is built; eighth, an instance segmentation network is built; ninth, a segmentation network is obtained; and tenth, the segmentation network is used to judge the falling fault of the cab apron of the railway wagon. The invention belongs to the field of fault image identification.

Description

Fault identification method for falling of cab apron of railway wagon
Technical Field
The invention relates to a fault identification method for the falling of a cab apron of a railway wagon.
Background
In the field of railway safety, the traditional approach is for detection equipment to photograph the train and for inspectors to find the fault points by examining the photographs manually. This allows faults to be detected while the vehicle is travelling, without stopping it. However, manual observation has drawbacks: it is fatiguing, labour-intensive and requires training. At the present stage more and more of this work can be taken over by machines, which are low in cost, apply a uniform standard and do not tire over 24-hour operation, so replacing traditional manual detection with image recognition technology is feasible.
A cab apron falling fault must be judged from a distance measurement, which is difficult to make accurately by eye, and the targets photographed at different stations differ in size, which makes visual detection even harder. Conventional image algorithms require a large amount of tuning for the different images produced at different stations. Fault identification with a deep learning neural network can therefore be carried out accurately according to the fault standard.
Disclosure of Invention
The invention aims to solve the problems of low accuracy and efficiency of fault detection in the conventional manual fault detection method, and provides a fault identification method for the falling of a cab apron of a railway wagon.
The technical scheme adopted by the invention for solving the technical problems is as follows: the method for identifying the falling fault of the cab apron of the railway wagon comprises the following specific processes:
step one, acquiring original image data;
step two, obtaining a sample image containing a cab apron position based on the original image data acquired in the step one;
thirdly, marking the sample image containing the cab apron part obtained in the second step by using a label, and marking three types of cab apron, a vehicle body and a triangular base;
step four, acquiring a segmentation image data set:
corresponding the sample image containing the cab apron part obtained in the step two and the cab apron marked in the step three, the vehicle body and the triangular base image one by one;
fifthly, converting the images of the cab apron, the vehicle body and the triangular base marked by the labels in the step three into binary images;
performing edge search on the binary image, finding out a target edge with a pixel value of 255, creating a mask image which contains all 0 pixels and has the same size as the sample image of the cab apron part obtained in the step two, and drawing the target edge on the mask image;
when the target edge is positioned at the peripheral boundary of the mask image, setting the mask image at the target edge as 0;
when the target edge is not positioned at the peripheral boundary of the mask image, setting the mask image at the target edge to be 255, and expanding the drawn mask image to obtain an edge image;
step six, obtaining an edge image data set:
corresponding the sample image containing the cab apron part obtained in the step two and the mask image expanded in the step five one by one;
step seven, building an edge detection network;
step eight, building an instance segmentation network;
training an edge detection network and an example segmentation network based on the edge image data set and the segmentation image data set to obtain a segmentation network;
and step ten, judging the falling fault of the cab apron of the railway wagon by utilizing the segmentation network.
Optionally, in the first step, original image data is collected; the specific process is as follows:
building imaging equipment at a fixed detection station, acquiring a 2D linear array gray image of the truck, and selecting a camera shooting image above the side part of the truck as an original image; and acquiring original images shot by different sites under different conditions.
Optionally, in the second step, a sample image containing the cab apron position is obtained based on the original image data acquired in the first step; the specific process is as follows:
and intercepting the original image according to the prior knowledge and the wheel base information to obtain a sample image containing the cab apron position.
Optionally, an edge detection network is established in the seventh step; the specific process is as follows:
the edge detection network comprises a first convolution layer, a second convolution layer, a first maximum pooling layer, a third convolution layer, a fourth convolution layer, a second maximum pooling layer, a fifth convolution layer, a sixth convolution layer, a third maximum pooling layer, a seventh convolution layer, an eighth convolution layer, a fourth maximum pooling layer, a ninth convolution layer, a tenth convolution layer, an eleventh convolution layer, a twelfth convolution layer, a thirteenth convolution layer, a fourteenth convolution layer, a fifteenth convolution layer, a sixteenth convolution layer and a softmax layer;
the convolution kernel size of the first convolution layer, the second convolution layer, the third convolution layer, the fourth convolution layer, the fifth convolution layer, the sixth convolution layer, the seventh convolution layer, the eighth convolution layer, the ninth convolution layer and the tenth convolution layer is 3 x 3;
the convolution kernel size of the eleventh convolution layer, the twelfth convolution layer, the thirteenth convolution layer, the fourteenth convolution layer, the fifteenth convolution layer, and the sixteenth convolution layer is 1 x 1.
Optionally, the connection relationship of the edge detection network is characterized as follows:
the output end of the first convolution layer of the edge detection network is connected with the input end of the second convolution layer, the output end of the second convolution layer is connected with the input end of the first maximum pooling layer, the output end of the first maximum pooling layer is connected with the input end of the third convolution layer, the output end of the third convolution layer is connected with the input end of the fourth convolution layer, the output end of the fourth convolution layer is connected with the input end of the second maximum pooling layer, the output end of the second maximum pooling layer is connected with the input end of the fifth convolution layer, the output end of the fifth convolution layer is connected with the input end of the sixth convolution layer, the output end of the sixth convolution layer is connected with the input end of the third maximum pooling layer, the output end of the third maximum pooling layer is connected with the input end of the seventh convolution layer, the output end of the seventh convolution layer is connected with the input end of the eighth convolution layer, the output end of the eighth convolution layer is connected with the input end of the fourth maximum pooling layer, and the output end of the fourth maximum pooling layer is connected with the input end of the ninth convolution layer, the output end of the ninth convolution layer is connected with the input end of the tenth convolution layer;
summing the outputs of the first convolution layer and the second convolution layer, then connecting the outputs of the first convolution layer and the second convolution layer to the eleventh convolution layer, and performing 1-time up-sampling on the output of the eleventh convolution layer to obtain a result of performing up-sampling on the output of the eleventh convolution layer;
summing the outputs of the third convolutional layer and the fourth convolutional layer, then connecting the sum to the twelfth convolutional layer, and performing 2-time up-sampling on the output of the twelfth convolutional layer to obtain a result of up-sampling the output of the twelfth convolutional layer;
summing the outputs of the fifth convolutional layer and the sixth convolutional layer, then connecting the outputs to the thirteenth convolutional layer, and performing up-sampling on the output of the thirteenth convolutional layer by 4 times to obtain a result of performing up-sampling on the output of the thirteenth convolutional layer;
summing the outputs of the seventh convolutional layer and the eighth convolutional layer, then connecting the outputs to the fourteenth convolutional layer, and performing 8-time upsampling on the output of the fourteenth convolutional layer to obtain an upsampling result on the output of the fourteenth convolutional layer;
summing the outputs of the ninth convolutional layer and the tenth convolutional layer, then connecting the sum to the fifteenth convolutional layer, and performing 16-time upsampling on the output of the fifteenth convolutional layer to obtain an upsampling result of the output of the fifteenth convolutional layer;
and performing splicing operation on the obtained result of up-sampling the output of the eleventh convolutional layer, the obtained result of up-sampling the output of the twelfth convolutional layer, the obtained result of up-sampling the output of the thirteenth convolutional layer, the obtained result of up-sampling the output of the fourteenth convolutional layer and the obtained result of up-sampling the output of the fifteenth convolutional layer, wherein the output result passes through the sixteenth convolutional layer, and the output result of the sixteenth convolutional layer passes through softmax to obtain an edge detection characteristic diagram B2.
Optionally, an example segmentation network is built in the step eight; the specific process is as follows:
the example segmentation network includes a seventeenth convolutional layer, an eighteenth convolutional layer, a nineteenth convolutional layer, a twentieth convolutional layer, a twenty-first convolutional layer, a twenty-second convolutional layer, a twenty-third convolutional layer, a twenty-fourth convolutional layer, a twenty-fifth convolutional layer, a twenty-sixth convolutional layer, a twenty-seventh convolutional layer, a twenty-eighth convolutional layer, a twenty-ninth convolutional layer, a thirtieth convolutional layer, a thirty-first convolutional layer, an RoIAlign layer, and a fully connected layer.
Optionally, in the ninth step, training the edge detection network and the example segmentation network based on the edge image dataset and the segmentation image dataset to obtain a segmentation network; the specific process is as follows:
inputting the edge image data set obtained in the sixth step into an edge detection network for training to obtain the weight corresponding to the trained edge detection network, and outputting an edge detection characteristic diagram B2;
inputting the segmented image data set obtained in the fourth step into an example segmentation network for training until convergence to obtain a trained example segmentation network, and outputting an example segmentation network characteristic diagram B1; the method specifically comprises the following steps:
inputting the segmented image data set obtained in the fourth step into a seventeenth convolutional layer of the example segmentation network, inputting a feature map output by the seventeenth convolutional layer into an eighteenth convolutional layer, performing 0.5-time downsampling on the feature map output by the eighteenth convolutional layer to obtain a feature map F1, inputting a downsampled feature map F1 into a nineteenth convolutional layer, inputting a feature map output by the nineteenth convolutional layer into a twentieth convolutional layer, performing 0.5-time downsampling on the feature map output by the twentieth convolutional layer to obtain a feature map F2, inputting a downsampled feature map F2 into a twenty-first convolutional layer, inputting a feature map output by the twenty-first convolutional layer into a twenty-second convolutional layer, performing 0.5-time downsampling on the feature map output by the twenty-second convolutional layer to obtain a feature map F3, inputting a downsampled feature map F3 into a twenty-third convolutional layer, inputting a feature map output by the twenty-third convolutional layer into a twenty-fourth convolutional layer, and performing 0.5-time downsampling on the feature map output by the twenty-fourth convolutional layer to obtain a feature map F4;
inputting the feature map F4 into a twenty-fifth convolutional layer to obtain a feature map P5, upsampling the feature map P5 to the same size as the feature map F3 to obtain a feature map a, inputting the feature map F3 into a twenty-sixth convolutional layer to obtain a feature map b, and adding and fusing the feature map a and the feature map b to obtain a feature map P4;
up-sampling the characteristic diagram P4 to obtain a characteristic diagram c with the same size as the characteristic diagram F2, inputting the characteristic diagram F2 into a twenty-seventh convolutional layer to obtain a characteristic diagram d, and adding and fusing the characteristic diagram c and the characteristic diagram d to obtain a characteristic diagram P3;
upsampling the feature map P3 to obtain a feature map e with the same size as the feature map F1, inputting the feature map F1 into the twenty-eighth convolutional layer to obtain a feature map F, and adding and fusing the feature map e and the feature map F to obtain a feature map P2;
feature map N2 is the same as feature map P2, and feature map N3 is obtained by inputting feature map N2 to the twenty-ninth convolutional layer to obtain feature map g and adding feature map P3 to the obtained feature map g;
inputting the feature map N3 into the thirtieth convolutional layer to obtain a feature map h, and adding the feature map P4 to the obtained feature map h to obtain a feature map N4;
inputting the feature map N4 into the thirty-first convolutional layer to obtain a feature map i, and adding the feature map P5 to the obtained feature map i to obtain a feature map N5;
inputting N2, N3, N4 and N5 into an RoIAlign layer to unify the feature maps of different sizes to the same size, then adding and fusing the four same-sized feature maps to obtain a feature map B1, performing a fully connected operation on the feature map B1, and performing classification and bounding-box regression respectively after the fully connected operation;
and loading the trained edge detection network weight, directly adding an edge detection characteristic diagram B2 and a characteristic diagram B1 output by the edge detection network into a mask branch of the PANet, and selecting iou loss as edge loss to obtain a segmentation network.
Optionally, the mask branch of the PANet includes a ROI pooling layer, a FCN, and a full connectivity layer;
intercepting the feature maps B1 and B2 according to the ROI information to obtain an intercepted feature map B1 and an intercepted feature map B2, inputting the intercepted B1 to a thirty-second convolutional layer and a fully connected layer respectively, adding and fusing the intercepted B2 with the output of the thirty-second convolutional layer and the output of the fully connected layer, and passing the fused feature map through an activation function to produce the output.
Optionally, feature maps B1 and B2 are intercepted according to the ROI information; the method specifically comprises the following steps:
widening the ROI of the detection target frame by 20 pixels towards the left and right in the horizontal direction respectively, and if the ROI of the detection target frame after widening exceeds the range of a feature map B1 and a feature map B2, taking the boundaries of the feature maps B1 and B2 for intercepting; and if the detection target frame ROI after the broadening does not exceed the range of the feature map B1 and the feature map B2, cutting the feature maps B1 and B2 according to the detection target frame ROI after the broadening.
Optionally, in the step ten, judging the falling fault of the cab apron of the railway wagon by using a segmentation network; the specific process is as follows:
step eleven: when the truck passes through the detection base station, the camera acquires a linear array image;
step twelve: intercepting a position image of the cab apron part by using prior knowledge and wheelbase information;
step thirteen: inputting the intercepted cab apron position image into a segmentation network to segment three parts of a vehicle body, a cab apron and a triangular base;
step fourteen: counting the number of pixels spanned horizontally by the cab apron base as segmented by the segmentation network, measuring the actual width of the cab apron base, and computing the ratio beta of the actual width of the cab apron base to that number of pixels;
step fifteen: finding the position of the upper end of the vehicle body, calculating the number of pixels between the cab apron and the vehicle body from top to bottom, and taking the maximum value of the number of the pixels as a discrimination pixel value;
step sixteen: multiplying the discrimination pixel value by the ratio beta to obtain the actual distance between the cab apron and the vehicle body; alarming when the actual distance between the cab apron and the vehicle body is greater than 180 mm; and when the actual distance between the cab apron and the vehicle body is less than or equal to 180 mm, continuing to identify the next image.
The invention has the beneficial effects that:
and (3) mounting the linear array high-speed linear array camera at a detection station beside the track, and obtaining a 2D linear array image when the truck passes through the detection station. And intercepting the ferry plate subgraph according to the hardware and the wheel base information. And collecting the region subgraph and marking the region subgraph to obtain a training data set. And building a segmentation model, training the model to obtain network weight, and performing logic judgment on faults according to network output. An algorithm is arranged at the detection station and is called to carry out fault identification when vehicles pass by. When the vehicle passes and has a fault, alarm information is output and uploaded to a platform for manual confirmation and alarm.
1. The distance between the cab apron and the vehicle body is difficult to judge accurately by manual inspection, and a fault tends to be noticed only when it has become severe; the algorithm calculates the distance between the cab apron and the vehicle body accurately and judges it against the standard.
2. Calibration with a reference object of fixed size makes the method suitable for different stations and different camera angles without adjusting the programme, giving a high degree of self-adaptation.
3. By adding the edge attention module, the edges of the segmentation result are more complete and the result is more accurate.
4. Fault judgement is based on a distance calculated from the segmentation, so faults of different severities can be identified under the different standards of different railway bureaus, and one training run can serve these different standards.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram of an edge detection network according to the present invention;
fig. 3 is a schematic diagram of the overall network structure of the present invention.
Detailed Description
It should be noted that, in the present invention, the embodiments disclosed in the present application may be combined with each other without conflict.
The first embodiment is as follows: the embodiment is described with reference to fig. 1, and a specific process of the method for identifying the falling-off fault of the cab apron of the railway wagon in the embodiment is as follows:
step one, acquiring original image data;
step two, obtaining a sample image containing a cab apron position based on the original image data acquired in the step one;
thirdly, marking the sample image containing the cab apron part obtained in the second step by using a label, and marking three types of cab apron, a vehicle body and a triangular base;
because the camera angles and the camera-to-body distances differ between stations, a part of fixed size is needed for pixel calibration; the position and posture of the cab apron change easily, but the triangular base that supports the cab apron is fixed and does not move, so the triangular base on the vehicle body is annotated in order to unify the fault standard, and this target is used for pixel calibration so that the same programme and the same standard can be applied at different stations;
step four, acquiring a segmentation image data set:
corresponding the sample image containing the cab apron part obtained in the step two and the cab apron marked in the step three, the vehicle body and the triangular base image one by one;
fifthly, converting the images of the cab apron, the car body and the triangular base marked by the labels in the step three into binary images (the pixel values of the cab apron, the car body and the triangular base are 255, and the pixel value of the background is 0);
performing edge search on the binary image with an OpenCV function, finding the target edges whose pixel value is 255, creating an all-zero mask image of the same size as the sample image of the cab apron region obtained in step two, and drawing the target edges on the mask image;
when the target edge is positioned at the peripheral boundary of the mask image, setting the mask image at the target edge as 0;
when the target edge is not positioned at the peripheral boundary of the mask image, setting the mask image at the target edge to 255, and dilating the drawn mask image to obtain an edge image whose edges are 5 pixels wide (a minimal OpenCV sketch of this step is given after this embodiment);
step six, obtaining an edge image data set:
corresponding the sample image containing the cab apron part obtained in the step two and the mask image expanded in the step five one by one;
step seven, building an edge detection network;
step eight, building an instance segmentation network;
training an edge detection network and an example segmentation network based on the edge image data set and the segmentation image data set to obtain a segmentation network;
and step ten, judging the falling fault of the cab apron of the railway wagon by utilizing the segmentation network.
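For illustration, the edge-label construction of step five can be sketched with OpenCV and NumPy as follows. This is a minimal sketch, not code from the patent: it assumes the annotation has already been rendered as a binary mask, and the function and variable names are illustrative.

```python
import cv2
import numpy as np


def make_edge_label(binary_mask: np.ndarray, edge_width: int = 5) -> np.ndarray:
    """Build the edge label image of step five.

    binary_mask: uint8 image, 255 for cab apron / vehicle body / triangular base, 0 for background.
    Returns a mask of the same size in which target edges are 255 and edge_width pixels wide,
    while edges lying on the peripheral boundary of the image are left at 0.
    """
    h, w = binary_mask.shape[:2]
    edge = np.zeros((h, w), dtype=np.uint8)  # all-zero mask, same size as the sample image

    # find the target edges (contours of the 255-valued regions)
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    for contour in contours:
        for x, y in contour.reshape(-1, 2):
            # edge points on the peripheral boundary stay 0; all others are set to 255
            if 0 < x < w - 1 and 0 < y < h - 1:
                edge[y, x] = 255

    # dilate the drawn edges so that they become edge_width pixels wide
    kernel = np.ones((edge_width, edge_width), dtype=np.uint8)
    return cv2.dilate(edge, kernel, iterations=1)
```

Dilating a one-pixel contour with a 5 x 5 kernel yields the 5-pixel-wide edges mentioned in this embodiment.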
The second embodiment is as follows: this embodiment differs from the first embodiment in that, in step one, original image data are collected; the specific process is as follows:
building high-speed imaging equipment at a fixed detection station, acquiring 2D high-definition line-scan grey-scale images of the truck, and selecting the image from the camera above the side of the truck (used for both the left and right sides) as the original image; images are collected over a long period, and original images taken at different stations under different conditions are obtained; "different conditions" means the images contain various natural disturbances such as illumination and rain, which guarantees the diversity of the data so that the final model is more robust.
Other steps and parameters are the same as those in the first embodiment.
The third concrete implementation mode: this embodiment differs from the first or second embodiment in that, in step two, a sample image containing the cab apron region is obtained based on the original image data acquired in step one; the specific process is as follows:
intercepting the original image according to prior knowledge and the wheelbase information provided by the hardware (data acquisition equipment such as sensors) to obtain a sample image containing the cab apron region.
Other steps and parameters are the same as those in the first or second embodiment.
The fourth concrete implementation mode: the difference between this embodiment and the first to third embodiments is that, in the seventh step, an edge detection network is established; the specific process is as follows:
and for the cab apron falling fault identification criterion which is the distance between the cab apron and the vehicle body, firstly, the cab apron and the vehicle body are divided, and the actual distance is calculated through pixel conversion. The network directly influences the calculation result of the distance on the cutting quality of the vehicle body and the cab apron edge. When the mask-r-cnn is used for segmentation, the edge part of the segmentation result is always missing, so the invention provides an edge-enhanced image segmentation method to reduce the problem of missing of the target segmentation edge.
Firstly, an edge enhancement network is built, so that the network pays more attention to an edge area when being segmented. The network structure is shown in fig. 1, the network is divided into 5 layers, the first 4 layers are double 3 × 3 convolution pooling, and the fifth layer only comprises double 3 × 3 convolution. And summing two convolutions in each layer, then performing 1 × 1 convolution, converting the feature images into the same size through upsampling with different sizes, and then performing splicing operation. And outputting the result to obtain an edge detection result through softmax, and taking the result as a segmentation network enhancement module.
The edge detection network comprises a first convolution layer, a second convolution layer, a first maximum pooling layer, a third convolution layer, a fourth convolution layer, a second maximum pooling layer, a fifth convolution layer, a sixth convolution layer, a third maximum pooling layer, a seventh convolution layer, an eighth convolution layer, a fourth maximum pooling layer, a ninth convolution layer, a tenth convolution layer, an eleventh convolution layer, a twelfth convolution layer, a thirteenth convolution layer, a fourteenth convolution layer, a fifteenth convolution layer, a sixteenth convolution layer and a softmax layer;
the convolution kernel size of the first convolution layer, the second convolution layer, the third convolution layer, the fourth convolution layer, the fifth convolution layer, the sixth convolution layer, the seventh convolution layer, the eighth convolution layer, the ninth convolution layer and the tenth convolution layer is 3 x 3;
the convolution kernel size of the eleventh convolution layer, the twelfth convolution layer, the thirteenth convolution layer, the fourteenth convolution layer, the fifteenth convolution layer, and the sixteenth convolution layer is 1 x 1.
Other steps and parameters are the same as those in one of the first to third embodiments.
The fifth concrete implementation mode: the difference between this embodiment and one of the first to the fourth embodiments is that the connection relationship of the edge detection network is characterized as follows:
the output end of the first convolution layer of the edge detection network is connected with the input end of the second convolution layer, the output end of the second convolution layer is connected with the input end of the first maximum pooling layer, the output end of the first maximum pooling layer is connected with the input end of the third convolution layer, the output end of the third convolution layer is connected with the input end of the fourth convolution layer, the output end of the fourth convolution layer is connected with the input end of the second maximum pooling layer, the output end of the second maximum pooling layer is connected with the input end of the fifth convolution layer, the output end of the fifth convolution layer is connected with the input end of the sixth convolution layer, the output end of the sixth convolution layer is connected with the input end of the third maximum pooling layer, the output end of the third maximum pooling layer is connected with the input end of the seventh convolution layer, the output end of the seventh convolution layer is connected with the input end of the eighth convolution layer, the output end of the eighth convolution layer is connected with the input end of the fourth maximum pooling layer, and the output end of the fourth maximum pooling layer is connected with the input end of the ninth convolution layer, the output end of the ninth convolution layer is connected with the input end of the tenth convolution layer;
summing the outputs of the first convolution layer and the second convolution layer, then connecting the outputs of the first convolution layer and the second convolution layer to the eleventh convolution layer, and performing 1-time up-sampling on the output of the eleventh convolution layer to obtain a result of performing up-sampling on the output of the eleventh convolution layer;
summing the outputs of the third convolutional layer and the fourth convolutional layer, then connecting the sum to the twelfth convolutional layer, and performing 2-time up-sampling on the output of the twelfth convolutional layer to obtain a result of up-sampling the output of the twelfth convolutional layer;
summing the outputs of the fifth convolutional layer and the sixth convolutional layer, then connecting the outputs to the thirteenth convolutional layer, and performing up-sampling on the output of the thirteenth convolutional layer by 4 times to obtain a result of performing up-sampling on the output of the thirteenth convolutional layer;
summing the outputs of the seventh convolutional layer and the eighth convolutional layer, then connecting the outputs to the fourteenth convolutional layer, and performing 8-time upsampling on the output of the fourteenth convolutional layer to obtain an upsampling result on the output of the fourteenth convolutional layer;
summing the outputs of the ninth convolutional layer and the tenth convolutional layer, then connecting the sum to the fifteenth convolutional layer, and performing 16-time upsampling on the output of the fifteenth convolutional layer to obtain an upsampling result of the output of the fifteenth convolutional layer;
and performing splicing operation on the obtained result of up-sampling the output of the eleventh convolutional layer, the obtained result of up-sampling the output of the twelfth convolutional layer, the obtained result of up-sampling the output of the thirteenth convolutional layer, the obtained result of up-sampling the output of the fourteenth convolutional layer and the obtained result of up-sampling the output of the fifteenth convolutional layer, wherein the output result passes through the sixteenth convolutional layer, and the output result of the sixteenth convolutional layer is connected with softmax.
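For illustration, a minimal PyTorch sketch of a network with this layout is given below: two 3 x 3 convolutions per stage, max pooling after the first four stages, a 1 x 1 side convolution on the summed convolution outputs of each stage, up-sampling of the side outputs by factors 1, 2, 4, 8 and 16, concatenation, a fusing 1 x 1 convolution and softmax. The channel widths, the ReLU activations and the bilinear up-sampling are assumptions not specified in the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class _Stage(nn.Module):
    """Two 3x3 convolutions; returns the stage output and the sum of both convolution outputs."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.conv2 = nn.Sequential(nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        a = self.conv1(x)
        b = self.conv2(a)
        return b, a + b  # b feeds the next stage, a + b feeds the 1x1 side convolution


class EdgeDetectionNet(nn.Module):
    """Sketch of the 5-stage edge detection network (convolution layers 1-16 plus softmax)."""

    def __init__(self, in_ch: int = 1, num_classes: int = 2, width: int = 32):
        super().__init__()
        chs = [width, width * 2, width * 4, width * 8, width * 8]  # assumed channel widths
        stages, prev = [], in_ch
        for ch in chs:
            stages.append(_Stage(prev, ch))
            prev = ch
        self.stages = nn.ModuleList(stages)
        self.pool = nn.MaxPool2d(2)  # the four max pooling layers
        self.side = nn.ModuleList([nn.Conv2d(ch, num_classes, 1) for ch in chs])  # conv layers 11-15
        self.fuse = nn.Conv2d(num_classes * len(chs), num_classes, 1)  # conv layer 16

    def forward(self, x):
        h, w = x.shape[-2:]
        side_outputs = []
        for i, (stage, side) in enumerate(zip(self.stages, self.side)):
            x, summed = stage(x)
            s = side(summed)
            # up-sample each side output back to the input size (factors 1, 2, 4, 8, 16)
            side_outputs.append(F.interpolate(s, size=(h, w), mode="bilinear", align_corners=False))
            if i < len(self.stages) - 1:  # the fifth stage is not followed by pooling
                x = self.pool(x)
        fused = self.fuse(torch.cat(side_outputs, dim=1))  # splice and fuse with the 1x1 convolution
        return F.softmax(fused, dim=1)  # edge detection feature map B2
```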
Other steps and parameters are the same as in one of the first to fourth embodiments.
The sixth specific implementation mode: the difference between the present embodiment and one of the first to fifth embodiments is that, in the step eight, an example partition network is established; the specific process is as follows:
When networks such as U-Net and DeepLabv3 are used for target segmentation, targets are easily mis-segmented, which affects the distance judgement. The invention uses PANet to segment the vehicle body, the ferry plate and the triangular base, and, on the basis of PANet, adds the output of the edge enhancement network to the mask branch to optimise the target segmentation edges. The overall network structure is shown in fig. 3.
The example segmentation network includes a seventeenth convolutional layer, an eighteenth convolutional layer, a nineteenth convolutional layer, a twentieth convolutional layer, a twenty-first convolutional layer, a twenty-second convolutional layer, a twenty-third convolutional layer, a twenty-fourth convolutional layer, a twenty-fifth convolutional layer, a twenty-sixth convolutional layer, a twenty-seventh convolutional layer, a twenty-eighth convolutional layer, a twenty-ninth convolutional layer, a thirtieth convolutional layer, a thirty-first convolutional layer, an RoIAlign layer, and a fully connected layer.
Other steps and parameters are the same as those in one of the first to fifth embodiments.
The seventh embodiment: this embodiment differs from one of the first to sixth embodiments in that, in step nine, the edge detection network and the example segmentation network are trained based on the edge image dataset and the segmentation image dataset to obtain the segmentation network; the specific process is as follows:
the network is trained in two steps for easier fitting of the network.
Inputting the edge image data set obtained in the sixth step into an edge detection network for training until convergence, obtaining the weight corresponding to the trained edge detection network, and outputting an edge detection characteristic diagram B2;
inputting the segmented image data set obtained in the fourth step into an example segmentation network for training until convergence to obtain a trained example segmentation network, and outputting an example segmentation network characteristic diagram B1; the method specifically comprises the following steps:
inputting the segmented image data set obtained in the fourth step into a seventeenth convolutional layer of the example segmentation network, inputting a feature map output by the seventeenth convolutional layer into an eighteenth convolutional layer, performing 0.5-time downsampling on the feature map output by the eighteenth convolutional layer to obtain a feature map F1, inputting a downsampled feature map F1 into a nineteenth convolutional layer, inputting a feature map output by the nineteenth convolutional layer into a twentieth convolutional layer, performing 0.5-time downsampling on the feature map output by the twentieth convolutional layer to obtain a feature map F2, inputting a downsampled feature map F2 into a twenty-first convolutional layer, inputting a feature map output by the twenty-first convolutional layer into a twenty-second convolutional layer, performing 0.5-time downsampling on the feature map output by the twenty-second convolutional layer to obtain a feature map F3, inputting a downsampled feature map F3 into a twenty-third convolutional layer, inputting a feature map output by the twenty-third convolutional layer into a twenty-fourth convolutional layer, and performing 0.5-time downsampling on the feature map output by the twenty-fourth convolutional layer to obtain a feature map F4;
inputting the feature map F4 into a twenty-fifth convolutional layer to obtain a feature map P5, upsampling the feature map P5 to the same size as the feature map F3 to obtain a feature map a, inputting the feature map F3 into a twenty-sixth convolutional layer to obtain a feature map b, and adding and fusing the feature map a and the feature map b to obtain a feature map P4;
up-sampling the characteristic diagram P4 to obtain a characteristic diagram c with the same size as the characteristic diagram F2, inputting the characteristic diagram F2 into a twenty-seventh convolutional layer to obtain a characteristic diagram d, and adding and fusing the characteristic diagram c and the characteristic diagram d to obtain a characteristic diagram P3;
upsampling the feature map P3 to obtain a feature map e with the same size as the feature map F1, inputting the feature map F1 into the twenty-eighth convolutional layer to obtain a feature map F, and adding and fusing the feature map e and the feature map F to obtain a feature map P2;
feature map N2 is the same as feature map P2 (the same feature map, only renamed); feature map N2 is input into the twenty-ninth convolutional layer (stride 2) to obtain feature map g, and feature map P3 is added to the obtained feature map g to obtain feature map N3;
inputting the feature map N3 into the thirtieth convolutional layer (stride 2) to obtain a feature map h, and adding the feature map P4 to the obtained feature map h to obtain a feature map N4;
inputting the feature map N4 into the thirty-first convolutional layer (stride 2) to obtain a feature map i, and adding the feature map P5 to the obtained feature map i to obtain a feature map N5;
inputting N2, N3, N4 and N5 into the RoIAlign layer (RoIAlign serves the same purpose as RoIPool but with better precision) to unify the feature maps of different sizes to the same size, then adding and fusing the four same-sized feature maps to obtain a feature map B1, performing a fully connected operation on the feature map B1, and performing classification and bounding-box regression respectively after the fully connected operation;
loading the trained edge detection network weights, and adding the edge detection feature map B2 output by the edge detection network directly to the feature map B1 in the mask branch of PANet so as to increase the accuracy of target edge segmentation; IoU loss is selected as the edge loss (IoU is the intersection over union, i.e. the area of the intersection of the prediction and the label divided by the area of their union; the loss is 1 minus IoU), and the segmentation network is obtained.
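For illustration, the following is a simplified PyTorch sketch of the feature-pyramid part described above: the top-down path producing P2 to P5 from F1 to F4, the bottom-up path producing N2 to N5, and an IoU-style edge loss. The backbone, the RoIAlign pooling and the classification and box-regression heads are omitted, and the kernel sizes and channel counts are assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PyramidFusion(nn.Module):
    """Top-down path (P2-P5) and bottom-up path (N2-N5) of the example segmentation network."""

    def __init__(self, chs=(64, 128, 256, 512), out_ch: int = 128):
        super().__init__()
        c1, c2, c3, c4 = chs  # channels of F1..F4 (assumed)
        self.p5 = nn.Conv2d(c4, out_ch, 1)    # stands in for the 25th convolution layer
        self.lat3 = nn.Conv2d(c3, out_ch, 1)  # 26th convolution layer
        self.lat2 = nn.Conv2d(c2, out_ch, 1)  # 27th convolution layer
        self.lat1 = nn.Conv2d(c1, out_ch, 1)  # 28th convolution layer
        # stride-2 convolutions of the bottom-up path (29th to 31st convolution layers)
        self.down3 = nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1)
        self.down4 = nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1)
        self.down5 = nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1)

    def forward(self, f1, f2, f3, f4):
        # top-down path: up-sample and add the lateral feature maps
        p5 = self.p5(f4)
        p4 = self.lat3(f3) + F.interpolate(p5, size=f3.shape[-2:], mode="nearest")
        p3 = self.lat2(f2) + F.interpolate(p4, size=f2.shape[-2:], mode="nearest")
        p2 = self.lat1(f1) + F.interpolate(p3, size=f1.shape[-2:], mode="nearest")
        # bottom-up path: N2 is P2 renamed; the interpolation guards against 1-pixel rounding
        n2 = p2
        n3 = p3 + F.interpolate(self.down3(n2), size=p3.shape[-2:], mode="nearest")
        n4 = p4 + F.interpolate(self.down4(n3), size=p4.shape[-2:], mode="nearest")
        n5 = p5 + F.interpolate(self.down5(n4), size=p5.shape[-2:], mode="nearest")
        return n2, n3, n4, n5


def iou_edge_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Edge loss of step nine: 1 minus the intersection over union of prediction and label."""
    inter = (pred * target).sum(dim=(-2, -1))
    union = pred.sum(dim=(-2, -1)) + target.sum(dim=(-2, -1)) - inter
    return (1.0 - inter / (union + eps)).mean()
```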
Other steps and parameters are the same as those in one of the first to sixth embodiments.
The specific implementation mode is eight: this embodiment differs from one of the first to seventh embodiments in that the mask branch of the PANet includes an ROI pooling layer, an FCN, and a fully connected layer;
intercepting the feature maps B1 and B2 according to the ROI information to obtain an intercepted feature map B1 and an intercepted feature map B2, inputting the intercepted B1 to a thirty-second convolutional layer and a fully connected layer respectively, adding and fusing the intercepted B2 with the output of the thirty-second convolutional layer and the output of the fully connected layer, and passing the fused feature map through an activation function to produce the output;
other steps and parameters are the same as those in one of the first to seventh embodiments.
The specific implementation method nine: the present embodiment differs from the first to eighth embodiments in that feature maps B1 and B2 are extracted based on ROI information; the method specifically comprises the following steps:
widening the ROI of the detection target frame by 20 pixels towards the left and right in the horizontal direction respectively, and if the ROI of the detection target frame after widening exceeds the range of a feature map B1 and a feature map B2, taking the boundaries of the feature maps B1 and B2 for intercepting; and if the detection target frame ROI after the broadening does not exceed the range of the feature map B1 and the feature map B2, cutting the feature maps B1 and B2 according to the detection target frame ROI after the broadening.
In testing it was found that the segmentation result sometimes has incomplete edges because the detection target frame (the ROI in fig. 3; the method works in two stages, first detecting the target and then segmenting the pixels inside the target frame) is smaller than the object. The detection target frame is therefore widened by 20 pixels to the left and to the right in the horizontal direction before interception, and if the widened frame exceeds the feature map boundary, the boundary is used instead. This solves the problem of missing boundaries caused by the localisation of the frame.
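For illustration, the 20-pixel ROI widening and the mask-branch fusion of embodiments eight and nine can be sketched as follows (PyTorch). The widening and boundary clamping follow the text above; the crop size, channel count, choice of sigmoid activation and all names are assumptions. In a full pipeline the crops would also be resized to a fixed size (for example by RoIAlign) before fusion.

```python
import torch
import torch.nn as nn


def widen_and_crop(feature_map: torch.Tensor, roi: tuple, widen: int = 20) -> torch.Tensor:
    """Widen an ROI by `widen` pixels to the left and right, clamped to the feature-map
    boundary as described above, then crop feature map B1 or B2 accordingly.

    feature_map: tensor of shape (C, H, W); roi: (x1, y1, x2, y2) in feature-map pixels.
    """
    _, _, w = feature_map.shape
    x1, y1, x2, y2 = roi
    x1 = max(0, x1 - widen)  # if the widened frame exceeds the boundary, the boundary is used
    x2 = min(w, x2 + widen)
    return feature_map[:, y1:y2, x1:x2]


class MaskBranchFusion(nn.Module):
    """Mask-branch fusion: the intercepted B1 goes through a convolution and a fully connected
    layer, the intercepted edge map B2 is added to both outputs, and the fused result is passed
    through an activation function."""

    def __init__(self, ch: int = 128, size: int = 28):
        super().__init__()
        self.conv = nn.Conv2d(ch, 1, 3, padding=1)          # stands in for the 32nd convolution layer
        self.fc = nn.Linear(ch * size * size, size * size)  # fully connected path
        self.size = size

    def forward(self, b1_crop: torch.Tensor, b2_crop: torch.Tensor) -> torch.Tensor:
        n = b1_crop.shape[0]
        conv_out = self.conv(b1_crop)                                          # (N, 1, size, size)
        fc_out = self.fc(b1_crop.flatten(1)).view(n, 1, self.size, self.size)  # (N, 1, size, size)
        fused = conv_out + fc_out + b2_crop                                    # add-and-fuse with the edge map
        return torch.sigmoid(fused)                                            # activation function output
```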
Other steps and parameters are the same as those in one to eight of the embodiments.
The detailed implementation mode is ten: this embodiment differs from one of the first to ninth embodiments in that, in step ten, the segmentation network is used to judge the falling fault of the cab apron of the railway wagon; the specific process is as follows:
step eleven: when the truck passes through the detection base station, the camera acquires a linear array image;
step twelve: intercepting a position image of the cab apron part by using prior knowledge and wheelbase information;
step thirteen: inputting the intercepted cab apron position image into a segmentation network to segment three parts of a vehicle body, a cab apron and a triangular base;
step fourteen: counting the number of pixels spanned horizontally by the cab apron base as segmented by the segmentation network, measuring the actual width of the cab apron base, and computing the ratio beta of the actual width of the cab apron base to that number of pixels;
step fifteen: finding the position of the upper end of the vehicle body, calculating the number of pixels between the cab apron and the vehicle body from top to bottom, and taking the maximum value of the number of the pixels as a discrimination pixel value;
step sixteen: multiplying the discrimination pixel value by the ratio beta to obtain the actual distance between the cab apron and the vehicle body; alarming when the actual distance between the cab apron and the vehicle body is greater than 180 mm; and when the actual distance between the cab apron and the vehicle body is less than or equal to 180 mm, continuing to identify the next image.
First the width W1 (mm) of the triangular base is measured on site with a ruler; the segmentation network gives the number of pixels P1 across the width of the triangular base, and W1/P1 is the actual distance beta corresponding to one pixel. The discrimination pixel value obtained in step fifteen is multiplied by beta to obtain the actual distance, and the judgement is made with this actual distance.
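For illustration, the judgement logic of steps fourteen to sixteen can be sketched as follows. The 180 mm threshold and the calibration beta = W1/P1 come from the text; the function and argument names are illustrative, and it is assumed that the segmentation network has already produced a binary mask of the triangular base and that the discrimination pixel value has been measured as described in step fifteen.

```python
import numpy as np

ALARM_THRESHOLD_MM = 180.0  # fault standard from step sixteen


def judge_apron_fault(base_mask: np.ndarray, gap_pixels: int, base_width_mm: float) -> bool:
    """Fault judgement of steps fourteen to sixteen.

    base_mask     : binary mask of the triangular base produced by the segmentation network.
    gap_pixels    : discrimination pixel value, i.e. the maximum number of pixels between the
                    cab apron and the vehicle body measured from top to bottom (step fifteen).
    base_width_mm : width W1 of the triangular base measured on site with a ruler.
    Returns True when an alarm should be raised.
    """
    # number of pixels P1 spanned by the base in the horizontal direction
    cols = np.where(base_mask.any(axis=0))[0]
    base_width_px = int(cols.max() - cols.min() + 1)

    beta = base_width_mm / base_width_px  # millimetres per pixel (W1 / P1)
    distance_mm = gap_pixels * beta       # actual cab apron to vehicle body distance
    return distance_mm > ALARM_THRESHOLD_MM
```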
The overall flow chart is shown in fig. 3.
Other steps and parameters are the same as those in one of the first to ninth embodiments.
The above-described calculation examples of the present invention are merely to explain the calculation model and the calculation flow of the present invention in detail, and are not intended to limit the embodiments of the present invention. It will be apparent to those skilled in the art that other variations and modifications of the present invention can be made based on the above description, and it is not intended to be exhaustive or to limit the invention to the precise form disclosed, and all such modifications and variations are possible and contemplated as falling within the scope of the invention.

Claims (7)

1. A fault identification method for the falling of a cab apron of a railway wagon is characterized by comprising the following steps:
step one, acquiring original image data;
step two, obtaining a sample image containing a cab apron position based on the original image data acquired in the step one;
thirdly, marking the sample image containing the cab apron part obtained in the second step by using a label, and marking three types of cab apron, a vehicle body and a triangular base;
step four, acquiring a segmentation image data set:
corresponding the sample image containing the cab apron part obtained in the step two and the cab apron marked in the step three, the vehicle body and the triangular base image one by one;
fifthly, converting the images of the cab apron, the vehicle body and the triangular base marked by the labels in the step three into binary images;
performing edge search on the binary image, finding out a target edge with a pixel value of 255, creating a mask image which contains all 0 pixels and has the same size as the sample image of the cab apron part obtained in the step two, and drawing the target edge on the mask image;
when the target edge is positioned at the peripheral boundary of the mask image, setting the mask image at the target edge as 0;
when the target edge is not positioned at the peripheral boundary of the mask image, setting the mask image at the target edge to be 255, and expanding the drawn mask image to obtain an edge image;
step six, obtaining an edge image data set:
corresponding the sample image containing the cab apron part obtained in the step two and the mask image expanded in the step five one by one;
step seven, building an edge detection network;
step eight, building an instance segmentation network;
training an edge detection network and an example segmentation network based on the edge image data set and the segmentation image data set to obtain a segmentation network;
step ten, judging the falling fault of the cab apron of the railway wagon by utilizing a segmentation network; the specific process is as follows:
step one): when the truck passes through the detection base station, the camera acquires a linear array image;
step two): intercepting a position image of the cab apron part by using prior knowledge and wheelbase information;
step three): inputting the intercepted cab apron position image into a segmentation network to segment three parts of a vehicle body, a cab apron and a triangular base;
step four): calculating the number of pixels of the cab apron base in the horizontal direction, which is divided in the division network, measuring the actual width of the cab apron base, and calculating the proportion beta of the actual width of the cab apron base to the number of the pixels;
step five): finding the position of the upper end of the vehicle body, calculating the number of pixels between the cab apron and the vehicle body from top to bottom, and taking the maximum value of the number of the pixels as a discrimination pixel value;
step six): multiplying the discrimination pixel value by the proportion beta to convert the actual distance between the cab apron and the vehicle body; alarming when the actual distance between the cab apron and the vehicle body is larger than 180 mm; when the actual distance between the cab apron and the vehicle body is less than or equal to 180mm, continuously identifying the next image;
establishing an edge detection network in the seventh step; the specific process is as follows:
the edge detection network comprises a first convolution layer, a second convolution layer, a first maximum pooling layer, a third convolution layer, a fourth convolution layer, a second maximum pooling layer, a fifth convolution layer, a sixth convolution layer, a third maximum pooling layer, a seventh convolution layer, an eighth convolution layer, a fourth maximum pooling layer, a ninth convolution layer, a tenth convolution layer, an eleventh convolution layer, a twelfth convolution layer, a thirteenth convolution layer, a fourteenth convolution layer, a fifteenth convolution layer, a sixteenth convolution layer and a softmax layer;
the convolution kernel size of the first convolution layer, the second convolution layer, the third convolution layer, the fourth convolution layer, the fifth convolution layer, the sixth convolution layer, the seventh convolution layer, the eighth convolution layer, the ninth convolution layer and the tenth convolution layer is 3 x 3;
the convolution kernel size of the eleventh convolution layer, the twelfth convolution layer, the thirteenth convolution layer, the fourteenth convolution layer, the fifteenth convolution layer and the sixteenth convolution layer is 1 x 1;
the connection relationship of the edge detection network is characterized as follows:
the output end of the first convolution layer of the edge detection network is connected with the input end of the second convolution layer, the output end of the second convolution layer is connected with the input end of the first maximum pooling layer, the output end of the first maximum pooling layer is connected with the input end of the third convolution layer, the output end of the third convolution layer is connected with the input end of the fourth convolution layer, the output end of the fourth convolution layer is connected with the input end of the second maximum pooling layer, the output end of the second maximum pooling layer is connected with the input end of the fifth convolution layer, the output end of the fifth convolution layer is connected with the input end of the sixth convolution layer, the output end of the sixth convolution layer is connected with the input end of the third maximum pooling layer, the output end of the third maximum pooling layer is connected with the input end of the seventh convolution layer, the output end of the seventh convolution layer is connected with the input end of the eighth convolution layer, the output end of the eighth convolution layer is connected with the input end of the fourth maximum pooling layer, and the output end of the fourth maximum pooling layer is connected with the input end of the ninth convolution layer, the output end of the ninth convolution layer is connected with the input end of the tenth convolution layer;
summing the outputs of the first convolution layer and the second convolution layer, then connecting the outputs of the first convolution layer and the second convolution layer to the eleventh convolution layer, and performing 1-time up-sampling on the output of the eleventh convolution layer to obtain a result of performing up-sampling on the output of the eleventh convolution layer;
summing the outputs of the third convolutional layer and the fourth convolutional layer, then connecting the sum to the twelfth convolutional layer, and performing 2-time up-sampling on the output of the twelfth convolutional layer to obtain a result of up-sampling the output of the twelfth convolutional layer;
summing the outputs of the fifth convolutional layer and the sixth convolutional layer, then connecting the outputs to the thirteenth convolutional layer, and performing up-sampling on the output of the thirteenth convolutional layer by 4 times to obtain a result of performing up-sampling on the output of the thirteenth convolutional layer;
summing the outputs of the seventh convolutional layer and the eighth convolutional layer, then connecting the outputs to the fourteenth convolutional layer, and performing 8-time upsampling on the output of the fourteenth convolutional layer to obtain an upsampling result on the output of the fourteenth convolutional layer;
summing the outputs of the ninth convolutional layer and the tenth convolutional layer, then connecting the sum to the fifteenth convolutional layer, and performing 16-time upsampling on the output of the fifteenth convolutional layer to obtain an upsampling result of the output of the fifteenth convolutional layer;
and performing splicing operation on the obtained result of up-sampling the output of the eleventh convolutional layer, the obtained result of up-sampling the output of the twelfth convolutional layer, the obtained result of up-sampling the output of the thirteenth convolutional layer, the obtained result of up-sampling the output of the fourteenth convolutional layer and the obtained result of up-sampling the output of the fifteenth convolutional layer, wherein the output result passes through the sixteenth convolutional layer, and the output result of the sixteenth convolutional layer is connected with the softmax layer.
2. The method for identifying the falling fault of the cab apron of the railway wagon according to claim 1, wherein the acquiring of the original image data in the first step comprises the following specific process:
installing imaging equipment at a fixed detection station, acquiring 2D line-scan grayscale images of the wagon, and selecting the image captured by the camera above the side of the wagon as the original image; original images captured at different sites under different conditions are collected.
3. The method for identifying the falling fault of the cab apron of the railway wagon as claimed in claim 2, wherein the acquiring, in the second step, of a sample image containing the cab apron position based on the original image data acquired in the first step comprises the following specific process:
cropping the original image according to prior knowledge and wheel base information to obtain a sample image containing the cab apron position.
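A hedged sketch of this cropping step, assuming the original image is a NumPy array and that an approximate apron pixel position has already been derived from the wheel base information (the claim does not specify that mapping; the coordinates and crop size below are placeholders):

```python
# Hedged sketch: crop the cab apron region from the original line-scan image.
import numpy as np

def crop_cab_apron(original: np.ndarray, apron_x: int, apron_y: int,
                   width: int = 512, height: int = 512) -> np.ndarray:
    """Return a sample image centred on the prior apron position, clipped to the image."""
    h, w = original.shape[:2]
    x0 = max(0, min(apron_x - width // 2, w - width))
    y0 = max(0, min(apron_y - height // 2, h - height))
    return original[y0:y0 + height, x0:x0 + width]
```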
4. The method for identifying the falling fault of the cab apron of the railway wagon according to claim 3, wherein the establishing of the instance segmentation network in step eight comprises the following specific process:
the instance segmentation network comprises a seventeenth convolutional layer, an eighteenth convolutional layer, a nineteenth convolutional layer, a twentieth convolutional layer, a twenty-first convolutional layer, a twenty-second convolutional layer, a twenty-third convolutional layer, a twenty-fourth convolutional layer, a twenty-fifth convolutional layer, a twenty-sixth convolutional layer, a twenty-seventh convolutional layer, a twenty-eighth convolutional layer, a twenty-ninth convolutional layer, a thirtieth convolutional layer, a thirty-first convolutional layer, an ROIAlign layer, and a fully connected layer.
5. The method for identifying the falling fault of the cab apron of the railway wagon according to claim 4, wherein the training, in the ninth step, of the edge detection network and the instance segmentation network based on the edge image data set and the segmented image data set to obtain a segmentation network comprises the following specific process:
inputting the edge image data set obtained in the sixth step into the edge detection network and training until convergence to obtain the weights of the trained edge detection network, the edge detection network outputting an edge detection feature map B2;
inputting the segmented image data set obtained in the fourth step into the instance segmentation network and training until convergence to obtain the trained instance segmentation network, the instance segmentation network outputting a feature map B1; the method specifically comprises the following steps:
inputting the segmented image data set obtained in the fourth step into the seventeenth convolutional layer of the instance segmentation network, and inputting the feature map output by the seventeenth convolutional layer into the eighteenth convolutional layer; down-sampling the feature map output by the eighteenth convolutional layer by a factor of 0.5 to obtain a feature map F1, inputting the down-sampled feature map F1 into the nineteenth convolutional layer, and inputting the feature map output by the nineteenth convolutional layer into the twentieth convolutional layer; down-sampling the feature map output by the twentieth convolutional layer by a factor of 0.5 to obtain a feature map F2, inputting the down-sampled feature map F2 into the twenty-first convolutional layer, and inputting the feature map output by the twenty-first convolutional layer into the twenty-second convolutional layer; down-sampling the feature map output by the twenty-second convolutional layer by a factor of 0.5 to obtain a feature map F3, inputting the down-sampled feature map F3 into the twenty-third convolutional layer, and inputting the feature map output by the twenty-third convolutional layer into the twenty-fourth convolutional layer; and down-sampling the feature map output by the twenty-fourth convolutional layer by a factor of 0.5 to obtain a feature map F4;
inputting the feature map F4 into a twenty-fifth convolutional layer to obtain a feature map P5, upsampling the feature map P5 to the same size as the feature map F3 to obtain a feature map a, inputting the feature map F3 into a twenty-sixth convolutional layer to obtain a feature map b, and adding and fusing the feature map a and the feature map b to obtain a feature map P4;
up-sampling the feature map P4 to obtain a feature map c with the same size as the feature map F2, inputting the feature map F2 into a twenty-seventh convolutional layer to obtain a feature map d, and adding and fusing the feature map c and the feature map d to obtain a feature map P3;
up-sampling the feature map P3 to obtain a feature map e with the same size as the feature map F1, inputting the feature map F1 into the twenty-eighth convolutional layer to obtain a feature map f, and adding and fusing the feature map e and the feature map f to obtain a feature map P2;
the feature map N2 is identical to the feature map P2; inputting the feature map N2 into the twenty-ninth convolutional layer to obtain a feature map g, and adding the feature map P3 to the obtained feature map g to obtain a feature map N3;
inputting the feature map N3 into the thirtieth convolutional layer to obtain a feature map h, and adding the feature map P4 to the obtained feature map h to obtain a feature map N4;
inputting the feature map N4 into the thirty-first convolutional layer to obtain a feature map i, and adding the feature map P5 to the obtained feature map i to obtain a feature map N5;
inputting N2, N3, N4 and N5 into an ROIAlign layer to unify the feature maps of different sizes to the same size, adding and fusing the four feature maps of the same size to obtain a feature map B1, performing a fully connected operation on the feature map B1, and then performing classification and bounding box regression respectively after the fully connected operation;
and loading the trained edge detection network weights, directly adding the edge detection feature map B2 output by the edge detection network to the feature map B1 in the mask branch of the PANet, and selecting IoU loss as the edge loss to obtain the segmentation network.
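The pyramid construction in the preceding claim (F1-F4, the top-down path P2-P5 and the bottom-up augmentation N2-N5) can be sketched as follows in PyTorch. Channel widths, the use of max pooling as the 0.5x down-sampling, and stride-2 3x3 convolutions for the twenty-ninth to thirty-first layers are assumptions; only the fusion pattern follows the claim.

```python
# Hedged sketch of the claimed feature pyramid and bottom-up augmentation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def two_convs(cin, cout):
    # conv17/18, conv19/20, conv21/22, conv23/24: two 3x3 convolutions per stage
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

def upsample_to(t, ref):
    # resize t to the spatial size of ref before element-wise addition
    return F.interpolate(t, size=ref.shape[2:], mode="nearest")

class ApronPyramid(nn.Module):
    def __init__(self, c=64, d=256):
        super().__init__()
        self.s1, self.s2 = two_convs(3, c), two_convs(c, 2 * c)
        self.s3, self.s4 = two_convs(2 * c, 4 * c), two_convs(4 * c, 8 * c)
        self.down = nn.MaxPool2d(2, 2)              # assumed 0.5x down-sampling
        # conv25..conv28: 1x1 lateral convolutions onto a common width d
        self.c25, self.c26 = nn.Conv2d(8 * c, d, 1), nn.Conv2d(4 * c, d, 1)
        self.c27, self.c28 = nn.Conv2d(2 * c, d, 1), nn.Conv2d(c, d, 1)
        # conv29..conv31: bottom-up convolutions that halve the resolution
        self.c29 = nn.Conv2d(d, d, 3, stride=2, padding=1)
        self.c30 = nn.Conv2d(d, d, 3, stride=2, padding=1)
        self.c31 = nn.Conv2d(d, d, 3, stride=2, padding=1)

    def forward(self, x):                           # H and W assumed divisible by 16
        f1 = self.down(self.s1(x))                  # feature map F1
        f2 = self.down(self.s2(f1))                 # feature map F2
        f3 = self.down(self.s3(f2))                 # feature map F3
        f4 = self.down(self.s4(f3))                 # feature map F4
        p5 = self.c25(f4)                           # P5
        p4 = upsample_to(p5, f3) + self.c26(f3)     # a + b -> P4
        p3 = upsample_to(p4, f2) + self.c27(f2)     # c + d -> P3
        p2 = upsample_to(p3, f1) + self.c28(f1)     # e + f -> P2
        n2 = p2                                     # N2 is identical to P2
        n3 = self.c29(n2) + p3                      # g + P3 -> N3
        n4 = self.c30(n3) + p4                      # h + P4 -> N4
        n5 = self.c31(n4) + p5                      # i + P5 -> N5
        return n2, n3, n4, n5
```

The returned N2-N5 maps would then be unified by ROIAlign and summed into B1, followed by the fully connected head for classification and bounding box regression, as recited above.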
6. The method for identifying the falling fault of the cab apron of the railway wagon according to claim 5, wherein the mask branch of the PANet comprises an ROI pooling layer, an FCN and a fully connected layer;
cropping the feature maps B1 and B2 according to the ROI information to obtain a cropped feature map B1 and a cropped feature map B2, inputting the cropped B1 into a thirty-second convolutional layer and a fully connected layer respectively, adding and fusing the cropped B2, the output of the thirty-second convolutional layer and the output of the fully connected layer, and outputting the fused feature map through an activation function.
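A hedged sketch of the mask-branch fusion just described: the ROI-cropped B1 passes through the thirty-second convolutional layer and a fully connected branch, the ROI-cropped B2 is added in, and an activation produces the mask. The crop size, channel counts and the choice of a sigmoid activation are assumptions not fixed by the claim.

```python
# Hedged sketch of the mask-branch fusion of cropped B1 and B2.
import torch
import torch.nn as nn

class MaskFusion(nn.Module):
    def __init__(self, channels=256, roi_size=14):
        super().__init__()
        self.conv32 = nn.Conv2d(channels, 1, 3, padding=1)        # thirty-second convolutional layer
        self.fc = nn.Linear(channels * roi_size * roi_size,
                            roi_size * roi_size)                  # fully connected branch
        self.roi_size = roi_size

    def forward(self, b1_roi, b2_roi):
        # b1_roi: (N, C, roi, roi) cropped from B1; b2_roi: (N, 1, roi, roi) cropped from B2
        conv_out = self.conv32(b1_roi)
        fc_out = self.fc(b1_roi.flatten(1)).view(-1, 1, self.roi_size, self.roi_size)
        return torch.sigmoid(conv_out + fc_out + b2_roi)          # fuse, then activate
```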
7. The method for identifying the falling fault of the cab apron of the railway wagon according to claim 6, wherein the cropping of the feature maps B1 and B2 according to the ROI information specifically comprises:
widening the detection target frame ROI by 20 pixels to the left and to the right in the horizontal direction; if the widened detection target frame ROI exceeds the extent of the feature map B1 and the feature map B2, cropping at the boundaries of the feature maps B1 and B2; if the widened detection target frame ROI does not exceed the extent of the feature map B1 and the feature map B2, cropping the feature maps B1 and B2 according to the widened detection target frame ROI.
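A minimal sketch of the ROI widening and boundary clamping recited in this claim; the (x0, y0, x1, y1) coordinate convention is an assumption.

```python
# Widen the detection ROI by 20 pixels on each side horizontally and
# clamp it to the feature map extent.
def widen_roi(x0: int, y0: int, x1: int, y1: int,
              fmap_w: int, pad: int = 20):
    """Return the horizontally widened ROI, clipped to the feature map width."""
    x0 = max(0, x0 - pad)
    x1 = min(fmap_w, x1 + pad)
    return x0, y0, x1, y1
```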
CN202110244699.6A 2021-03-05 2021-03-05 Fault identification method for falling of cab apron of railway wagon Active CN112966603B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110244699.6A CN112966603B (en) 2021-03-05 2021-03-05 Fault identification method for falling of cab apron of railway wagon

Publications (2)

Publication Number Publication Date
CN112966603A CN112966603A (en) 2021-06-15
CN112966603B (en) 2022-03-08

Family

ID=76276708

Country Status (1)

Country Link
CN (1) CN112966603B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101893686B (en) * 2010-06-11 2012-10-24 河南电力试验研究院 Digital radiography-based breaker operating characteristic on-line detection device and method
KR102070828B1 (en) * 2017-11-01 2020-01-30 한국생산기술연구원 Apparatus And Method for Detecting An Object Through Analyzing Activation Functions Of Deep Neural Network
CN111079817B (en) * 2019-12-12 2020-11-27 哈尔滨市科佳通用机电股份有限公司 Method for identifying fault image of cross beam of railway wagon
CN111462126B (en) * 2020-04-08 2022-10-11 武汉大学 Semantic image segmentation method and system based on edge enhancement

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108734713A (en) * 2018-05-18 2018-11-02 大连理工大学 A kind of traffic image semantic segmentation method based on multi-characteristic
CN111179229A (en) * 2019-12-17 2020-05-19 中信重工机械股份有限公司 Industrial CT defect detection method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant