CN112347952B - Railway wagon coupler tail frame supporting plate falling-off identification method - Google Patents


Info

Publication number
CN112347952B
CN112347952B (granted publication of application CN202011254597.4A)
Authority
CN
China
Prior art keywords
supporting plate
frame supporting
coupler
tail frame
feature map
Prior art date
Legal status
Active
Application number
CN202011254597.4A
Other languages
Chinese (zh)
Other versions
CN112347952A (en)
Inventor
汤岩
Current Assignee
Harbin Kejia General Mechanical and Electrical Co Ltd
Original Assignee
Harbin Kejia General Mechanical and Electrical Co Ltd
Priority date
Filing date
Publication date
Application filed by Harbin Kejia General Mechanical and Electrical Co Ltd
Priority to CN202011254597.4A
Publication of CN112347952A
Application granted
Publication of CN112347952B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/20: Scenes; Scene-specific elements in augmented reality scenes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08: Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A method for identifying a fallen-off coupler tail frame supporting plate on a railway wagon, relating to the technical field of image recognition. In the traditional approach, a detection device photographs the passing train and fault points are then found by manual inspection of the images. Photographing allows faults to be detected while the vehicle is moving, without stopping the train, but manual inspection suffers from inspector fatigue, high labour intensity, and the need for training. The invention instead performs fault identification with image recognition technology, freeing inspectors, greatly reducing the manual workload, and increasing both the detection rate and the accuracy of fault identification.

Description

Railway wagon coupler tail frame supporting plate falling-off identification method
Technical Field
The invention relates to the technical field of image recognition, and in particular to a method for identifying whether the coupler tail frame supporting plate of a railway wagon has fallen off.
Background
In railway safety inspection, the traditional approach is for a detection device to photograph the train, after which fault points are found by manual inspection of the images. This allows faults to be detected while the vehicle is moving, without stopping the train. Manual inspection, however, suffers from inspector fatigue, high labour intensity, and the need for training.
Disclosure of Invention
The purpose of the invention is to provide a method for identifying the falling-off of a railway wagon coupler tail frame supporting plate, solving the prior-art problem that it is difficult to judge accurately by manual inspection whether the supporting plate has fallen off.
The technical scheme adopted by the invention to solve the technical problems is as follows:
The method for identifying the falling-off of a railway wagon coupler tail frame supporting plate comprises the following steps:
Step one: acquire a 2D line-scan image of the passing vehicle;
Step two: crop a sub-image of the coupler region from the 2D line-scan image;
Step three: label the coupler tail frame supporting plate and each nut in the cropped coupler sub-image, and take the labelled sub-images as data set 1;
Step four: train a Faster_R_Cnn network model with data set 1;
Step five: identify the image to be inspected with the trained Faster_R_Cnn model. If the supporting plate and every nut are detected, the part is considered normal; if the supporting plate cannot be detected, a fault is reported and an alarm is raised; if the supporting plate is detected but nuts are missing, whether the supporting plate is faulty is decided from its offset angle.
Further, the specific step of deciding from the offset angle whether the supporting plate is faulty is as follows:
First obtain the angles between the left and right edges of the coupler tail frame supporting plate and the horizontal direction; when the angle of either edge is smaller than 80 degrees or larger than 100 degrees, a falling-off fault is declared.
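This angle test can be sketched as follows. The corner ordering (top-left, top-right, bottom-right, bottom-left) follows the labelling convention stated later in the description; image coordinates with y growing downward are assumed, so an upright edge measures 90 degrees:

```python
import math

def edge_angle_deg(top, bottom):
    # angle between a plate edge (top corner -> bottom corner) and the horizontal
    # axis, in degrees; image coordinates, y grows downward, so dy > 0 for an edge
    dx = bottom[0] - top[0]
    dy = bottom[1] - top[1]
    return math.degrees(math.atan2(dy, dx))

def is_fall_off(corners, low=80.0, high=100.0):
    # corners: [top-left, top-right, bottom-right, bottom-left]
    tl, tr, br, bl = corners
    left, right = edge_angle_deg(tl, bl), edge_angle_deg(tr, br)
    return not (low <= left <= high) or not (low <= right <= high)

# upright plate: both edges at 90 degrees -> normal
assert not is_fall_off([(0, 0), (100, 0), (100, 140), (0, 140)])
# sheared plate: edges at roughly 74 degrees -> falling-off fault
assert is_fall_off([(0, 0), (100, 0), (140, 140), (40, 140)])
```

The 80 and 100 degree thresholds are the ones stated in the text; everything else (function names, coordinate convention) is illustrative.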
Further, the specific steps of obtaining the angles between the left and right edges of the supporting plate and the horizontal direction are as follows:
Step 5.1: obtain the position of the supporting plate from the Faster_R_Cnn model, derive its centre, and expand outward from this centre as the centre of a rectangle to obtain a rectangular frame that fully contains the supporting plate;
Step 5.2: label the four corners of the expanded rectangular frame in order, normalize the labelled frame images, and take the normalized images as data set 2;
Step 5.3: train a quadrilateral detection network model with data set 2;
Step 5.4: crop the supporting-plate sub-image according to the Faster_R_Cnn detection result and input it into the trained quadrilateral detection network to obtain the positions of the four corner points, their sampling errors, and their displacements relative to the image centre.
When all four predicted corner points lie inside the supporting-plate sub-image, connect them in order; the rectangle obtained after adding the sampling errors is taken as the final rectangular frame.
When not all four predicted corner points lie inside the sub-image, compute displaced corner points from the predicted displacements relative to the image centre, and connect these in order to obtain the final rectangular frame.
Finally, the angles between the left and right edges of this rectangular frame and the horizontal direction are the angles between the left and right edges of the supporting plate and the horizontal direction.
Further, labelme is used for the labelling in step 5.2.
Further, the quadrilateral detection network model performs the following steps:
First, feature extraction:
Using resnet18 as the baseline, down-sample the input and take four feature maps at different scales: X4 of size 128 x 128, X8 of size 64 x 64, X16 of size 32 x 32 and X32 of size 16 x 16.
Second, up-sampling feature fusion:
From the four extracted feature maps, first feed the 16 x 16 map X32 into a CBR module and up-sample its output by deconvolution to obtain the 32 x 32 map X32_up; feed the 32 x 32 map X16 into a CBR module and deconvolve its output to obtain the 64 x 64 map X16_up; feed the 64 x 64 map X8 into a CBR module and deconvolve its output to obtain the 128 x 128 map X8_up. A CBR module consists of Conv2d, BatchNorm2d and Relu.
Add X32_up to X16, feed the sum into a CBR module, and deconvolve the output to obtain the 64 x 64 map X32_up_up; add X16_up to X8, feed the sum into a CBR module, and deconvolve the output to obtain the 128 x 128 map X16_up_up.
Add X32_up_up to X16_up, feed the sum into a CBR module, and deconvolve the output to obtain the 128 x 128 map X32_up_up_up.
Finally add X4, X8_up, X16_up_up and X32_up_up_up to obtain the final fused feature.
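The fusion wiring can be sanity-checked with a shape-level sketch. Nearest-neighbour upsampling stands in for the deconvolution layers and the CBR modules are omitted, so this only verifies that the map sizes line up as described, not the learned behaviour:

```python
import numpy as np

def up2(x):
    # 2x nearest-neighbour upsample standing in for a deconvolution layer
    return np.kron(x, np.ones((2, 2)))

# toy single-channel feature maps at the four scales of a 512-pixel input
X4  = np.random.rand(128, 128)
X8  = np.random.rand(64, 64)
X16 = np.random.rand(32, 32)
X32 = np.random.rand(16, 16)

X32_up = up2(X32)                        # 16x16  -> 32x32
X16_up = up2(X16)                        # 32x32  -> 64x64
X8_up  = up2(X8)                         # 64x64  -> 128x128
X32_up_up = up2(X32_up + X16)            # 32x32  -> 64x64
X16_up_up = up2(X16_up + X8)             # 64x64  -> 128x128
X32_up_up_up = up2(X32_up_up + X16_up)   # 64x64  -> 128x128

fused = X4 + X8_up + X16_up_up + X32_up_up_up
assert fused.shape == (128, 128)
```

Every addition pairs maps of equal size, and all four branches meet at the 128 x 128 scale, matching the description above.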
the third step: quadrilateral prediction section
Is divided into 3 branches
(1) Performing Conv2d, Relu and Conv2d (4) on feature to obtain feature0 with shape of 128, 128 and 4, and performing corner position prediction by using the feature0 as a final feature;
(2) performing Conv2d, Relu and Conv2d (8) on feature to obtain feature1 with shape of 128, 128 and 8, and performing corner sampling error prediction by using the feature1 as a final feature;
(3) and performing Conv2d, Relu and Conv2d (8) on feature to obtain feature2 with shape of 128, 128 and 8, and performing displacement prediction of four corner points relative to the central point of the image by using the feature2 as a final feature.
Further, the coupler region sub-image in step two is cropped using prior knowledge, hardware data and coupler information.
Further, the labelling of the supporting plate and each nut in step three is performed with labelImg.
Further, the positions of the four corner points are obtained through a Gaussian kernel function with the formula:
y_dst = exp(-((x - p_x)^2 + (y - p_y)^2) / (2 * t^2))
where p_x is the horizontal coordinate of the corner point, p_y is its vertical coordinate, x is the horizontal coordinate in the heat map, y is the vertical coordinate in the heat map, and t is set to 11 according to the size of the supporting plate, which is fixed.
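A sketch of this heat-map label, reading the formula as a standard Gaussian with denominator 2t^2 and zeroing values outside the +-t window, as the detailed description later specifies:

```python
import numpy as np

def corner_heatmap(px, py, size=128, t=11):
    # Gaussian label map: 1.0 at the corner, decaying inside the +-t window, 0 outside
    y, x = np.mgrid[0:size, 0:size]
    g = np.exp(-((x - px) ** 2 + (y - py) ** 2) / (2 * t ** 2))
    g[np.maximum(np.abs(x - px), np.abs(y - py)) > t] = 0.0
    return g

h = corner_heatmap(40, 60)
assert h[60, 40] == 1.0        # value 1 exactly at the corner point
assert 0.0 < h[60, 45] < 1.0   # inside the window: positive, below 1
assert h[60, 52] == 0.0        # 12 pixels away: outside the +-11 window
```

Function and parameter names are illustrative; only the formula and t = 11 come from the text.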
Further, the sampling error is expressed as:
down_off = (x % 4, y % 4)
where down_off is the sampling error, x and y are the coordinates of the target corner point, % 4 denotes the remainder after division by 4, and the value of down_off is normalized to the interval [0, 1].
Further, the displacement of the four corner points relative to the image centre is expressed as:
(x_off, y_off) = (x_h / 256 - 1, y_h / 256 - 1)
where x_off is the horizontal displacement and y_off the vertical displacement, both in the interval (-1, 1); x_h is the horizontal coordinate of the target point and y_h its vertical coordinate.
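The two target encodings can be sketched together. Dividing the remainder by the stride to land in [0, 1) is one reading of the stated value interval (the sigmoid head later predicts in (0, 1)); the centre offset follows the formula above for a 512 x 512 sub-image:

```python
def sampling_error(x, y, stride=4):
    # fractional position lost by the x4 down-sampling, scaled into [0, 1)
    return ((x % stride) / stride, (y % stride) / stride)

def center_offset(xh, yh, half=256):
    # corner displacement from the centre of the 512 x 512 sub-image, in (-1, 1)
    return (xh / half - 1, yh / half - 1)

assert sampling_error(102, 257) == (0.5, 0.25)   # 102 % 4 = 2, 257 % 4 = 1
assert center_offset(384, 128) == (0.5, -0.5)    # 384/256 - 1, 128/256 - 1
```

Function names and the stride normalization are assumptions for illustration; the two formulas themselves are from the text.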
The invention has the following beneficial effects:
1. Image recognition technology is used for fault identification, freeing inspectors, greatly reducing the manual workload, and increasing the detection rate and accuracy of fault identification.
2. Faults are judged with two networks, so the identification is more accurate.
3. A quadrilateral detection network designed for this fault form yields the angle of the target, from which the fault is judged.
4. The quadrilateral detection network builds on the information detected by Faster_R_Cnn, which makes the network more accurate, smaller and faster.
Drawings
FIG. 1 is a schematic diagram of the image cropping for data set 2 according to the present invention;
FIG. 2 is a schematic diagram of the up-sampling used to obtain the feature maps in the present invention;
FIG. 3 is a diagram of the four-point prediction;
FIG. 4 is a flow chart of a third embodiment of the present invention;
FIG. 5 is a flow chart of the present invention.
Detailed Description
It should be noted that the embodiments disclosed in the present application may be combined with each other as long as they do not conflict.
Embodiment one: described with reference to fig. 5, the method for identifying the falling-off of a railway wagon coupler tail frame supporting plate according to this embodiment comprises the following steps:
Step one: acquire a 2D line-scan image of the passing vehicle;
Step two: crop a sub-image of the coupler region from the 2D line-scan image;
Step three: label the coupler tail frame supporting plate and each nut in the cropped coupler sub-image, and take the labelled sub-images as data set 1;
Step four: train a Faster_R_Cnn network model with data set 1;
Step five: identify the image to be inspected with the trained Faster_R_Cnn model. If the supporting plate and every nut are detected, the part is considered normal; if the supporting plate cannot be detected, a fault is reported and an alarm is raised; if the supporting plate is detected but nuts are missing, whether the supporting plate is faulty is decided from its offset angle.
Image acquisition
2D line-scan images of passing vehicles are obtained at fixed detection stations, collecting images from different stations and different cameras over a long period, at different times and in different weather. This guarantees the diversity of the data and improves the stability of the final algorithm.
Rough cropping of sample sub-images
Using prior knowledge, hardware data, coupler information and the like, an image of the coupler region, containing the coupler tail frame supporting plate, is cropped from the whole-vehicle image; working on this small cropped image speeds up the identification program.
Fault identification method
Sub-image 1 is cropped according to the coupler and wheel-base information, and Faster_R_Cnn detects targets on it: normal nuts, lost nuts and the position of the supporting plate. When the supporting plate is detected, the number of nuts is normal and no lost-nut class is detected, the part is judged normal. When the supporting plate cannot be detected, the program raises an alarm. When the supporting plate is detected but too few nuts are found, the angle of the supporting plate is examined to further identify the fault, so that no fault is missed and an alarm can be raised.
According to the supporting-plate position detected by Faster_R_Cnn, the screenshot is expanded at equal distances around the centre of the supporting plate and a 512 x 512 sub-image 2 is cropped, as shown in fig. 1. Sub-image 2 is input into the quadrilateral detection network to obtain the coordinates of the four corner points of the detected target.
Connecting top-left with bottom-left and top-right with bottom-right gives the left and right edges of the supporting plate, from which the angles of the two edges are calculated. The fault is then judged from these angles.
Overall fault identification flow
When a wagon passes the detection base station, the cameras acquire 2D line-scan images and the hardware records the wheel-base information. A sub-image containing the target is cropped using the wheel-base information and input into the Faster_R_Cnn network. If the supporting plate and nuts are normal, the next image is processed; if the supporting plate cannot be detected, an alarm is raised; if nuts are missing, the quadrilateral detection network is entered, the angles of the left and right edges of the supporting plate are measured, and a judgment is made from these angles. If a fault is judged, alarm information is output; otherwise the next image is identified, until all images have been processed.
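The per-image decision flow can be sketched as a small function. EXPECTED_NUTS and the return strings are illustrative, not from the patent; the 80 to 100 degree band is the threshold stated in the description:

```python
EXPECTED_NUTS = 2  # hypothetical nut count for one coupler view

def decide(pallet_found, nut_count, left_angle=None, right_angle=None):
    # one decision per sub-image; angles come from the quadrilateral
    # network and are only consulted when nuts are missing
    if not pallet_found:
        return "alarm: support plate not detected"
    if nut_count >= EXPECTED_NUTS:
        return "normal"
    # nuts missing: fall back to the edge-angle test (80..100 degrees is normal)
    for angle in (left_angle, right_angle):
        if angle is not None and not (80.0 <= angle <= 100.0):
            return "alarm: plate falling off"
    return "normal"

assert decide(False, 0) == "alarm: support plate not detected"
assert decide(True, 2) == "normal"
assert decide(True, 1, 90.0, 74.0) == "alarm: plate falling off"
assert decide(True, 1, 92.0, 95.0) == "normal"
```

This mirrors the three branches above: missing plate alarms immediately, a complete nut set passes, and a missing nut defers to the angle test.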
Embodiment two: this embodiment further describes embodiment one; the difference is that the specific step of deciding from the offset angle whether the supporting plate is faulty is:
First obtain the angles between the left and right edges of the coupler tail frame supporting plate and the horizontal direction; when the angle of either edge is smaller than 80 degrees or larger than 100 degrees, a falling-off fault is declared.
Embodiment three: this embodiment further describes embodiment two; the difference is that the specific steps of obtaining the angles between the left and right edges of the supporting plate and the horizontal direction are:
Step 5.1: obtain the position of the supporting plate from the Faster_R_Cnn model, derive its centre, and expand outward from this centre as the centre of a rectangle to obtain a rectangular frame that fully contains the supporting plate;
Step 5.2: label the four corners of the expanded rectangular frame in order, normalize the labelled frame images, and take the normalized images as data set 2;
Step 5.3: train a quadrilateral detection network model with data set 2;
Step 5.4: crop the supporting-plate sub-image according to the Faster_R_Cnn detection result and input it into the trained quadrilateral detection network to obtain the positions of the four corner points, their sampling errors, and their displacements relative to the image centre.
When all four predicted corner points lie inside the supporting-plate sub-image, connect them in order; the rectangle obtained after adding the sampling errors is taken as the final rectangular frame.
When not all four predicted corner points lie inside the sub-image, compute displaced corner points from the predicted displacements relative to the image centre, and connect these in order to obtain the final rectangular frame.
Finally, the angles between the left and right edges of this rectangular frame and the horizontal direction are the angles between the left and right edges of the supporting plate and the horizontal direction.
Quadrilateral detection model construction and training
Among current recognition algorithms, Faster_R_Cnn has high accuracy and suits environments that demand accurate fault identification. Faster_R_Cnn is therefore used first to detect the fault; however, when a nut has fallen off and the supporting plate has also dropped, the bolt may not be detected. In that case the fault can still be discriminated from the change in the angle of the supporting plate. Since the plate may undergo an affine transformation when it falls, rotated-rectangle detection can hardly capture the angle in this situation. The invention therefore implements an arbitrary-shape quadrilateral detection algorithm that measures the angles of the left and right edges of the supporting plate and judges the fault from them.
The arbitrary-shape quadrilateral detection network of the invention performs further prediction on top of the Faster_R_Cnn detection result; it does not share weights with Faster_R_Cnn but uses the information it detects. Because the network input is a supporting-plate sub-image already detected by Faster_R_Cnn, the network no longer predicts whether the target exists; it only detects the four corner points of the target and connects the top-left, top-right, bottom-right and bottom-left points in order to form the detected quadrilateral. Cascading it after the Faster_R_Cnn network makes the quadrilateral network more accurate and its model smaller.
Embodiment four: this embodiment further describes embodiment three; the difference is that labelme is used for the labelling in step 5.2.
Data set preparation
Sub-images cropped using the coupler information are collected, and three classes, namely the coupler tail frame supporting plate, nuts and lost nuts, are labelled with labelImg to form data set 1, which is used to train Faster_R_Cnn for target detection and fault identification.
The supporting plate is about 350 pixels wide and about 420 pixels high. Its centre position is calculated from the data set 1 information so that the plate sits at the centre of the image; the same expansion values h and w are kept above/below and left/right of the plate, as shown in fig. 1, and a 512 x 512 image is cropped as a data set 2 image for quadrilateral labelling. The four corner points of the target are labelled with labelme strictly in the order top-left, top-right, bottom-right, bottom-left; this order must not be disturbed. The point coordinates in the json files generated by labelme are read to form data set 2.
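A minimal sketch of the 512 x 512 crop around the detected plate centre. Clamping the window at the image border is an assumption for illustration; the patent only states that the plate is centred with equal expansions:

```python
import numpy as np

def crop_512(image, cx, cy, size=512):
    # equal expansion around the plate centre, clamped so the crop stays in-bounds
    h, w = image.shape[:2]
    x0 = min(max(cx - size // 2, 0), w - size)
    y0 = min(max(cy - size // 2, 0), h - size)
    return image[y0:y0 + size, x0:x0 + size]

img = np.zeros((2048, 1400), dtype=np.uint8)   # toy whole-coupler sub-image
sub = crop_512(img, 700, 100)                  # centre near the top edge: window is clamped
assert sub.shape == (512, 512)
```

A ~350 x 420 plate fits comfortably inside the 512 x 512 window, leaving the expansion margins h and w described above.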
Embodiment five: this embodiment further describes the preceding embodiments; the difference is that the quadrilateral detection network model performs the following steps:
First, feature extraction:
Using resnet18 as the baseline, down-sample the input and take four feature maps at different scales: X4 of size 128 x 128, X8 of size 64 x 64, X16 of size 32 x 32 and X32 of size 16 x 16.
Second, up-sampling feature fusion:
From the four extracted feature maps, first feed the 16 x 16 map X32 into a CBR module and up-sample its output by deconvolution to obtain the 32 x 32 map X32_up; feed the 32 x 32 map X16 into a CBR module and deconvolve its output to obtain the 64 x 64 map X16_up; feed the 64 x 64 map X8 into a CBR module and deconvolve its output to obtain the 128 x 128 map X8_up. A CBR module consists of Conv2d, BatchNorm2d and Relu.
Add X32_up to X16, feed the sum into a CBR module, and deconvolve the output to obtain the 64 x 64 map X32_up_up; add X16_up to X8, feed the sum into a CBR module, and deconvolve the output to obtain the 128 x 128 map X16_up_up.
Add X32_up_up to X16_up, feed the sum into a CBR module, and deconvolve the output to obtain the 128 x 128 map X32_up_up_up.
Finally add X4, X8_up, X16_up_up and X32_up_up_up to obtain the final fused feature.
Third, quadrilateral prediction, divided into 3 branches:
(1) apply Conv2d, Relu and Conv2d(4) to the fused feature to obtain feature0 with shape [128, 128, 4], used to predict the corner positions;
(2) apply Conv2d, Relu and Conv2d(8) to the fused feature to obtain feature1 with shape [128, 128, 8], used to predict the corner sampling errors;
(3) apply Conv2d, Relu and Conv2d(8) to the fused feature to obtain feature2 with shape [128, 128, 8], used to predict the displacements of the four corner points relative to the image centre.
The network input is an image of data set 2: a grayscale image of 512 x 512 pixels, normalized.
resnet18 is chosen as the backbone, the input is down-sampled, and the feature layers at down-sampling rates x4 (one quarter), x8 (one eighth), x16 (one sixteenth) and x32 (one thirty-second) are selected. The selected features are fused by cascaded up-sampling; the sampling structure is shown in fig. 2.
This yields a feature map of shape [128, 128, 128]: the feature image is one quarter of the input image, i.e. 128 x 128, with 128 channels. Three branches are built from this feature to predict, respectively, the positions of the four corner points, the offsets produced when the corner points were down-sampled, and the displacements of the four corner points relative to the image centre. The prediction part of the network is shown in fig. 3.
The network outputs are:
feature0 with shape [128, 128, 4], passed through a Relu activation, predicting the probability of the target point positions; feature1 with shape [128, 128, 8]: since the sampling errors all lie between 0 and 1, a sigmoid activation follows, restricting the prediction to [0, 1], giving the prediction of the target-point down-sampling error; feature2 with shape [128, 128, 8]: a tanh activation restricts the prediction to [-1, 1], giving the displacement of the target point from the centre point.
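The value ranges of the three head activations can be verified with a toy tensor; the layer wiring is omitted, this only checks that each activation confines its head to the stated interval:

```python
import numpy as np

rng = np.random.default_rng(0)
logits = rng.standard_normal((128, 128, 8))    # stand-in pre-activation outputs

relu    = np.maximum(logits[..., :4], 0)       # feature0 head: corner-position maps, >= 0
sigmoid = 1.0 / (1.0 + np.exp(-logits))        # feature1 head: sampling errors in (0, 1)
tanh    = np.tanh(logits)                      # feature2 head: centre offsets in (-1, 1)

assert relu.min() >= 0
assert 0 < sigmoid.min() and sigmoid.max() < 1
assert -1 < tanh.min() and tanh.max() < 1
```

Matching the encodings above: the sigmoid range covers the normalized down-sampling error and the tanh range covers the (-1, 1) centre displacement.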
Embodiment six: this embodiment further describes embodiment one; the difference is that the coupler region sub-image in step two is cropped using prior knowledge, hardware data and coupler information.
Embodiment seven: this embodiment further describes embodiment one; the difference is that the labelling of the supporting plate and each nut in step three is performed with labelImg.
Embodiment eight: this embodiment further describes embodiment seven; the difference is that the positions of the four corner points are obtained through a Gaussian kernel function with the formula:
y_dst = exp(-((x - p_x)^2 + (y - p_y)^2) / (2 * t^2))
where p_x is the horizontal coordinate of the corner point, p_y is its vertical coordinate, x is the horizontal coordinate in the heat map, y is the vertical coordinate in the heat map, and t is set to 11 according to the size of the supporting plate, which is fixed.
When the position of a target is annotated, it is difficult to pin it down to a single exact point, and in practice the area around the point also represents the target. Positions near the target point are therefore also given weight, with larger weight the closer a position lies to the target point; this makes it easier for the model to locate the target point during training and helps the network converge. For each training sample, the Gaussian kernel function is applied to the four annotated point positions in the 128 x 128 label image: the value at each of the four points is 1, the values within t = 11 pixels above, below, left and right of each point are computed by the Gaussian kernel function (larger closer to the target point, smaller farther away), and values outside this range are 0.
The four points are localized in the heat map using the Gaussian kernel function:

y_dst = exp(-((x - p_x)^2 + (y - p_y)^2) / (2 * t^2))

In the above formula, p_x is the horizontal coordinate of the annotated point, p_y is its vertical coordinate, x is the horizontal coordinate in the heat map, y is the vertical coordinate in the heat map, and t delimits the interval in which the target may exist. When x lies in [p_x - t, p_x + t] and y lies in [p_y - t, p_y + t], the y_dst computed by the above formula is used as the heat-map value; otherwise the heat-map value is 0. Since the targets in the data are essentially uniform in size, the parameter t is set to the mean value 11.
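The label-generation rule above can be sketched as follows (a minimal illustration, assuming a 128 x 128 label map, one annotated corner at (px, py) and t = 11 as stated; the function name is ours, not the patent's):

```python
import numpy as np

def gaussian_heatmap(px, py, size=128, t=11):
    """Build one corner's heat map: 1 at the annotated point, Gaussian
    falloff within t pixels in each direction, 0 everywhere else."""
    heat = np.zeros((size, size), dtype=np.float32)
    for y in range(max(0, py - t), min(size, py + t + 1)):
        for x in range(max(0, px - t), min(size, px + t + 1)):
            heat[y, x] = np.exp(-((x - px) ** 2 + (y - py) ** 2) / (2 * t ** 2))
    return heat

heat = gaussian_heatmap(64, 40)
assert heat[40, 64] == 1.0           # value 1 exactly at the annotated point
assert heat[40, 64] > heat[40, 65]   # values decay away from the point
assert heat[0, 0] == 0.0             # zero outside the t-neighbourhood
```

In the full label a heat map is built this way around each of the four annotated corners, giving the network a soft, easily learnable target instead of four isolated pixels.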
The output of the network contains two kinds of target-point predictions. The first is the directly predicted target-point position, to which the sampling error is added to obtain the precise position. The second is the position reconstructed from the predicted offsets of the four points relative to the center point. The rectangular frame containing the coupler tail frame supporting plate is obtained from Faster_R_Cnn; when a target point of the first kind lies inside the supporting-plate frame detected by Faster_R_Cnn, the first kind is used, and when the detected target point falls outside the supporting-plate frame, the second kind is used. After the four target points are obtained, they are connected in sequence to give the target quadrilateral.
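The point-selection rule just described can be sketched as follows (a hedged illustration; the helper names and coordinates are ours, not from the patent):

```python
def inside(box, pt):
    """True when point pt = (x, y) lies within box = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    x, y = pt
    return x0 <= x <= x1 and y0 <= y <= y1

def pick_corners(direct_pts, offset_pts, plate_box):
    """Per corner: keep the directly predicted point if it falls inside
    the Faster_R_Cnn plate box, otherwise fall back to the point
    reconstructed from the center offset."""
    return [d if inside(plate_box, d) else o
            for d, o in zip(direct_pts, offset_pts)]

box = (10, 10, 100, 100)
direct = [(12, 12), (95, 12), (95, 95), (150, 95)]    # last one outside the box
fallback = [(13, 13), (94, 13), (94, 94), (96, 94)]   # center-offset reconstructions
corners = pick_corners(direct, fallback, box)         # mixes both kinds per corner
```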
The ninth embodiment: this embodiment further describes the third embodiment, and differs from it in that the sampling error is expressed as:
down_off=(x%4,y%4)
In the above formula, down_off is the sampling error, x and y are the positions of the target corner point, and % 4 denotes the remainder after division by 4; the value interval of down_off is [0, 1].
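Since the heads run at one quarter of the input resolution, a corner at full-resolution (x, y) maps to grid cell (x // 4, y // 4) and the fractional part is lost; down_off recovers it. The patent states the error interval is [0, 1], which suggests the remainder is normalized by the stride; that normalization is our assumption in this sketch:

```python
def down_sample_error(x, y, stride=4):
    """Fraction of a grid cell lost when mapping a full-resolution corner
    to the 4x down-sampled prediction grid (normalization assumed)."""
    return ((x % stride) / stride, (y % stride) / stride)

err = down_sample_error(101, 54)  # corner at (101, 54) in the input crop
```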
The tenth embodiment: this embodiment further describes the third embodiment, and differs from it in that the displacement of the four corner points relative to the center point of the image is expressed as:
(x_off, y_off) = ((x_h / 256 - 1), (y_h / 256 - 1))
In the above formula, x_off is the horizontal displacement and y_off is the vertical displacement, both lying in the interval (-1, 1); x_h is the horizontal coordinate of the target point and y_h is its vertical coordinate.
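The encoding above maps a coordinate into (-1, 1) by dividing by 256 and subtracting 1, i.e. it measures displacement from an image center at (256, 256). A 512 x 512 input crop is our assumption here, inferred from the 256 divisor and the 128 x 128 output of the 4x down-sampled heads:

```python
def center_offset(xh, yh, half=256):
    """Encode a corner as its normalized displacement from the image center."""
    return (xh / half - 1, yh / half - 1)

def decode(x_off, y_off, half=256):
    """Invert the encoding back to pixel coordinates."""
    return ((x_off + 1) * half, (y_off + 1) * half)

off = center_offset(384, 128)        # corner right of and above the center
assert off == (0.5, -0.5)            # both components fall in (-1, 1)
assert decode(*off) == (384.0, 128.0)
```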
It should be noted that the detailed description serves only to explain the technical solution of the present invention and does not limit the scope of protection of the claims; all modifications and variations falling within the scope of the claims and the description are intended to be covered by the invention.

Claims (9)

1. The rail wagon coupler yoke frame supporting plate falling-off identification method comprises the following steps:
the method comprises the following steps: acquiring a 2D linear array vehicle passing image;
step two: intercepting a sub-graph of a coupler part in the 2D linear array passing image;
step three: marking a coupler tail frame supporting plate and each nut in the intercepted coupler part sub-image, and taking the marked coupler part sub-image as a data set I;
step four: training the Faster _ R _ Cnn network model with dataset one;
step five: identifying an image to be detected using the trained Faster_R_Cnn network model; if the identification result shows both the coupler tail frame supporting plate and each nut, the image is determined to be normal; if the supporting plate cannot be detected, a fault is determined and an alarm is given; and if the supporting plate is present but a nut is missing, whether a fault exists is determined by judging the offset angle of the coupler tail frame supporting plate;
the method is characterized in that the specific steps of judging whether the coupler tail frame supporting plate has a fault through its offset angle are as follows:
firstly, the included angles between the left and right sides of the coupler tail frame supporting plate and the horizontal direction are obtained; when the included angle between either the left or the right side and the horizontal direction is smaller than 80 degrees or larger than 100 degrees, a falling-off fault of the coupler tail frame supporting plate is determined.
2. The method for identifying the falling of the coupler tail frame supporting plate of the railway wagon according to claim 1, wherein the specific step of calculating the included angle between the left side and the right side of the coupler tail frame supporting plate and the horizontal direction is as follows:
step five one: obtaining the position of the coupler tail frame supporting plate from the Faster_R_Cnn network model, thereby obtaining the center position of the supporting plate, and expanding outward with that center as the center of a rectangle to obtain a rectangular frame such that the coupler tail frame supporting plate lies within the rectangular frame;
step five two: marking four corners of the expanded rectangular frame in sequence, carrying out normalization processing on the rectangular frame image marked with the four corners, and taking the normalized image as a second data set;
step five and step three: training a quadrilateral detection network model by using a data set II;
step five four: intercepting the coupler tail frame supporting plate sub-image according to the Faster_R_Cnn detection result, and inputting the intercepted image into the trained quadrilateral detection network model to obtain the positions of the four corner points, the sampling errors of the four corner points, and the displacements of the four corner points relative to the center point of the image;
when the four predicted corner points all lie within the supporting plate sub-image, the predicted corner points are connected in sequence, and the rectangular frame obtained after adding the sampling errors is taken as the final rectangular frame;
when the four predicted corner points do not all lie within the supporting plate sub-image, the displaced corner points are obtained from the predicted displacements relative to the image center point, and the rectangular frame obtained by connecting the displaced corner points in sequence is taken as the final rectangular frame;
finally, the included angles between the left and right sides of the rectangular frame and the horizontal direction are the included angles between the left and right sides of the coupler tail frame supporting plate and the horizontal direction.
3. The method for identifying the falling off of the coupler yoke of the railway wagon as claimed in claim 2, wherein labelme is used for marking in the second step.
4. The railway wagon coupler yoke bracket pallet falling-off identification method as claimed in claim 2, wherein the quadrilateral detection network model is used for executing the following steps:
first, feature extraction
using resnet18 as the baseline, extracting four down-sampled feature maps: X4 of size 128 × 128, X8 of size 64 × 64, X16 of size 32 × 32 and X32 of size 16 × 16, the four feature maps serving as features at different scales;
second, upsampling feature fusion
According to four features obtained by feature extraction, firstly, inputting a feature map X32 with a down-sampling size of 16 × 16 into a CBR module, and performing deconvolution up-sampling on the output of the CBR module to obtain an X32_ up feature map with a size of 32 × 32; inputting the feature map X16 with the down-sampling size of 32X 32 into a CBR module, and performing deconvolution on the output of the CBR module to obtain an X16_ up feature map with the size of 64X 64; inputting the feature map X8 with the down-sampling size of 64X 64 into a CBR module, and performing deconvolution on the output of the CBR module to obtain an X8_ up feature map with the size of 128X 128, wherein the CBR module comprises Conv2d, BatchNormal2d and Relu;
adding the X32_ up feature map and the X16 feature map, inputting the result into a CBR module, and performing deconvolution on the output of the CBR module to obtain an X32_ up _ up feature map with the size of 64X 64 through upsampling; adding the X16_ up feature map and the X8 feature map, inputting the result into a CBR module, and performing deconvolution on the output of the CBR module to obtain an X16_ up _ up feature map with the size of 128X 128 through upsampling;
adding the X32_ up _ up feature map and the X16_ up feature map, inputting the result into a CBR module, and performing deconvolution on the output of the CBR module to obtain an X32_ up _ up _ up feature map with the size of 128X 128 through upsampling;
finally adding the feature map X4, the feature map X8_ up, the feature map X16_ up _ up and the feature map X32_ up _ up _ up to obtain the final fusion feature;
the third step: quadrilateral prediction section
Is divided into 3 branches
(1) performing Conv2d, Relu and Conv2d(4) on the fused feature to obtain feature0 with shape [128, 128, 4], which is used as the final feature for corner-position prediction;
(2) performing Conv2d, Relu and Conv2d(8) on the fused feature to obtain feature1 with shape [128, 128, 8], which is used as the final feature for corner sampling-error prediction;
(3) performing Conv2d, Relu and Conv2d(8) on the fused feature to obtain feature2 with shape [128, 128, 8], which is used as the final feature for predicting the displacements of the four corner points relative to the center point of the image.
5. The method for identifying the falling off of the supporting plate of the coupler yoke of the railway wagon according to claim 1, wherein the step two of intercepting the sub-graph of the coupler part is carried out through priori knowledge, hardware data and coupler information.
6. The method for identifying the falling of the coupler tail frame supporting plate of the railway wagon according to claim 1, wherein the marking of the coupler tail frame supporting plate and each nut in the third step is performed by labelImg.
7. The method for identifying the falling off of the supporting plate of the coupler yoke of the rail wagon according to claim 4, wherein the positions of the four corner points are obtained by a Gaussian kernel function, and the formula of the Gaussian kernel function is as follows:
y_dst = exp(-((x - p_x)^2 + (y - p_y)^2) / (2 * t^2))
In the above formula, p_x is the horizontal coordinate of the annotated point, p_y is its vertical coordinate, x is the horizontal coordinate in the heat map, y is the vertical coordinate in the heat map, and t is set according to the size of the coupler tail frame supporting plate; since the size of the supporting plate is fixed, t takes the value 11.
8. The railway wagon coupler yoke bracket pallet dropout identification method of claim 7, wherein the sampling error is expressed as:
down_off=(x%4,y%4)
In the above formula, down_off is the sampling error, x and y are the positions of the target corner point, and % 4 denotes the remainder after division by 4; the value interval of down_off is [0, 1].
9. The method for identifying the falling of the coupler yoke bracket supporting plate of the railway wagon as claimed in claim 8, wherein the displacement of the four corner points relative to the central point of the image is represented as:
(x_off, y_off) = ((x_h / 256 - 1), (y_h / 256 - 1))
In the above formula, x_off is the horizontal displacement and y_off is the vertical displacement, both lying in the interval (-1, 1); x_h is the horizontal coordinate of the target point and y_h is its vertical coordinate.
CN202011254597.4A 2020-11-11 2020-11-11 Railway wagon coupler tail frame supporting plate falling-off identification method Active CN112347952B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011254597.4A CN112347952B (en) 2020-11-11 2020-11-11 Railway wagon coupler tail frame supporting plate falling-off identification method


Publications (2)

Publication Number Publication Date
CN112347952A CN112347952A (en) 2021-02-09
CN112347952B true CN112347952B (en) 2021-05-11

Family

ID=74363400

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011254597.4A Active CN112347952B (en) 2020-11-11 2020-11-11 Railway wagon coupler tail frame supporting plate falling-off identification method

Country Status (1)

Country Link
CN (1) CN112347952B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102099698A (en) * 2008-07-18 2011-06-15 Abb技术有限公司 Method and device for fault location of series-compensated transmission line
CN103278511A (en) * 2013-05-17 2013-09-04 南京大学 Wafer defect detection method based on multi-scale corner feature extraction
KR102097120B1 (en) * 2018-12-31 2020-04-09 주식회사 애자일소다 System and method for automatically determining the degree of breakdown by vehicle section based on deep running
CN111079631A (en) * 2019-12-12 2020-04-28 哈尔滨市科佳通用机电股份有限公司 Method and system for identifying falling fault of hook lifting rod of railway wagon
CN111079821A (en) * 2019-12-12 2020-04-28 哈尔滨市科佳通用机电股份有限公司 Derailment automatic braking pull ring falling fault image identification method
CN111091546A (en) * 2019-12-12 2020-05-01 哈尔滨市科佳通用机电股份有限公司 Railway wagon coupler tail frame breaking fault identification method
CN111652296A (en) * 2020-05-21 2020-09-11 哈尔滨市科佳通用机电股份有限公司 Deep learning-based rail wagon lower pull rod fracture fault detection method
CN111833347A (en) * 2020-07-31 2020-10-27 广东电网有限责任公司 Transmission line damper defect detection method and related device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202006805U (en) * 2011-01-04 2011-10-12 南车长江车辆有限公司 Rotary coupler yoke of railway vehicle
KR102200496B1 (en) * 2018-12-06 2021-01-08 주식회사 엘지씨엔에스 Image recognizing method and server using deep learning
CN109829893B (en) * 2019-01-03 2021-05-25 武汉精测电子集团股份有限公司 Defect target detection method based on attention mechanism
CN111080650B (en) * 2019-12-12 2020-10-09 哈尔滨市科佳通用机电股份有限公司 Method for detecting looseness and loss faults of small part bearing blocking key nut of railway wagon
CN111080604B (en) * 2019-12-12 2021-02-26 哈尔滨市科佳通用机电股份有限公司 Image identification method for breakage fault of hook lifting rod of railway wagon


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Real-Time Vision-Based System of Fault Detection for Freight Trains; Yang Zhang et al.; IEEE Transactions on Instrumentation and Measurement; 2019-11-25; Vol. 69 (No. 07); pp. 5274-5284 *
Detection of loosening faults of fastening bolts on medium-low-speed maglev F-rails; Wang Baoli et al.; Information Technology; 2019-08-31 (No. 08); pp. 88-92, 97 *
Multi-scale corner detection based on anisotropic Gaussian kernels; Zhang Weichuan et al.; Journal of Electronic Measurement and Instrumentation; 2012-01-31; Vol. 26 (No. 01); pp. 37-42 *
Research on image recognition algorithms for missing-bolt faults on freight cars; Zhang Hongjian; China Doctoral Dissertations Full-text Database, Engineering Science and Technology II; 2018-01-15 (No. (2018)01); C033-16 *

Also Published As

Publication number Publication date
CN112347952A (en) 2021-02-09

Similar Documents

Publication Publication Date Title
Maeda et al. Road damage detection using deep neural networks with images captured through a smartphone
US11113543B2 (en) Facility inspection system and facility inspection method
CN113516660A (en) Visual positioning and defect detection method and device suitable for train
CN103077526B (en) There is train method for detecting abnormality and the system of depth detection function
CN111080603B (en) Method for detecting breakage fault of shaft end bolt of railway wagon
JP2022549541A (en) Defect detection method, apparatus, electronic equipment and computer storage medium
CN114049356B (en) Method, device and system for detecting structure apparent crack
CN114022537B (en) Method for analyzing loading rate and unbalanced loading rate of vehicle in dynamic weighing area
CN113159024A (en) License plate recognition technology based on improved YOLOv4
CN115909092A (en) Light-weight power transmission channel hidden danger distance measuring method and hidden danger early warning device
CN116579992A (en) Small target bolt defect detection method for unmanned aerial vehicle inspection
CN117492026A (en) Railway wagon loading state detection method and system combined with laser radar scanning
CN112347952B (en) Railway wagon coupler tail frame supporting plate falling-off identification method
CN112749741B (en) Hand brake fastening fault identification method based on deep learning
CN117351298A (en) Mine operation vehicle detection method and system based on deep learning
CN116524382A (en) Bridge swivel closure accuracy inspection method system and equipment
CN110543612A (en) card collection positioning method based on monocular vision measurement
CN114332814A (en) Parking frame identification method and device, electronic equipment and storage medium
CN111854678B (en) Pose measurement method based on semantic segmentation and Kalman filtering under monocular vision
CN113569702A (en) Deep learning-based truck single-tire and double-tire identification method
CN113128563A (en) High-speed engineering vehicle detection method, device, equipment and storage medium
CN115031640B (en) Train wheel set online detection method, system, equipment and storage medium
CN116309418B (en) Intelligent monitoring method and device for deformation of girder in bridge cantilever construction
Ge et al. Long-term monitoring system for full-bridge traffic load distribution on long-span bridges
CN111583176B (en) Image-based lightning protection simulation disc element fault detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant