CN113808095A - Big data-based intelligent damage identification and analysis system for railway steel rails - Google Patents
- Publication number
- CN113808095A (application number CN202111069815.1A)
- Authority
- CN
- China
- Prior art keywords
- damage
- data
- display
- frame
- analysis
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0004—Industrial image inspection
- G01N29/04—Analysing solids
- G01N29/0609—Display arrangements, e.g. colour displays
- G01N29/0645—Display representation or displayed parameters, e.g. A-, B- or C-Scan
- G01N29/4472—Mathematical theories or simulation
- G01N29/4481—Neural networks
- G06F18/22—Matching criteria, e.g. proximity measures
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06T7/11—Region-based segmentation
- G01N2291/0234—Metals, e.g. steel
- G01N2291/0289—Internal structure, e.g. defects, grain size, texture
- G06T2207/10132—Ultrasound image
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20104—Interactive definition of region of interest [ROI]
- G06T2207/20132—Image cropping
- G06T2207/30108—Industrial image inspection
Abstract
The invention discloses a big data-based intelligent damage identification and analysis system for railway steel rails, relating to the technical field of railway rail damage identification. It addresses the problems that conventional rail flaw-detection data are mostly analysed manually, so the playback workload is large, the analysis speed is slow and the degree of intelligence is low. The system comprises a damage analysis module that acquires, in real time, the B-display data played back by a flaw detector, matches the damage analysis model corresponding to that flaw detector, preprocesses the B-display data, then calibrates and crops the preprocessed data into a plurality of B-display images, and performs damage identification on those images with the matched damage analysis model, thereby achieving intelligent identification and analysis of railway rail damage and improving analysis efficiency. By providing a sample collection module, the system improves the efficiency of early-stage sample collection; the sample collection module also supports manual drawing of damage, ensuring the diversity of the sample library.
Description
Technical Field
The invention relates to the technical field of railway steel rail damage identification, in particular to a railway steel rail intelligent damage identification and analysis system based on big data.
Background
With the continuous increase of railway speeds and freight volumes, rail damage has increased significantly and the damage types have diversified. Playback of rail flaw-detection data relies mainly on manual analysis, which suffers from a heavy playback workload, slow analysis speed and a low degree of intelligence; factors such as the playback operator's physical condition, mental state and skill level easily lead to missed and erroneous judgements, leaving hidden dangers for safe railway operation. To solve these problems, it is necessary to develop an efficient, intelligent railway rail damage identification and analysis system that improves working efficiency and reduces the hidden dangers caused by human factors.
Disclosure of Invention
The invention aims to provide a big data-based intelligent railway rail damage identification and analysis system, to solve the problems that existing rail flaw-detection data are mostly analysed manually, with a heavy playback workload, slow analysis speed and a low degree of intelligence.
The purpose of the invention can be realized by the following technical scheme: a big data-based intelligent damage identification and analysis system for railway rails, used in a server, comprising:
a damage analysis module, which acquires in real time the B-display data played back by a flaw detector, matches the damage analysis model corresponding to that flaw detector, preprocesses the B-display data, calibrates and crops the preprocessed data into a plurality of B-display images, performs damage identification on the B-display images with the matched damage analysis model, feeds the identification results back to the front end for display, receives the playback operator's secondary confirmation of the results, and generates a playback report; the specific identification process comprises: comparing each B-display image with typical damage feature maps, and reinjecting the damage type and similarity value onto the B-display image to obtain the identification result;
As a preferred embodiment of the present invention, the preprocessing specifically comprises the following steps:
S1: verifying the integrity of the data;
S2: removing ultrasonic clutter;
S3: calibrating the mileage of the detection data: performing position-mileage calibration on the detected B-display data using information and typical feature points from the line basic-information database; the typical feature points of a line include station entry/exit turnouts, reinforced damage at known positions in the line, curve tangent points or spiral points, and the like;
S4: eliminating invalid data.
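The four preprocessing steps above can be sketched as follows; the array-based representation and the threshold values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def preprocess_b_scan(b_scan, clutter_threshold=3):
    """Minimal sketch of preprocessing steps S1-S4 on a B-display frame.

    b_scan: 2D array of echo intensities per (channel, mileage) cell.
    clutter_threshold is a hypothetical parameter for step S2.
    """
    # S1: integrity check -- reject frames containing missing (NaN) samples
    if np.isnan(b_scan).any():
        raise ValueError("incomplete B-display frame")

    # S2: clutter removal -- zero out weak isolated echoes below the threshold
    cleaned = b_scan.copy()
    cleaned[cleaned < clutter_threshold] = 0

    # S3: mileage calibration is applied separately against known line
    # feature points (turnouts, known defects); see the mileage-calibration step.

    # S4: invalid-data rejection -- discard all-zero columns (no echo at all)
    valid_cols = cleaned.sum(axis=0) > 0
    return cleaned[:, valid_cols]
```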
As a preferred embodiment of the present invention, the calibration and cropping specifically comprise the following steps:
determining the image size in the B-display data from the flaw detector model;
sliding a window across the B-display data, masking and cropping out any damage touching the right edge of the image frame, and outputting the type and position of the remaining damage; for the last sliding window, the identified damage is output directly without cropping;
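A minimal sketch of this sliding-window cropping, under the assumption of a fixed window width and stride (neither value is specified in the text): a defect crossing the right edge of a non-final window is masked out and left for the next window to detect whole.

```python
def sliding_windows(total_width, window_width, stride):
    """Yield (x_left, is_last) for each crop of the B-display data."""
    x = 0
    while True:
        is_last = x + window_width >= total_width
        yield x, is_last
        if is_last:
            break
        x += stride

def keep_defect(defect_box, window_width, is_last):
    """Mask (drop) a defect crossing the right edge of a non-final window;
    in the last window every identified defect is output directly."""
    x, y, length, width = defect_box      # box in window-local coordinates
    crosses_right_edge = x + length > window_width
    return is_last or not crosses_right_edge
```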
As a preferred embodiment of the present invention, the specific reinjection process is: reinjecting the damage type and position output by the cropping network into the B-display image;
wherein a damage position output by the cropping network is represented by L_i = (x_i, y_i, l_i, w_i);
L_i denotes the position of a rail damage bounding box output by the cropping network;
x_i denotes the x component of that bounding-box position;
y_i denotes the y component of that bounding-box position;
l_i denotes the length of the bounding box;
w_i denotes the width of the bounding box;
specifically: when the left edge of the sliding frame in the rail-damage B-display data is at position x_sw, the position of the rail damage bounding box in the B-display data after reinjection is L_Bi = (x_Bi, y_i, l_i, w_i), where x_Bi denotes the x component of the bounding-box position in the full B-display frame of rail flaw detection, and x_Bi = x_sw + x_i.
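The reinjection formula x_Bi = x_sw + x_i amounts to shifting each window-local box by the window's left edge; a one-function sketch:

```python
def reinject(defect_boxes, x_sw):
    """Map window-local boxes L_i = (x_i, y_i, l_i, w_i) back into the full
    B-display frame: only the x component shifts, x_Bi = x_sw + x_i."""
    return [(x_sw + x, y, l, w) for (x, y, l, w) in defect_boxes]
```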
As a preferred embodiment of the present invention, the present invention further includes:
the sample acquisition module is used for collecting a sample of flaw detection data and classifying the sample to obtain sample data; the sample data comprises positive samples and negative samples; the positive sample comprises normal B display signals such as joints, welding seams, screw holes and the like; the negative sample is a damage B display signal, such as rail head nuclear damage, welding seam nuclear damage, screw hole cracks (upper cracks, lower cracks and horizontal cracks), rail bottom cracks and the like;
the sample library is used for storing sample data;
the sample training module is used for carrying out deep learning training on samples in the sample library so as to generate a damage analysis model for carrying out artificial intelligence analysis on damage;
the test evaluation module is used for testing and evaluating the damage analysis model;
As a preferred embodiment of the present invention, the specific training process of the sample training module is as follows: the ultrasonic B-display images in the samples are passed through a ResNet convolutional neural network to obtain B-display feature maps; each feature map is shared by the RPN network and the ROI pooling layer; the feature map enters the RPN layer to generate ultrasonic echo-group candidate boxes; the ROI pooling layer then combines the B-display feature map with the echo-group candidate boxes to output ROI feature maps; finally, the candidate-box feature maps are used for classification and bounding-box regression, yielding the damage analysis model;
In a preferred embodiment of the present invention, the process by which the RPN layer generates the ultrasonic echo-group candidate boxes comprises:
the RPN generates anchor boxes: each pixel of the last-layer ultrasonic B-display feature map extracted by the convolutional neural network is expanded at aspect ratios of 1:2, 1:1 and 2:1, and amplified and mapped back onto the original ultrasonic B-display image at scale multiples of 8, 16 and 32 to obtain the anchor boxes;
the anchor boxes of each feature point are compared against the ground-truth labels to obtain the training data required by the RPN; an anchor box is treated as a target when its intersection-over-union overlap with the ground-truth label exceeds 0.7;
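The anchor generation and IoU-based labelling described above can be sketched as follows; the feature stride of 16 and the stride×scale anchor-size convention follow the standard Faster RCNN setup and are assumptions here, not values stated in the text.

```python
import itertools
import math

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def generate_anchors(feat_h, feat_w, stride=16,
                     ratios=(0.5, 1.0, 2.0), scales=(8, 16, 32)):
    """Place 9 anchors (3 aspect ratios x 3 scales) on every feature-map
    cell, mapped back to original B-display image coordinates."""
    anchors = []
    for cy, cx in itertools.product(range(feat_h), range(feat_w)):
        px, py = (cx + 0.5) * stride, (cy + 0.5) * stride   # cell centre
        for r, s in itertools.product(ratios, scales):
            size = stride * s                               # anchor side
            w, h = size / math.sqrt(r), size * math.sqrt(r)
            anchors.append((px - w / 2, py - h / 2, px + w / 2, py + h / 2))
    return anchors

def is_positive_anchor(anchor, gt_boxes, pos_thresh=0.7):
    """An anchor is a positive RPN training sample when its IoU with some
    ground-truth echo-group box exceeds 0.7, as stated above."""
    return any(iou(anchor, g) > pos_thresh for g in gt_boxes)
```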
The loss function of the RPN ultrasonic echo-group candidate-box extraction network is:

L({p_j}, {t_j}) = (1/N_cls) Σ_j L_cls(p_j, p_j*) + λ (1/N_reg) Σ_j p_j* L_reg(t_j, t_j*)

where p_j denotes the probability that the ultrasonic echo group in the j-th anchor box belongs to an ultrasonic reflector, and p_j* denotes the corresponding ground-truth label;
t_j denotes the position-prediction parameters of the echo group of the j-th anchor box in the ultrasonic B-display image, 4 in number: the centre coordinates, width and height;
t_j* denotes the position-label parameters of the real reflector of the echo group of the j-th anchor box in the ultrasonic B-display image, 4 in number: the centre coordinates, width and height;
N_cls denotes the batch size;
N_reg denotes the number of echo-group anchor boxes;
λ denotes the balance parameter used for normalization.
L_cls denotes the classification loss function of the ultrasonic echo group, computed as:

L_cls(p_j, p_j*) = −[p_j* log p_j + (1 − p_j*) log(1 − p_j)]

L_reg denotes the bounding-box regression loss function of the ultrasonic echo group, computed as:

L_reg(t_j, t_j*) = R(t_j − t_j*)

where R denotes the smooth L1 smoothing function:

smooth_L1(x) = 0.5x², if |x| < 1;  |x| − 0.5, otherwise

and (x, y, w, h) denote the centre coordinates, width and height of a box;
The goal of ultrasonic echo-group bounding-box regression is to find a mapping f such that the echo-group anchor box A = (A_x, A_y, A_w, A_h) is transformed into a box G′ = (G′_x, G′_y, G′_w, G′_h) approximately equal to the real echo-group box G = (G_x, G_y, G_w, G_h), that is:

(G′_x, G′_y, G′_w, G′_h) ≈ (G_x, G_y, G_w, G_h);

the regression transformation relation is:

f(A_x, A_y, A_w, A_h) = (G′_x, G′_y, G′_w, G′_h);

The idea of the conversion from A to G′ is: first perform a translation by t_x, t_y, then scale by t_w, t_h; the translation amounts t_x, t_y and scale factors t_w, t_h are applied as:

G′_x = A_w · t_x + A_x,  G′_y = A_h · t_y + A_y,
G′_w = A_w · exp(t_w),  G′_h = A_h · exp(t_h).
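A sketch of the smooth L1 function and the translate-then-scale transform from anchor A to G′; `regression_targets` is the inverse mapping used to build training labels. The exp/log parameterisation is the standard Faster RCNN convention and is assumed here.

```python
import math

def smooth_l1(x):
    """R in the regression loss: 0.5*x^2 when |x| < 1, else |x| - 0.5."""
    return 0.5 * x * x if abs(x) < 1 else abs(x) - 0.5

def apply_regression(anchor, t):
    """Translate the anchor centre by (t_x, t_y), then scale its width and
    height by exp(t_w), exp(t_h), giving G' ~= G."""
    ax, ay, aw, ah = anchor
    tx, ty, tw, th = t
    return (aw * tx + ax, ah * ty + ay,
            aw * math.exp(tw), ah * math.exp(th))

def regression_targets(anchor, gt):
    """Inverse mapping: the (t_x, t_y, t_w, t_h) carrying A exactly onto G."""
    ax, ay, aw, ah = anchor
    gx, gy, gw, gh = gt
    return ((gx - ax) / aw, (gy - ay) / ah,
            math.log(gw / aw), math.log(gh / ah))
```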
Compared with the prior art, the invention has the following beneficial effects:
1. B-display data played back by a flaw detector are collected in real time and matched against the damage analysis model corresponding to that flaw detector; the B-display data are preprocessed so that the flaw-detection output is more accurate, then calibrated and cropped into a plurality of B-display images; damage identification is performed on the B-display images by the matched damage analysis model, the identification results are fed back to the front end for display, the playback operator's secondary confirmation of the results is received, and a playback report is generated, achieving intelligent identification and analysis of railway rail damage and improving analysis efficiency;
2. by providing a sample collection module, the invention improves the efficiency of early-stage sample collection; the sample collection module also supports manual drawing of damage, so that samples can be drawn for damage types that may occur but have not yet appeared in actual flaw-detection work, ensuring the diversity of the sample library.
Drawings
In order to facilitate understanding for those skilled in the art, the present invention will be further described with reference to the accompanying drawings.
FIG. 1 is an overall schematic block diagram of the present invention;
FIG. 2 is a flow chart of the B-display image preprocessing of the present invention;
FIG. 3 is a schematic diagram of image cropping according to the present invention;
FIG. 4 is a schematic diagram of image side masking according to the present invention;
FIG. 5 is a schematic diagram illustrating the Faster RCNN network-architecture training of the present invention;
FIG. 6 is a graph of the loss function versus iteration number of the present invention;
FIG. 7 is an overall schematic block diagram of another embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the disclosure herein is for the purpose of describing particular embodiments only, and is not intended to be limiting of the disclosure. As used in the specification and claims of this disclosure, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the specification and claims of this disclosure refers to any and all possible combinations of one or more of the associated listed items and includes such combinations;
the target detection algorithm based on deep learning uses the deep convolutional neural network for target detection, can automatically extract various high-level features of image data, and improves the extraction efficiency of the features. The target detection algorithm based on deep learning has two directions, namely a detection algorithm for region suggestion and a detection algorithm for regression problem.
1) Region-proposed detection algorithm:
The RCNN network architecture combines a selective search algorithm with an SVM; compared with traditional target detection methods it greatly improves speed and accuracy, but performs much redundant computation and is slow. SPP-Net (Spatial Pyramid Pooling Network) extracts features from the whole picture in one pass, solving RCNN's redundant-computation and picture-distortion problems, but its training time and space complexity remain high.
The Fast RCNN network architecture simplifies the SPP layer of SPP-Net into an ROI-pooling layer and uses Softmax for classification and regression of candidate boxes; this improvement greatly accelerates detection.
The Faster RCNN network architecture uses an RPN (Region Proposal Network) in place of the selective search algorithm, achieving end-to-end computation.
2) Detection algorithms based on the regression problem:
The YOLO algorithm divides the picture into grids, each of which completes the detection task. YOLO greatly improves detection speed, but its detection precision is inferior to that of Faster RCNN.
In view of this, the SSD network architecture combines the YOLO regression idea with the RPN network of Faster RCNN, balancing detection accuracy and speed; however, its detection effect on small objects is poor.
After comparing the two classes of target detection algorithms, the Faster RCNN network architecture is selected as the basic network structure in the embodiments of the present application.
In the B-display damage detection task, locating the damage position is important both for the inspectors' judgement and for the subsequent treatment of the rail damage. Among the deep learning-based target detection algorithms, the damage-localization accuracy of the Faster RCNN algorithm is relatively high.
The damage identification requirements on B-display data are strict: false alarms are tolerable but misses are not, and damage shapes are irregular. Regression-based detection algorithms adopt grid-centred multi-scale regions in place of region proposals; their detection speed exceeds that of Faster RCNN, but their detection effect on irregularly shaped objects is poor. The region proposals of Faster RCNN, with their different expansion and transformation ratios, are better suited to the characteristics of ultrasonic B-display data.
The detection capability of Faster RCNN on small objects is also superior to that of the regression-based target detection algorithms;
example 1:
referring to fig. 1, the intelligent railway rail damage identification and analysis system based on big data is used in a server and comprises a sample acquisition module, a sample library, a sample training module, a model library, a test evaluation module and a damage analysis module;
the sample acquisition module collects and classifies the samples of the flaw detection data, and the sample data is stored through the sample storage; the sample data mainly comes from natural damage and damage of a calibration line which appear in the flaw detection process in the past year; the specific collection process is as follows: collecting the types of main parent metal flaw detectors of the work section of the bumper and corresponding playback software; analyzing the characteristics of playback software of flaw detectors of various types, and determining sample collection specifications, wherein the sample collection specifications comprise image framing areas, channel colors, jigsaw patterns, jigsaw requirements and the like; summarizing and summarizing main typical steel rail detection data in the past year; collecting various damaged images and standard images through a sample collecting tool; the samples of the B display signals are mainly divided into positive samples and negative samples, the positive samples are mainly common normal signals such as joints, welding seams, screw holes and the like, and the negative samples are common damage signals such as rail head nuclear damage, welding seam nuclear damage, screw hole cracks (upper cracks, lower cracks and horizontal cracks), rail bottom cracks and the like; through setting up sample collection module, improve the work efficiency to sample collection in earlier stage, sample collection module still supports artifical drawing damage simultaneously, to some probably appearing, but not appearing the damage that has appeared in the actual flaw detection operation, carries out the sample and draws to guarantee the variety of sample storehouse. The sample mainly comes from flaw detection instrument flaw calibration rails and actual flaw detection data. 
Of the collected samples, 80% are used to train the damage analysis model and 20% are used to verify its effectiveness;
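The 80/20 split can be sketched as below; the seeded shuffle is an assumption made so that the split is reproducible, and is not specified in the text.

```python
import random

def split_samples(samples, train_frac=0.8, seed=0):
    """Shuffle the collected samples and split them into a training set
    (80%) for fitting the damage analysis model and a validation set (20%)
    for verifying its effectiveness."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]
```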
the sample training module is used for carrying out deep learning training on the collected sample so as to generate a damage analysis model capable of carrying out artificial intelligence damage analysis, the training process is as follows, an ultrasonic B-display image passes through a convolutional neural network ResNet to obtain a B-display image characteristic diagram, and the characteristic diagram is shared by an RPN network and an ROI pooling layer; b, the B display characteristic image enters an RPN layer to generate an ultrasonic wave group candidate frame, and then an ROI pooling layer combines the B display image characteristic image and the ultrasonic wave echo group candidate frame to output an ROI characteristic image; and then classifying and frame regression are carried out by utilizing the feature map of the ultrasonic echo group candidate frame.
The sample training module also supports expansion and retraining of the sample library: if a damage is missed in judgment, it can be sampled through the sampling system and then passed to the sample training module for training, generating a richer damage analysis model and guaranteeing the damage detection rate;
the damage analysis module collects the B-display data played back by the flaw detector in real time, matches the damage analysis model corresponding to that flaw detector, and then preprocesses the B-display data so that the damage detection output is more accurate;
referring to fig. 2, the preprocessing of the B-display data includes checking data integrity, removing ultrasonic clutter signals, calibrating the position mileage of the detection data, and removing invalid detection data;
the ultrasonic clutter rejection mainly removes clutter caused by damage to the outer membrane of the ultrasonic probe wheel, excessive gain, a poor rail tread condition, or poor coupling between the ultrasonic probe and the rail tread. For example, after the outer membrane of the ultrasonic detection wheel is damaged, air enters the detection wheel under its operating pressure, causing the ultrasonic sensors inside to generate unexpected clutter. Such clutter can lead to misjudgment when the B-display data is interpreted for flaws;
the detection data mileage calibration refers to calibrating the position mileage of the detection data using the information in the line basic information database and typical feature points. Mileage calibration helps to rapidly and accurately locate a flaw when the rail flaw detector rechecks it, improves the efficiency of flaw rechecking, and serves as the basis for comparative analysis of periodic detection data. Typical feature points of a line include: station entry and exit points, reinforcement damages at known positions in the line, curve tangent points or spiral tangent points, and the like. Whether the position mileage calibration is accurate is closely related to the selection of feature points, so feature points with obvious characteristics in the line should be selected. After feature point calibration is finished, mileage reckoning calibration is performed on the remaining detection data.
The invalid detection data refers to undetected or mis-detected sections recorded by the operator during detection: typically the section after a detection wheel is damaged, a section where the detection wheel couples poorly because of a poor rail surface state, or a section where an ultrasonic sensor inside the detection wheel is in poor condition, as found and recorded by the operator on site. These records contain the starting and ending mileage of the undetected/mis-detected section; the data they cover is generally invalid detection data and must be manually cut out before damage judgment of the ultrasonic B-display data, to avoid the confusion and adverse effects it would cause in fine rail damage identification;
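A minimal sketch of the four preprocessing steps above, under assumed data layouts (the field names, the amplitude threshold, and the nearest-feature-point calibration are all illustrative simplifications, not from the patent):

```python
def preprocess_bscan(echoes, feature_points, invalid_sections):
    """Sketch of the B-display preprocessing pipeline.
    echoes: list of dicts with 'mileage' and 'amplitude' (hypothetical layout);
    feature_points: maps recorded mileage -> true mileage at typical feature
        points (station entry points, known reinforcement damage, etc.);
    invalid_sections: (start, end) mileage ranges logged by the operator."""
    # 1. Integrity check: drop records missing required fields.
    echoes = [e for e in echoes if "mileage" in e and "amplitude" in e]
    # 2. Clutter rejection: drop low-amplitude noise (threshold illustrative).
    echoes = [e for e in echoes if e["amplitude"] >= 0.1]
    # 3. Mileage calibration: shift by the offset at the nearest feature point.
    def offset(m):
        rec = min(feature_points, key=lambda fp: abs(fp - m))
        return feature_points[rec] - rec
    for e in echoes:
        e["mileage"] += offset(e["mileage"])
    # 4. Remove operator-recorded invalid (undetected/mis-detected) sections.
    echoes = [e for e in echoes
              if not any(s <= e["mileage"] <= t for s, t in invalid_sections)]
    return echoes
```

In practice the clutter step would use signal-level criteria rather than a single amplitude threshold; the point here is the ordering of the four stages.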
referring to fig. 3, the preprocessed B-display data is calibrated and cropped to obtain a plurality of B-display images, and the detection results within the images are filtered: damage within a distance Δr of the right border of the image frame is masked and cropped, and the type C_i and position L_i of the remaining damage are output;
referring to fig. 4, when outputting the detected type and position of the rail damage, for all images except the last image of the ultrasonic B-display data, only damages whose distance from the right boundary of the image is greater than or equal to Δr are output. As shown in fig. 4, the distance Δ between the right side of damage A and the right side of the image satisfies Δ < Δr, so damage A is masked and its type and position are output in the next image instead; the distance between the right side of damage B and the right side of the image satisfies Δ > Δr, so the type and position of damage B are output. This prevents a damage truncated by the right border of the image from being output, and prevents repeated output with the previous image; when the sliding window is the last sliding window, the identified damage is output directly without cropping;
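The right-margin masking rule above can be sketched as follows, assuming overlapping sliding windows and deduplicating detections across windows (the function name, window parameters, and detection tuple layout are illustrative):

```python
def sliding_window_outputs(detections, step, window_len, delta_r, n_windows):
    """Report each detection exactly once across sliding windows over the
    B-display data. A damage within delta_r of a window's right border is
    masked (it will be reported by a later window); the last window outputs
    everything inside it without cropping.
    detections: list of (damage_type, x) in track coordinates."""
    seen = set()
    out = []
    for k in range(n_windows):
        left = k * step
        right = left + window_len
        is_last = (k == n_windows - 1)
        for dtype, x in detections:
            if not (left <= x < right) or (dtype, x) in seen:
                continue
            # Mask damages within delta_r of the right border, except in
            # the last window.
            if is_last or (right - x) >= delta_r:
                out.append((dtype, x))
                seen.add((dtype, x))
    return out
```

For the rule to work, the window step must be small enough that the masked right margin of one window falls fully inside the next window.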
damage identification is performed on the B-display image through the matched damage analysis model: the B-display image is compared with the typical damage characteristic maps, and the damage type and similarity are reinjected as a graph onto the B-display image to obtain the identification result; the identification result is fed back to the front end for display, a secondary confirmation instruction on the identification result is received from the playback operator, and a playback report is generated; the front end can be the flaw detector or an intelligent terminal of the playback operator, where the intelligent terminal includes smartphones, tablets and other electronic devices;
example 2:
Because the length of the rail damage detection B-display data is linearly related to the detection mileage, the longer the detection mileage, the longer the B-display data.
The invention adopts the Faster RCNN algorithm, takes a picture as input, and presents to the user as output a rail damage detection B-display image marked with the type and position of the rail damage;
the B-display data for rail damage detection is cropped into B-display images using a determined step length. The B-display images are sent in sequence to the detection network, which preprocesses them according to the selected model, compares them with the typical damage characteristic maps, and finally returns the result to the user as a graph with the damage types and similarities reinjected;
the damage analysis model is a Faster RCNN model;
the Faster RCNN unifies candidate box recommendation, feature extraction, feature classification and frame regression into a single framework, truly realizing end-to-end training; the specific training process is as follows:
referring to fig. 5, the B-display image is processed by a convolutional neural network to obtain a B-display image feature map, which is shared by the RPN (Region Proposal Network) and the ROI (Region of Interest) pooling layer;
the B-display image feature map enters the RPN layer to generate ultrasonic echo group candidate frames;
the ROI pooling layer is combined with the B display image feature map and the ultrasonic echo group candidate frame to output an ROI feature map;
classifying and frame regression by using the feature map of the ultrasonic echo group candidate frame;
the Faster RCNN integrates the candidate box recommendation algorithm and the convolutional neural network, so that end-to-end target detection is realized, and the accuracy and the speed are improved.
The Faster RCNN uses consecutive layers of small convolution kernels to capture small damage features and extract higher-dimensional features from the ultrasonic B-display image: as the number of convolution layers increases, more complex functions can be fitted and richer features extracted. Small kernels also reduce the loss of boundary features, whereas a large convolution kernel pads more invalid data at the boundary, which degrades the accuracy of feature extraction.
The RPN generates ultrasonic echo group candidate frames and sends them to the next detection stage; the generation process of the ultrasonic echo group candidate frames is as follows:
the RPN generates anchor frames (Anchors): each pixel of the last-layer ultrasonic B-display image feature map extracted by the convolutional neural network is transformed according to the aspect ratios 1:2, 1:1 and 2:1, amplified by factors of 8, 16 and 32, and mapped back to the original ultrasonic B-display image to obtain the anchor frames. After the anchor frame of each feature point is obtained, it is compared with the ground truth (the real target label) to obtain the training data required by the RPN; if the IoU (Intersection over Union) overlap between an anchor frame and the ground truth is greater than 0.7, the anchor is treated as a target. The anchor frames are used for classification, judging whether the target is background, and also for regression, fine-tuning the anchor frame position, as shown in equation 1:

L({p_j}, {t_j}) = (1/N_cls) Σ_j L_cls(p_j, p_j*) + λ (1/N_reg) Σ_j p_j* L_reg(t_j, t_j*)    (equation 1)
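The anchor generation step above can be sketched as follows (the function name and the `base` stride default are illustrative assumptions; the 3 ratios × 3 scales yield 9 anchors per feature-map cell, with area preserved across ratios):

```python
import itertools

def make_anchors(cx, cy, base=16, ratios=(0.5, 1.0, 2.0), scales=(8, 16, 32)):
    """Generate the 9 anchor boxes (3 aspect ratios x 3 scales) centred on one
    feature-map cell, mapped back to B-display image coordinates as
    (x0, y0, x1, y1). ratio is interpreted as height/width."""
    anchors = []
    for ratio, scale in itertools.product(ratios, scales):
        size = base * scale                 # side length at ratio 1:1
        w = size * (1.0 / ratio) ** 0.5     # widen for flat ratios
        h = size * ratio ** 0.5             # h / w == ratio
        anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors
```

During training, each anchor is then matched against the ground-truth echo group boxes by IoU to label it as target or background.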
where:
p_j: probability that the ultrasonic echo group in the j-th anchor frame belongs to an ultrasonic reflector;
t_j: the 4 position prediction parameters (center coordinates, width and height) of the ultrasonic echo group of the j-th anchor frame in the ultrasonic B-display image;
t_j*: the 4 position label parameters (center coordinates, width and height) of the real reflector of the ultrasonic echo group of the j-th anchor frame in the ultrasonic B-display image;
N_cls: batch size;
N_reg: number of anchor frames of the ultrasonic echo group;
λ: balance parameter used in normalization.
The frame regression loss function of the ultrasonic echo group is calculated according to equation 3:

L_reg(t_j, t_j*) = R(t_j - t_j*)    (equation 3)

where R is the smooth-L1 smoothing function given by equation 4:

smooth_L1(x) = 0.5x², if |x| < 1; |x| - 0.5, otherwise    (equation 4)
Equation 1 includes both the classification loss function and the frame regression loss function of the ultrasonic echo group. For an ultrasonic echo group frame, the center coordinates, width and height are generally represented by (x, y, w, h). The goal of the ultrasonic echo group frame regression is to find a relationship f such that the anchor frame A = (A_x, A_y, A_w, A_h) is mapped to a transformed frame G′ = (G′_x, G′_y, G′_w, G′_h) that approximates the real echo group frame G = (G_x, G_y, G_w, G_h), as shown in equation 5:

(G′_x, G′_y, G′_w, G′_h) ≈ (G_x, G_y, G_w, G_h)    (equation 5)

The frame regression transformation relationship of the ultrasonic echo group is shown in equation 6:

f(A_x, A_y, A_w, A_h) = (G′_x, G′_y, G′_w, G′_h)    (equation 6)

The idea of the conversion from A to G is: first perform a translation t_x, t_y, then scale by t_w, t_h. The translation amounts t_x, t_y and scale factors t_w, t_h are calculated as shown in equation 7:

t_x = (G_x - A_x) / A_w,  t_y = (G_y - A_y) / A_h,  t_w = log(G_w / A_w),  t_h = log(G_h / A_h)    (equation 7)
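The A-to-G conversion above can be sketched in code; the tuple layout (x, y, w, h) follows the text, and the function name is an illustrative assumption:

```python
import math

def bbox_transform(anchor, gt):
    """Compute the translation (t_x, t_y) and log-scale (t_w, t_h) regression
    targets that map anchor A = (x, y, w, h) onto ground-truth box G, as in
    the standard R-CNN box parameterisation."""
    ax, ay, aw, ah = anchor
    gx, gy, gw, gh = gt
    return ((gx - ax) / aw, (gy - ay) / ah,
            math.log(gw / aw), math.log(gh / ah))
```

Dividing the translation by the anchor size and taking the log of the scale ratio makes the targets invariant to the anchor's absolute size, which is why the same regressor works across the 9 anchor shapes.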
The test evaluation module uses the test-set ultrasonic B-display data to test and evaluate the Faster RCNN model obtained by training.
For each sample in the test set, the IoU value of each recommendation box (proposal region) is calculated; if IoU ≥ 0.5, the probability that the ultrasonic reflector in the recommendation box belongs to a given class is output; if IoU < 0.5, that probability is set to 0. The ultrasonic reflectors of the same category are sorted by their class probability values; each probability value yields a point (FPR, TPR), and connecting these points gives the ROC curve of that category. Drawing an ROC curve for each of the 10 ultrasonic reflector classes and averaging the 10 curves gives the final ROC curve of the Faster RCNN network model; the area under the curve is AUC = 0.95, indicating good overall classification performance and usability of the model.
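A minimal pure-Python sketch of the per-class ROC construction described above (function names are illustrative; ties in the probability values are ignored for simplicity):

```python
def roc_points(scores, labels):
    """Sweep thresholds over the per-class probabilities and collect
    (FPR, TPR) points. labels are 1 for a true reflector of the class,
    0 otherwise."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Trapezoidal area under the ROC curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area
```

A perfect ranking (all positives scored above all negatives) gives AUC = 1.0; the patent's reported macro-averaged AUC of 0.95 would be the mean of the 10 per-class curves.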
In order to verify the ultrasonic B-display data fine damage identification function of the Faster RCNN model, a training sample library is established and a server is set up. The server parameters include a professional computing GPU with more than 8 GB of memory, an AI processor, 12 cores / 24 threads, a main frequency of at least 2.5 GHz, at least 32 GB of RAM, at least 1 TB of storage, CentOS 7.7+ as the operating system, and MySQL 5.7+ as the database.
The server-side program uses a deep learning framework to implement the training environment. The input of the Faster RCNN model is the ultrasonic B-display data image, without ultrasonic sensor related preprocessing or ultrasonic echo group spatial segmentation;
establishing a training data set;
the training data set uses the detection data of ultrasonic reflectors recorded by the rail flaw detection vehicle on the artificial flaw calibration line, together with rail flaw pictures detected on part of the actual lines; the picture size in the training data set is determined by the flaw detector model;
the invention aims at the current general type 10 flaw detector to collect;
the sample set is named jgt01; it contains 11000 pictures and 36906 ultrasonic reflectors of various types.
Type (B) | Number of samples | Type (B) | Number of samples
---|---|---|---
Rail bottom crack | 766 | Normal screw hole | 6412
Joint | 3574 | Rail web cross hole | 8852
Rail web damage | 626 | Screw hole crack | 4462
Rail head nuclear flaw | 5824 | Weld seam nuclear flaw | 1882
Weld seam | 4108 | Zero-degree anomaly | 400
80% of the sample set is selected as a training set, and 20% is selected as a testing set.
In the training process, the convergence of the loss functions at each stage of the Faster RCNN model is monitored. The model has 4 loss functions: the classification loss function in the RPN network, the ultrasonic echo group frame regression loss function in the RPN network, the multi-class classification loss function of the ultrasonic reflector, and the multi-class frame regression loss function of the ultrasonic reflector;
referring to fig. 6, the relationship between each loss function and the number of iterations shows that the loss values decrease as the number of iterations increases, until convergence. In testing, after about 40,000 iterations all loss functions changed little, and training was stopped;
after the model is trained, all ultrasonic echo group candidate frames recommended by the model are optimized before multi-classification. The overlap rate between a candidate frame and the real frame is measured by the intersection-over-union IoU, i.e. the ratio of the intersection to the union of the ultrasonic echo group candidate frame and the ultrasonic echo group real frame: IoU = 1 indicates complete overlap, and IoU = 0 indicates no intersection at all.
For the ultrasonic echo group candidate frame ROI_P with lower-left corner (X0, Y0) and upper-right corner (X1, Y1), and the real frame ROI_G with lower-left corner (A0, B0) and upper-right corner (A1, B1):

IoU = area(ROI_P ∩ ROI_G) / [area(ROI_P) + area(ROI_G) - area(ROI_P ∩ ROI_G)]

where

area(ROI_P ∩ ROI_G) = [min(X1, A1) - max(X0, A0)] × [min(Y1, B1) - max(Y0, B0)]
area(ROI_P) = (X1 - X0) × (Y1 - Y0)
area(ROI_G) = (A1 - A0) × (B1 - B0)
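A minimal sketch of this IoU computation, using corner coordinates (x0, y0, x1, y1) and clamping the intersection extents at zero so disjoint boxes score 0:

```python
def iou(p, g):
    """Intersection-over-union of candidate box p and ground-truth box g,
    each given as (x0, y0, x1, y1) lower-left / upper-right corners."""
    ix = max(0.0, min(p[2], g[2]) - max(p[0], g[0]))  # intersection width
    iy = max(0.0, min(p[3], g[3]) - max(p[1], g[1]))  # intersection height
    inter = ix * iy
    union = ((p[2] - p[0]) * (p[3] - p[1])
             + (g[2] - g[0]) * (g[3] - g[1]) - inter)
    return inter / union if union else 0.0
```

The clamping matters: without it, non-overlapping boxes would produce a negative "intersection" and a meaningless score.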
the Faster RCNN automatically extracts the features of the input ultrasonic B-display image, with each feature map extracting different image features; through self-learning, the Faster RCNN can automatically extract the features of various damages to identify the ultrasonic B-display image.
Clutter in the sample pictures is not annotated, so the recognition model automatically treats it as image background during training; backgrounds are not recommended for classification and localization during region recommendation, so clutter is not recognized by the model. The Faster RCNN therefore has a good filtering effect on ultrasonic clutter;
the Faster RCNN damage recognition speed is high: a single B-display image takes about 0.15 seconds to pass through model recognition;
example 3:
as shown in fig. 7, based on the above embodiment 1, the system further includes a processing end, the processing end is in communication connection with the server and performs data exchange, and the processing end is internally provided with a damage analysis module; the server is internally provided with a data distribution module;
the data distribution module counts the number of flaw detectors feeding back data and steel rail B display data sent by the flaw detectors and performs distribution processing, and specifically comprises the following steps:
when the number of flaw detectors feeding back data is larger than a set number threshold, the type of each flaw detector and the data volume of the rail B-display data it sends are acquired; every flaw detector type is assigned a preset model value, and matching the detector's type against these types yields its preset model value, marked QX1; the data volume of the rail B-display data sent by the flaw detector is marked QX2; the preset model value and the data volume are normalized and their values taken, with weight coefficients St1 and St2 respectively; the two values are substituted into the preset formula QP = QX1 × St1 + St2 / QX2 to obtain the ranking score QP of the flaw detector; the flaw detectors are sorted by ranking score from large to small and screened, selecting from front to back a number of detectors equal to the number threshold; the unselected flaw detectors are marked as flaw detectors to be distributed;
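The ranking step can be sketched as below, assuming each detector record carries its preset model value and data volume (the field names `qx1` and `qx2` are hypothetical):

```python
def rank_detectors(detectors, st1, st2, k):
    """Score each flaw detector QP = QX1*St1 + St2/QX2 (QX1: preset model
    value, QX2: data volume), sort descending, keep the top-k in the server,
    and queue the rest for distribution to external processing ends."""
    scored = sorted(detectors,
                    key=lambda d: -(d["qx1"] * st1 + st2 / d["qx2"]))
    return scored[:k], scored[k:]
```

Note the score rewards a high model value but penalizes a large data volume, so detectors sending heavy data tend to be offloaded.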
the flaw detectors to be distributed are assigned as follows: the registration ends in the server and their processing models are acquired, and a registration end whose processing model is consistent with the model of the flaw detector to be distributed is marked as a primary selection end;
the position of each primary selection end is acquired and its distance to the server position is calculated to obtain the transmission distance, and the effective average value of the primary selection end is acquired; the transmission distance and the effective average value are normalized, and the normalized values are marked MS1 and MS2;
the primary selection end preference value MZ is obtained by the formula MZ = (100 / MS1) × 0.1 + (10 / MS2) × 0.9, and the primary selection end with the largest preference value is marked as the processing end for the flaw detector to be distributed;
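The preference-value selection can be sketched as follows; `ms1` and `ms2` stand for the normalized transmission distance and effective average value, and the field names are hypothetical:

```python
def pick_processing_end(candidates):
    """Choose the primary selection end with the largest preference value
    MZ = (100 / MS1) * 0.1 + (10 / MS2) * 0.9, i.e. favouring short
    transmission distance and, with much higher weight, fast average
    processing."""
    return max(candidates,
               key=lambda c: (100 / c["ms1"]) * 0.1 + (10 / c["ms2"]) * 0.9)
```

With the 0.9 weight on the processing-speed term, a slow-but-near end loses to a fast-but-far one, which matches the formula's intent of minimizing total turnaround time.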
the data distribution module sends the rail B-display data of the flaw detector to be distributed and the damage analysis model corresponding to that flaw detector to the processing end; after receiving them, the processing end processes the rail B-display data through its built-in damage analysis module and feeds the identification result back to the corresponding front end;
the data distribution module records the time at which the rail B-display data of the flaw detector to be distributed is sent, the time at which the corresponding damage analysis model is sent, and the time at which the processing end feeds back the identification result, and calculates the time difference to obtain the single processing duration; dividing the single processing duration by the data volume of the rail B-display data gives a single processing value; summing all single processing values of a processing end and taking the average gives the effective average value;
the data distribution module is used for analyzing the data sent by the flaw detector and sending the data to the corresponding processing end for processing so as to improve the processing speed of the data and reduce the data processing pressure of the damage analysis module in the server;
the data distribution module also has a built-in registration unit; the registration unit registers the terminal information submitted by playback personnel through their computer terminals, sends successfully registered terminal information to the server for storage, and marks the successfully registered computer terminal as a registration end; the terminal information includes the position, model, communication address and the like;
the preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents.
Claims (7)
1. A big data-based intelligent damage identification and analysis system for railway steel rails, for use in a server, characterized by comprising:
the damage analysis module is used for acquiring B display data replayed by the flaw detector in real time, matching a damage analysis model corresponding to the flaw detector, preprocessing the B display data, calibrating and cutting the preprocessed B display data to obtain a plurality of B display images, performing damage identification on the B display images through the matched damage analysis model, feeding back an identification result to the front end for display, receiving a secondary confirmation instruction of a replay worker on the identification result, and generating a replay report; the specific process of identifying the injury comprises the following steps: and comparing the B display image with the typical injury characteristic map, and reinjecting the injury type and similarity graph on the B display image to obtain an identification result.
2. The intelligent railway steel rail damage identification and analysis system based on big data as claimed in claim 1, wherein the preprocessing comprises the following specific steps:
s1: verifying the integrity of the data;
s2: removing ultrasonic clutter;
s3: and (3) detection data mileage calibration: carrying out position mileage calibration on the detected B display data by using information and typical characteristic points in a circuit basic information database;
s4: and eliminating invalid data.
3. The intelligent big data-based railway steel rail damage identification and analysis system according to claim 1, wherein the calibration cutting comprises the following specific steps:
determining the size of an image in the B-display data according to the flaw detector; masking and cropping the damage within a set distance of the right edge of the image frame, and outputting the type and position of the remaining damage; when the sliding window is the last sliding window, directly outputting the identified damage without cutting.
4. The big data-based intelligent damage identification and analysis system for the railway steel rail as claimed in claim 1, wherein the reinjection specifically comprises the following processes: reinjecting the damage type and position output by the cutting network into a B display image;
wherein the damage position output by the cutting network is represented by L_i = (x_i, y_i, l_i, w_i);
L_i represents the position of the steel rail damage frame output by the cutting network;
x_i represents the x component of the position of the steel rail damage frame output by the cutting network;
y_i represents the y component of the position of the steel rail damage frame output by the cutting network;
l_i represents the length of the steel rail damage frame output by the cutting network;
w_i represents the width of the steel rail damage frame output by the cutting network;
specifically: when the position of the left side of the sliding frame in the rail damage detection B-display data is x_sw, the position of the rail damage frame in the B-display data after reinjection is L_Bi = (x_Bi, y_i, l_i, w_i), where x_Bi represents the x component of the position of the rail damage frame in the rail damage detection B-display data, and x_Bi = x_sw + x_i.
5. The big data based intelligent damage identification and analysis system for the railway steel rail as claimed in claim 1, further comprising:
the sample acquisition module is used for collecting a sample of flaw detection data and classifying the sample to obtain sample data;
the sample library is used for storing sample data;
the sample training module is used for carrying out deep learning training on samples in the sample library so as to generate a damage analysis model for carrying out artificial intelligence analysis on damage;
and the test evaluation module is used for testing and evaluating the damage analysis model.
6. The big data-based intelligent damage identification and analysis system for the railway steel rail as claimed in claim 5, wherein the specific training process of the sample training module is as follows: ultrasonic B-display images in the samples pass through the convolutional neural network ResNet to obtain a B-display image feature map; the B-display image feature map is shared by the RPN and the ROI pooling layer; the B-display image feature map enters the RPN layer to generate ultrasonic echo group candidate frames, then the ROI pooling layer combines the B-display image feature map and the ultrasonic echo group candidate frames to output the ROI feature map, and classification and frame regression are then performed using the feature map of the ultrasonic echo group candidate frames to obtain the damage analysis model.
7. The big data based intelligent damage identification and analysis system for the railway steel rail as claimed in claim 6, wherein the RPN layer generates the ultrasonic echo group by the following steps: the RPN layer transforms each pixel on the last layer of B display image characteristic image extracted by the convolutional neural network according to a preset proportion, and the pixels are amplified according to a preset multiple and then mapped back to the original B display image to obtain an anchor frame; and comparing the anchor frame of each characteristic point with the target real label to obtain training data required by the RPN, and marking the target when the overlapping degree of the intersection ratio of the anchor frame and the target real label is greater than a preset threshold value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111069815.1A CN113808095A (en) | 2021-09-13 | 2021-09-13 | Big data-based intelligent damage identification and analysis system for railway steel rails |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113808095A true CN113808095A (en) | 2021-12-17 |
Family
ID=78941006
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111069815.1A Pending CN113808095A (en) | 2021-09-13 | 2021-09-13 | Big data-based intelligent damage identification and analysis system for railway steel rails |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113808095A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117036234A (en) * | 2023-05-09 | 2023-11-10 | 中国铁路广州局集团有限公司 | Mixed steel rail ultrasonic B-display map damage identification method, system and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109767427A (en) * | 2018-12-25 | 2019-05-17 | 北京交通大学 | The detection method of train rail fastener defect |
CN111896625A (en) * | 2020-08-17 | 2020-11-06 | 中南大学 | Real-time monitoring method and monitoring system for rail damage |
CN112200225A (en) * | 2020-09-23 | 2021-01-08 | 西南交通大学 | Steel rail damage B display image identification method based on deep convolutional neural network |
CN112465027A (en) * | 2020-11-27 | 2021-03-09 | 株洲时代电子技术有限公司 | Steel rail damage detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20211217 |