CN112129777A - Rail train maintenance detection device based on computer vision - Google Patents
- Publication number: CN112129777A
- Application number: CN202011117334.9A
- Authority: CN (China)
- Prior art keywords: train, track, running, network, flushing
- Prior art date: 2020-10-19
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G01N21/95—Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
- G01N21/01—Arrangements or apparatus for facilitating the optical investigation
- G01N1/34—Purifying; Cleaning (sampling; preparing specimens for investigation)
- B08B3/02—Cleaning by the force of jets or sprays
- B08B5/02—Cleaning by the force of jets, e.g. blowing-out cavities
- G06N3/04—Neural networks; Architecture, e.g. interconnection topology
- G06N3/08—Neural networks; Learning methods
Abstract
A rail train maintenance and detection device based on computer vision. A cleaning section and a data-acquisition section are arranged in sequence on the track along the travelling direction of the train. High-pressure water guns and air-blowing ports in the cleaning section first spray high-pressure water jets and high-pressure air to wash sludge and other impurities off the train running gear; a camera device in the data-acquisition section then photographs the running gear and transmits the video images to a machine vision unit, which detects cracks and the condition of parts on the running gear. The invention inspects the train running gear automatically, quickly and efficiently. This mode of inspection and maintenance is low in cost and high in efficiency, is not prone to missed inspections caused by blind spots in human observation, and effectively improves inspection and maintenance accuracy.
Description
Technical Field
The invention relates to the technical field of urban rail trains, in particular to a rail train maintenance detection device based on computer vision.
Background
Urban rail trains such as subways are the backbone of dedicated urban passenger transport lines, and their technical performance directly determines running speed and safety. Normal operation can therefore only be guaranteed by controlling and inspecting the vehicles comprehensively, uniformly and accurately. The running gear is one of the most important components of a metro vehicle: its condition directly affects ride quality, dynamic performance and operating safety. Detecting the running gear of urban rail trains is thus an important guarantee for the safe and reliable operation of the metro transport network.
At present, however, inspection of the train running gear for urban rail train maintenance is still performed mainly by hand. This mode of inspection and maintenance is costly and inefficient, and missed inspections easily occur because of blind spots in human observation.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a rail train maintenance and detection device based on computer vision, which utilizes the computer vision technology to clean, maintain and automatically detect the bottom of an urban rail train. The invention specifically adopts the following technical scheme.
Firstly, in order to achieve the above purpose, a rail train maintenance detection device based on computer vision is provided, comprising a cleaning part, a data acquisition part and a water baffle arranged in front of the cleaning part and the data acquisition part, the water baffle standing vertically, perpendicular to the train track, across the advancing direction of the train;
wherein the cleaning part includes:
the laser monitoring instrument is arranged in the advancing direction of the train along the track, is used for sensing the position of the train and outputting a trigger signal when the train advances to the cleaning position;
the flushing device is arranged along the track in the advancing direction of the train and forms a flushing area in that direction; as the train advances on the track, the train body enters the flushing area from one side and leaves from the other, and in the flushing area the train body is flushed by the water jets sprayed from the flushing device;
the drying device is arranged behind the flushing device along the advancing direction of the train and forms a drying area in that direction; as the train advances on the track, the train body passes from the flushing area into the drying area, where the moisture remaining after flushing is removed;
the floor drain is arranged before the drying device along the advancing direction of the train and is used for receiving and discharging the water sprayed by the flushing device;
the data acquisition part includes:
the camera device is arranged behind the drying equipment along the advancing direction of the train and is used for shooting a running part of the train, transmitting a video image of the running part to the machine vision unit and enabling the machine vision unit to detect cracks and part conditions on the running part of the train.
Optionally, the rail train maintenance detection device based on computer vision as described in any one of the above, wherein the camera device includes:
the bottom high-definition camera is arranged in the middle of the track and is used for shooting a lower bottom image of the train running part at an upward angle;
the side high-definition cameras comprise a pair of side high-definition cameras which are respectively arranged at the outer sides of the tracks and are used for shooting images of the left side surface and the right side surface of the train running component at a head-up angle;
the bottom LED lamps comprise several groups arranged in the middle of the track and are used to illuminate the lower bottom surface of the train running gear while the bottom high-definition camera is shooting, so that the image brightness meets the detection requirement of the machine vision unit;
and the side LED light strips comprise at least two groups arranged on the outer sides of the track and are used to illuminate the left and right sides of the train running gear while the side high-definition cameras are shooting, so that the image brightness meets the detection requirement of the machine vision unit.
Optionally, the apparatus for rail train maintenance detection based on computer vision as described in any one of the above, wherein the flushing apparatus includes:
the side high-pressure water guns comprise at least two groups arranged on the outer sides of the track; each group is mounted vertically and sprays high-pressure water jets at the left and right sides of the train at a horizontal or near-horizontal angle;
the bottom high-pressure water guns comprise at least one group mounted horizontally in the middle of the track and spray high-pressure water jets at the lower bottom surface of the train running gear at a vertical or near-vertical angle;
the drying apparatus includes:
the bottom air-blowing ports comprise at least one group mounted horizontally in the middle of the track and blow high-pressure air at the lower bottom surface of the train running gear at a vertical or near-vertical angle to dry the residual moisture on the train underside;
and the side air-blowing ports comprise at least two groups arranged on the outer sides of the track; each group is mounted vertically and blows high-pressure air at the left and right sides of the train at a horizontal or near-horizontal angle to dry the residual moisture on the train underside.
Optionally, the machine vision unit is connected to the camera device and is configured to detect cracks on the train running gear from each frame image captured by the camera device, using an SSD network, according to the following steps:
step L1, calling a first convolutional neural network to perform a first feature extraction on the image and generate several layers of crack feature maps;
step L2, extracting the crack feature maps of six layers, each point of which generates corresponding crack-feature detection boxes;
and step L3, collecting all generated crack-feature detection boxes, applying non-maximum suppression to them, and outputting the retained detection boxes to indicate cracks on the running gear of the train.
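By way of illustration, the non-maximum suppression of step L3 can be sketched as a generic IoU-based procedure in Python/NumPy; the IoU threshold is an assumed value, not one specified by the patent.

```python
import numpy as np

def non_max_suppression(boxes, scores, iou_threshold=0.45):
    """Keep the highest-scoring detection boxes and drop heavily overlapping ones.

    boxes  : (N, 4) array of [x1, y1, x2, y2] crack-feature detection boxes
    scores : (N,) confidence of each box
    """
    order = scores.argsort()[::-1]          # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # intersection of the chosen box with the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        # keep only boxes that do not overlap the chosen box too much
        order = order[1:][iou <= iou_threshold]
    return keep
```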
Optionally, the machine vision unit is further configured to detect the condition of parts on the train running gear from each frame image captured by the camera device, using the SSD network, according to the following steps:
step P1, resizing the image, calling a second convolutional neural network obtained by generative adversarial network training, and performing a second feature extraction on the resized image to obtain several layers of part feature maps;
step P2, extracting the part feature maps of six layers, each point of which generates corresponding part-feature detection boxes;
and step P3, collecting all generated part-feature detection boxes, applying non-maximum suppression to them, and outputting the retained detection boxes to indicate missing parts on the running gear of the train.
Optionally, the second convolutional neural network is obtained by generative adversarial network training according to the following steps:
step s1, collecting and labelling fault pictures in which parts are missing from the train running gear and normal pictures in which the parts on the running gear are intact;
step s2, inputting noise data generated from a Gaussian random distribution into the generation network G of the generative adversarial network to obtain the noise distribution G(z);
step s3, inputting the fault pictures and the noise distribution G(z) together into the discrimination network D of the generative adversarial network, generating additional real negative samples, and labelling them as fault pictures;
step s4, randomly selecting pictures from the normal pictures and the fault pictures obtained in steps s1 and s3 in a preset proportion to form a training set, a test set and a verification set;
and step s5, feeding the pictures of the training set into the second convolutional neural network in turn and training it by back propagation according to the loss function until the network reaches Nash equilibrium, thereby completing the training of the second convolutional neural network.
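By way of illustration, steps s2 to s4 can be sketched as follows, assuming a trained Keras generator model is available (see the GAN training sketch in the detailed description); the latent dimension and the 8:1:1 split ratio are assumptions, not values given in the patent.

```python
import numpy as np

def balance_and_split(normal_imgs, fault_imgs, generator, latent_dim=100, seed=0):
    """Pad the minority (fault) class with GAN-generated samples, then split 8:1:1."""
    rng = np.random.default_rng(seed)
    n_extra = len(normal_imgs) - len(fault_imgs)
    if n_extra > 0:
        z = rng.normal(size=(n_extra, latent_dim)).astype("float32")   # Gaussian noise (step s2)
        extra_faults = generator(z, training=False).numpy()            # G(z), labelled as faults (step s3)
        fault_imgs = np.concatenate([fault_imgs, extra_faults], axis=0)
    x = np.concatenate([normal_imgs, fault_imgs], axis=0)
    y = np.concatenate([np.zeros(len(normal_imgs)), np.ones(len(fault_imgs))])
    idx = rng.permutation(len(x))
    n_train, n_test = int(0.8 * len(x)), int(0.1 * len(x))
    train, test, val = np.split(idx, [n_train, n_train + n_test])      # step s4
    return (x[train], y[train]), (x[test], y[test]), (x[val], y[val])
```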
Optionally, the train maintenance detecting step includes:
driving the train along the track from the cleaning part to the data acquisition part; during this process, according to the trigger signal of the laser monitor, the flushing device in the cleaning part is first controlled to spray high-pressure water jets at the left and right sides of the train and at the lower bottom surface of the train running gear at horizontal and vertical angles respectively, and then high-pressure air is blown at the left and right sides of the train and at the lower bottom surface of the running gear at horizontal and vertical angles respectively;
then the camera device in the data acquisition part is controlled to capture images of the lower bottom surface of the train running gear at an upward-looking angle and images of its left and right sides at a head-on angle.
Advantageous effects
According to the invention, a cleaning part and a data acquisition part are arranged in sequence on the track along the travelling direction of the train. The high-pressure water guns and air-blowing ports of the cleaning part first spray high-pressure water jets and high-pressure air to wash sludge and other impurities off the train running gear; the camera device of the data acquisition part then photographs the running gear and transmits the video images to the machine vision unit, which detects cracks and the condition of parts on the running gear. The invention inspects the train running gear automatically, quickly and efficiently. This mode of inspection and maintenance is low in cost and high in efficiency, is not prone to missed inspections caused by blind spots in human observation, and effectively improves inspection and maintenance accuracy.
Further, because images showing missing screws are rare, the training samples are unbalanced across object classes, which skews the convolutional neural network and degrades recognition of the small-sample classes. The invention uses noise data drawn from a Gaussian random distribution and adds extra real negative samples through the generation network G, so that the amount of data in the different sample classes is balanced. The nearly balanced training data therefore safeguard the accuracy of fault detection.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of the overall structure of a computer vision based rail train maintenance detection device of the present invention;
FIG. 2 is a block diagram of the rail train maintenance detection system of the present invention;
FIG. 3 is a schematic diagram of a detection box determined by the crack marking during the training process of the present invention;
FIG. 4 is a schematic view of a crack marked during the inspection process of the present invention;
FIG. 5 is a flow chart of the present invention for machine vision recognition and inspection of an urban rail train;
FIG. 6 is a schematic diagram of the adversarial neural network employed by the present invention;
fig. 7 is a schematic diagram of the detection result of the urban rail train screw falling-off according to the invention.
In the drawings, 1 denotes the laser monitor; 21 the bottom high-definition camera; 22 the side high-definition camera; 31 the side high-pressure water gun; 32 the bottom high-pressure water gun; 41 the bottom LED lamp; 42 the side LED light strip; 51 the bottom air-blowing port; 52 the side air-blowing port; 6 the water baffle; 7 the floor drain; and 8 the track.
Detailed Description
In order to make the purpose and technical solution of the embodiments of the present invention clearer, the technical solution of the embodiments of the present invention will be clearly and completely described below with reference to the drawings of the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the invention without any inventive step, are within the scope of protection of the invention.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As shown in fig. 1, the hardware device of the system can be mainly divided into two parts, the first part is a cleaning part, and the second part is a data acquisition part:
wherein the cleaning part comprises:
the laser monitoring instrument 1 is arranged in the advancing direction of the train along a track 8, is used for sensing the position of the train and outputting a trigger signal when the train advances to a cleaning position;
a flushing device which is arranged along the track 8 in the travelling direction of the train and forms a flushing area in that direction. As the train runs on the track, the train body enters the flushing area from one side and leaves from the other. In the flushing area, two vertically mounted groups of side high-pressure water guns 31 spray high-pressure water jets at the left and right sides of the train at a horizontal or near-horizontal angle, and at least one horizontally mounted group of bottom high-pressure water guns 32 sprays high-pressure water jets at the lower bottom surface of the train running gear at a vertical or near-vertical angle. Drying equipment comprising bottom and side air-blowing ports then blows high-pressure air at the lower bottom surface of the running gear at a vertical or near-vertical angle, and at the left and right sides of the train at a horizontal or near-horizontal angle, to dry the moisture remaining on the train underside;
the floor drain 7 is arranged in front of the drying equipment along the advancing direction of the train and is used for receiving and discharging water flow sprayed by the flushing device;
the cleaning portion described above may be arranged to operate in the following manner: when the train enters a train section, the train slowly runs over a steel rail, and at the moment, when a running part moves to the position above a high-pressure water gun, the high-pressure water gun at the bottom and the high-pressure water gun arranged on the side face simultaneously spray high-pressure water flow to wash the bottom and the side face of the running part and wash dust and dust, when muddy water falls to the position near the steel rail from a running main part, the muddy water can flow out through a floor drain device, the sewage is simply filtered and can be repeatedly used due to the fact that the main component of the sewage is silt, when the water flow is large, the sewage cannot be timely discharged through the floor drain, in order to avoid that other equipment is soaked by the water and further equipment is damaged, a water baffle is arranged between a cleaning part and a data acquisition part, the corrosion of the sewage to other equipment can be effectively prevented, when the running part is cleaned, the running part is dried through air guns at the bottom and the side face, the data acquisition is convenient to carry out later.
The data acquisition part comprises:
a camera device arranged behind the drying equipment along the advancing direction of the train, comprising a bottom high-definition camera 21 that photographs the lower bottom surface of the train running gear at an upward-looking angle and side high-definition cameras 22 that photograph the left and right sides of the running gear at a head-on angle;
while shooting, the two camera groups are lit by the bottom LED lamps 41 and the side LED light strips 42 respectively;
the bottom LED lamps 41 comprise several groups arranged in the middle of the track 8 and illuminate the lower bottom surface of the train running gear while the bottom high-definition camera 21 is shooting, so that the image brightness meets the detection requirement of the machine vision unit;
the side LED light strips 42 comprise at least two groups arranged on the outer sides of the track 8 and illuminate the left and right sides of the running gear while the side high-definition cameras 22 are shooting, so that the image brightness meets the detection requirement of the machine vision unit;
the data acquisition portion described above may be arranged to operate in the following manner: when the blow-dried running part continues to move forwards, the wheels block the laser detector, at the moment, the opposite side cannot receive signals so as to light a high-pressure water gun at the side and the bottom to jet high-pressure water flow, bottom soil and dust are cleaned, sewage flows out through a floor drain and cannot corrode a steel rail and other ground equipment, after cleaning is completed, air blowing at the side and the bottom blows the running part quickly through jetting high-pressure gas to blow the running part, then an LED lamp and a side LED lamp belt fill the running part with light, meanwhile, a bottom high-definition camera and a side high-definition camera acquire images of the running part, then the video images of the shot running part of the train are transmitted to a machine vision unit, and the machine vision unit executes the following steps to call an SSD network to detect cracks and part running conditions on the train part on each frame of image one by one:
step A1, resizing the image, calling the convolutional neural network and extracting features from the resized image to obtain several layers of feature maps;
step A2, extracting the feature maps of six layers, each point of which generates corresponding feature detection boxes;
and step A3, collecting all generated feature detection boxes, applying non-maximum suppression to them, and outputting the retained detection boxes to indicate cracks and missing parts on the train running gear.
The invention selects the SSD network to detect cracks and missing parts for the following reasons. Compared with the YOLO network structure, the SSD generates its default detection boxes not only from the last layer of the CNN output but also from the feature maps of shallower layers, so the default boxes produced by the SSD are multi-scale. The SSD therefore detects small targets better than YOLO v1, in which the features of small targets have almost disappeared after the high-level convolutions. At the same time, because the multi-scale default boxes of the SSD are more likely to contain a candidate box close to the ground truth, the model is more stable than YOLO: YOLO v1 has only 98 bounding boxes, and when they lie far from the ground truth the linear regression that corrects them no longer holds, so the model may diverge during training.
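As an illustration of these multi-scale default boxes, the standard SSD300 configuration (a property of the published SSD architecture, not a value stated in this patent) uses six feature-map layers, and the box count works out as follows:

```python
# SSD300: six feature-map layers and the number of default boxes per location
feature_map_sizes = [38, 19, 10, 5, 3, 1]
boxes_per_location = [4, 6, 6, 6, 4, 4]

total = sum(s * s * b for s, b in zip(feature_map_sizes, boxes_per_location))
print(total)  # 8732 default boxes, versus 7*7*2 = 98 bounding boxes in YOLO v1
```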
The training of the convolutional neural network (CNN) can be carried out as follows:
a picture similar to the one shown in fig. 3 is fed through the convolutional neural network to extract features and generate feature maps; the feature maps of six layers are extracted, and default boxes are generated at every point of these feature maps (the number differs per layer, but every point produces some); all generated default boxes are collected and passed through non-maximum suppression (NMS), and the retained default boxes are output.
When preparing the training data, LabelImg can be used to mark the cracks; keeping the marked detection boxes small makes convergence easier. The data are then organised in the VOC2007 format, whose folder contains three sub-folders: the Annotations folder stores the xml files produced by LabelImg, JPEGImages stores the original images in jpg format, and ImageSets contains a Main folder holding four txt files that list image names randomly selected from the Annotations folder. In this way the original images can be divided into a training set, a test set and a verification set in a given proportion, and the model can be evaluated on the test and verification sets after training.
The learning rate of the present invention can be set to be smaller, for example, 0.0001. Too large a setting can result in a gradient explosion.
When preparing the data, foreground and background must be distinguished. For obvious cracks the detection box can be kept small so that it includes as little irrelevant background as possible; for cracks whose features are not obvious, the detection box needs to be larger, since only the full length of the crack distinguishes it from the background.
When crack detection is specifically implemented, the method can be carried out according to the following steps:
1. Installing the annotation tool
The model is trained with our own data, so the data must first be annotated; that is, the machine must be told which objects appear in each image and where they are located. Only with this information can the model be trained.
(1) Annotation data files
The currently popular data annotation formats are mainly VOC_2007 and VOC_2012; both derive from the Pascal VOC standard dataset, which is one of the important benchmarks for measuring image classification and recognition capability. Here the data are stored in xml format following the VOC_2007 layout.
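For reference, the sketch below reads such a VOC-style xml file with Python's standard library; the file name and the class labels in the comments are placeholders for illustration.

```python
import xml.etree.ElementTree as ET

def read_voc_annotation(xml_path):
    """Return (class_name, [xmin, ymin, xmax, ymax]) pairs from a VOC-style xml file."""
    root = ET.parse(xml_path).getroot()
    objects = []
    for obj in root.findall("object"):
        name = obj.find("name").text                      # e.g. "crack" or "screw_missing"
        box = obj.find("bndbox")
        coords = [int(box.find(tag).text) for tag in ("xmin", "ymin", "xmax", "ymax")]
        objects.append((name, coords))
    return objects

# example: read_voc_annotation("Annotations/000001.xml")
```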
(2) Downloading the source code
a. The source code is downloaded from the labelImg GitHub page (https://github.com/tzutalin/labelImg); it can be cloned with git or downloaded directly as a zip archive.
b. Installing and compiling
Unzip the labelImg zip file to obtain the labelImg-master folder.
The labelImg interface is written with PyQt. Because the base environment was built with the latest Anaconda release, which already ships PyQt5, only lxml needs to be installed additionally under the Python 3 environment, and the project is then compiled in the labelImg-master directory.
3. Annotating the data
After the annotation tool has been installed successfully, annotation of the data can begin.
(1) Creating the folders
According to the requirements of the VOC dataset, the following folders are created: Annotations, which stores the annotated xml files; ImageSets/Main, which stores the file-name lists of the training set, test set and verification set; and JPEGImages, which stores the original images.
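A minimal sketch of creating that layout, assuming the dataset root is named VOC2007:

```python
import os

for sub in ("Annotations", "ImageSets/Main", "JPEGImages"):
    os.makedirs(os.path.join("VOC2007", sub), exist_ok=True)  # xml labels, split lists, jpg images
```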
(2) Annotating data
The picture set is placed inside the JPEGImages folder; note that the pictures must be in jpg format.
The labelImg annotation tool is opened and the left side toolbar "Open Dir" button is clicked.
(3) Dividing training set, test set and verification set
After all the photos have been labelled, the dataset is divided into a training set, a test set and a verification set.
A ready-made splitting script (https://github.com/EddyGao/make_VOC2007/blob/master/make_main_txt.py) can be downloaded from GitHub and run with python make_main_txt.py; it automatically splits the training set, test set and validation set according to the proportions set in the script and saves the corresponding file-name lists. A minimal equivalent is sketched below.
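If the script is unavailable, the split can be reproduced along the following lines; the 8:1:1 ratio, the random seed and the four list names (train, test, val, trainval) are assumptions matching the usual VOC Main lists.

```python
import os
import random

def split_voc(ann_dir="VOC2007/Annotations", out_dir="VOC2007/ImageSets/Main",
              ratios=(0.8, 0.1, 0.1), seed=0):
    """Write train.txt / test.txt / val.txt / trainval.txt with image basenames."""
    names = sorted(f[:-4] for f in os.listdir(ann_dir) if f.endswith(".xml"))
    random.Random(seed).shuffle(names)
    n_train = int(ratios[0] * len(names))
    n_test = int(ratios[1] * len(names))
    splits = {
        "train": names[:n_train],
        "test": names[n_train:n_train + n_test],
        "val": names[n_train + n_test:],
    }
    splits["trainval"] = splits["train"] + splits["val"]
    for split, items in splits.items():
        with open(os.path.join(out_dir, split + ".txt"), "w") as f:
            f.write("\n".join(items))
```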
4. Configuring SSD
(1) Downloading SSD code
Since this case is TensorFlow based, a TensorFlow-based SSD implementation is downloaded from GitHub with the address https://github
(2) Converting file formats
The files in voc_2007 format are converted into binary files in tfrecord format. A tfrecord file stores the image data and the labels together, and such binary files can be copied, moved, read and stored more quickly inside TensorFlow.
The SSD-Tensorflow-master repository provides a script that performs the conversion, used as follows:
DATASET_DIR=./panda_voc2007/
OUTPUT_DIR=./panda_tfrecord/
python SSD-Tensorflow-master/tf_convert_data.py --dataset_name=pascalvoc --dataset_dir=${DATASET_DIR} --output_name=voc_2007_train --output_dir=${OUTPUT_DIR}
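For reference, the sketch below shows how one image and its boxes are packed into a tf.train.Example, which is what each tfrecord entry stores; the feature-key names are assumptions illustrating the idea, and the exact keys are defined by the conversion script itself.

```python
import tensorflow as tf

def to_example(jpeg_bytes, labels, xmins, ymins, xmaxs, ymaxs):
    """Pack one image and its bounding boxes into a tf.train.Example."""
    def _bytes(v): return tf.train.Feature(bytes_list=tf.train.BytesList(value=[v]))
    def _ints(v): return tf.train.Feature(int64_list=tf.train.Int64List(value=v))
    def _floats(v): return tf.train.Feature(float_list=tf.train.FloatList(value=v))
    return tf.train.Example(features=tf.train.Features(feature={
        "image/encoded": _bytes(jpeg_bytes),
        "image/object/class/label": _ints(labels),
        "image/object/bbox/xmin": _floats(xmins),
        "image/object/bbox/ymin": _floats(ymins),
        "image/object/bbox/xmax": _floats(xmaxs),
        "image/object/bbox/ymax": _floats(ymaxs),
    }))

# with tf.io.TFRecordWriter("voc_2007_train.tfrecord") as w:
#     example = to_example(open("JPEGImages/000001.jpg", "rb").read(),
#                          [1], [0.1], [0.2], [0.5], [0.6])
#     w.write(example.SerializeToString())
```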
(3) Modifying the object classes
Since the objects are self-defined, the object-class definitions in SSD-Tensorflow-master must be modified: open SSD-Tensorflow-master/datasets/pascalvoc_common.py and edit the class list there.
5. Downloading pre-training models
SSD-Tensorflow provides pre-trained models based on the classical CNN model VGG.
However, these pre-trained model files are stored on Google Drive and may only be downloadable through a VPN. The SSD-300 VGG-based pre-trained model is downloaded, giving the file VGG_VOC0712_SSD_300x300_ft_iter_120000.ckpt.zip, which is then unzipped.
6. Training model
Finally, with both the annotation files and the SSD model ready, training can begin.
Before training the model there is one parameter to modify: open SSD-Tensorflow-master/train_ssd_network.py and find the DATA_FORMAT parameter; its value is NHWC for CPU training and NCHW for GPU training, as follows:
DATA_FORMAT = 'NCHW'  # gpu
# DATA_FORMAT = 'NHWC'  # cpu
then, formal training is started: opening the terminal, switching the source active technology flow of the conda virtual environment and then executing the following command for training
# uses the pretrained vgg _ ssd _300 model
Wherein, according to the performance condition of the computer, the value of batch _ size is set, and the larger the value is, the larger the batch processing quantity is, the higher the requirement on the machine performance is. If the computer performance is normal, 8 or even 4 can be set. The learning rate learning _ rate may be adjusted according to actual conditions, and the smaller the learning rate is, the more accurate the learning rate is, the longer the training time is, and the larger the learning rate is, the shorter the training time is, but the accuracy is reduced. Using the pre-trained model, the SSD will lock some parameters of the VGG model for training, which can be done in a shorter time.
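As an illustration only, a training invocation might look as follows; the flag names follow the public SSD-Tensorflow README and the paths are placeholders, so both should be treated as assumptions and checked against the downloaded code.

```bash
DATASET_DIR=./panda_tfrecord/
TRAIN_DIR=./train_model/
CHECKPOINT_PATH=./checkpoints/VGG_VOC0712_SSD_300x300_ft_iter_120000.ckpt

python SSD-Tensorflow-master/train_ssd_network.py \
    --train_dir=${TRAIN_DIR} \
    --dataset_dir=${DATASET_DIR} \
    --dataset_name=pascalvoc_2007 \
    --dataset_split_name=train \
    --model_name=ssd_300_vgg \
    --checkpoint_path=${CHECKPOINT_PATH} \
    --batch_size=8 \
    --learning_rate=0.0001
```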
7. Use model
Once the SSD model has been trained, it can be used directly through jupyter with the notebooks script that ships with SSD-Tensorflow-master.
Install jupyter first (conda install jupyter), then start the notebook, for example: jupyter-notebook SSD-Tensorflow-master/notebooks/ssd_notebook.ipynb. After it starts, set the path and name of the trained model in the SSD 300 Model code block; after the model has recognised the video images of the train running gear, it produces crack result markings similar to those shown in FIG. 4.
When part-defect detection is implemented, the training of the model can be carried out according to the following steps, as shown in fig. 6:
1. take a training batch x from the training set and randomly generate noise z;
2. compute the losses;
3. update the generator and the discriminator by back propagation: for a known real distribution, the generator produces a fake distribution. Since the two distributions are not identical there is a KL divergence between them, i.e. the loss function is not zero. The discriminator sees the real distribution and the fake distribution at the same time; if it can tell which samples come from the generator and which come from the real distribution, a loss is produced and back propagation updates the weights of the generator. After the generator has been updated, the fake data it produces match the real distribution more closely. If the generated data are still not close enough, the discriminator can still tell them apart, so the generator is updated again. Eventually the discriminator is deceived into judging that the fake data produced by the generator follow the real distribution. This corresponds to the false-positive case and requires the discriminator to be updated, so back propagation updates the weights of the discriminator. The process continues until the network reaches Nash equilibrium, when the distribution produced by the generator is indistinguishable from the real distribution. A training-loop sketch follows.
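A minimal TensorFlow 2 / Keras sketch of this loop, written as an assumption rather than the patent's actual implementation; the latent dimension, image size, network widths and learning rates are all placeholder values.

```python
import numpy as np
import tensorflow as tf

latent_dim = 100                       # assumed size of the Gaussian noise z
img_shape = (64, 64, 1)                # assumed size of cropped running-gear patches

generator = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(latent_dim,)),
    tf.keras.layers.Dense(int(np.prod(img_shape)), activation="tanh"),
    tf.keras.layers.Reshape(img_shape),
])
discriminator = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=img_shape),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1),          # logit: real vs generated
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

def train_step(real_batch):
    """One update of discriminator and generator (steps 1-3 above)."""
    z = tf.random.normal([real_batch.shape[0], latent_dim])            # step 1: Gaussian noise
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake = generator(z, training=True)
        d_real = discriminator(real_batch, training=True)
        d_fake = discriminator(fake, training=True)
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)  # step 2
        g_loss = bce(tf.ones_like(d_fake), d_fake)    # generator tries to fool the discriminator
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))      # step 3
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return float(g_loss), float(d_loss)
```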
At this point the GAN can be applied to the data-imbalance problem caused by the limited amount of fault data, such as missing screws, incomplete outer surfaces of photographed parts and cracks: a generative adversarial network learns the distribution of the real data samples, and the training dataset is balanced and expanded by generating additional real negative samples; a deep residual neural network (ResNet) is then trained for recognition and diagnosis on the expanded dataset.
The detection of the missing parts in the video images of the running parts of the train is then carried out by the steps shown in fig. 5:
step P1, adjusting the size of the image, calling a second convolutional neural network obtained by the training of a generative confrontation network, and extracting the second feature of the adjusted image to obtain a plurality of layers of feature maps of the parts;
step P2, extracting feature maps of six layers, and generating corresponding feature detection frames by each point of the feature maps;
and step P3, collecting all the generated part feature detection frames, respectively carrying out non-maximum value inhibition processing, and outputting the screened part feature detection frames to prompt the part missing condition on the running component of the train.
The amount of screw-missing fault data is therefore limited. To make better use of mature techniques from image processing, computer vision, machine learning and neural networks, to ensure that the training samples approach the balanced data used in most current intelligent fault-diagnosis research, and to allow the same number of labelled samples to be used under different experimental conditions, the data-imbalance problem is addressed from the data-generation side with a generative adversarial network (GAN).
A GAN generally consists of two modules, a generation network G and a discrimination network D, both parameterised deep neural networks. Fig. 6 shows the GAN structure: the input of G is typically noise data z drawn from a Gaussian random distribution, and its output is G(z); the real data, typically pictures, are denoted by the distribution variable X; and the output of D is the probability that its input comes from X rather than from the noise distribution G(z). In this way the GAN learns the distribution of the real data samples and balances and expands the training dataset by generating additional real negative samples.
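In the standard GAN formulation (background knowledge, not spelled out in the patent), D and G play the minimax game

$$
\min_{G}\max_{D} V(D,G) =
\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_{z}(z)}\big[\log\big(1 - D(G(z))\big)\big].
$$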
Thus, when g-loss and d-loss reach Nash equilibrium during training, the expanded dataset means that far fewer demands need to be placed on model accuracy. The data are therefore augmented mainly with the GAN: whereas common data augmentation merely flips images or adds noise, here the original data distribution is learned directly and fault data are then synthesised from it.
When detecting fault data such as cracks and missing parts, the invention can use the generative adversarial network in this way to learn the distribution of the real data samples and to balance and expand the training dataset by generating additional real negative samples.
In the specific detection process, the image data generally need to be preprocessed before being fed into the network so that detection is more stable and reliable. The preprocessing step comprises culling blurred picture data and resizing the pictures directly to 512 × 256.
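A minimal sketch of that preprocessing with OpenCV; the variance-of-Laplacian blur score and its threshold are assumptions, while the 512 × 256 target size comes from the text.

```python
import cv2

def preprocess(image_paths, blur_threshold=100.0, size=(512, 256)):
    """Drop blurred frames and resize the rest to 512x256 (width x height)."""
    kept = []
    for path in image_paths:
        img = cv2.imread(path)
        if img is None:
            continue
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # low variance => blurred
        if sharpness < blur_threshold:
            continue                                         # cull blurred picture data
        kept.append(cv2.resize(img, size))
    return kept
```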
In summary, by the universal function approximation property, the invention can use a neural network with a large number of nodes to approximate any nonlinear function between input and output, giving a higher degree of freedom than other methods and avoiding the limits that the capability of a hand-crafted algorithm places on the judgement process. Neither the generator nor the discriminator is restricted to any particular form, and the two need not share the same form; both can be regarded as general-purpose neural networks. For example, the input of the generator G is noise z and its output is the generated sample G(z). The generated samples, mixed with real data, are fed into the discriminator D, which performs binary classification and outputs a score D(G(z)) indicating whether the sample is real. Through the losses of the generator and the discriminator, each side pursues its own goal: G tries to maximise D(G(z)), while D tries to maximise D(x) and minimise D(G(z)). The whole GAN therefore realises the mutual confrontation of generator and discriminator through the game between D and G, finally reaching the Nash equilibrium in which each network is optimal relative to the other. The trained model can detect the images accurately, and after non-maximum suppression the most probable missing-part cases, similar to those shown in FIG. 7, are selected.
The above are merely embodiments of the present invention, which are described in detail and with particularity, and therefore should not be construed as limiting the scope of the invention. It should be noted that, for those skilled in the art, various changes and modifications can be made without departing from the spirit of the present invention, and these changes and modifications are within the scope of the present invention.
Claims (7)
1. The rail train maintenance detection device based on computer vision is characterized by comprising a cleaning part, a data acquisition part and a water baffle (6) arranged in front of the cleaning part and the data acquisition part, wherein the water baffle (6) stands vertically, perpendicular to the train track (8), across the advancing direction of the train;
wherein the cleaning part includes:
the laser monitoring instrument (1) is arranged in the advancing direction of the train along a track (8) and used for sensing the position of the train and outputting a trigger signal when the train advances to a cleaning position;
the flushing device is arranged along the track (8) in the advancing direction of the train and forms a flushing area in that direction; as the train advances on the track, the train body enters the flushing area from one side and leaves from the other, and in the flushing area the train body is flushed by the water jets sprayed from the flushing device;
the drying device is arranged behind the flushing device along the advancing direction of the train and forms a drying area in that direction; as the train advances on the track, the train body passes from the flushing area into the drying area, where the moisture remaining after flushing is removed;
the floor drain (7) is arranged before the drying device along the advancing direction of the train and is used for receiving and discharging the water sprayed by the flushing device;
the data acquisition part includes:
the camera device is arranged behind the drying equipment along the advancing direction of the train and is used for shooting a running part of the train, transmitting a video image of the running part to the machine vision unit and enabling the machine vision unit to detect cracks and part conditions on the running part of the train.
2. The computer vision based rail train maintenance detection device of claim 1, wherein said camera device comprises:
the bottom high-definition camera (21) is arranged in the middle of the track (8) and is used for shooting a lower bottom image of the train running component at an upward angle;
a pair of side high-definition cameras (22) respectively arranged outside the track (8) and used for shooting images of the left side and the right side of the train running component at a head-up angle;
the bottom LED lamps (41) comprise several groups arranged in the middle of the track (8) and are used to illuminate the lower bottom surface of the train running gear while the bottom high-definition camera (21) is shooting, so that the image brightness meets the detection requirement of the machine vision unit;
and the side LED light strips (42) comprise at least two groups arranged on the outer sides of the track (8) and are used to illuminate the left and right sides of the train running gear while the side high-definition cameras (22) are shooting, so that the image brightness meets the detection requirement of the machine vision unit.
3. The computer vision based rail train maintenance detection device of claim 1, wherein said flushing device comprises:
the side high-pressure water guns (31) comprise at least two groups which are respectively arranged on the outer side of the track (8), and each group of side high-pressure water guns (31) are vertically arranged and used for spraying high-pressure water columns to the left side surface and the right side surface of the train at a horizontal or nearly horizontal angle;
the bottom high-pressure water gun (32) comprises at least one group horizontally arranged in the middle of the track (8) and is used for spraying a high-pressure water column to the lower bottom surface of the train running part at a vertical or nearly vertical angle;
the drying apparatus includes:
the bottom blowing openings (51) comprise at least one group horizontally arranged in the middle of the track (8) and are used for spraying high-pressure air flow to the lower bottom surface of the train running part at a vertical or nearly vertical angle to blow dry the residual moisture at the bottom of the train;
and the side air blowing openings (52) comprise at least two groups which are respectively arranged at the outer side of the track (8), and each group of side air blowing openings (52) are vertically arranged and used for spraying high-pressure air flow to the left side surface and the right side surface of the train at a horizontal or nearly horizontal angle to blow dry the residual moisture at the bottom of the train.
4. The computer vision based rail train maintenance detection device according to any one of claims 1 to 3, wherein the machine vision unit is connected to the camera device and is configured to detect the crack on the train running component by using the SSD network according to the frame images of the train running component captured by the camera device according to the following steps:
step L1, calling a first convolution neural network to perform first feature extraction on the image, and generating a plurality of layers of crack feature maps;
step L2, extracting the crack feature maps of six layers, and generating corresponding crack feature detection frames by each point of the crack feature maps;
and step L3, collecting all the generated crack-feature detection boxes, applying non-maximum suppression to them, and outputting the retained detection boxes to indicate cracks on the running gear of the train.
5. The computer vision based rail train maintenance detection device according to any one of claims 1 to 4, wherein the machine vision unit is further configured to detect the condition of the part on the train running part by using the SSD network according to the frame images of the train running part captured by the camera device, according to the following steps:
step P1, resizing the image, calling a second convolutional neural network obtained by generative adversarial network training, and performing a second feature extraction on the resized image to obtain several layers of part feature maps;
step P2, extracting feature maps of six layers, and generating corresponding feature detection frames by each point of the feature maps;
and step P3, collecting all the generated part-feature detection boxes, applying non-maximum suppression to them, and outputting the retained detection boxes to indicate missing parts on the running gear of the train.
6. The computer vision based rail train maintenance detection device of claim 5, wherein the second convolutional neural network is obtained by generative adversarial network training according to the following steps:
step s1, respectively collecting and marking a fault picture of part loss on the train running part and a normal picture of part intact on the train running part;
step s2, inputting the noise data generated from a Gaussian random distribution into the generation network G of the generative adversarial network to obtain the noise distribution G(z);
step s3, inputting the fault pictures and the noise distribution G(z) together into the discrimination network D of the generative adversarial network, generating additional real negative samples, and labelling them as fault pictures;
step s4, randomly selecting pictures from the normal pictures and the fault pictures obtained in the step s1 and the step s3 according to a preset proportion to respectively serve as a training set, a test set and a verification set;
and step s5, sequentially inputting the pictures of the training set into the second convolutional neural network and training it by back propagation according to the loss function until it reaches Nash equilibrium, thereby completing the training of the second convolutional neural network.
7. The computer vision based rail train maintenance detection device of any one of claims 1 to 6, wherein the train maintenance detection step comprises:
driving the train to move from the cleaning part to the data acquisition part along the track (8), and in the process, firstly controlling a flushing device in the cleaning part to spray high-pressure water columns to the left and right side surfaces of the train and the lower bottom surface of the train running part at horizontal and vertical angles respectively and to spray high-pressure air flows to the left and right side surfaces of the train and the lower bottom surface of the train running part at horizontal and vertical angles respectively according to a trigger signal of the laser monitor (1);
then the camera device in the data acquisition part is controlled to shoot the lower bottom surface image of the train running component and the images of the left side surface and the right side surface of the train running component at the upward-looking angle and the horizontal-looking angle respectively.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202011117334.9A | 2020-10-19 | 2020-10-19 | Rail train maintenance detection device based on computer vision

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202011117334.9A | 2020-10-19 | 2020-10-19 | Rail train maintenance detection device based on computer vision

Publications (1)

Publication Number | Publication Date
---|---
CN112129777A | 2020-12-25

Family

ID=73854206

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202011117334.9A (CN112129777A, pending) | Rail train maintenance detection device based on computer vision | 2020-10-19 | 2020-10-19

Country Status (1)

Country | Link
---|---
CN (1) | CN112129777A (en)
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104608799A (en) * | 2014-12-12 | 2015-05-13 | 郑州轻工业学院 | Information fusion technology based train wheel set tread damage online detection and recognition method |
CN106124521A (en) * | 2016-08-31 | 2016-11-16 | 成都铁安科技有限责任公司 | A kind of Railway wheelset detection device and method |
CN206002458U (en) * | 2016-08-31 | 2017-03-08 | 成都铁安科技有限责任公司 | A kind of Railway wheelset detection means |
CN106600581A (en) * | 2016-12-02 | 2017-04-26 | 北京航空航天大学 | Train operation fault automatic detection system and method based on binocular stereoscopic vision |
CN208026644U (en) * | 2018-04-01 | 2018-10-30 | 华东交通大学 | A kind of pavement distress survey device with cleaning function |
CN108734108A (en) * | 2018-04-24 | 2018-11-02 | 浙江工业大学 | A kind of fissured tongue recognition methods based on SSD networks |
CN110756488A (en) * | 2019-10-30 | 2020-02-07 | 安徽省六二八光电科技有限公司 | Glass detects cleaning device |
CN111351793A (en) * | 2020-04-17 | 2020-06-30 | 中南大学 | Train axle surface defect detection system |
Non-Patent Citations (2)
Title |
---|
俞彬 (Yu Bin): "Data augmentation method for image class-imbalance problems based on generative adversarial networks", China Master's Theses Full-text Database, Information Science and Technology, no. 12, page 1 *
李清奇 (Li Qingqi): "A concrete crack recognition method based on autoencoders", Journal of Beijing Jiaotong University, no. 02 *
Legal Events

Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 