CN113092495A - Intelligent inspection system and method for subway tunnel defects with cooperation of train and ground - Google Patents
- Publication number
- CN113092495A (application number CN202110330420.6A)
- Authority
- CN
- China
- Prior art keywords
- detection
- positioning
- vehicle
- module
- ground
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/95—Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
- G01N21/9515—Objects of complex shape, e.g. examined with use of a surface follower device
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/45—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
- G01S19/46—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being of a radio-wave signal type
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/45—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
- G01S19/47—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/06—Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/74—Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
- G01N2021/8887—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30132—Masonry; Concrete
Abstract
The invention discloses an intelligent inspection system and method for subway tunnel defects with train-ground cooperation, and relates to the technical field of tunnel detection.
Description
Technical Field
The invention relates to the technical field of tunnel detection, and in particular to an intelligent inspection system and method for subway tunnel defects with train-ground cooperation.
Background
Tunnel detection is an important means of construction quality management and a precondition for ensuring safe train operation. Tunnel defects such as water damage, block dropping, cracks, falling sealing rubber strips, and leakage can affect the normal flow of traffic in the tunnel and even endanger safe train operation. Existing tunnel defects are generally detected by manual inspection or offline data analysis, but these traditional approaches are time-consuming and labour-intensive, cannot eliminate potential safety hazards promptly, and place a heavy burden on railway workers. The system therefore provides a real-time, intelligent, and efficient method for detecting tunnel defects using an edge-cloud cooperation mode.
Disclosure of Invention
The invention aims to overcome the shortcomings of the prior art and provides a system and method for intelligent inspection of subway tunnel defects with train-ground cooperation.
The purpose of the invention is realized by the following technical scheme:
the intelligent subway tunnel defect inspection system with cooperation of the train and the ground comprises an image acquisition positioning module, a real-time detection module of a vehicle-mounted embedded unit server, a vehicle-mounted cloud alarm module, a ground unit server detection module and a ground cloud alarm module, wherein the image acquisition positioning module, the real-time detection module of the vehicle-mounted embedded unit server, the vehicle-mounted cloud alarm module, the ground unit server detection module and the ground cloud alarm module are sequentially connected;
the image acquisition positioning module is used for acquiring image data required by detecting the defects of the subway tunnel and acquiring positioning information;
the vehicle-mounted embedded unit server real-time detection module is used for receiving the image data acquired by the image acquisition and positioning module and detecting the defects of the subway tunnel in real time;
the vehicle-mounted cloud alarm module is used for packaging the defect picture, the detection result, and the associated positioning information from the vehicle-mounted embedded unit server real-time detection module into an alarm file and transmitting it to the vehicle-mounted cloud data terminal and the ground unit server detection module;
the ground unit server detection module is used for receiving the vehicle-mounted detection result and carrying out secondary accurate detection on the defects;
the ground cloud alarm module is used for packaging the defect picture after the secondary accurate detection, the detection result and the found positioning information into an alarm file and transmitting the alarm file to the ground cloud data terminal for storage.
Preferably, the image acquisition positioning module comprises a data acquisition unit and a positioning unit, the data acquisition unit comprises a camera and a light supplement lamp, and the positioning unit comprises a base station, a GPS and an inertial navigator;
the camera and the light supplement lamp are used for imaging the defects of the tunnel in real time;
the GPS is used for initial positioning in areas with good satellite reception, the base station is used for initial positioning in satellite blind areas, and the inertial navigator is used for continuous positioning; when a GPS or base station signal is available, position correction is performed to prevent position drift.
A subway tunnel defect intelligent inspection method with vehicle-ground cooperation comprises the following steps:
step 1: the image acquisition and positioning module acquires image information and positioning information and sends the image information and the positioning information to the real-time detection module of the vehicle-mounted embedded unit server;
Step 2: the vehicle-mounted embedded unit server real-time detection module receives the image data acquired by the image acquisition and positioning module, performs real-time filtering detection of subway tunnel defects, and judges whether a tunnel defect is present; if so, step 3 is executed, otherwise execution returns to step 1;
Step 3: the vehicle-mounted cloud alarm module packages the defect picture, the detection result, and the associated positioning information from the vehicle-mounted embedded unit server real-time detection module into an alarm file and transmits it to the vehicle-mounted cloud data terminal and the ground unit server detection module;
Step 4: the ground unit server detection module receives the vehicle-mounted detection result, performs secondary accurate detection of the defect, and judges whether it is a tunnel defect; if so, step 5 is executed, otherwise the module waits for the next vehicle-mounted detection result from the vehicle-mounted cloud alarm module and detects again;
Step 5: the ground cloud alarm module packages the defect picture after secondary accurate detection, the detection result, and the associated positioning information into an alarm file and transmits it to the ground cloud data terminal for storage.
Preferably, the image acquisition and positioning module comprises a data acquisition unit and a positioning unit; the data acquisition unit comprises a camera and a light supplement lamp, the positioning unit comprises a base station, a GPS, and an inertial navigator, and the positioning steps are as follows:
a. in areas with good satellite reception, the GPS performs initial positioning; in satellite blind areas, the base station performs initial positioning;
b. the inertial navigator performs continuous positioning, and when a GPS or base station signal is available the position is corrected to prevent drift;
c. the positioning and image clock information are synchronized, and the nearest-neighbour position information is associated with each frame.
Preferably, the step 2 further comprises the following sub-steps:
Step 2.1: preprocessing the received image data using a deep learning method, resizing the image and then normalizing the image data;
Step 2.2: detecting the normalized image data with the YOLOv3-tiny detection algorithm;
The YOLOv3-tiny detection algorithm adopts tiny-darknet as its backbone network, and proceeds as follows:
When data is input into the detection network, the input image is scaled to the size specified by the network and features are extracted to obtain a feature map of a certain size; the feature map is divided into N×N cells, and the cell in which the target's centre coordinates fall predicts the target and its position information.
Preferably, the step 4 comprises the following substeps:
step 4.1: preprocessing the detection data by using a deep learning method to enable the detection data to meet the input requirement of a detection model;
Step 4.2: detecting the preprocessed data with the Mask RCNN algorithm; three detection models are trained with three different sets of hyper-parameters, the data is detected with each of the three models, and the intersection of the three models' detection results is taken as the final result;
the detection process is as follows:
When data is input into the detection network, the input image is scanned with a sliding window performing convolution (multiply-accumulate) operations; mutually overlapping regions of different sizes and aspect ratios are generated, and the position information of each region is obtained to extract candidate regions. For each extracted candidate region, the mask branch of Mask RCNN outputs a predicted binary mask; defect classification and bounding-box regression are then performed according to the binary mask to obtain the defect's detected position (x, y, w, h).
The invention has the beneficial effects that:
the system is a subway tunnel defect intelligent inspection system based on deep learning and train-ground cooperation, online real-time filterability detection is carried out by utilizing a vehicle-mounted embedded unit server, a detection result is returned to the ground, secondary accurate detection is completed by a ground detection program, and a final detection result is obtained.
Drawings
FIG. 1 is a schematic diagram of the system of the present invention;
FIG. 2 is a flow chart of a method of use of the system of the present invention;
FIG. 3 is a schematic diagram of yolov3 algorithm according to the present invention;
FIG. 4 is a flow chart of Mask RCNN algorithm detection according to the present invention;
FIG. 5 is a schematic diagram of a detected foreign object in a subway tunnel according to the present invention;
FIG. 6 is a schematic diagram of a subway tunnel block drop detected by the present invention;
FIG. 7 is a schematic diagram of the wet stain of a subway tunnel detected by the present invention;
fig. 8 is a schematic diagram of a subway tunnel crack detected by the present invention.
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the following.
Online intelligent detection of subway tunnel defects based on deep learning and train-ground cooperation replaces the traditional manual inspection mode: first, the vehicle-mounted edge detection system performs filtering detection of tunnel defects and returns the result to the ground, where the ground detection system performs accurate detection. The vehicle-mounted system is deployed on an NVIDIA Jetson AGX Xavier and the ground system on a private-cloud GPU cluster server; working in coordination, the two can find defects in the tunnel quickly, efficiently, and in real time, and locate them accurately, so that railway workers can eliminate hidden dangers promptly, ensuring safe train operation, reducing labour burden and time cost, and improving worker efficiency.
The system adopts an online real-time subway tunnel defect detection scheme based on deep learning and train-ground coordination: the vehicle-mounted embedded unit server performs primary filtering detection of the subway tunnel, returns the detection result (including defect type and position information) to the ground private-cloud AI centre, and the ground unit server performs secondary accurate detection and generates a detection report.
As shown in fig. 1, the system is mainly divided into 5 modules:
(1) The image acquisition and positioning module. It is mainly used to acquire the image data required for detecting subway tunnel defects and to obtain positioning information. The data acquisition unit consists mainly of a camera and a light supplement lamp, used to image tunnel defects in real time. The positioning information is obtained by a combined base station/GPS/inertial navigation positioning module; the specific positioning steps are:
1) In areas with good satellite reception, the GPS performs initial positioning; in satellite blind areas, the base station performs initial positioning;
2) The inertial navigator performs continuous positioning, and when a GPS or base station signal is available the position is corrected to prevent drift;
3) The positioning and image clock information are synchronized, and the nearest-neighbour position information is associated with each frame.
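Step 3) above — attaching the nearest-in-time position fix to each camera frame — can be sketched as follows. The function name `associate_positions` and the tuple layout are illustrative assumptions, not the patent's implementation:

```python
from bisect import bisect_left

def associate_positions(frame_times, fixes):
    """For each frame timestamp, attach the position fix nearest in time.

    fixes: list of (timestamp, position) tuples, sorted by timestamp.
    Returns one position per frame timestamp.
    """
    times = [t for t, _ in fixes]
    out = []
    for ft in frame_times:
        i = bisect_left(times, ft)
        # the nearest fix is either just before or just after the insertion point
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        best = min(candidates, key=lambda j: abs(times[j] - ft))
        out.append(fixes[best][1])
    return out
```

Because both lists are time-ordered, a binary search per frame keeps the association cheap even at video frame rates.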
(2) The vehicle-mounted embedded unit server real-time detection module. It receives the image data acquired by the camera, sends the data to the deployed defect detection program, and detects subway tunnel defects in real time. The detection process is:
1) The received image data is first preprocessed. To make the data meet the input requirements of the deep learning detection method, the image is resized (using the resize function provided by the OpenCV library). The image data is also normalized for better detection.
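The resize-and-normalize step can be sketched as below. The deployed program uses OpenCV's resize; plain NumPy nearest-neighbour indexing stands in here so the sketch has no OpenCV dependency. The 416×416 default and the function name `preprocess` are assumptions:

```python
import numpy as np

def preprocess(img, size=(416, 416)):
    """Resize with nearest-neighbour sampling and scale pixels to [0, 1].

    A stand-in for cv2.resize followed by normalization.
    """
    h, w = img.shape[:2]
    ys = np.arange(size[0]) * h // size[0]   # source row for each output row
    xs = np.arange(size[1]) * w // size[1]   # source column for each output column
    resized = img[ys][:, xs]
    return resized.astype(np.float32) / 255.0
```

Dividing by 255 maps 8-bit pixel values into [0, 1], the usual input range for the detection network.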
2) To enable rapid detection, the YOLOv3-tiny detection algorithm is used, with tiny-darknet as the backbone network. As shown in fig. 3, the YOLOv3-tiny flow is roughly as follows:
after inputting data into the detection network:
the input image is scaled to the size specified by the detection network, and features are extracted to obtain a feature map of a certain size;
the feature map is divided into N×N cells, and the cell in which the target's centre coordinates fall predicts the target and its position information.
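The cell-responsibility rule can be illustrated in a few lines. `responsible_cell` and the 13×13 grid (a common YOLOv3-tiny output size) are illustrative assumptions:

```python
def responsible_cell(cx, cy, img_w, img_h, n=13):
    """Return the (row, col) of the NxN grid cell containing the
    object's centre (cx, cy) given in pixel coordinates."""
    col = min(int(cx * n / img_w), n - 1)
    row = min(int(cy * n / img_h), n - 1)
    return row, col
```

That cell's predictions are the ones matched against the target during training and decoding.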
(3) Vehicle-mounted cloud alarm module
1) Packaging the defect picture, the detection result and the found positioning information into an alarm file;
2) the vehicle-mounted deployed transmission program transmits the packaged alarm file to the vehicle-mounted cloud data terminal and the ground unit server detection module through the 4G network;
3) and the vehicle-mounted cloud data terminal analyzes the alarm file and then is used for relevant workers to check and maintain in time.
(4) The ground unit server detection module. It receives the vehicle-mounted detection result, sends the data to the deployed defect detection program, and performs secondary accurate detection of the defect. The detection process is:
1) firstly, the detection data is preprocessed by a deep learning method, so that the detection data meets the input requirement of a detection model.
2) The ground unit server detection module uses the Mask RCNN algorithm for detection. Three detection models are trained with three different sets of hyper-parameters; the data is detected with each of the three models, and the intersection of the three models' detection results is taken as the final result. As shown in fig. 4, the Mask RCNN detection flow is roughly as follows:
after inputting data into the detection network:
a) The input image is scanned with a sliding window performing convolution (multiply-accumulate) operations; mutually overlapping regions of different sizes and aspect ratios are generated, and the position information of each region is obtained to extract candidate regions.
b) For each extracted candidate region, the mask branch of Mask RCNN outputs a predicted binary mask; defect classification and bounding-box regression are then performed according to the binary mask to obtain the defect's detected position (x, y, w, h).
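Taking the "intersection" of the three models' results could be sketched as an IoU-based match, as below. The 0.5 threshold and the function names are assumptions — the patent does not specify how the intersection is computed:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def intersect_detections(runs, thr=0.5):
    """Keep only boxes from the first model that overlap (IoU >= thr)
    with some box in every other model's output."""
    kept = []
    for box in runs[0]:
        if all(any(iou(box, other) >= thr for other in run) for run in runs[1:]):
            kept.append(box)
    return kept
```

Requiring agreement from all three models trades some recall for precision, which suits a secondary "accurate" detection stage.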
(5) Ground cloud alarm module
1) Packaging the defect picture, the detection result and the found positioning information into an alarm file;
2) the ground transmission program transmits the packaged alarm file to a ground cloud data terminal through a 4G network;
3) and the ground cloud data terminal analyzes the alarm file and then is used for relevant workers to check and maintain in time.
FIG. 2 is a flow chart of the entire detection system, including:
Step 1: the image acquisition and positioning module acquires image information and positioning information and sends them to the vehicle-mounted embedded unit server real-time detection module;
Step 2: the vehicle-mounted embedded unit server real-time detection module receives the image data acquired by the image acquisition and positioning module, detects and filters subway tunnel defects in real time, and judges whether a tunnel defect is present; if so, step 3 is executed, otherwise the flow returns to step 1;
Step 3: the vehicle-mounted cloud alarm module packages the defect picture, the detection result and the associated positioning information produced by the vehicle-mounted real-time detection module into an alarm file and transmits it to the vehicle-mounted cloud data terminal and the ground unit server detection module;
Step 4: the ground unit server detection module receives the vehicle-mounted detection result and performs secondary precise detection on the defect, judging whether it is a tunnel defect; if so, step 5 is executed, otherwise the module awaits the next vehicle-mounted detection result from the vehicle-mounted cloud alarm module and detects again;
Step 5: the ground cloud alarm module packages the defect picture after secondary precise detection, the detection result and the associated positioning information into an alarm file and transmits it to the ground cloud data terminal for storage.
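The five steps above can be condensed into a small control-flow sketch; the three callables stand in for the on-board YOLOv3-tiny model, the ground Mask R-CNN ensemble, and the 4G uploader, and are illustrative placeholders rather than the patent's code:

```python
def inspect_frame(frame, position, fast_detect, precise_detect, upload):
    """Two-stage train/ground pipeline: a fast on-board filter (step 2),
    then secondary precise detection on the ground server (step 4)."""
    coarse = fast_detect(frame)          # step 2: on-board real-time filtering
    if not coarse:
        return None                      # no defect: return to acquisition (step 1)
    upload(frame, coarse, position)      # step 3: vehicle-mounted cloud alarm
    fine = precise_detect(frame)         # step 4: secondary precise detection
    if fine:
        upload(frame, fine, position)    # step 5: ground cloud alarm + storage
    return fine
```

The on-board stage thus acts as a cheap filter so that only suspected-defect frames consume ground-server compute and 4G bandwidth.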
FIGS. 5-8 show collection and recognition results obtained by the present invention during train operation; the framed areas mark the detected tunnel defects. Statistical analysis of the tests shows that the tunnel defect detection accuracy exceeds 85% while the miss rate stays within 5%.
The foregoing is merely a preferred embodiment of the invention; it should be understood that the described embodiments are only a part of the invention, not all of it. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention. The invention is not limited to the forms disclosed herein, and is to be accorded the widest scope consistent with the principles and novel features disclosed herein; modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (6)
1. A train-ground cooperative intelligent subway tunnel defect inspection system, characterized by comprising an image acquisition and positioning module, a vehicle-mounted embedded unit server real-time detection module, a vehicle-mounted cloud alarm module, a ground unit server detection module and a ground cloud alarm module, which are connected in sequence;
the image acquisition positioning module is used for acquiring image data required by detecting the defects of the subway tunnel and acquiring positioning information;
the vehicle-mounted embedded unit server real-time detection module is used for receiving the image data acquired by the image acquisition and positioning module and detecting the defects of the subway tunnel in real time;
the vehicle-mounted cloud alarm module is used for packaging the defect picture, the detection result and the found positioning information detected by the real-time detection module of the vehicle-mounted embedded unit server into an alarm file and transmitting the alarm file to the vehicle-mounted cloud data terminal and the ground unit server detection module;
the ground unit server detection module is used for receiving the vehicle-mounted detection result and carrying out secondary accurate detection on the defects;
the ground cloud alarm module is used for packaging the defect picture after the secondary accurate detection, the detection result and the found positioning information into an alarm file and transmitting the alarm file to the ground cloud data terminal for storage.
2. The train-ground cooperative intelligent inspection system for defects of subway tunnels according to claim 1, wherein the image acquisition and positioning module comprises a data acquisition unit and a positioning unit, the data acquisition unit comprises a camera and a light supplement lamp, and the positioning unit comprises a base station, a GPS and an inertial navigator;
the camera and the light supplement lamp are used for imaging the defects of the tunnel in real time;
the GPS is used for initial positioning in areas with good satellite visibility, the base station is used for initial positioning in satellite blind areas, and the inertial navigator is used for continuous positioning; whenever a GPS or base-station signal is available, the position is corrected to prevent position drift.
3. A train-ground cooperative intelligent subway tunnel defect inspection method, characterized by comprising the following steps:
step 1: the image acquisition and positioning module acquires image information and positioning information and sends the image information and the positioning information to the real-time detection module of the vehicle-mounted embedded unit server;
Step 2: the vehicle-mounted embedded unit server real-time detection module receives the image data acquired by the image acquisition and positioning module, detects and filters subway tunnel defects in real time, and judges whether a tunnel defect is present; if so, step 3 is executed, otherwise the flow returns to step 1;
Step 3: the vehicle-mounted cloud alarm module packages the defect picture, the detection result and the associated positioning information produced by the vehicle-mounted real-time detection module into an alarm file and transmits it to the vehicle-mounted cloud data terminal and the ground unit server detection module;
Step 4: the ground unit server detection module receives the vehicle-mounted detection result and performs secondary precise detection on the defect, judging whether it is a tunnel defect; if so, step 5 is executed, otherwise the module awaits the next vehicle-mounted detection result from the vehicle-mounted cloud alarm module and detects again;
Step 5: the ground cloud alarm module packages the defect picture after secondary precise detection, the detection result and the associated positioning information into an alarm file and transmits it to the ground cloud data terminal for storage.
4. The train-ground cooperative intelligent subway tunnel defect inspection method according to claim 3, wherein the image acquisition and positioning module comprises a data acquisition unit and a positioning unit, the data acquisition unit comprises a camera and a light supplement lamp, and the positioning unit comprises a base station, a GPS and an inertial navigator; the positioning steps are as follows:
a. in areas with good satellite visibility, perform initial positioning with the GPS; in satellite blind areas, perform initial positioning with the base station;
b. perform continuous positioning with the inertial navigator, and correct the position whenever a GPS or base-station signal is available to prevent position drift;
c. synchronize the positioning and picture clock information, and associate the nearest-neighbor position information with each frame of picture.
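Step c (nearest-neighbor association of position fixes to frames) can be sketched as follows, assuming position fixes are kept as a time-sorted list; this is an illustrative sketch, not the patent's implementation:

```python
import bisect

def associate_positions(frame_times, fixes):
    """Attach to each image frame the position fix whose clock time is nearest.
    `fixes` is a list of (time, position) tuples sorted by time."""
    times = [t for t, _ in fixes]
    result = []
    for ft in frame_times:
        i = bisect.bisect_left(times, ft)
        # compare the neighbours on both sides of the insertion point
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        best = min(candidates, key=lambda j: abs(times[j] - ft))
        result.append(fixes[best][1])
    return result

print(associate_positions([1.2, 3.9], [(1.0, "A"), (2.0, "B"), (4.0, "C")]))  # ['A', 'C']
```

Binary search keeps the association cheap even when the positioning unit produces fixes at a much higher rate than the camera produces frames.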
5. The train-ground cooperative intelligent subway tunnel defect inspection method according to claim 3, wherein said step 2 further comprises the following substeps:
step 2.1: preprocess the received image data using a deep learning pipeline, resizing each image and then normalizing the data;
step 2.2: detect the normalized image data with the YOLOv3-tiny detection algorithm;
the YOLOv3-tiny detection algorithm uses tiny-darknet as its backbone network, and the process is as follows:
when data are input into the detection network, the input image is scaled to the size specified by the network and features are extracted to obtain a feature map of a certain size; the feature map is divided into N x N cells, and the cell into which the target's center coordinate falls predicts the target and its position information.
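The grid-cell assignment described above can be illustrated with a short sketch; the 13 x 13 grid size is an assumption (a typical YOLOv3-tiny output scale), not stated in the claim:

```python
def responsible_cell(cx, cy, img_w, img_h, n=13):
    """YOLO-style grid assignment: the feature map is divided into n x n cells,
    and the cell containing the target's center coordinate is the one
    responsible for predicting that target and its position."""
    col = min(int(cx / img_w * n), n - 1)  # clamp so centers on the edge stay in-grid
    row = min(int(cy / img_h * n), n - 1)
    return row, col

# a target centered in a 416 x 416 input falls in the middle cell of a 13 x 13 grid
print(responsible_cell(208, 208, 416, 416))  # (6, 6)
```

Only the predictions emitted by that one cell are matched against the target during training, which is what keeps the single-stage detector fast enough for on-board real-time filtering.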
6. The train-ground cooperative intelligent subway tunnel defect inspection method according to claim 5, wherein said step 4 comprises the following substeps:
step 4.1: preprocess the detection data using a deep learning pipeline so that they meet the input requirements of the detection model;
step 4.2: detect the preprocessed data with the Mask R-CNN algorithm; three detection models are trained with three different sets of hyper-parameters, the data are detected by each of the three models, and the intersection of the three models' detection results is taken as the final detection result;
the detection process is as follows:
the data are input into the detection network: the input image is scanned with a sliding window to perform the convolution operation (i.e. multiply-accumulate), windows of different sizes and aspect ratios are combined to generate mutually overlapping regions, and the position information of each region is obtained for candidate-region extraction; for each extracted candidate region, the mask branch of Mask R-CNN outputs a predicted binary mask, and defect classification and bounding-box regression are then performed on this basis to obtain the detected defect position (x, y, w, h).
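One way to realize "taking the intersection of the three models' detection results" is to keep only boxes confirmed by all three models under an IoU threshold; both the matching rule and the 0.5 threshold below are assumptions for illustration:

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def intersect_detections(runs, thr=0.5):
    """Keep only boxes from the first model's run that are confirmed
    (IoU >= thr) by at least one box in every other model's run."""
    return [box for box in runs[0]
            if all(any(iou(box, other) >= thr for other in run) for run in runs[1:])]

r1 = [(10, 10, 20, 20), (100, 100, 30, 30)]   # model 1: two candidate defects
r2 = [(12, 11, 20, 20)]                        # model 2 confirms only the first
r3 = [(9, 10, 21, 20), (200, 200, 10, 10)]     # model 3 confirms the first + a stray box
print(intersect_detections([r1, r2, r3]))      # [(10, 10, 20, 20)]
```

Requiring agreement from all three models trades a little recall for precision, which suits a secondary confirmation stage whose output triggers maintenance dispatch.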
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110330420.6A CN113092495A (en) | 2021-03-19 | 2021-03-19 | Intelligent inspection system and method for subway tunnel defects with cooperation of train and ground |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113092495A true CN113092495A (en) | 2021-07-09 |
Family
ID=76670507
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110330420.6A Pending CN113092495A (en) | 2021-03-19 | 2021-03-19 | Intelligent inspection system and method for subway tunnel defects with cooperation of train and ground |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113092495A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090222438A1 (en) * | 2008-02-29 | 2009-09-03 | Nokia Corporation And Recordation Form Cover Sheet | Method, system, and apparatus for location-aware search |
CN205003293U (en) * | 2015-07-01 | 2016-01-27 | 南京骑骄通信技术有限公司 | Towards on -vehicle high accuracy position terminal |
WO2016095352A1 (en) * | 2014-12-17 | 2016-06-23 | 中兴通讯股份有限公司 | Reverse navigation method and mobile terminal |
CN108038450A (en) * | 2017-12-14 | 2018-05-15 | 海安常州大学高新技术研发中心 | Marine pollution object detecting method based on unmanned plane and image recognition |
CN108398438A (en) * | 2018-05-11 | 2018-08-14 | 中国水利水电科学研究院 | A kind of defects detection vehicle and defect inspection method |
KR20190032908A (en) * | 2017-09-20 | 2019-03-28 | 주식회사 에이치엔에스휴먼시스템 | Method for managing steel quality and system |
US20200041284A1 (en) * | 2017-02-22 | 2020-02-06 | Wuhan Jimu Intelligent Technology Co., Ltd. | Map road marking and road quality collecting apparatus and method based on adas system |
CN112113978A (en) * | 2020-09-22 | 2020-12-22 | 成都国铁电气设备有限公司 | Vehicle-mounted tunnel defect online detection system and method based on deep learning |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114820595A (en) * | 2022-06-23 | 2022-07-29 | 湖南大学 | Method for detecting regional damage by cooperation of quadruped robot and unmanned plane and related components |
CN114820595B (en) * | 2022-06-23 | 2022-09-02 | 湖南大学 | Method for detecting regional damage by cooperation of quadruped robot and unmanned plane and related components |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||