CN113837996B - Automatic subway tunnel defect detection method supporting manual verification - Google Patents
Automatic subway tunnel defect detection method supporting manual verification
- Publication number: CN113837996B
- Application number: CN202110941228.0A
- Authority: CN (China)
- Legal status: Active (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G06T7/0004 — Industrial image inspection
- G01N21/95 — Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
- G01S17/89 — Lidar systems specially adapted for mapping or imaging
- G06F16/11 — File system administration, e.g. details of archiving or snapshots
- G06F16/13 — File access structures, e.g. distributed indices
- G06F18/24 — Classification techniques
- G06F18/253 — Fusion techniques of extracted features
- G06T7/12 — Edge-based segmentation
- G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2207/10032 — Satellite or aerial image; remote sensing
- G06T2207/10044 — Radar image
- G06T2207/20081 — Training; learning
- G06T2207/20132 — Image cropping
- G06T2207/30108 — Industrial image inspection
Abstract
The invention discloses an automatic subway tunnel defect detection method supporting manual verification, which comprises the following steps: 1) establishing a coordinate model of the projection space and projecting the point cloud data into orthographic projection images of the subway tunnel; 2) performing defect detection on each orthographic projection image and generating detection results; 3) mapping the small-image defect detection results onto the large images; 4) manually correcting the detection results on each large image and generating verification files; 5) calculating the tunnel defect areas and extracting the tunnel defect edges, generating an edge extraction result file; 6) calculating each defect's start and end mileage, start and end ring numbers and same-ring sequence number, and generating a defect detail table. By detecting tunnel defects from point cloud data, the method achieves intelligent identification with a deep learning model, assisted by a manual correction function, and thus provides a complete, efficient and reliable solution for subway tunnel defect detection, with high application value and economic benefit.
Description
Technical Field
The invention relates to automatic detection of tunnel defects, in particular to an automatic defect detection method based on deep learning that supports manual verification, and belongs to the field of rail transit inspection.
Background
With the rapid development of underground rail transit, its safety and maintenance have attracted increasing attention. Under the combined influence of factors such as changes in environmental load at the surface, construction activity around the subway and changes in geological conditions, defects such as water leakage, surface spalling and cracks can develop inside a tunnel; if defects above a specified severity level are not treated in time, they can create safety hazards that are difficult to estimate. Accurately identifying defects in a subway tunnel in real time is therefore essential for making effective maintenance strategies and eliminating the potential safety hazards the defects may cause.
Conventional subway tunnel defect detection relies mainly on manual visual inspection with auxiliary tools, a mode that suffers from low efficiency, low accuracy, poor timeliness and strong subjectivity, and increasingly fails to meet the requirements of subway tunnel defect detection. An existing method (Li Jun, Zhu Guoqi, Fan Xiaodong, Yang Wei, Huang Zhen. Machine vision detection system for subway tunnel structures and its application analysis [J]. Bulletin of Surveying and Mapping, 2020(09): 27-32+37) acquires image data with an array of six CCD area-scan cameras, after parameter tuning and with hand-held supplementary lighting equipment; detection samples are produced by image enhancement and cropping, a Cascade R-CNN model is trained for defect detection, and finally crack and water-leakage distribution maps and mileage distribution maps are generated.
The CCD area-scan camera approach involves a complicated parameter-tuning process, long image acquisition times and low measurement efficiency; meanwhile, image quality can degrade under the influence of factors such as lighting inside the tunnel, which ultimately affects measurement accuracy. Existing subway tunnel track inspection trolleys are equipped with lidar scanning equipment, which provides accurate tunnel point cloud data without parameter tuning or hand-held supplementary lighting, with short data acquisition times and high measurement efficiency. However, the point cloud data obtained by lidar scanning is measurement data in three-dimensional space and cannot directly express the geometric characteristics of the tunnel's inner surface. To detect defects with lidar point cloud data, the point cloud must first be projected into an orthographic projection image, after which image detection technology is used to detect the defects in the tunnel.
Disclosure of Invention
The invention aims to realize an automatic subway tunnel defect detection method supporting manual verification, which comprises the following steps: 1) establishing a coordinate model of the projection space and projecting the point cloud data into orthographic projection images of the subway tunnel; 2) performing defect detection on each orthographic projection image and generating detection results; 3) mapping the small-image defect detection results onto the large images; 4) manually correcting the detection results on each large image and generating verification files; 5) calculating the tunnel defect areas and extracting the tunnel defect edges, generating an edge extraction result file; 6) calculating each defect's start and end mileage, start and end ring numbers and same-ring sequence number, and generating a defect detail table. Specifically, the method of the invention comprises the following steps:
A. Establish a coordinate model of the projection space and project the point cloud data into orthographic projection images of the subway tunnel. The specific steps are:
A1. Establish a coordinate model of the projection space, comprising the correspondence among the lidar measurement coordinate system, the orthographic projection coordinate system and the cross-section coordinate system;
A2. Clean the noise data in the lidar point cloud data;
the lidar point cloud data refers to the set of XYZ coordinates and reflectivity Ref produced by each scanning point on the tunnel's inner wall in the lidar measurement coordinate system, abbreviated below as point cloud data;
A3. Index, segment and merge the point cloud data according to each scanning point's mileage key, distribute the point cloud data blocks carrying mileage-section marks to the computing nodes, and process them in a distributed, parallel manner;
A4. Project the point cloud data of each data block to generate an orthographic projection image file, and store it under the large-image directory to be detected.
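The patent gives no formulas for the projection itself. As a minimal sketch of one common way to unroll a cylindrical tunnel surface into an orthographic grey image, the following assumes reflectivity is normalised to [0, 1]; the function name, the millimetre-per-pixel scale and the image width are illustrative assumptions, not values from the patent:

```python
import numpy as np

def unroll_to_orthographic(points, mm_per_px=5.0, width_px=2000):
    """Project lidar points (x, y, z, reflectivity) onto an unrolled
    cylinder image: rows follow the angle around the tunnel axis,
    columns follow mileage (z), and the grey value comes from the
    reflectivity (assumed normalised to [0, 1])."""
    x, y, z, ref = points.T
    theta = np.arctan2(y, x)                              # angle in the cross-section
    row = ((theta + np.pi) / (2 * np.pi) * (width_px - 1)).astype(int)
    col = ((z - z.min()) * 1000.0 / mm_per_px).astype(int)
    img = np.zeros((width_px, col.max() + 1), dtype=np.uint8)
    img[row, col] = np.clip(ref * 255, 0, 255).astype(np.uint8)
    return img
```

In practice one would accumulate or interpolate multiple hits per pixel rather than overwrite, but the coordinate mapping is the essential step.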
B. Perform defect detection on each orthographic projection image and generate detection results. The specific steps are:
B1. Traverse each orthographic projection image file under the large-image directory to be detected, slice it into small-image files, and generate a position look-up table, as follows:
B1.1 Read a large-image file into memory, hereafter called a large image;
B1.2 Compute each small image's cutting position in the large image from the large image's size;
B1.3 Slice at the cutting positions to generate the small-image files, and store them under the small-image directory to be detected;
B1.4 Write the name and pixel size of the large-image file, the name and pixel size of each small-image file, the upper-left and lower-right coordinates of each small image within the large image, and the slice row and column of each small image within the large image into the position look-up table;
B2. Record the paths of all small-image files in the small-image directory to be detected into a text file, and set this file as the model's detection configuration file;
B3. Invoke the trained detection model to detect defects in each small image under the small-image directory to be detected, and generate a detection result file for each defect type under the detection result directory;
in a detection result file, each detection result occupies one line and comprises the small-image index number, the confidence, and the upper-left and lower-right coordinates of the identification box.
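Steps B1.1 through B1.4 can be sketched as follows. The tile-numbering order, the function name and the dictionary keys are illustrative assumptions, but the recorded fields mirror the position look-up table described above:

```python
import numpy as np

def slice_into_tiles(big, tile_h, tile_w, big_name="Name1.jpg"):
    """Cut a large image (H x W array) into small images and record, for
    each one, the entries the position look-up table keeps: large-image
    name/size, small-image name/size, corner box, slice row and column."""
    H, W = big.shape[:2]
    tiles, table = {}, []
    idx = 0
    for r in range(0, H, tile_h):
        for c in range(0, W, tile_w):
            tile = big[r:r + tile_h, c:c + tile_w]
            name = f"{idx}.jpg"
            tiles[name] = tile
            table.append({
                "big": big_name, "tile": name,
                "big_size": (W, H),
                "tile_size": (tile.shape[1], tile.shape[0]),
                # upper-left and lower-right corners inside the large image
                "box": (c, r, c + tile.shape[1], r + tile.shape[0]),
                "row": r // tile_h, "col": c // tile_w,
            })
            idx += 1
    return tiles, table
```

Edge tiles come out smaller when the large image is not an exact multiple of the tile size; the table records their actual sizes, which is what makes the later small-to-large mapping unambiguous.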
C. Map the small-image defect detection results onto the large images. The specific steps are:
C1. Traverse the detection result files of each defect type under the detection result directory, screen out the detections whose confidence score exceeds the specified confidence threshold, and write the screened detections into a text file, called the screening file;
C2. Read the position look-up table and the screening file, and extract the positional correspondence between each large image and its small images together with the defect information in the small images, called the extracted information;
C3. From the extracted information, initialize a two-dimensional matrix for each large image, called the zero-order bitmap, as follows:
C3.1 Each bit in the matrix corresponds to one small image;
C3.2 The numbers of columns and rows into which the large image is cut are the width W and height H of the matrix;
C4. Merge adjacent defects on the large image according to the detection results in the small images. The specific steps are:
C4.1 Set the value of each bit in the two-dimensional matrix: if a class-a defect exists on the corresponding small image and at least one of its sides lies within the set threshold distance of the small image's edge, set the bit to 1; otherwise, set it to 0;
C4.2 Traverse the positions whose value is 1 in the bitmap; for each such position i, examine the three positions to its right, below and lower-right (the search positions); if a search position's value is 1 and the distance between the bounding box of defect b on its small image and the bounding box of defect c at position i is within the specified threshold, defects b and c are adjacent;
C4.3 Merge each pair of adjacent defects into one defect: the merged confidence is the confidence of the defect at position i; the merged upper-left coordinate is the upper-left coordinate at position i, the merged lower-right x coordinate is the maximum x coordinate among the merged defects, and the merged lower-right y coordinate is the maximum y coordinate among the merged defects;
C4.4 From the merged defect information, generate a new two-dimensional matrix for each large image, called the first-order bitmap, as follows:
C4.4.1 Each bit in the matrix corresponds to four small images;
C4.4.2 The first-order bitmap is W/2 wide and H/2 high;
if adjacent defects exist in the first-order bitmap, carry out a second round of merging, merge all adjacent defects, generate the merged defect information, and write it into an xml file;
the information comprises the defect type, the name of the large image, the confidence, and the upper-left and lower-right coordinates within the large image.
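The zero- and first-order bitmap procedure above essentially merges bounding boxes that come within a threshold distance of each other across small-image borders. A simplified, bitmap-free sketch of that merging rule (a hypothetical helper, not the patent's exact algorithm) is:

```python
def merge_adjacent(boxes, gap=10):
    """Greedily merge bounding boxes (x1, y1, x2, y2) whose separation
    along both axes is below `gap` pixels, replacing each adjacent pair
    with its union box, until no adjacent pair remains."""
    boxes = [list(b) for b in boxes]
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                a, b = boxes[i], boxes[j]
                # boxes are adjacent if their gap is small on both axes
                if (a[0] - b[2] < gap and b[0] - a[2] < gap and
                        a[1] - b[3] < gap and b[1] - a[3] < gap):
                    boxes[i] = [min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3])]
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return [tuple(b) for b in boxes]
```

The patent's bitmap pyramid serves the same purpose more cheaply: it restricts the pairwise test to defects that touch a small-image edge and to the right, lower and lower-right neighbours, instead of comparing all pairs.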
D. Manually correct the detection results on each large image and generate verification files. The specific steps are:
D1. Read a large image and its corresponding xml file, and mark the detection results on the large image with brightly coloured rectangular boxes, each rectangular box corresponding to one defect detection result;
D2. Manually check and correct the rectangular box of each defect detection result, as follows:
D2.1 Observe and confirm whether a defect exists within the rectangular box; if not, delete the rectangular box;
D2.2 Check whether the defect type associated with the rectangular box is correct; if the label information in the pop-up box is wrong, modify it;
D2.3 Observe whether the size of the rectangular box is correct; if it is too large or too small, drag the rectangular box to adjust its size so that it just encloses the defect;
D3. Write the position information of all rectangular boxes in the large image, the corresponding defect types, the name of the large image and the confidences into a verification file, and store it under the verification directory.
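The patent does not specify the schema of the xml/verification files. A plausible Pascal-VOC-style serialisation of the corrected rectangles, in which every element name is an assumption, might look like:

```python
import xml.etree.ElementTree as ET

def write_verification_file(big_name, defects, path):
    """Serialise corrected rectangles to an XML verification file.
    Each defect dict carries a type, a confidence and box corners;
    the element names below are assumed, not taken from the patent."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = big_name
    for d in defects:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = d["type"]
        ET.SubElement(obj, "confidence").text = str(d["conf"])
        box = ET.SubElement(obj, "bndbox")
        for k in ("xmin", "ymin", "xmax", "ymax"):
            ET.SubElement(box, k).text = str(d[k])
    ET.ElementTree(root).write(path)
```

Keeping the detection-stage xml and the verification file in the same schema lets the same reader serve both steps D1 and E1.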
E. Calculate the tunnel defect areas and extract the tunnel defect edges, generating an edge extraction result file. The specific steps are:
E1. Traverse each large image and its corresponding verification file under the verification directory, obtain each defect's position in the large image, and cut out the defect region, called a crop;
E2. Traverse each crop, filter it with a Gaussian kernel, and convert the denoised image to grey scale, generating a grey-scale image;
E3. For each grey-scale image, determine the threshold that maximizes the between-class variance, and use it to binarize the grey-scale image, generating a binarized image;
E4. Count the black pixels in the binarized image, compute the defect's pixel area, and map the pixel area to the defect's actual area according to the mapping between large-image pixels and actual mileage;
E5. Extract the defect's edges from the binarized image, and store the edge extraction result in a file as a numerical matrix, serving as a mask of the defect region's edges.
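Step E3's maximum between-class variance criterion is Otsu's method. A self-contained sketch of that threshold selection on a uint8 grey image follows; in practice `cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)` performs the same computation:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold that maximizes the between-class variance
    of a uint8 grey image (Otsu's method), scanning all 256 levels."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    mu_total = (np.arange(256) * hist).sum() / total
    best_t, best_var = 0, 0.0
    cum_w = 0.0   # pixels at or below t
    cum_mu = 0.0  # intensity mass at or below t
    for t in range(256):
        cum_w += hist[t]
        cum_mu += t * hist[t]
        w0 = cum_w / total
        if w0 in (0.0, 1.0):      # one class empty: variance undefined
            continue
        mu0 = cum_mu / cum_w
        mu1 = (mu_total * total - cum_mu) / (total - cum_w)
        var = w0 * (1.0 - w0) * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Binarizing with the returned threshold then reduces step E4 to counting pixels on the dark side and multiplying by the calibrated area per pixel.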
F. Calculate each defect's start and end mileage, start and end ring numbers and same-ring sequence number, and generate a defect detail table. The specific steps are:
F1. Calculate the mileage value of each defect's position from the mapping between the orthographic projection image's pixel values and the actual line mileage;
F2. Calculate the start and end ring numbers corresponding to each defect's position from the pixel positions of the circumferential seams and of the defect in the orthographic projection image;
F3. Calculate the defects' same-ring sequence numbers, as follows:
F3.1 Group all defects by segment ring according to their start ring numbers, writing the defect information of each ring into its own list;
F3.2 Sort the defects within a ring by their start angle values in ascending order, and number the defects' positions within the ring in that order;
F4. Write each defect's actual area, start and end ring numbers, same-ring sequence number and defect type into a file for each large image, generating a defect detail table per large image.
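Steps F1 through F3 can be sketched as follows, assuming the circumferential seams are given as pixel x-positions along the mileage axis and the start angle of a defect is approximated by its upper y coordinate; every parameter and key name here is an assumption, since the patent gives no concrete interface:

```python
import bisect

def defect_rings_and_ordering(defects, ring_seam_px, mm_per_px, mileage0_m=0.0):
    """For each defect box (x1, y1, x2, y2): derive start/end mileage from
    the pixel-to-mileage mapping, start/end ring numbers from the seam
    positions, then number defects within each start ring by start angle."""
    rows = []
    for (x1, y1, x2, y2) in defects:
        rows.append({
            "box": (x1, y1, x2, y2),
            "start_m": mileage0_m + x1 * mm_per_px / 1000.0,
            "end_m": mileage0_m + x2 * mm_per_px / 1000.0,
            # a defect starting after the k-th seam lies in ring k+1
            "start_ring": bisect.bisect_right(ring_seam_px, x1) + 1,
            "end_ring": bisect.bisect_right(ring_seam_px, x2 - 1) + 1,
        })
    # same-ring sequence numbers: sort by start angle (here, y1) per ring
    by_ring = {}
    for r in rows:
        by_ring.setdefault(r["start_ring"], []).append(r)
    for ring_rows in by_ring.values():
        for seq, r in enumerate(sorted(ring_rows, key=lambda r: r["box"][1]), 1):
            r["seq_in_ring"] = seq
    return rows
```

The `bisect` lookup mirrors the patent's comparison of defect pixel positions against the circular-seam pixel positions; the resulting rows carry exactly the fields the defect detail table records.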
The advantage of the method is that lidar point cloud data are processed into orthographic projection images with multi-dimensional feature fusion in a distributed computing manner, the orthographic projection images undergo defect identification by a deep learning model, the identification results are corrected manually, and finally the corrected results are analysed to generate defect detail tables, providing a complete, efficient and reliable solution for subway tunnel defect detection.
Drawings
Fig. 1: automatic subway tunnel defect detection method flow chart supporting manual verification
Detailed Description
The invention will now be described in detail with reference to the drawing and a specific example.
The flow of the method is shown in Fig. 1 and comprises the following steps: 1) establishing a coordinate model of the projection space and projecting the point cloud data into orthographic projection images of the subway tunnel; 2) performing defect detection on each orthographic projection image and generating detection results; 3) mapping the small-image defect detection results onto the large images; 4) manually correcting the detection results on each large image and generating verification files; 5) calculating the tunnel defect areas and extracting the tunnel defect edges, generating an edge extraction result file; 6) calculating each defect's start and end mileage, start and end ring numbers and same-ring sequence number, and generating a defect detail table.
The invention is further described step by step with data examples, taking the data of one tunnel section as an example:
1. Establish a coordinate model of the projection space and project the point cloud data into orthographic projection images of the subway tunnel. The specific steps are:
1.1 Establish a coordinate model of the projection space, comprising the lidar measurement coordinate system, the orthographic projection coordinate system and the cross-section coordinate system;
1.2 Clean the noise data in the lidar point cloud data; part of the cleaned data is as follows:
1.3 Index, segment and merge the point cloud data according to each scanning point's mileage key, distribute the point cloud data blocks carrying mileage marks to the computing nodes, and process them in a distributed, parallel manner; part of the marked data blocks are as follows:
1.4 Project the point cloud data of each data block to generate an orthographic projection image file, and store it under the large-image directory to be detected; the grey matrices generated from part of the data blocks are as follows:
matrix 1: [[[124],[122],[132],...]
[[124],[124],[130],...]
...
[[50],[0],[0],...]]
Matrix 2: [[[115],[120],[117],...]
[[116],[116],[116],...]
...
[[0],[20],[0],...]]
2. Perform defect detection on each orthographic projection image and generate detection results. The specific steps are:
2.1 Traverse each orthographic projection image file under the large-image directory to be detected, slice it into small-image files, and generate a position look-up table; part of the position look-up table data is as follows:
Large image | Small image | Large image size | Small image size | Small image position | Row | Column |
Name1.jpg | 0.jpg | 41727;6000 | 200;474 | 0;0;200;474 | 0 | 0 |
Name1.jpg | 1.jpg | 41727;6000 | 200;474 | 0;474;200;948 | 1 | 0 |
Name1.jpg | 2.jpg | 41727;6000 | 200;474 | 0;948;200;1422 | 2 | 0 |
Name1.jpg | 3.jpg | 41727;6000 | 200;474 | 0;1422;200;1896 | 3 | 0 |
Name1.jpg | 4.jpg | 41727;6000 | 200;474 | 0;1896;200;2370 | 4 | 0 |
Name1.jpg | 5.jpg | 41727;6000 | 200;474 | 0;2370;200;2844 | 5 | 0 |
Name1.jpg | 6.jpg | 41727;6000 | 200;474 | 0;2844;200;3318 | 6 | 0 |
Name1.jpg | 7.jpg | 41727;6000 | 200;474 | 0;3318;200;3792 | 7 | 0 |
Name1.jpg | 8.jpg | 41727;6000 | 200;474 | 0;3792;200;4266 | 8 | 0 |
... | ... | ... | ... | ... | ... | ... |
2.2 Record the paths of all small-image files in the small-image directory to be detected into a text file, and set this file as the model's detection configuration file;
2.3 Invoke the trained detection model to detect defects in each small image under the small-image directory to be detected, and generate a detection result file for each defect type under the detection result directory; part of the detection result file data is as follows:
3. Map the small-image defect detection results onto the large images. The specific steps are:
3.1 Traverse the detection result files of each defect type under the detection result directory, screen out the detections whose confidence score exceeds the specified confidence threshold, and write them into a text file, the screening file; part of the screening results are as follows:
3.2 Read the position look-up table and the screening file, and extract the positional correspondence between each large image and its small images together with the defect information in the small images, the extracted information;
3.3 From the extracted information, initialize a two-dimensional matrix, the zero-order bitmap, for each large image;
3.4 Merge adjacent defects on the large image according to the detection results in the small images;
3.5 If adjacent defects exist in the first-order bitmap, carry out a second round of merging, merge all adjacent defects, generate the merged defect information and write it into an xml file; part of the xml file data is as follows:
4. Manually correct the detection results on each large image and generate verification files. The specific steps are:
4.1 Read the large image and its corresponding xml file, and mark the detection results on the large image with highlighted rectangular boxes, each corresponding to one defect detection result;
4.2 Manually check and correct the rectangular box of each defect detection result;
4.3 Write the position information of all rectangular boxes in the large image, the corresponding defect types, the name of the large image and the confidences into a verification file, and store it under the verification directory; part of the verification file data is as follows:
5. calculating tunnel defect areas and extracting tunnel defect edges to generate an edge extraction result file, wherein the method comprises the following specific steps:
5.1 traversing each large graph and a corresponding check file under the check directory, acquiring the position of the disease in the large graph, and cutting out a disease area, namely a cut-down graph;
5.2 traversing each cut-down graph, filtering the disease cut-down graph by using a Gaussian kernel, and graying the image after noise elimination to generate a gray-scale graph;
5.3, calculating the maximum inter-class variance corresponding to each gray level image, taking the maximum inter-class variance as a dividing threshold value, and carrying out binarization division on the gray level image to generate a binarized image;
5.4, counting the number of black area pixel points in the binarized image, calculating the disease pixel area, and mapping the disease pixel area into the actual disease area according to the mapping relation between the large-image pixels and the actual mileage;
5.5, extracting the disease edges by using the binarized image, and storing the edge extraction result in a file in the form of a numerical matrix to be used as a mask of the disease region edges;
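Steps 5.3-5.5 amount to Otsu thresholding (maximum between-class variance), dark-pixel counting, and a boundary mask. The following is a numpy-only sketch; the Gaussian smoothing of step 5.2 is omitted, and `mm_per_pixel` is an assumed stand-in for the large-graph-pixel-to-actual-mileage mapping:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance (step 5.3)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = gray.size
    cum_count = np.cumsum(hist)
    cum_sum = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = cum_count[t - 1]           # pixels below the threshold
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_sum[t - 1] / w0
        mu1 = (cum_sum[255] - cum_sum[t - 1]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def defect_area_and_edge(gray, mm_per_pixel):
    """Binarize a grayscale crop, count dark (defect) pixels, map the pixel
    area to a physical area, and build a 0/1 edge mask (steps 5.3-5.5)."""
    t = otsu_threshold(gray)
    binary = (gray >= t).astype(np.uint8)   # 1 = background, 0 = defect
    pixel_area = int((binary == 0).sum())   # step 5.4: black-pixel count
    real_area = pixel_area * mm_per_pixel ** 2
    # step 5.5: a defect pixel is an edge pixel if any 4-neighbor is background
    pad = np.pad(binary, 1, constant_values=1)
    neigh = pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]
    edge = ((binary == 0) & (neigh > 0)).astype(np.uint8)
    return pixel_area, real_area, edge
```

The `edge` matrix can be stored directly as the numerical-matrix mask of the disease region edge described in step 5.5.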
6. calculating the starting and stopping mileage, the starting and stopping ring number and the same ring number of the disease, and generating a disease detail table, wherein the specific steps are as follows:
6.1, calculating the starting and stopping mileage of the disease according to the mapping relation between the pixel values of the orthographic projection diagram and the actual line mileage;
6.2, calculating the start-stop ring number corresponding to the position of each disease according to the pixel value of each circular seam and the pixel value of the disease position in the orthographic projection diagram; partial start-stop ring number data are as follows:
Disease location picture name | Disease type | Initial ring number | Terminating ring number |
---|---|---|---|
Name1 | ss | 1 | 2 |
Name1 | ss | 1 | 2 |
Name1 | ss | 1 | 2 |
Name1 | ss | 2 | 3 |
Name1 | ss | 2 | 3 |
Name1 | ss | 3 | 3 |
... | ... | ... | ... |
6.3, calculating the disease same-ring serial numbers; partial same-ring serial number data are as follows:
Disease location picture name | Disease type | Same ring number |
---|---|---|
Name1 | ss | 1 |
Name1 | ss | 2 |
Name1 | ss | 3 |
Name1 | ss | 1 |
Name1 | ss | 2 |
Name1 | ss | 1 |
... | ... | ... |
6.4, writing the actual area, start-stop ring number, same-ring serial number and disease type of each disease in each large graph into a file, and generating a disease detail table for each large graph;
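Steps 6.2-6.3 can be sketched as follows, assuming the circular seams are given as sorted pixel x-coordinates in the orthographic projection and each disease record carries its start ring and start angle; the field names and the boundary convention (a disease lying exactly on a seam belongs to the next ring) are assumptions:

```python
import bisect
from collections import defaultdict

def ring_numbers(seam_xs, x_start, x_end):
    """Map a disease's pixel span [x_start, x_end] to its start/stop ring
    numbers, given the sorted pixel x-coordinates of the circular seams
    (step 6.2). Rings are numbered from 1; ring 1 ends at seam_xs[0]."""
    start = bisect.bisect_right(seam_xs, x_start) + 1
    end = bisect.bisect_right(seam_xs, x_end) + 1
    return start, end

def same_ring_sequence(diseases):
    """Assign each disease its serial number within its start ring (step 6.3):
    group by start ring, sort by start angle ascending, number from 1.
    `diseases` is a list of dicts with 'start_ring' and 'start_angle'."""
    by_ring = defaultdict(list)
    for d in diseases:
        by_ring[d["start_ring"]].append(d)
    for ring_list in by_ring.values():
        ring_list.sort(key=lambda d: d["start_angle"])
        for i, d in enumerate(ring_list, start=1):
            d["same_ring_no"] = i
    return diseases
```

This reproduces the pattern of the tables above: diseases sharing a start ring are renumbered 1, 2, 3, ... in order of their start angles.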
The method realizes intelligent identification of tunnel defects from point cloud data with a deep learning model, assisted by a manual correction function, and thus provides a complete, efficient and reliable solution for subway tunnel defect detection, with high application value and economic benefit.
Finally, it should be noted that the examples are disclosed for the purpose of aiding in the further understanding of the present invention, but those skilled in the art will appreciate that: various alternatives and modifications are possible without departing from the spirit and scope of the invention and the appended claims. Therefore, the invention should not be limited to the disclosed embodiments, but rather the scope of the invention is defined by the appended claims.
Claims (3)
1. An automatic subway tunnel defect detection method supporting manual verification, comprising the following steps:
A. establishing a coordinate model of the projection space and projecting the point cloud data into an orthographic projection diagram of the subway tunnel, with the following specific steps:
A1. establishing a coordinate model of a projection space, wherein the coordinate model comprises a corresponding relation among a laser radar measurement coordinate system, an orthographic projection coordinate system and a section coordinate system;
A2. cleaning noise data in laser radar point cloud data;
the laser radar point cloud data refers to a set of XYZ coordinates and reflectivity Ref formed by each scanning point on the inner wall of a tunnel in a laser radar measurement three-dimensional coordinate system, and is abbreviated as point cloud data;
A3. according to the mileage keywords of each scanning point, indexing, segmenting and merging point cloud data, distributing point cloud data blocks with mileage section marks to each computing node, and carrying out distributed parallel processing;
A4. projecting the point cloud data of each data block to generate an orthographic projection image file, and storing the orthographic projection image file under a to-be-detected large image directory;
B. detecting diseases in each orthographic projection image to generate detection results, wherein the specific steps are as follows:
B1. traversing each orthographic projection image file under the large image directory to be detected, segmenting the orthographic projection image files into small image files, and generating a position comparison table;
B2. recording the paths of all small-graph files under the small-graph directory to be detected into a text file, and setting this file as the detection configuration file of the model;
B3. invoking the trained detection model to detect diseases of each small image under the small image directory to be detected, and generating a detection result file for each type of diseases under the detection result directory;
in the detection result file, each disease detection result occupies one row and comprises a small image index number, confidence coefficient and coordinates of the upper left corner and the lower right corner of the identification frame;
C. mapping the small-graph disease detection results to the large graph, wherein the specific steps are as follows:
C1. traversing the detection result file of each type of disease under the detection result directory, selecting the detection results whose confidence scores are larger than the specified confidence threshold, and writing the selected detection results into a text file, namely the screening file;
C2. reading a position comparison table and a screening file, and extracting the position corresponding relation between each large graph and each small graph and disease information in the small graph, namely extracting information;
C3. initializing a two-dimensional matrix for each large graph according to the extracted information, namely a zero-order bitmap, wherein the method comprises the following specific steps of:
C3.1 each bit in the matrix corresponds to one small graph;
C3.2 the numbers of columns and rows into which the large graph is cut are the width W and height H of the matrix;
C4. merging adjacent diseases on the large graph according to the disease detection result in each small graph;
C5. if adjacent diseases exist in the first-order bitmap, carrying out second-round merging, merging all adjacent diseases, generating merged disease information, and writing the merged disease information into an xml file;
the information comprises disease types, names of the large graphs, confidence, upper left corner coordinates and lower right corner coordinates in the large graphs;
D. the detection result of each large graph is manually corrected to generate a verification file, and the specific steps are as follows:
D1. extracting a detection result corresponding to each large graph in the combined file, writing the detection result into a text file according to a specified xml format, and generating an xml file of each large graph;
D2. manually checking and correcting the rectangular frame of each disease detection result, with the following specific steps:
D2.1 observing and confirming whether a disease exists in the rectangular frame; if no disease exists in the rectangular frame, deleting the rectangular frame;
D2.2 checking whether the disease type corresponding to the rectangular frame is correct; if the label is wrong, modifying the label information in the pop-up box;
D2.3 observing whether the size of the rectangular frame is correct; if the rectangular frame is too large or too small, dragging it to correct its size so that it just frames the disease;
D3. writing the position information of all rectangular frames in the large graph, the corresponding disease type, the name of the large graph where it is located, and the confidence into a check file, and storing the check file under the check directory;
E. calculating tunnel defect areas and extracting tunnel defect edges to generate an edge extraction result file, wherein the method comprises the following specific steps:
E1. traversing each large graph and a corresponding check file under the check directory, acquiring the position of the disease in the large graph, and cutting out a disease area, namely a cut graph;
E2. traversing each cut-down graph, filtering the disease cut-down graph by using a Gaussian kernel, and graying the image after noise elimination to generate a gray-scale graph;
E3. calculating the maximum inter-class variance corresponding to each gray level image, taking the maximum inter-class variance as a dividing threshold value, and carrying out binarization division on the gray level image to generate a binarized image;
E4. counting the number of black area pixel points in the binarized image, calculating the disease pixel area, and mapping the disease pixel area into the disease actual area according to the mapping relation between the large-image pixels and the actual mileage;
E5. extracting the edge of the disease by using a binarized image, and storing the edge extraction result in a file in the form of a numerical matrix to be used as a mask of the edge of the disease area;
F. calculating the starting and stopping mileage, the starting and stopping ring number and the same ring number of the disease, and generating a disease detail table, wherein the specific steps are as follows:
F1. calculating the mileage value of the position where the disease is located according to the mapping relation between the pixel value of the orthographic projection diagram and the actual line mileage;
F2. calculating a start-stop ring number corresponding to the position of the disease according to the pixel value of each circular seam and the pixel value of the disease position in the orthographic projection diagram;
F3. the disease same-ring sequence number is calculated, and the specific steps are as follows:
F3.1 classifying all diseases by pipe ring according to the disease starting ring number, and writing the disease information within the same pipe ring into the same list;
F3.2 sorting the diseases in the same ring from small to large by their starting angle values, and assigning each disease its position serial number within the ring according to this order;
F4. writing the actual area, start-stop ring number, same-ring serial number and disease type of each disease in each large graph into a file, and generating a disease detail table for each large graph.
2. The automatic subway tunnel defect detection method supporting manual verification according to claim 1, wherein each orthographic projection image file under the large map directory to be detected is traversed, segmented into small map files, and a position comparison table is generated, and the specific steps are as follows:
B1.1 reading a large-map file into memory, referred to as the large map for short;
B1.2 calculating the cutting position of each small map in the large map according to the size of the large map;
B1.3 slicing at the cutting positions to generate small-map files, and storing them under the small-map directory to be detected;
B1.4 writing the name and pixel size of the large-map file, the name and pixel size of each small-map file, the upper-left and lower-right corner coordinates of the small map in the large map, and the slice row and column numbers of the small map in the large map into the position comparison table.
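Steps B1.2 and B1.4 reduce to a tiling computation. A sketch follows, with edge tiles clipped to the image border; the patent does not specify how partial tiles at the border are handled, so the clipping is an assumption:

```python
def slice_positions(big_w, big_h, tile_w, tile_h):
    """Compute the cut position of every small map inside a large map
    (steps B1.2/B1.4): the slice row/column counts plus each tile's
    upper-left and lower-right corner coordinates in the large map."""
    cols = (big_w + tile_w - 1) // tile_w   # ceiling division
    rows = (big_h + tile_h - 1) // tile_h
    table = []
    for r in range(rows):
        for c in range(cols):
            x0, y0 = c * tile_w, r * tile_h
            x1 = min(x0 + tile_w, big_w)    # clip edge tiles to the border
            y1 = min(y0 + tile_h, big_h)
            table.append({"row": r, "col": c,
                          "xmin": x0, "ymin": y0, "xmax": x1, "ymax": y1})
    return rows, cols, table
```

Each entry of `table`, together with the large-map and small-map file names and pixel sizes, corresponds to one row of the position comparison table of step B1.4.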
3. The automatic detection method for subway tunnel defects supporting manual verification according to claim 1, wherein adjacent defects on a large map are combined according to the defect detection result in each small map, and the method comprises the following specific steps:
C4.1 setting the value of each bit in the two-dimensional matrix: if a class-A disease exists on the small graph corresponding to the bit and the distance between at least one side of its surrounding frame and the edge of the small graph is within the set threshold range, setting the bit to 1; otherwise, setting it to 0;
C4.2 traversing each position with value 1 in the bitmap, denoted position i, and searching the three positions to its right, below, and lower right, namely the search positions; if the value of a search position is 1 and the distance between the surrounding frame of disease b on its small graph and the surrounding frame of disease c at position i does not exceed the specified threshold, then diseases b and c are adjacent;
C4.3 combining two adjacent diseases into one disease, wherein the confidence of the combined disease is the confidence of the disease at position i; the upper-left corner coordinate of the combined disease is the upper-left corner coordinate at position i, the x value of its lower-right corner coordinate is the maximum x coordinate among the combined diseases, and the y value is the maximum y coordinate among the combined diseases;
C4.4 generating a new two-dimensional matrix, called the first-order bitmap, for each large graph according to the combined disease information.
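The merging rule of steps C4.1-C4.4 can be sketched as follows, simplified to one disease box per small graph and a single merge pass (the actual method iterates through zero- and first-order bitmaps). Coordinates are in large-graph pixels, and `gap_thresh` plays the role of the specified threshold:

```python
def merge_adjacent(tiles, gap_thresh):
    """Merge defect boxes found in right / below / lower-right adjacent
    tiles (steps C4.2-C4.3). `tiles` maps (row, col) -> box, each box a
    dict with xmin/ymin/xmax/ymax/conf in large-graph pixel coordinates.
    Simplified sketch: one box per tile, one merge pass."""
    merged, absorbed = {}, set()
    for (r, c), box in sorted(tiles.items()):
        if (r, c) in absorbed:          # already merged into an earlier tile
            continue
        cur = dict(box)
        for dr, dc in ((0, 1), (1, 0), (1, 1)):   # right, below, lower right
            key = (r + dr, c + dc)
            nb = tiles.get(key)
            if nb is None or key in absorbed:
                continue
            # axis-wise gap between the two surrounding frames (0 if overlapping)
            gap_x = max(0, nb["xmin"] - cur["xmax"], cur["xmin"] - nb["xmax"])
            gap_y = max(0, nb["ymin"] - cur["ymax"], cur["ymin"] - nb["ymax"])
            if max(gap_x, gap_y) <= gap_thresh:   # diseases b and c are adjacent
                # per C4.3: keep position i's upper-left corner and confidence,
                # extend the lower-right corner to the maximum x / y of the pair
                cur["xmax"] = max(cur["xmax"], nb["xmax"])
                cur["ymax"] = max(cur["ymax"], nb["ymax"])
                absorbed.add(key)
        merged[(r, c)] = cur
    return list(merged.values())
```

A crack crossing a tile boundary thus collapses into a single box whose confidence and upper-left corner come from the earlier (position i) tile, matching step C4.3.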
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110941228.0A CN113837996B (en) | 2021-08-17 | 2021-08-17 | Automatic subway tunnel defect detection method supporting manual verification |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113837996A CN113837996A (en) | 2021-12-24 |
CN113837996B true CN113837996B (en) | 2023-09-29 |
Family
ID=78960605
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110941228.0A Active CN113837996B (en) | 2021-08-17 | 2021-08-17 | Automatic subway tunnel defect detection method supporting manual verification |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113837996B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106127771A (en) * | 2016-06-28 | 2016-11-16 | 上海数联空间科技有限公司 | Tunnel orthography system and method is obtained based on laser radar LIDAR cloud data |
CN110766798A (en) * | 2019-11-30 | 2020-02-07 | 中铁一局集团有限公司 | Tunnel monitoring measurement result visualization method based on laser scanning data |
CN111325747A (en) * | 2020-03-19 | 2020-06-23 | 北京城建勘测设计研究院有限责任公司 | Disease detection method and device for rectangular tunnel |
CN112215958A (en) * | 2020-10-10 | 2021-01-12 | 北京工商大学 | Laser radar point cloud data projection method based on distributed computation |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109300126B (en) * | 2018-09-21 | 2022-01-07 | 重庆建工集团股份有限公司 | High-precision intelligent detection method for bridge diseases based on spatial positions |
Non-Patent Citations (1)
Title |
---|
Optimization of the deep-learning-based detection model for metro tunnel lining diseases; Xue Yadong; Gao Jian; Li Yicheng; Huang Hongwei; Journal of Hunan University (Natural Sciences), No. 07; full text * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 2023-12-21
Address after: No. 33 Fucheng Road, Haidian District, Beijing, 100048
Patentee after: BEIJING TECHNOLOGY AND BUSINESS University
Patentee after: NANJING SHULIANKONGJIAN PLOTTING TECHNOLOGY Co.,Ltd.
Address before: 33 Fucheng Road, Haidian District, Beijing, 100048
Patentee before: BEIJING TECHNOLOGY AND BUSINESS University
Patentee before: ZHEJIANG HUAZHAN INSTITUTE OF ENGINEERING RESEARCH AND DESIGN