WO2022052790A1 - Underground pipeline detection method and system based on ground penetrating radar and deep learning - Google Patents

Underground pipeline detection method and system based on ground penetrating radar and deep learning

Info

Publication number
WO2022052790A1
WO2022052790A1, PCT/CN2021/113749, CN2021113749W
Authority
WO
WIPO (PCT)
Prior art keywords
ground penetrating
penetrating radar
data
underground
sample data
Prior art date
Application number
PCT/CN2021/113749
Other languages
English (en)
French (fr)
Inventor
刘海
孟旭
刘超
崔杰
Original Assignee
广州大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州大学 filed Critical 广州大学
Publication of WO2022052790A1 publication Critical patent/WO2022052790A1/zh
Priority to US18/063,718 priority Critical patent/US20230108634A1/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00: Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02: Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/06: Systems determining position data of a target
    • G01S13/86: Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/88: Radar or analogous systems specially adapted for specific applications
    • G01S13/885: Radar or analogous systems specially adapted for specific applications for ground probing
    • G01S13/887: Radar or analogous systems specially adapted for specific applications for detection of concealed objects, e.g. contraband or weapons
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02: Details of systems according to group G01S13/00
    • G01S7/41: Details using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/411: Identification of targets based on measurements of radar reflectivity
    • G01S7/417: Target characterisation involving the use of neural networks
    • G01S7/04: Display arrangements
    • G01S7/06: Cathode-ray tube displays or other two dimensional or three-dimensional displays
    • G01S7/10: Providing two-dimensional and co-ordinated display of distance and direction
    • G01S7/16: Signals displayed as intensity modulation with rectangular co-ordinates representing distance and bearing, e.g. type B

Definitions

  • the invention relates to the field of engineering non-destructive testing, in particular to an underground pipeline detection method and system based on ground penetrating radar and deep learning.
  • the underground pipe network is an essential part of the operation and development of a modern city. As important infrastructure, the underground pipe network not only supplies urban residents with essential living materials, but also bears the responsibility of providing the basic resources and energy for the city's production and development. During urban construction, accurate layout drawings of the underground pipelines in the construction area are often lacking, and there is no fast and accurate pipeline detection method, so the accurate distribution of the underground pipelines may not be obtained; as a result, pipelines cannot be effectively avoided during construction, which causes pipeline damage, affects the normal life of urban residents and the healthy operation of the city, and may even cause a series of safety accidents.
  • Ground Penetrating Radar is a non-destructive detection method that detects the internal structure and distribution of a medium by transmitting and receiving ultra-wideband high-frequency pulsed electromagnetic waves. It has the advantages of high detection efficiency, convenient operation and high detection accuracy, is widely used in the engineering field, and has gradually become the main means of underground pipeline detection. However, because data are collected quickly, a large amount of data is generated during underground pipeline surveys. Manual interpretation of the data is time-consuming and labor-intensive, and relies heavily on the expertise and engineering experience of practitioners. In actual detection applications, the underground pipeline target appears in ground penetrating radar data in the form of a hyperbola, which has obvious characteristics and is clearly distinguishable from the background medium. How to utilize this imaging feature is a technical problem to be solved urgently by those skilled in the art.
  • Real-time kinematic (RTK) positioning is a differential technique that processes the carrier-phase observations of two measuring stations in real time and solves for coordinates from their difference.
  • an RTK measuring instrument can obtain centimeter-level positioning accuracy in real time, which helps to accurately locate underground pipelines when combined with ground penetrating radar and intelligent identification.
  • embodiments of the present invention provide a highly accurate and efficient underground pipeline detection method and system based on ground penetrating radar and deep learning.
  • a first aspect of the present invention provides an underground pipeline detection method based on ground penetrating radar and deep learning, including: obtaining sample data of known underground pipelines through ground penetrating radar, and establishing an image database according to the sample data; training a YOLOv3 model according to the image database, the YOLOv3 model being used to identify the hyperbola data of underground pipelines; detecting underground pipeline targets in measured radar images through the YOLOv3 model; and locating the position of the underground pipelines by combining RTK equipment and ground penetrating radar.
  • obtaining sample data of known underground pipelines through ground penetrating radar and establishing an image database according to the sample data includes: obtaining scan data of known underground pipelines through ground penetrating radar; converting the scan data into grayscale image data; screening target image data from the grayscale image data, the target image data containing pipeline hyperbolas; performing data augmentation on the target image data; and cropping all augmented images to obtain the sample data.
  • constructing an image database from the sample data includes: annotating the hyperbola targets in the sample data with rectangular ground-truth boxes to obtain their coordinate information; dividing the sample data into positive and negative samples according to the coordinate information; and constructing the image database from the positive and negative samples.
  • the basic network of the YOLOv3 model is the Darknet deep learning framework.
  • the Darknet deep learning framework consists of 53 convolutional layers.
  • each of the convolutional layers includes a batch normalization layer and a leaky rectified linear unit (Leaky ReLU) layer.
  • the batch normalization layer is used to speed up network training and network convergence.
  • the Leaky ReLU layer is used to introduce a leak value in the negative half-interval of the rectified linear unit function, so that neurons can continue to learn after the rectified linear unit function enters the negative interval.
  • a second aspect of the present invention provides an underground pipeline detection system based on ground penetrating radar and deep learning, including:
  • a database building module, used to obtain sample data of known underground pipelines through ground penetrating radar and build an image database according to the sample data;
  • an identification module, used to train a YOLOv3 model according to the image database, the YOLOv3 model being used to identify the hyperbola data of underground pipelines;
  • a detection module, used to detect underground pipeline targets in measured radar images through the YOLOv3 model;
  • a positioning module, used to locate the position of the underground pipelines by combining RTK equipment and ground penetrating radar.
  • in embodiments of the present invention, sample data of known underground pipelines are obtained by ground penetrating radar, and an image database is established according to the sample data; a YOLOv3 model is obtained by training according to the image database and is used to identify the hyperbola data of underground pipelines; the underground pipeline targets in measured radar images are detected by the YOLOv3 model.
  • the present invention is based on ground penetrating radar and the YOLOv3 model, can accurately identify pipeline hyperbola targets in ground penetrating radar images, improves detection efficiency and saves time and cost.
  • by combining RTK equipment with the ground penetrating radar, the precise geographic location coordinates of the underground pipelines can be obtained.
  • FIG. 1 is a schematic overall flow diagram of an underground pipeline detection method based on ground penetrating radar and deep learning provided by an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a ground penetrating radar detecting an underground pipeline according to an embodiment of the present invention and a corresponding hyperbola of the pipeline in the ground penetrating radar image;
  • FIG. 3 is a schematic diagram of a YOLOv3-P model detection process in an embodiment of the present invention.
  • FIG. 4 shows the result of the YOLOv3-P model identifying underground pipeline targets in an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of the use of an RTK measuring instrument provided by an embodiment of the present invention.
  • an embodiment of the present invention provides an underground pipeline detection method based on ground penetrating radar and deep learning. As shown in FIG. 1 , the method includes the following steps:
  • ground penetrating radar is used to survey known underground pipelines and collect data to make a sample set; the pipeline hyperbola in each sample image is marked with a ground-truth box, and an image database is established.
  • the embodiment of the present invention uses samples to train a YOLOv3 model to obtain a YOLOv3 model capable of identifying pipeline hyperbolas.
  • the YOLOv3 model is used to intelligently identify the ground penetrating radar profile image of the newly collected underground pipeline.
  • FIG. 5 is a schematic diagram of the use of an RTK measuring instrument, which is used to precisely locate the position of an underground pipeline.
  • the example of the present invention combines the RTK measuring instrument and the ground penetrating radar.
  • the base-station antenna and base-station receiver are set up, the rover (mobile-station) receiver is set up above the ground penetrating radar, and it is connected to the radar acquisition unit by a data cable.
  • while the ground penetrating radar collects electromagnetic wave data, the receiver records the longitude, latitude and elevation of the positions the ground penetrating radar passes and attaches them to each trace of ground penetrating radar data, so that the positions traversed by the ground penetrating radar can be determined with centimeter-level positioning accuracy.
  • step S1 includes S11-S16:
  • the data used to make the sample set are real data collected in the field, which have high application value and more truthfully reflect the external factors at work during data collection, such as noise interference, the cluttered background caused by soil inhomogeneity, and human operation.
  • the effects of these disturbances are faithfully reflected in the samples, making the trained YOLOv3 model more robust and better suited to practical engineering applications.
  • the method of making the sample set for training the model in the embodiment of the present invention is to convert the underground pipeline B-scan data collected by the ground penetrating radar into grayscale images in JPG format and, through data screening, retain the image data containing pipeline hyperbolas; the amount of data is expanded through data augmentation strategies such as 0.8x, 1.5x and 2x scaling and horizontal flipping, and all images are cropped to 416×416 pixels to obtain a sample set with a sample size of 416×416.
  • since the collected data are not in JPG format, they cannot be used in the form of pictures to train the YOLOv3 model.
  • the data are converted into grayscale images in JPG format by software (such as MATLAB), and the data containing pipeline hyperbolas are retained through data screening.
  • the data volume is enlarged through data augmentation strategies such as 0.8x, 1.5x and 2x scaling and horizontal flipping, and all images are cropped to 416×416 pixels to obtain a sample set with a sample size of 416×416.
  • the augmentation strategy helps to obtain a large amount of actually collected data quickly, which can greatly save manpower and material resources.
  • the sample set includes positive samples and negative samples, and the hyperbola targets in the samples are manually annotated, that is, each hyperbola target is completely framed by a rectangular ground-truth box, and the coordinate information of the ground-truth box is recorded.
  • the sample set is divided into two parts, a training set and a test set, and the ratio of the number of samples in the training set to the test set is 3:1.
  • the YOLOv3 model learns target features by supervised learning; therefore, the target in each sample needs to be marked by manual annotation.
  • the annotation marks the target with a rectangular ground-truth box, recording the coordinates of the upper-left corner (x_min, y_min) and the lower-right corner (x_max, y_max) of the target rectangle, and at the same time a class label (class) is attached to the target.
  • the basic network of the YOLOv3 model is the Darknet deep learning framework, which consists of 53 convolutional layers, each followed by a batch normalization layer and a Leaky ReLU layer.
  • the role of the batch normalization layer is to speed up the training and convergence of the network, prevent vanishing gradients and prevent overfitting;
  • the role of the Leaky ReLU layer is to introduce a leak value in the negative half-interval of the rectified linear unit function, solving the problem that neurons stop learning once the rectified linear unit function enters the negative interval, while maintaining its sparse activation, which helps the network better mine relevant image features and fit the training data.
  • the underground environment of the pipelines is complex, so the obtained detection data have a low signal-to-noise ratio, which places high demands on model accuracy.
  • the YOLOv3 model can recognize ground penetrating radar images of underground pipelines well in such complex environments and meets the requirements of engineering applications.
  • during training, the model continuously learns the features of the pipeline hyperbola, and the loss value of the loss function is calculated to adjust, by back-propagation, the weight parameters of the convolution kernels and pooling kernels in the basic network.
  • during detection, the target hyperbola is identified by generating prediction boxes on the newly collected GPR image, the position of the hyperbola is predicted, and logistic regression is used to predict the target confidence.
  • the loss function of the YOLOv3 model is mainly composed of a target localization loss, a target confidence loss and a target classification loss; the target confidence loss and the target classification loss use the binary cross-entropy loss, which can cope with more complex classification scenarios.
  • FIG. 3 is a schematic diagram of the YOLOv3 model detection process in an embodiment of the present invention; the figure depicts the algorithm structure used by YOLOv3 when extracting, learning, identifying and predicting image features.
  • during training, the ground-truth boxes annotated in the samples are matched with the default boxes by intersection over union.
  • the feature content of default boxes whose matching value is greater than 0.5 is regarded as positive examples (that is, containing a target), and the rest are negative examples.
  • positive examples and negative examples are used to calculate the loss function of the current model for target detection.
  • the loss function consists of a confidence loss, a target class loss and a localization loss; the larger the loss value, the worse the detection ability of the model, and the smaller the loss value, the stronger the detection ability.
  • the target confidence loss is calculated using the binary cross-entropy loss, with the symbols in the loss expression defined as follows:
  • o represents the matching value between a default box and a ground-truth box;
  • c represents the predicted probability that a default box corresponds to class p;
  • L conf (o, c) represents the confidence loss;
  • sigmoid(c i) represents the nonlinear (sigmoid) activation function used for classification;
  • o i ∈ {1, 0}: o i = 1 when the default box matches a ground-truth box, and o i = 0 otherwise.
  • the category loss is also calculated using the binary cross-entropy loss, with the symbols defined as follows:
  • L cla (O, C) represents the target category loss;
  • Sigmoid(C ij) represents the nonlinear (sigmoid) activation function used for classification;
  • O ij ∈ {1, 0} indicates whether an object of the j-th class is actually present in predicted target bounding box i: 0 means absent, 1 means present.
  • the localization loss is calculated as the sum of the squares of the differences between the true offset values and the predicted offset values:
  • L loc (l, g) represents the localization loss; l denotes the predicted x-coordinate, y-coordinate, width and height offsets of predicted target bounding box i, and g denotes the corresponding x-coordinate, y-coordinate, width and height offsets between the ground-truth box matched to bounding box i and the default box.
  • b x , b y , b w and b h are the parameters of the predicted target rectangle;
  • c x , c y , p w and p h represent the rectangle parameters of the default box;
  • g x , g y , g w and g h are the parameters of the matched ground-truth target rectangle.
  • the loss function is obtained by the weighted sum of confidence loss, localization loss and target classification loss, and its expression is as follows:
  • L(O, o, C, c, l, g) = λ 1 L conf (o, c) + λ 2 L cla (O, C) + λ 3 L loc (l, g)
  • L(O, o, C, c, l, g) represents the total loss function
  • L conf (o, c) represents the confidence loss
  • L cla (O, C) represents the target category loss
  • L loc (l , g) represents the localization loss
  • ⁇ 1 , ⁇ 2 , ⁇ 3 are balance coefficients.
  • Model training is an iterative process that makes the loss value smaller and smaller.
  • each computed loss value is back-propagated through the model network to update and adjust the weight values in the hidden layers, continuously improving the network's ability to extract the features of the target in the image, so that the prediction box gets closer and closer to the ground-truth box, and the model finally acquires the ability to detect the specific target.
  • in step S3, the YOLOv3 model is used to intelligently identify newly collected GPR profile images, so as to realize non-destructive detection of underground pipelines.
  • when the YOLOv3 model has been trained, it has the ability to identify pipeline hyperbolas in GPR images; therefore, when a newly collected GPR data image is processed by the YOLOv3 model, the pipeline hyperbolas in the image will be automatically selected by rectangular prediction boxes, and the confidence of each hyperbola will be displayed above its rectangular box.
  • the YOLOv3 model intelligently recognizes the hyperbolas of underground pipelines in ground penetrating radar images; in application, it can assist detection personnel in interpreting the data, feed back target information in time, improve detection efficiency, shorten the detection period, and save time and cost, and it has significant economic and social value.
  • Figure 4 shows the effect of YOLOv3 model identifying underground pipeline targets.
  • the hyperbola in the figure is the imaging of the underground pipeline in the ground penetrating radar image.
  • the boxes 401 and 402 in the figure are the prediction boxes for target recognition.
  • the word "hyperbola" in a box means that the classification result is the hyperbola class, and the number on the box is the detection and recognition confidence (the recognition confidence of box 401 is 0.93; the recognition confidence of box 402 is 0.94).
  • the present invention provides an underground pipeline detection system based on ground penetrating radar and deep learning, including:
  • a database building module, used to obtain sample data of known underground pipelines through ground penetrating radar and build an image database according to the sample data;
  • an identification module, used to train a YOLOv3 model according to the image database, the YOLOv3 model being used to identify the hyperbola data of underground pipelines;
  • a detection module, used to detect underground pipeline targets in measured radar images through the YOLOv3 model;
  • a positioning module, used to locate the position of the underground pipelines by combining RTK equipment and ground penetrating radar.
  • the method and system for intelligent detection of underground pipelines based on GPR and the YOLOv3 model proposed by the embodiments of the present invention can accurately identify pipeline hyperbola targets in GPR images and locate the precise position of the pipelines; in practical application, they can assist detection personnel in data interpretation, feed back target information in time, improve detection efficiency, shorten the detection cycle, and save time and cost, and they have significant economic and social value.
  • YOLOv3 is a target recognition model that has no precedent of application in the field of intelligent ground penetrating radar recognition.
  • the invention can effectively combine the deep learning model and the ground penetrating radar to realize the automatic identification of underground pipelines, greatly improve the detection efficiency of the detection personnel, and meet the requirements of engineering detection.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Geophysics And Detection Of Objects (AREA)

Abstract

An underground pipeline detection method and system based on ground penetrating radar and deep learning. The method includes: acquiring sample data of known underground pipelines by ground penetrating radar, and building an image database from the sample data; training a YOLOv3 model on the image database, the YOLOv3 model being used to recognize the hyperbola data of underground pipelines; detecting underground pipeline targets in measured radar images with the YOLOv3 model; and precisely locating the pipeline position with an RTK measuring instrument. Based on ground penetrating radar and the YOLOv3 model, the method can accurately identify pipeline hyperbola targets in ground penetrating radar images, improves detection efficiency, saves time and cost, and can be widely applied in the field of engineering non-destructive testing.

Description

Underground Pipeline Detection Method and System Based on Ground Penetrating Radar and Deep Learning
Technical Field
The present invention relates to the field of engineering non-destructive testing, and in particular to an underground pipeline detection method and system based on ground penetrating radar and deep learning.
Background Art
The underground pipe network is an essential part of the operation and development of a modern city. As important infrastructure, the underground pipe network not only supplies urban residents with essential living materials, but also bears the responsibility of providing the basic resources and energy for the city's production and development. During urban construction, accurate layout drawings of the underground pipelines in the construction area are often lacking, and there is no fast and accurate pipeline detection method, so the accurate distribution of the underground pipelines may not be obtained; as a result, pipelines cannot be effectively avoided during construction, which causes pipeline damage, affects the normal life of urban residents and the healthy operation of the city, and may even cause a series of safety accidents.
Ground Penetrating Radar (GPR) is a non-destructive detection method that probes the internal structure and distribution of a medium by transmitting and receiving ultra-wideband high-frequency pulsed electromagnetic waves. It offers high detection efficiency, convenient operation and high detection accuracy, is widely used in the engineering field, and has gradually become the main means of underground pipeline detection. However, because data are collected quickly, a large amount of data is generated during underground pipeline surveys. Manual interpretation of the data is time-consuming and labor-intensive, and relies heavily on the expertise and engineering experience of practitioners. In practical detection, an underground pipeline target appears in ground penetrating radar data as a hyperbola, which has distinct features and is clearly distinguishable from the background medium. How to exploit this imaging feature is a technical problem to be urgently solved by those skilled in the art.
Real-time kinematic (RTK) carrier-phase differential positioning is a differential method that processes the carrier-phase observations of two measuring stations in real time and solves for coordinates from their difference. An RTK measuring instrument can obtain centimeter-level positioning accuracy in real time, which helps to precisely locate underground pipelines when combined with ground penetrating radar and intelligent recognition.
Summary of the Invention
In view of this, embodiments of the present invention provide an accurate and efficient underground pipeline detection method and system based on ground penetrating radar and deep learning.
A first aspect of the present invention provides an underground pipeline detection method based on ground penetrating radar and deep learning, comprising:
acquiring sample data of known underground pipelines by ground penetrating radar, and building an image database from the sample data;
training a YOLOv3 model on the image database, the YOLOv3 model being used to recognize the hyperbola data of underground pipelines;
detecting underground pipeline targets in measured radar images with the YOLOv3 model;
locating the position of the underground pipelines by combining RTK equipment with the ground penetrating radar.
In some embodiments, acquiring sample data of known underground pipelines by ground penetrating radar and building an image database from the sample data comprises:
acquiring scan data of known underground pipelines by ground penetrating radar;
converting the scan data into grayscale image data;
screening target image data from the grayscale image data, the target image data containing pipeline hyperbolas;
performing data augmentation on the target image data;
cropping all images obtained after the data augmentation to obtain the sample data;
building the image database from the sample data.
In some embodiments, building the image database from the sample data comprises:
annotating the hyperbola targets in the sample data with rectangular ground-truth boxes to obtain the coordinate information of the rectangular ground-truth boxes;
dividing the sample data into positive samples and negative samples according to the coordinate information;
building the image database from the positive samples and the negative samples.
In some embodiments, the basic network of the YOLOv3 model is the Darknet deep learning framework;
the Darknet deep learning framework consists of 53 convolutional layers;
each of the convolutional layers includes a batch normalization layer and a leaky rectified linear unit (Leaky ReLU) layer.
In some embodiments, the batch normalization layer is used to accelerate network training and network convergence;
the Leaky ReLU layer is used to introduce a leak value in the negative half-interval of the rectified linear unit function, so that neurons can continue to learn after the rectified linear unit function enters the negative interval.
A second aspect of the present invention provides an underground pipeline detection system based on ground penetrating radar and deep learning, comprising:
a database building module, used to acquire sample data of known underground pipelines by ground penetrating radar and build an image database from the sample data;
a recognition module, used to train a YOLOv3 model on the image database, the YOLOv3 model being used to recognize the hyperbola data of underground pipelines;
a detection module, used to detect underground pipeline targets in measured radar images with the YOLOv3 model;
a positioning module, used to locate the position of the underground pipelines by combining RTK equipment with the ground penetrating radar.
In embodiments of the present invention, sample data of known underground pipelines are acquired by ground penetrating radar, and an image database is built from the sample data; a YOLOv3 model is trained on the image database and used to recognize the hyperbola data of underground pipelines; underground pipeline targets in measured radar images are detected with the YOLOv3 model. Based on ground penetrating radar and the YOLOv3 model, the present invention can accurately identify pipeline hyperbola targets in ground penetrating radar images, which improves detection efficiency and saves time and cost; meanwhile, by combining RTK equipment with the ground penetrating radar, the precise geographic coordinates of the underground pipelines are obtained.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is an overall flow diagram of the underground pipeline detection method based on ground penetrating radar and deep learning provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a ground penetrating radar detecting an underground pipeline according to an embodiment of the present invention, and of the hyperbola corresponding to the pipeline in the ground penetrating radar image;
FIG. 3 is a schematic diagram of the detection process of the YOLOv3-P model in an embodiment of the present invention;
FIG. 4 shows the result of the YOLOv3-P model identifying underground pipeline targets in an embodiment of the present invention;
FIG. 5 is a schematic diagram of the use of the RTK measuring instrument provided by an embodiment of the present invention.
Detailed Description of the Embodiments
The present invention is further explained and described below with reference to the drawings and specific embodiments. The step numbers in the embodiments of the present invention are provided only for convenience of description; they impose no limitation on the order of the steps, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
To address the problems of the prior art, an embodiment of the present invention provides an underground pipeline detection method based on ground penetrating radar and deep learning. As shown in FIG. 1, the method includes the following steps.
S1: acquiring sample data of known underground pipelines by ground penetrating radar, and building an image database from the sample data.
Specifically, in the embodiment of the present invention, ground penetrating radar is used to survey known underground pipelines and collect data to make a sample set; the pipeline hyperbola in each sample image is marked with a ground-truth box, and an image database is built.
S2: training a YOLOv3 model on the image database, the YOLOv3 model being used to recognize the hyperbola data of underground pipelines.
As shown in FIG. 2, x_0 is the distance between the transmitting and receiving antennas, x_i is the x-coordinate of the antenna at position i, R is the diameter of the underground pipeline, and z is the burial depth of the pipeline. The figure illustrates the imaging principle of ground penetrating radar and the imaging characteristics of an underground pipeline in a ground penetrating radar profile.
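As a rough illustration of why a buried pipe images as a hyperbola, the sketch below computes an idealized two-way travel time for each antenna position along a survey line; the transmitter-receiver offset x_0 is neglected, and the wave velocity, pipe position, depth and radius are assumed illustrative values, not values taken from the patent.

```python
import numpy as np

# Idealized two-way travel time to a buried pipe, ignoring the Tx-Rx offset x_0.
# All numeric values below are illustrative assumptions.
v = 0.1          # wave velocity in the soil, m/ns (assumed)
x_pipe = 5.0     # horizontal position of the pipe axis, m (assumed)
z = 1.0          # burial depth to the pipe top, m (assumed)
R = 0.15         # pipe radius, m (assumed)

x = np.linspace(0.0, 10.0, 501)                      # antenna positions along the line, m
d = np.sqrt((x - x_pipe) ** 2 + (z + R) ** 2) - R     # antenna-to-pipe-surface distance
t = 2.0 * d / v                                      # two-way travel time, ns

# t(x) has its apex directly above the pipe and opens downward in a B-scan,
# which is the hyperbolic signature the YOLOv3 model is trained to recognize.
print(f"apex travel time: {t.min():.2f} ns at x = {x[np.argmin(t)]:.2f} m")
```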
Specifically, in the embodiment of the present invention, the samples are used to train the YOLOv3 model, so as to obtain a YOLOv3 model capable of recognizing pipeline hyperbolas.
S3: detecting underground pipeline targets in measured radar images with the YOLOv3 model.
Specifically, in the embodiment of the present invention, the YOLOv3 model is used to intelligently recognize newly collected ground penetrating radar profile images of underground pipelines.
S4: locating the position of the underground pipelines by combining RTK equipment with the ground penetrating radar.
Specifically, FIG. 5 is a schematic diagram of the use of the RTK measuring instrument, which is used to precisely locate the position of underground pipelines. In the example of the present invention, the RTK measuring instrument is combined with the ground penetrating radar: a base-station antenna and a base-station receiver are first set up, and a rover receiver is mounted above the ground penetrating radar and connected to the radar acquisition unit by a data cable, so that while the ground penetrating radar collects electromagnetic wave data, the receiver records the longitude, latitude and elevation of the positions the radar passes and attaches them to each trace of ground penetrating radar data. The positions traversed by the ground penetrating radar can thus be determined with centimeter-level positioning accuracy.
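A minimal sketch of how the per-trace georeferencing described above might be organized in software; the record layout, the field names and the linear interpolation between RTK fixes are illustrative assumptions rather than the patent's data format.

```python
from dataclasses import dataclass
from bisect import bisect_left

@dataclass
class RtkFix:
    t: float        # receiver timestamp, s
    lon: float      # longitude, deg
    lat: float      # latitude, deg
    elev: float     # elevation, m

def georeference_trace(trace_time: float, fixes: list[RtkFix]) -> tuple[float, float, float]:
    """Attach a rover position to one GPR trace by linear interpolation
    between the two RTK fixes that bracket the trace timestamp."""
    times = [f.t for f in fixes]
    i = bisect_left(times, trace_time)
    if i <= 0:
        f = fixes[0]
        return f.lon, f.lat, f.elev
    if i >= len(fixes):
        f = fixes[-1]
        return f.lon, f.lat, f.elev
    a, b = fixes[i - 1], fixes[i]
    w = (trace_time - a.t) / (b.t - a.t)
    return (a.lon + w * (b.lon - a.lon),
            a.lat + w * (b.lat - a.lat),
            a.elev + w * (b.elev - a.elev))
```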
In some embodiments, step S1 includes steps S11 to S16.
S11: acquiring scan data of known underground pipelines by ground penetrating radar.
The data used to make the sample set are real data collected in the field, which have high application value and more truthfully reflect the external factors at work during data acquisition, such as noise interference, the cluttered background caused by soil inhomogeneity, and the influence of human operation. The effects of these disturbances are faithfully reflected in the samples, making the trained YOLOv3 model more robust and better suited to practical engineering applications.
S12: converting the scan data into grayscale image data.
S13: screening target image data from the grayscale image data, the target image data containing the hyperbolas corresponding to pipelines.
S14: performing data augmentation on the target image data.
S15: cropping all images obtained after the data augmentation to obtain the sample data.
S16: building the image database from the sample data.
Specifically, the method of making the sample set used to train the model in the embodiment of the present invention is to convert the underground pipeline B-scan data collected by the ground penetrating radar into grayscale images in JPG format and, through data screening, retain the image data containing pipeline hyperbolas; the amount of data is expanded by data augmentation strategies such as 0.8x, 1.5x and 2x scaling and horizontal flipping, and all images are cropped to 416×416 pixels to obtain a sample set with a sample size of 416×416.
Since the collected data are not in JPG format, they cannot be fed to the YOLOv3 model in the form of pictures; the data are therefore converted into JPG grayscale images by software (for example MATLAB), and only the data containing pipeline hyperbolas are retained after screening before the augmentation and cropping described above are applied. The augmentation strategy makes it possible to obtain a large amount of realistic field data quickly, greatly saving manpower and material resources.
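A minimal sketch of the sample-preparation pipeline described above (grayscale conversion, 0.8x/1.5x/2x scaling, horizontal flipping, and cropping to 416×416), written with NumPy and Pillow; the min-max amplitude mapping and the pad-then-center-crop policy are assumptions made for illustration, since the patent does not specify them.

```python
import numpy as np
from PIL import Image

def bscan_to_grayscale(bscan: np.ndarray) -> Image.Image:
    """Map a 2-D B-scan amplitude matrix to an 8-bit grayscale image
    (simple min-max scaling; the patent does not specify the mapping)."""
    a = bscan.astype(np.float64)
    a = (a - a.min()) / (a.max() - a.min() + 1e-12)
    return Image.fromarray((a * 255).astype(np.uint8), mode="L")

def augment_and_crop(img: Image.Image, size: int = 416) -> list[Image.Image]:
    """Apply the scaling and flipping strategies mentioned in the text,
    then pad and center-crop every variant to size x size."""
    variants = [img, img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)]
    for s in (0.8, 1.5, 2.0):
        w, h = img.size
        variants.append(img.resize((int(w * s), int(h * s)), Image.Resampling.BILINEAR))
    out = []
    for v in variants:
        canvas = Image.new("L", (max(v.width, size), max(v.height, size)), 0)
        canvas.paste(v, (0, 0))
        left = (canvas.width - size) // 2
        top = (canvas.height - size) // 2
        out.append(canvas.crop((left, top, left + size, top + size)))
    return out

# Usage idea: samples = augment_and_crop(bscan_to_grayscale(raw_bscan)),
# followed by saving each sample as a JPG for manual annotation.
```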
In some embodiments, the sample set contains positive samples and negative samples, and the hyperbola targets in the samples are annotated manually, that is, each hyperbola target is completely framed by a rectangular ground-truth box and the coordinate information of the ground-truth box is recorded. The sample set is divided into a training set and a test set, with a 3:1 ratio between the numbers of training and test samples.
The YOLOv3 model learns target features by supervised learning, so the target in every sample must be labeled by manual annotation. The annotation marks the target with a rectangular ground-truth box, recording the coordinates of the upper-left corner (x_min, y_min) and the lower-right corner (x_max, y_max) of the target rectangle, and a class label (class) is attached to the target at the same time.
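A minimal sketch of one possible annotation record and of the 3:1 training/test split described above; the tuple layout, the fixed random seed and the single "hyperbola" class name are illustrative assumptions rather than the patent's file format.

```python
import random
from typing import NamedTuple

class BoxLabel(NamedTuple):
    image_path: str
    x_min: int
    y_min: int
    x_max: int
    y_max: int
    cls: str = "hyperbola"   # single target class in this application

def split_train_test(labels: list[BoxLabel], ratio: float = 0.75, seed: int = 0):
    """Shuffle the annotated samples and split them 3:1 into training and test sets."""
    items = labels[:]
    random.Random(seed).shuffle(items)
    cut = int(len(items) * ratio)
    return items[:cut], items[cut:]

# Example annotation for one 416 x 416 sample image (coordinates are made up):
train, test = split_train_test([BoxLabel("sample_0001.jpg", 120, 85, 260, 190)])
```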
In some embodiments, the basic network of the YOLOv3 model is the Darknet deep learning framework, composed of 53 convolutional layers, each followed by a batch normalization layer and a leaky rectified linear unit (Leaky ReLU) layer. The batch normalization layer accelerates the training and convergence of the network, prevents vanishing gradients and prevents overfitting; the Leaky ReLU layer introduces a leak value in the negative half-interval of the rectified linear unit function, solving the problem that neurons stop learning once the rectified linear unit function enters the negative interval, while maintaining its sparse activation, which helps the network better mine relevant image features and fit the training data. In practical detection applications, the underground environment of the pipelines is complex, so the acquired detection data have a low signal-to-noise ratio, which places high demands on model accuracy. Compared with other deep learning methods, the YOLOv3 model recognizes ground penetrating radar images of underground pipelines well in complex environments and meets the requirements of engineering applications.
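A minimal PyTorch sketch of the convolution, batch normalization and Leaky ReLU unit that Darknet-53 stacks, as described above; the 0.1 negative slope and the bias-free convolution are common YOLOv3 conventions assumed here rather than values stated in the patent.

```python
import torch
import torch.nn as nn

class ConvBNLeaky(nn.Module):
    """One Darknet-style unit: 2-D convolution, batch normalization, Leaky ReLU."""
    def __init__(self, in_ch: int, out_ch: int, kernel: int = 3, stride: int = 1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel, stride,
                              padding=kernel // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)       # speeds up training and convergence
        self.act = nn.LeakyReLU(0.1)           # keeps a small gradient for negative inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.conv(x)))

# A 416 x 416 GPR sample replicated to 3 channels, as YOLOv3 expects RGB-like input.
x = torch.randn(1, 3, 416, 416)
y = ConvBNLeaky(3, 32)(x)
print(y.shape)   # torch.Size([1, 32, 416, 416])
```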
Specifically, during training, the model of the embodiment of the present invention continuously learns the features of the pipeline hyperbola and computes the loss value of the loss function to adjust, by back-propagation, the weight parameters of the convolution kernels and pooling kernels in the basic network. During detection, the target hyperbola is identified by generating prediction boxes on newly acquired ground penetrating radar images, the position of the hyperbola is predicted, and logistic regression is used to predict the target confidence. The loss function of the YOLOv3 model consists mainly of three parts: a target localization loss, a target confidence loss and a target classification loss; the target confidence loss and the target classification loss use the binary cross-entropy loss, which can cope with more complex classification scenarios.
FIG. 3 is a schematic diagram of the detection process of the YOLOv3 model in an embodiment of the present invention; the figure depicts the algorithm structure used by YOLOv3 to extract, learn, recognize and predict image features. During training, the ground-truth boxes annotated in the samples are matched with the default boxes by intersection over union; the feature content of default boxes whose matching value exceeds 0.5 is treated as positive examples (that is, containing a target), and the rest as negative examples. The positive and negative examples are used to compute the loss function of the current model for target detection, which consists of a confidence loss, a target class loss and a localization loss: the larger the loss value, the poorer the detection ability of the model, and the smaller the loss value, the stronger the detection ability. The target confidence loss is computed with the binary cross-entropy loss, expressed as follows:
$$L_{conf}(o,c) = -\sum_{i}\left[o_i \ln(\hat{c}_i) + (1-o_i)\ln(1-\hat{c}_i)\right]$$
$$\hat{c}_i = \mathrm{sigmoid}(c_i)$$
where o denotes the matching value between a default box and a ground-truth box; c denotes the predicted probability that a default box corresponds to class p; $\hat{c}_i$ denotes the predicted probability that the i-th default box corresponds to class p; $L_{conf}(o,c)$ denotes the confidence loss; $\mathrm{sigmoid}(c_i)$ denotes the nonlinear (sigmoid) activation used for classification; and $o_i \in \{1, 0\}$, with $o_i = 1$ when the default box matches a ground-truth box and $o_i = 0$ otherwise. Whether a positive example is matched to a ground-truth box or a negative example to the background, the higher the predicted probability, the smaller the loss value.
The class loss is also computed with the binary cross-entropy loss, expressed as follows:
$$L_{cla}(O,C) = -\sum_{i\in pos}\sum_{j\in cla}\left[O_{ij}\ln(\hat{C}_{ij}) + (1-O_{ij})\ln(1-\hat{C}_{ij})\right]$$
$$\hat{C}_{ij} = \mathrm{sigmoid}(C_{ij})$$
where $L_{cla}(O,C)$ denotes the target class loss; $\mathrm{Sigmoid}(C_{ij})$ denotes the nonlinear (sigmoid) activation used for classification; $O_{ij} \in \{1, 0\}$ indicates whether an object of the j-th class is actually present in predicted bounding box i, with 0 meaning absent and 1 meaning present; and $\hat{C}_{ij}$ denotes the sigmoid probability predicted by the network that an object of the j-th class is present in bounding box i.
The localization loss is computed as the sum of squared differences between the true offsets and the predicted offsets:
$$L_{loc}(l,g) = \sum_{i\in pos}\sum_{m\in\{x,y,w,h\}}\left(\hat{l}_i^m - \hat{g}_i^m\right)^2$$
$$\hat{g}_i^x = g_i^x - c_i^x,\qquad \hat{g}_i^y = g_i^y - c_i^y,\qquad \hat{g}_i^w = \ln\left(g_i^w / p_i^w\right),\qquad \hat{g}_i^h = \ln\left(g_i^h / p_i^h\right)$$
where $L_{loc}(l,g)$ denotes the localization loss; $\hat{l}_i^m$ ($m \in \{x, y, w, h\}$) denotes the predicted x-coordinate, y-coordinate, width and height offsets of bounding box i; $\hat{g}_i^x$ and $\hat{g}_i^y$ denote the x- and y-coordinate offsets between the ground-truth box matched to bounding box i and the default box, and $\hat{g}_i^w$ and $\hat{g}_i^h$ the corresponding width and height offsets; $\hat{l}$ represents the coordinate offsets of the predicted rectangle and $\hat{g}$ the coordinate offsets between the matched ground-truth box and the default box. $b_x$, $b_y$, $b_w$ and $b_h$ are the parameters of the predicted target rectangle; $c_x$, $c_y$, $p_w$ and $p_h$ are the rectangle parameters of the default box; and $g_x$, $g_y$, $g_w$ and $g_h$ are the parameters of the matched ground-truth target rectangle.
The loss function is obtained as the weighted sum of the confidence loss, the localization loss and the target classification loss, expressed as follows:
$$L(O,o,C,c,l,g) = \lambda_1 L_{conf}(o,c) + \lambda_2 L_{cla}(O,C) + \lambda_3 L_{loc}(l,g)$$
where $L(O,o,C,c,l,g)$ denotes the total loss function; $L_{conf}(o,c)$ the confidence loss; $L_{cla}(O,C)$ the target class loss; $L_{loc}(l,g)$ the localization loss; and $\lambda_1$, $\lambda_2$, $\lambda_3$ are balance coefficients.
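The short sketch below mirrors the weighted-sum structure above for a single matched default box, using binary cross-entropy for the confidence and class terms and squared offset error for the localization term; the balance coefficients and the example numbers are placeholders, not values given in the patent.

```python
import math

def bce(p: float, y: float, eps: float = 1e-12) -> float:
    """Binary cross-entropy between a predicted probability p and a 0/1 label y."""
    return -(y * math.log(p + eps) + (1.0 - y) * math.log(1.0 - p + eps))

def yolo_box_loss(conf_p, conf_y, cls_p, cls_y, loc_pred, loc_true,
                  lambdas=(1.0, 1.0, 1.0)) -> float:
    """Weighted sum L = l1*Lconf + l2*Lcla + l3*Lloc for one default box."""
    l_conf = bce(conf_p, conf_y)
    l_cla = sum(bce(p, y) for p, y in zip(cls_p, cls_y))
    l_loc = sum((lp - lt) ** 2 for lp, lt in zip(loc_pred, loc_true))
    l1, l2, l3 = lambdas
    return l1 * l_conf + l2 * l_cla + l3 * l_loc

# Example with made-up numbers: one positive box of the single "hyperbola" class.
loss = yolo_box_loss(conf_p=0.9, conf_y=1.0,
                     cls_p=[0.95], cls_y=[1.0],
                     loc_pred=[0.1, -0.2, 0.05, 0.0],
                     loc_true=[0.12, -0.18, 0.0, 0.02])
print(f"{loss:.4f}")
```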
Model training is an iterative process that makes the loss value smaller and smaller. In this process, each computed loss value is back-propagated through the model network to update and adjust the weights of the hidden layers, continuously improving the network's ability to extract the features of the targets in the images, so that the prediction boxes come ever closer to the ground-truth boxes, and the model finally acquires the ability to detect the specific target.
In step S3, the YOLOv3 model is used to intelligently recognize newly acquired ground penetrating radar profile images, achieving non-destructive detection of underground pipelines.
Once the YOLOv3 model has been trained, it is able to recognize pipeline hyperbolas in ground penetrating radar images. Therefore, when a newly acquired ground penetrating radar data image is processed by the YOLOv3 model, the pipeline hyperbolas in the image are automatically selected by rectangular prediction boxes, and the confidence of each hyperbola is displayed above its box. The intelligent recognition of underground pipeline hyperbolas in ground penetrating radar images by the YOLOv3 model can, in application, assist detection personnel in data interpretation, feed back target information in time, improve detection efficiency, shorten the detection cycle and save time and cost, and it has significant economic and social value.
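A minimal inference sketch for the step just described, assuming a trained detector exported as TorchScript under a hypothetical file name; the confidence threshold and the assumed output layout (one row per box: x_min, y_min, x_max, y_max, confidence) are illustrative choices, since the patent does not specify a serialization format.

```python
import torch
from PIL import Image, ImageDraw
from torchvision.transforms.functional import to_tensor

def detect_hyperbolas(model_path: str, image_path: str, conf_thresh: float = 0.5):
    """Run a trained (TorchScript-exported) detector on one 416 x 416 GPR image
    and draw prediction boxes with their confidence above each box."""
    model = torch.jit.load(model_path).eval()
    img = Image.open(image_path).convert("RGB").resize((416, 416))
    with torch.no_grad():
        # Assumed output: tensor of shape (N, 5) = (x_min, y_min, x_max, y_max, conf).
        boxes = model(to_tensor(img).unsqueeze(0))[0]
    draw = ImageDraw.Draw(img)
    for x0, y0, x1, y1, conf in boxes.tolist():
        if conf < conf_thresh:
            continue
        draw.rectangle([x0, y0, x1, y1], outline=(255, 0, 0), width=2)
        draw.text((x0, max(y0 - 12, 0)), f"hyperbola {conf:.2f}", fill=(255, 0, 0))
    return img

# Example (paths are hypothetical):
# detect_hyperbolas("yolov3_pipeline.ts", "profile_0001.jpg").show()
```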
FIG. 4 shows the result of the YOLOv3 model identifying underground pipeline targets. The hyperbolas in the figure are the images of underground pipelines in the ground penetrating radar profile; boxes 401 and 402 are the prediction boxes of target recognition; the word "hyperbola" inside a box means that the classification result is the hyperbola class, and the number on a box is the detection and recognition confidence (the recognition confidence of box 401 is 0.93 and that of box 402 is 0.94).
Corresponding to the method of FIG. 1, the present invention provides an underground pipeline detection system based on ground penetrating radar and deep learning, comprising:
a database building module, used to acquire sample data of known underground pipelines by ground penetrating radar and build an image database from the sample data;
a recognition module, used to train a YOLOv3 model on the image database, the YOLOv3 model being used to recognize the hyperbola data of underground pipelines;
a detection module, used to detect underground pipeline targets in measured radar images with the YOLOv3 model;
a positioning module, used to locate the position of the underground pipelines by combining RTK equipment with the ground penetrating radar.
In summary, the intelligent underground pipeline detection method and system based on ground penetrating radar and the YOLOv3 model proposed by the embodiments of the present invention can accurately identify pipeline hyperbola targets in ground penetrating radar images and locate the precise position of the pipelines. In practical application, they can assist detection personnel in data interpretation, feed back target information in time, improve detection efficiency, shorten the detection cycle and save time and cost, and they have significant economic and social value.
In addition, it should be noted that YOLOv3 is a target recognition model that has no precedent of application in the field of intelligent ground penetrating radar recognition. The present invention effectively combines a deep learning model with ground penetrating radar to achieve automatic recognition of underground pipelines, greatly improves the detection efficiency of inspection personnel, and meets the requirements of engineering inspection.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example" or "some examples" means that a particular feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic references to the above terms do not necessarily refer to the same embodiment or example. Moreover, the particular features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, a person of ordinary skill in the art will understand that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principle and purpose of the present invention, the scope of which is defined by the claims and their equivalents.
The preferred implementation of the present invention has been described in detail above, but the present invention is not limited to the described embodiments. Those skilled in the art can also make various equivalent modifications or substitutions without departing from the spirit of the present invention, and these equivalent modifications or substitutions are all included within the scope defined by the claims of the present application.

Claims (6)

  1. An underground pipeline detection method based on ground penetrating radar and deep learning, characterized by comprising:
    acquiring sample data of known underground pipelines by ground penetrating radar, and building an image database from the sample data;
    training a YOLOv3 model on the image database, the YOLOv3 model being used to recognize the hyperbola data of underground pipelines;
    detecting underground pipeline targets in measured radar images with the YOLOv3 model;
    locating the position of the underground pipelines by combining RTK equipment with the ground penetrating radar.
  2. The underground pipeline detection method based on ground penetrating radar and deep learning according to claim 1, characterized in that acquiring sample data of known underground pipelines by ground penetrating radar and building an image database from the sample data comprises:
    acquiring scan data of known underground pipelines by ground penetrating radar;
    converting the scan data into grayscale image data;
    screening target image data from the grayscale image data, the target image data containing pipeline hyperbolas;
    performing data augmentation on the target image data;
    cropping all images obtained after the data augmentation to obtain the sample data;
    building the image database from the sample data.
  3. The underground pipeline detection method based on ground penetrating radar and deep learning according to claim 2, characterized in that building the image database from the sample data comprises:
    annotating the hyperbola targets in the sample data with rectangular ground-truth boxes to obtain the coordinate information of the rectangular ground-truth boxes;
    dividing the sample data into positive samples and negative samples according to the coordinate information;
    building the image database from the positive samples and the negative samples.
  4. The underground pipeline detection method based on ground penetrating radar and deep learning according to claim 1, characterized in that the basic network of the YOLOv3 model is the Darknet deep learning framework;
    the Darknet deep learning framework consists of 53 convolutional layers;
    each of the convolutional layers includes a batch normalization layer and a leaky rectified linear unit layer.
  5. The underground pipeline detection method based on ground penetrating radar and deep learning according to claim 4, characterized in that
    the batch normalization layer is used to accelerate network training and network convergence;
    the leaky rectified linear unit layer is used to introduce a leak value in the negative half-interval of the rectified linear unit function, so that neurons can continue to learn after the rectified linear unit function enters the negative interval.
  6. An underground pipeline detection system based on ground penetrating radar and deep learning, characterized by comprising:
    a database building module, used to acquire sample data of known underground pipelines by ground penetrating radar and build an image database from the sample data;
    a recognition module, used to train a YOLOv3 model on the image database, the YOLOv3 model being used to recognize the hyperbola data of underground pipelines;
    a detection module, used to detect underground pipeline targets in measured radar images with the YOLOv3 model;
    a positioning module, used to locate the position of the underground pipelines by combining RTK equipment with the ground penetrating radar.
PCT/CN2021/113749 2020-09-11 2021-08-20 Underground pipeline detection method and system based on ground penetrating radar and deep learning WO2022052790A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/063,718 US20230108634A1 (en) 2020-09-11 2022-12-09 Ground penetrating radar and deep learning-based underground pipeline detection method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010951509.X 2020-09-11
CN202010951509.XA CN112130132B (zh) 2020-09-11 Underground pipeline detection method and system based on ground penetrating radar and deep learning

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/063,718 Continuation-In-Part US20230108634A1 (en) 2020-09-11 2022-12-09 Ground penetrating radar and deep learning-based underground pipeline detection method and system

Publications (1)

Publication Number Publication Date
WO2022052790A1 true WO2022052790A1 (zh) 2022-03-17

Family

ID=73845555

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/113749 WO2022052790A1 (zh) 2020-09-11 2021-08-20 基于探地雷达和深度学习的地下管线探测方法和系统

Country Status (3)

Country Link
US (1) US20230108634A1 (zh)
CN (1) CN112130132B (zh)
WO (1) WO2022052790A1 (zh)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112130132B (zh) * 2020-09-11 2023-08-29 广州大学 Underground pipeline detection method and system based on ground penetrating radar and deep learning
CN112859006B (zh) * 2021-01-11 2023-08-29 成都圭目机器人有限公司 Method for detecting metallic curved cylindrical structures in multi-channel ground penetrating radar data
CN113065617A (zh) * 2021-06-03 2021-07-02 中国南方电网有限责任公司超高压输电公司广州局 Object recognition method and apparatus, computer device and storage medium
CN114359369B (zh) * 2021-12-28 2024-05-03 华南农业大学 Fruit tree root system identification and localization method based on deep learning and ground penetrating radar
KR102519104B1 (ko) * 2022-09-28 2023-04-06 셀파이엔씨 주식회사 Underground structure diagnosis method using an artificial-intelligence-learning-based GPR image data collection method
CN115510176A (zh) * 2022-10-12 2022-12-23 杭州余杭建筑设计院有限公司 Underground pipeline detection method and system
CN116106833B (zh) * 2023-04-12 2023-07-04 中南大学 Deep-learning-based processing method and system for suppressing near-surface rebar echoes
CN116256720B (zh) * 2023-05-09 2023-10-13 武汉大学 Underground target detection method and apparatus based on three-dimensional ground penetrating radar, and electronic device
CN117289355B (zh) * 2023-09-26 2024-05-07 广东大湾工程技术有限公司 Underground pipeline detection data processing method
CN117092710B (zh) * 2023-10-16 2023-12-26 福建省冶金工业设计院有限公司 Underground line detection system for construction engineering survey
CN117784123A (zh) * 2023-11-15 2024-03-29 北京市燃气集团有限责任公司 Method and system for acquiring deeper underground medium data
CN118015010B (zh) * 2024-04-10 2024-07-05 中南大学 GPR instance segmentation method and apparatus, electronic device and storage medium
CN118153176B (zh) * 2024-05-09 2024-07-12 西华大学 Tie-rod tension optimization method based on a Transformer model and the GWO algorithm

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007085965A (ja) * 2005-09-26 2007-04-05 Fuji Heavy Ind Ltd Position information detection system
CN107817509A (zh) * 2017-09-07 2018-03-20 上海电力学院 Inspection robot navigation system and method based on RTK BeiDou and lidar
CN109829386B (zh) * 2019-01-04 2020-12-11 清华大学 Intelligent vehicle passable area detection method based on multi-source information fusion
CN110308444B (zh) * 2019-08-08 2021-03-09 中国矿业大学(北京) Intelligent road layer identification and interference source elimination method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2802303A1 * 1999-12-14 2001-06-15 Centre Nat Rech Scient Method for obtaining an image of the subsoil using a ground penetrating radar
US20120006116A1 * 2010-06-10 2012-01-12 Hwang Tae-Joon Device, System and Method For Locating a Pipe
CN109685011A * 2018-12-25 2019-04-26 北京华航无线电测量研究所 Underground pipeline detection and recognition method based on deep learning
CN110866545A * 2019-10-30 2020-03-06 中国地质大学(武汉) Automatic recognition method and system for pipeline targets in ground penetrating radar data
CN112130132A * 2020-09-11 2020-12-25 广州大学 Underground pipeline detection method and system based on ground penetrating radar and deep learning

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
FENG DESHAN, YANG ZI-LONG: "Automatic Recognition of Ground Penetrating Radar Image of Tunnel Liningstructure based on Deep Learning", PROGRESS IN GEOPHYSICS, ZHONGGUO KEXUEYUAN DIZHI YU DIQIU WULI YANJIUSUO, CN, vol. 35, no. 4, 30 December 2019 (2019-12-30), CN , pages 1552 - 1556, XP055909968, ISSN: 1004-2903, DOI: 10.6038/pg2020DD0325 *
HU HAOBANG, FANG HONGYUAN;WANG FUMING;DONG JIAXIU: "Intelligent Recognition of Pipeline Target Based on Faster R-CNN Algorithm for Ground Penetrating Radar", URBAN GEOTECHNICAL INVESTIGATION & SURVEYING, no. 3, 30 June 2020 (2020-06-30), pages 203 - 208, XP055909971, ISSN: 1672-8262 *
WANG YIJUN, CAO PEIPEI;WANG XUESONG;YANXINGYU: "Research on Insulator Self Explosion Detection Method Based on Deep Learning", JOURNAL OF NORTHEAST DIANLI UNIVERSITY, vol. 40, no. 3, 30 June 2020 (2020-06-30), pages 33 - 40, XP055909964, ISSN: 1005-2992, DOI: 10.19718/j.issn.1005-2992.2020-03-0033-08 *
YANG BISHENG, ZONG ZELIANG;CHEN CHI;SUN WENLU;MI XIAOXIN;WU WEITONG;HUANG RONGGANG: "Real Time Approach for Underground Objects Detection from Vehicle-Borne Ground Penetrating Radar", ACTA GEODAETICA ET CARTOGRAPHICA SINICA, vol. 49, no. 7, 15 July 2020 (2020-07-15), pages 874 - 882, XP055909959, ISSN: 1001-1595, DOI: 10.11947/j.AGCS.2020.20190293 *
ZHAO DI, YE SHENGBO;ZHOU BIN: "Ground Penetrating Radar Anomaly Detection based on Convolution Grad-CAM ", ELECTRONIC MEASUREMENT TECHNOLOGY, vol. 43, no. 10, 31 May 2020 (2020-05-31), pages 113 - 118, XP055909973, ISSN: 1002-7300, DOI: 10.19651/j.cnki.emt.2004094 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116256722A (zh) * 2023-05-16 2023-06-13 中南大学 Multiple-wave interference suppression method and device for ground penetrating radar B-scan images

Also Published As

Publication number Publication date
US20230108634A1 (en) 2023-04-06
CN112130132B (zh) 2023-08-29
CN112130132A (zh) 2020-12-25

Similar Documents

Publication Publication Date Title
WO2022052790A1 (zh) Underground pipeline detection method and system based on ground penetrating radar and deep learning
Liu et al. Detection and localization of rebar in concrete by deep learning using ground penetrating radar
Hou et al. Improved Mask R-CNN with distance guided intersection over union for GPR signature detection and segmentation
CN112395987B (zh) 基于无监督域适应cnn的sar图像目标检测方法
CN111598098B (zh) 一种基于全卷积神经网络的水尺水位线检测及有效性识别方法
CN111798411B (zh) 一种基于探地雷达和深度学习的混凝土内钢筋智能定位方法
CN111724355B (zh) 一种鲍鱼体型参数的图像测量方法
Zhong et al. A method for litchi picking points calculation in natural environment based on main fruit bearing branch detection
WO2021083394A1 (zh) 一种基于图谱灰度自适应选取的沥青路面水损害检测方法
CN112198170B (zh) 一种无缝钢管外表面三维检测中识别水滴的检测方法
CN108711172A (zh) 基于细粒度分类的无人机识别与定位方法
Hou et al. Review of GPR activities in Civil Infrastructures: Data analysis and applications
Hu et al. A study of automatic recognition and localization of pipeline for ground penetrating radar based on deep learning
CN110307903A (zh) 一种家禽特定部位无接触温度动态测量的方法
CN114359702A (zh) 一种基于Transformer的宅基地遥感图像违建识别方法及系统
CN113469097B (zh) 一种基于ssd网络的水面漂浮物多相机实时检测方法
Xiong et al. Automatic detection and location of pavement internal distresses from ground penetrating radar images based on deep learning
Li et al. Detection of the foreign object positions in agricultural soils using Mask-RCNN
Zhang et al. Research on pipeline defect detection based on optimized faster r-cnn algorithm
Xu et al. Detection method of tunnel lining voids based on guided anchoring mechanism
Jaufer et al. Deep learning based automatic hyperbola detection on GPR data for buried utility pipes mapping
CN114155428A (zh) 基于Yolo-v3算法的水下声呐侧扫图像小目标检测方法
Li et al. Defect detection of large wind turbine blades based on image stitching and improved Unet network
FAN et al. Intelligent antenna attitude parameters measurement based on deep learning ssd model
CN117409329B (zh) 用于三维探地雷达降低地下空洞检测虚警率的方法及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21865846

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21865846

Country of ref document: EP

Kind code of ref document: A1