CN117830340A - Ground penetrating radar target feature segmentation method, system, equipment and storage medium

Ground penetrating radar target feature segmentation method, system, equipment and storage medium

Info

Publication number
CN117830340A
Authority
CN
China
Prior art keywords
loss
image
model
obtaining
segmentation model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410017401.1A
Other languages
Chinese (zh)
Inventor
侯斐斐
乔博轩
王一军
樊欣宇
龚凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN202410017401.1A
Publication of CN117830340A

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the field of image segmentation, and particularly relates to a method, a system, equipment and a storage medium for segmenting target features of a ground penetrating radar. The method comprises the following steps: acquiring a training image and a corresponding label image; obtaining paired training images based on the training images and the label images; obtaining a segmentation model; inputting the paired training images into the segmentation model to obtain an output image; obtaining a loss value based on the output image, the label image and the segmentation model; adjusting model parameters of the segmentation model based on the loss value and the loss condition to obtain an optimized segmentation model; and obtaining target features based on the optimized segmentation model and the input image. By optimizing the segmentation model, the method and the device make the quality of the extracted target feature image higher.

Description

Ground penetrating radar target feature segmentation method, system, equipment and storage medium
Technical Field
The invention belongs to the field of image segmentation, and particularly relates to a method, a system, equipment and a storage medium for segmenting target features of a ground penetrating radar.
Background
Ground penetrating radar (GPR) identifies subsurface objects by analyzing discontinuities inside subsurface media. GPR excels at detecting hyperbolic morphological characteristics and is fast, effective and stable, which makes it crucial for identifying unknown underground objects; nevertheless, its use for image interpretation in the civil infrastructure field remains limited.
In the related art, deep learning methods are mainly adopted for target feature segmentation of GPR: high-dimensional features are extracted directly from the ground penetrating radar B-scan image through convolution operations, which removes the need for hand-crafted feature selection, parameter tuning and similar steps. In the field of GPR image interpretation, the mainstream deep models include target detection models such as Faster R-CNN and instance segmentation models such as Mask R-CNN. For example, Mask R-CNN has been developed for automatic detection of target features in GPR B-scan images and performs finer segmentation on the GPR images.
In the above related art, an improperly set loss function causes the generator of a conventional GPR model to produce erroneous images and leads to distortion or loss of image details when target features are extracted.
Disclosure of Invention
The invention aims to solve the technical problem of providing a method, a system, equipment and a storage medium for segmenting target features of a ground penetrating radar which, by optimizing the segmentation model, make the quality of the extracted target feature image higher and improve the accuracy of extracting target features from GPR images.
A ground penetrating radar target feature segmentation method comprises the following steps:
acquiring a training image and a corresponding label image;
obtaining paired training images based on the training images and the label images;
obtaining a segmentation model;
inputting the paired training images into the segmentation model to obtain an output image;
obtaining a loss value based on the output image, the label image and the segmentation model;
adjusting model parameters of the segmentation model based on the loss value and the loss condition to obtain an optimized segmentation model;
and obtaining target features based on the optimized segmentation model and the input image.
Optionally, the inputting the paired training images into the segmentation model to obtain an output image includes:
acquiring a first generator, a second generator, a first discriminator and a second discriminator based on the segmentation model;
the output image is obtained based on the first generator, the second generator, the first discriminator, the second discriminator, the paired training images, and the segmentation model.
Optionally, the obtaining the loss value based on the output image, the label image and the segmentation model includes:
obtaining a model loss formula, a periodic loss formula and a discrimination loss formula based on the segmentation model;
and obtaining the loss value based on the output image, the label image, the model loss formula, the periodic loss formula and the discrimination loss formula.
Optionally, the obtaining the loss value based on the output image, the label image, the model loss formula, the periodic loss formula, and the discrimination loss formula includes:
acquiring a cycle weight, a model weight and a discrimination weight;
obtaining model loss, period loss and discrimination loss based on the output image, the label image, the model loss formula, the period loss formula and the discrimination loss formula;
and obtaining the loss value based on the cycle weight, the cycle loss, the model weight, the model loss, the discrimination loss and the discrimination weight.
Optionally, the loss formula includes:
$$L_{CycleGAN} = -L_{GAN}(G, F, D_X, D_Y) + \lambda_{cyc} L_{cycle}(G, F) + \lambda_{id} L_{identity}(G, F);$$
wherein $\lambda_{cyc}$ and $\lambda_{id}$ represent the loss weights of the cycle loss and the discrimination loss, respectively; G and F are generator functions, and $D_X$ and $D_Y$ are discriminator functions;
obtaining image features based on the input image;
obtaining periodic loss based on the image characteristics and a periodic loss formula;
the periodic loss formula is as follows:
$$L_{cycle}(G, F) = \mathbb{E}_{x \sim p(x)}\left[\frac{1}{whd}\left\|\phi(G(F(x))) - \phi(x)\right\|_2^2\right] + \mathbb{E}_{y \sim p(y)}\left[\frac{1}{whd}\left\|\phi(F(G(y))) - \phi(y)\right\|_2^2\right]$$
wherein $\mathbb{E}[\cdot]$ is the expectation function, $\phi$ is the feature extractor, and w, h and d denote the width, height and depth of the feature space, respectively.
Optionally, the loss condition is:
$$\min_G \max_D L_{GAN}(G, D)$$
wherein $L_{GAN}(G, D)$ is a loss function.
A ground penetrating radar target feature segmentation system, comprising:
the first acquisition module is used for acquiring training images and corresponding label images;
the training image generation module is used for obtaining paired training images based on the training images and the label images;
the second acquisition module is used for acquiring the segmentation model;
the output module is used for inputting the paired training images into the segmentation model to obtain an output image;
the training module is used for obtaining a loss value based on the output image, the label image and the segmentation model;
the optimizing module is used for adjusting the model parameters of the segmentation model based on the loss value and the loss condition to obtain an optimized segmentation model;
and the extraction module is used for obtaining target characteristics based on the optimized segmentation model and the input image.
Optionally, the training module includes:
an acquisition unit configured to acquire a cycle weight, a model weight and a discrimination weight;
a calculation unit configured to obtain a model loss, a period loss and a discrimination loss based on the output image, the label image, the model loss formula, the period loss formula and the discrimination loss formula;
and the weighting unit is used for obtaining the loss value based on the cycle weight, the cycle loss, the model weight, the model loss, the discrimination loss and the discrimination weight.
A terminal device comprises a memory and a processor, the memory storing a computer program capable of running on the processor; when the processor loads and executes the computer program, any one of the above ground penetrating radar target feature segmentation methods is employed.
A computer readable storage medium having a computer program stored therein, the computer program when loaded and executed by a processor employing a ground penetrating radar target feature segmentation method as described above.
The invention has the beneficial effects that pairs of training images and label images are prepared; the first generator converts the domain-A image into a domain-B image, yielding the output result of generator A, i.e. the output image, i.e. the converted image from domain A to domain B; the second generator converts the output image of generator A (the domain-B image) back into the form of generator A's input, i.e. a domain-A image, realizing the conversion from domain B to domain A. During this image conversion process, the images produced by the generators become more and more vivid and the discriminators become stronger and stronger at identifying synthesized images; model training is completed when a balance point is finally reached, that is, when the loss condition is met, and the input image is then fed into the optimized segmentation model to extract the target features. By optimizing the segmentation model, the method and the device make the quality of the extracted target feature image higher.
Drawings
Fig. 1 is a schematic flow chart of one implementation of a method for segmenting a target feature of a ground penetrating radar according to an embodiment of the present application;
fig. 2 is a flow chart of a method for segmenting target features of a ground penetrating radar according to an embodiment of the present application;
fig. 3 is an image comparison chart of a ground penetrating radar target feature segmentation method according to an embodiment of the present application.
Detailed Description
A ground penetrating radar target feature segmentation method, as shown in figure 1, comprises the following steps:
s100, acquiring a training image and a corresponding label image.
Specifically, the training image is an image taken from a public dataset for training the segmentation model, and the label image is used for comparison with the image that the segmentation model generates from the training image, to verify the similarity between the generated image and the label image.
S110, obtaining paired training images based on the training images and the label images.
In particular, when training the model, pairs of images must be prepared: each training image needs a corresponding comparison image, namely a label image, showing what the model should generate from it. A training image and its corresponding label image therefore serve together as a paired training sample for training the segmentation model.
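For illustration, this pairing might be implemented as a minimal PyTorch dataset like the sketch below; the directory layout, matching filenames and 256×256 resizing are assumptions made for the example, not details from the patent.

import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PairedGPRDataset(Dataset):
    """Yields (training image, label image) pairs for training the segmentation model."""
    def __init__(self, train_dir, label_dir, size=256):
        self.train_dir, self.label_dir = train_dir, label_dir
        self.names = sorted(os.listdir(train_dir))  # assumes matching filenames in both folders
        self.tf = transforms.Compose([
            transforms.Resize((size, size)),
            transforms.ToTensor(),
            transforms.Normalize([0.5], [0.5]),     # scale grayscale values to [-1, 1]
        ])

    def __len__(self):
        return len(self.names)

    def __getitem__(self, i):
        name = self.names[i]
        train_img = Image.open(os.path.join(self.train_dir, name)).convert("L")
        label_img = Image.open(os.path.join(self.label_dir, name)).convert("L")
        return self.tf(train_img), self.tf(label_img)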
S120, obtaining a segmentation model.
Specifically, the segmentation model is a model for extracting target features of interest from an input image according to requirements.
The extraction process can be expressed by the following formula:
$$I_{seg} = f(I_{raw}), \qquad I_{seg} = x_{tar} + x_{bac}$$
wherein $f(\cdot)$ represents the GPR target feature segmentation process, $x_{tar}$ is the target feature, and $x_{bac}$ is the background information.
Therefore, only the complete double-track target features need to be retained, while the influence of the background information is weakened.
S130, inputting the paired training images into the segmentation model to obtain an output image.
Specifically, inputting the paired training images into the segmentation model means that both the training image and the label image are input; the segmentation model then outputs one image, i.e. the output image, from the training image, and the output image is compared with the label image.
And S140, obtaining a loss value based on the output image, the label image and the segmentation model.
Specifically, the image output by the segmentation model is compared with the label image to obtain the difference between the two; the segmentation model is trained by continuously reducing this difference so that the output image becomes closer to the label image.
And S150, adjusting model parameters of the segmentation model based on the loss value and the loss condition to obtain an optimized segmentation model.
Specifically, model parameters of the segmentation model are adjusted through training, so that an optimized segmentation model is obtained.
And S160, obtaining target features based on the optimized segmentation model and the input image.
Specifically, the segmentation model is optimized according to the loss values of the output image and the label image and the loss conditions when the segmentation model is trained, and then the input image is input into the optimized segmentation model to obtain the target feature. The input image is a GPR B-mode scan image.
The target features are binary images from which the segmented hyperbolic features have been extracted, representing the identification and extraction of GPR targets.
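A hypothetical sketch of this inference step follows (the function name and zero threshold are ours), assuming the optimized generator maps a B-scan scaled to [-1, 1] to a segmentation-style image that can be binarized:

import torch

@torch.no_grad()
def extract_target_features(generator, b_scan, threshold=0.0):
    """b_scan: (1, 1, H, W) tensor scaled to [-1, 1]; returns a binary hyperbola mask."""
    generator.eval()
    fake_label = generator(b_scan)            # B-scan (domain A) -> segmentation (domain B)
    return (fake_label > threshold).float()   # binarize into the target feature mask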
In one implementation of this embodiment, step S130, i.e. inputting the paired training images into the segmentation model to obtain an output image, includes:
s200, acquiring a first generator, a second generator, a first discriminator and a second discriminator.
S210, obtaining an output image based on the first generator, the second generator, the first discriminator, the second discriminator, the paired training images and the segmentation model.
Specifically, a paired sample consists of an image from domain A and an image from domain B. The first generator converts the domain-A image into a domain-B image, yielding the output result of generator A, i.e. the output image, i.e. the converted image from domain A to domain B.
The second generator converts the output image of generator A (the domain-B image) back into the form of generator A's input, i.e. a domain-A image, realizing the conversion from domain B to domain A.
The first discriminator is then used to identify the difference between the image generated by the first generator and the real image, comparing it against the corresponding domain-A label image, and the second discriminator is used to identify the difference between the image generated by the second generator and the real image; the segmentation model is continuously optimized according to these differences, so that the gap between the generated images and the real images becomes smaller and smaller.
In one implementation of the present embodiment, step S140, that is, obtaining the loss value based on the output image, the label image and the segmentation model, includes:
s300, obtaining a model loss formula, a periodic loss formula and a discrimination loss formula based on the segmentation model.
S310, obtaining a loss value based on the output image, the label image, the model loss formula, the periodic loss formula and the discrimination loss formula.
Specifically, a model loss formula, a period loss formula and a discrimination loss formula are preset in the segmentation model; the model loss formula is used to calculate the model loss during training, the period loss formula to calculate the period loss, and the discrimination loss formula to calculate the discrimination loss. The loss value is obtained by weighting the model loss, the period loss and the discrimination loss with different proportions and summing them.
The segmentation model adopts a CycleGAN model. CycleGAN is a network trained on unpaired data, so it can learn mappings between images in different domains. Unlike a plain GAN, CycleGAN consists of two sets of generators and discriminators; the model contains two mapping functions, the generator functions $F: X \to Y$ and $G: Y \to X$, and two discriminator functions, $D_X$ and $D_Y$. $D_X$ distinguishes the output of $G(y)$ from domain X; $D_Y$ distinguishes the output of $F(x)$ from domain Y.
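The following PyTorch sketch illustrates the two-generator, two-discriminator layout just described, showing one forward cycle X → Y → X; the tiny convolutional networks are placeholders for whatever backbones the model actually uses (an assumption, since the patent does not specify them).

import torch
import torch.nn as nn

def make_generator():
    # Placeholder encoder-decoder; stands in for the actual generator backbone.
    return nn.Sequential(
        nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 1, 3, padding=1), nn.Tanh(),
    )

def make_discriminator():
    # Placeholder PatchGAN-style critic producing a grid of real/fake scores.
    return nn.Sequential(
        nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 1, 4, stride=2, padding=1),
    )

F_gen, G_gen = make_generator(), make_generator()   # F: X -> Y, G: Y -> X
D_X, D_Y = make_discriminator(), make_discriminator()

x = torch.randn(1, 1, 256, 256)   # domain X sample (e.g. a GPR B-scan)
fake_y = F_gen(x)                 # translate X -> Y
recon_x = G_gen(fake_y)           # cycle back Y -> X; should reconstruct x
score_y = D_Y(fake_y)             # D_Y judges F(x) against real domain-Y images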
In one implementation manner of the present embodiment, step S310, that is, obtaining the loss value based on the output image, the label image, the model loss formula, the periodic loss formula, and the discrimination loss formula includes:
s400, acquiring cycle weights, model weights and discrimination weights.
Specifically, the cycle weight, the model weight, and the discrimination weight are weights occupied by the cycle loss, the model loss, and the discrimination loss in the segmentation model, respectively.
S410, obtaining model loss, period loss and discrimination loss based on the output image, the label image, the model loss formula, the period loss formula and the discrimination loss formula.
Specifically, the model loss, cycle loss and discrimination loss are obtained by comparing the images generated by the generators with the label images, and from the discriminators' ability to identify the generators' synthesized images.
And S420, obtaining a loss value based on the period weight, the period loss, the model weight, the model loss, the discrimination loss and the discrimination weight.
The total loss calculation formula is:
$$L_{CycleGAN} = -L_{GAN}(G, F, D_X, D_Y) + \lambda_{cyc} L_{cycle}(G, F) + \lambda_{id} L_{identity}(G, F)$$
wherein $\lambda_{cyc}$ and $\lambda_{id}$ represent the loss weights of the cycle loss and the discrimination loss, respectively; $L_{cycle}(G, F)$ is the cycle loss, $L_{identity}(G, F)$ is the discrimination loss, and $L_{GAN}(G, F, D_X, D_Y)$ is the model loss. However, these loss functions may ultimately lead the generator to produce erroneous images and cause distortion or loss of image details. Therefore, a new loss-function strategy is adopted to preserve image detail and information content: a perceptual loss function defined in feature space is applied to calculate the periodic loss $L_{cycle}(G, F)$.
Based on the input image, image features are obtained.
Specifically, the image features are extracted through the VGG-19 network as a pre-trained feature extractor.
Based on the image features and the periodic loss formula, the periodic loss is obtained. The periodic loss formula is the perceptual form given above:
$$L_{cycle}(G, F) = \mathbb{E}_{x \sim p(x)}\left[\frac{1}{whd}\left\|\phi(G(F(x))) - \phi(x)\right\|_2^2\right] + \mathbb{E}_{y \sim p(y)}\left[\frac{1}{whd}\left\|\phi(F(G(y))) - \phi(y)\right\|_2^2\right]$$
wherein $\mathbb{E}[\cdot]$ is the expectation function and $\phi$ is the feature extractor; w, h and d denote the width, height and depth of the feature space, respectively. In realizing the perceptual loss function, a VGG-19 network is adopted as the pre-trained feature extractor. The VGG-19 network contains 16 convolutional layers and 3 fully-connected layers. The output of the 16th convolutional layer is the feature extracted by the VGG network, and the output of the last layer is the loss value.
Therefore, the improved total loss value calculation formula is:
$$L_{CycleGAN} = -L_{GAN}(G, F, D_X, D_Y) + \lambda_{cyc} L_{cycle}(G, F) + \lambda_{id} L_{identity}(G, F)$$
with $L_{cycle}(G, F)$ now computed by the perceptual periodic loss above.
Specifically, the output image is compared with the label image, and the model loss formula, the periodic loss formula and the discrimination loss formula are combined to obtain the loss value; the parameters of the segmentation model are changed by setting a loss condition. The optimization process optimizes the generator and the discriminator simultaneously: the generator is responsible for generating realistic images, and the discriminator is responsible for judging whether a generated image is real or fake, so the joint optimization of the generator and the discriminator is equivalent to a minimum-maximum problem, and the loss condition is:
$$\min_G \max_D L_{GAN}(G, D)$$
wherein $L_{GAN}(G, D)$ is a loss function.
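In code, this minimum-maximum condition is typically realized by alternating updates of the discriminator and the generator; a sketch follows, where the least-squares adversarial losses are an assumption (the patent does not name the exact GAN loss form).

import torch
import torch.nn.functional as nnF

def adversarial_step(gen, disc, opt_g, opt_d, real_x, real_y):
    # Discriminator step: maximize its ability to separate real from generated.
    fake_y = gen(real_x).detach()
    pred_real, pred_fake = disc(real_y), disc(fake_y)
    d_loss = (nnF.mse_loss(pred_real, torch.ones_like(pred_real)) +
              nnF.mse_loss(pred_fake, torch.zeros_like(pred_fake)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: minimize, i.e. make generated images score as real.
    pred_fake = disc(gen(real_x))
    g_loss = nnF.mse_loss(pred_fake, torch.ones_like(pred_fake))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()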
During training, after each pair of paired training images is input, the generator and the discriminator oppose each other, so that the parameters of the generator and the discriminator, together with the cycle weight, the model weight and the discrimination weight, are continuously refined; the segmentation model is thus continuously optimized and the model's feature extraction capability continuously strengthened.
By continuously adjusting its parameters, the generator gradually produces samples closer to the true data distribution. In this process, the generator learns the features of the target area and how to extract them from random noise or other inputs; through this optimization, the generator becomes more and more capable of producing realistic samples similar to the target data. The discriminator, by learning to distinguish the samples produced by the generator from real samples, gradually improves its discrimination ability; by constantly adjusting its parameters it can discriminate more and more accurately between generated and real samples. This means the discriminator must learn the features of the real data and use them to judge the authenticity of a sample, so the discriminator also strengthens its feature extraction capability during learning. The capability of extracting the target features of the required image is thereby finally enhanced.
As shown in fig. 2, the GPR B-scan image and the label image are input into the generators; the generated images and the real images are passed to the discriminators, which compute the generator loss and the discriminator loss, and together these form the model loss. The input image then passes through the VGG network to extract image features, including the input-image features and the generated-image features, and the perceptual loss is calculated from them according to the loss function. By adjusting the parameters, the model loss, the perceptual loss and the periodic loss of the cyclic process are reduced, with the generator and the discriminator opposing each other throughout, so that the optimized trained model's ability to extract target features from GPR B-scan images is enhanced. Fig. 3 shows target features extracted by the optimized trained model: graph a in fig. 3 is the input image, graph b in fig. 3 is the label image, and graph c in fig. 3 is the segmented image, i.e. the image obtained after extracting the target features from the input image.
A ground penetrating radar target feature segmentation system, comprising:
the first acquisition module is used for acquiring the training image and the corresponding label image.
The training image generation module is used for obtaining paired training images based on the training images and the label images.
And the second acquisition module is used for acquiring the segmentation model.
And the output module is used for inputting the paired training images into the segmentation model to obtain an output image.
And the training module is used for obtaining the loss value based on the output image, the label image and the segmentation model.
And the optimizing module is used for adjusting the model parameters of the segmentation model based on the loss value and the loss condition to obtain an optimized segmentation model.
And the extraction module is used for obtaining target characteristics based on the optimized segmentation model and the input image.
The training module comprises:
and the acquisition unit is used for acquiring the cycle weight, the model weight and the discrimination weight.
And the calculating unit is used for obtaining model loss, period loss and discrimination loss based on the output image, the label image, the model loss formula, the period loss formula and the discrimination loss formula.
And the weighting unit is used for obtaining a loss value based on the period weight, the period loss, the model weight, the model loss, the discrimination loss and the discrimination weight.
The embodiment of the application also discloses a terminal device which comprises a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and when the processor loads and executes the computer program, a ground penetrating radar target feature segmentation method is adopted.
The terminal device may be a computer device such as a desktop computer, a notebook computer, or a cloud server, and the terminal device includes, but is not limited to, a processor and a memory, for example, the terminal device may further include an input/output device, a network access device, a bus, and the like.
The processor may be a central processing unit (CPU) or, according to actual use, another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.; the general-purpose processor may be a microprocessor or any conventional processor, which is not limited in this application.
The memory may be an internal storage unit of the terminal device, for example a hard disk or a memory of the terminal device, or an external storage device of the terminal device, for example a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card (FC) equipped on the terminal device, or a combination of the internal storage unit and the external storage device. The memory is used to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output, which is not limited in this application.
Through the terminal device, the ground penetrating radar target feature segmentation method of this embodiment is stored in the memory of the terminal device and is loaded and executed on the processor of the terminal device, making the method convenient to use.
The embodiment of the application also discloses a computer readable storage medium, and the computer readable storage medium stores a computer program, wherein the computer program adopts the ground penetrating radar target feature segmentation method in the embodiment when being executed by a processor.
The computer program may be stored in a computer-readable medium. The computer program includes computer program code, which may be in source-code form, object-code form, executable-file form, some intermediate form, etc. The computer-readable medium includes any entity or device capable of carrying the computer program code: a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, etc.; the computer-readable medium includes, but is not limited to, the above components.
Through the computer-readable storage medium, the ground penetrating radar target feature segmentation method of this embodiment is stored in the computer-readable storage medium and is loaded and executed on a processor, facilitating the storage and application of the method.
Those of ordinary skill in the art will appreciate that: the discussion of any of the embodiments above is merely exemplary and is not intended to imply that the scope of the present application is limited to such examples; combinations of features of the above embodiments or in different embodiments are also possible within the spirit of the application, steps may be implemented in any order, and there are many other variations of the different aspects of one or more embodiments described above which are not provided in detail for the sake of brevity.
One or more embodiments herein are intended to embrace all such alternatives, modifications and variations that fall within the broad scope of the present application. Any omissions, modifications, equivalents, improvements, and the like, which are within the spirit and principles of the one or more embodiments in the present application, are therefore intended to be included within the scope of the present application.

Claims (10)

1. A ground penetrating radar target feature segmentation method, characterized by comprising the following steps:
acquiring a training image and a corresponding label image;
obtaining paired training images based on the training images and the label images;
obtaining a segmentation model;
inputting the paired training images into the segmentation model to obtain an output image;
obtaining a loss value based on the output image, the label image and the segmentation model;
adjusting model parameters of the segmentation model based on the loss value and the loss condition to obtain an optimized segmentation model;
and obtaining target features based on the optimized segmentation model and the input image.
2. The method of claim 1, wherein inputting the pair of training images into the segmentation model to obtain an output image comprises:
acquiring a first generator, a second generator, a first discriminator and a second discriminator based on the segmentation model;
the output image is obtained based on the first generator, the second generator, the first discriminator, the second discriminator, the paired training images, and the segmentation model.
3. The method of claim 2, wherein obtaining the loss value based on the output image, the tag image, and the segmentation model comprises:
based on the segmentation model, a model loss formula, a periodic loss formula and a discrimination loss formula are obtained;
and obtaining the loss value based on the output image, the label image, the model loss formula, the periodic loss formula and the discrimination loss formula.
4. The method of claim 3, wherein the obtaining the loss value based on the output image, the tag image, the model loss formula, the periodic loss formula, and the discrimination loss formula comprises:
acquiring a cycle weight, a model weight and a discrimination weight;
obtaining model loss, period loss and discrimination loss based on the output image, the label image, the model loss formula, the period loss formula and the discrimination loss formula;
and obtaining the loss value based on the cycle weight, the cycle loss, the model weight, the model loss, the discrimination loss and the discrimination weight.
5. The method for segmenting target features of a ground penetrating radar of claim 4, wherein said loss formula comprises:
$$L_{CycleGAN} = -L_{GAN}(G, F, D_X, D_Y) + \lambda_{cyc} L_{cycle}(G, F) + \lambda_{id} L_{identity}(G, F);$$
wherein $\lambda_{cyc}$ and $\lambda_{id}$ represent the loss weights of the cycle loss and the discrimination loss, respectively; G and F are generator functions, and $D_X$ and $D_Y$ are discriminator functions;
obtaining image features based on the input image;
obtaining periodic loss based on the image characteristics and a periodic loss formula;
the periodic loss formula is as follows:
$$L_{cycle}(G, F) = \mathbb{E}_{x \sim p(x)}\left[\frac{1}{whd}\left\|\phi(G(F(x))) - \phi(x)\right\|_2^2\right] + \mathbb{E}_{y \sim p(y)}\left[\frac{1}{whd}\left\|\phi(F(G(y))) - \phi(y)\right\|_2^2\right]$$
wherein $\mathbb{E}[\cdot]$ is the expectation function, $\phi$ is the feature extractor, and w, h and d represent the width, height and depth of the feature space, respectively.
6. The method for segmenting the target features of the ground penetrating radar according to claim 5, wherein the loss condition is as follows:
$$\min_G \max_D L_{GAN}(G, D)$$
wherein $L_{GAN}(G, D)$ is a loss function.
7. A ground penetrating radar target feature segmentation system, characterized by comprising:
the first acquisition module is used for acquiring training images and corresponding label images;
the training image generation module is used for obtaining paired training images based on the training images and the label images;
the second acquisition module is used for acquiring the segmentation model;
the output module is used for inputting the paired training images into the segmentation model to obtain an output image;
the training module is used for obtaining a loss value based on the output image, the label image and the segmentation model;
the optimizing module is used for adjusting the model parameters of the segmentation model based on the loss value and the loss condition to obtain an optimized segmentation model;
and the extraction module is used for obtaining target characteristics based on the optimized segmentation model and the input image.
8. The ground penetrating radar target feature segmentation system of claim 7, wherein the training module comprises:
an acquisition unit configured to acquire a cycle weight, a model weight and a discrimination weight;
a calculation unit configured to obtain a model loss, a period loss and a discrimination loss based on the output image, the label image, the model loss formula, the period loss formula and the discrimination loss formula;
and the weighting unit is used for obtaining the loss value based on the cycle weight, the cycle loss, the model weight, the model loss, the discrimination loss and the discrimination weight.
9. A terminal device comprising a memory and a processor, characterized in that the memory stores a computer program capable of running on the processor, and when the processor loads and executes the computer program, the method according to any of claims 1-6 is employed.
10. A computer readable storage medium having a computer program stored therein, characterized in that the method of any of claims 1 to 6 is employed when the computer program is loaded and executed by a processor.
CN202410017401.1A 2024-01-04 2024-01-04 Ground penetrating radar target feature segmentation method, system, equipment and storage medium Pending CN117830340A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410017401.1A CN117830340A (en) 2024-01-04 2024-01-04 Ground penetrating radar target feature segmentation method, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410017401.1A CN117830340A (en) 2024-01-04 2024-01-04 Ground penetrating radar target feature segmentation method, system, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117830340A true CN117830340A (en) 2024-04-05

Family

ID=90518903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410017401.1A Pending CN117830340A (en) 2024-01-04 2024-01-04 Ground penetrating radar target feature segmentation method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117830340A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886974A (en) * 2019-01-28 2019-06-14 北京易道博识科技有限公司 A kind of seal minimizing technology
CN110246488A (en) * 2019-06-14 2019-09-17 苏州思必驰信息科技有限公司 Half optimizes the phonetics transfer method and device of CycleGAN model
CN110570363A (en) * 2019-08-05 2019-12-13 浙江工业大学 Image defogging method based on Cycle-GAN with pyramid pooling and multi-scale discriminator
CN111199550A (en) * 2020-04-09 2020-05-26 腾讯科技(深圳)有限公司 Training method, segmentation method, device and storage medium of image segmentation network
WO2021206284A1 (en) * 2020-04-09 2021-10-14 한밭대학교 산학협력단 Depth estimation method and system using cycle gan and segmentation
US20210383520A1 (en) * 2021-03-03 2021-12-09 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for generating image, device, storage medium and program product
CN112819732A (en) * 2021-04-19 2021-05-18 中南大学 B-scan image denoising method for ground penetrating radar
KR102565747B1 (en) * 2022-11-25 2023-08-11 대한민국 Regional precipitation nowcasting system and method based on cyclegan extension
CN116188516A (en) * 2022-12-30 2023-05-30 深存科技(无锡)有限公司 Training method of defect data generation model
CN116563402A (en) * 2023-03-31 2023-08-08 徐州鑫达房地产土地评估有限公司 Cross-modal MRI-CT image synthesis method, system, equipment and medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FEIFEI HOU ET AL.: "S-CycleGAN: A Novel Target Signature Segmentation Method for GPR Image Interpretation", 《IEEE GEOSCIENCE AND REMOTE SENSING LETTERS》, vol. 21, 12 February 2024 (2024-02-12) *
JUN YAN ZHU ET AL.: "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks", 《HTTPS://DOI.ORG/10.48550/ARXIV.1703.10593》, 24 August 2020 (2020-08-24) *
HUANG Jiaheng et al.: "Feature enhancement method for non-motorized vehicles in nighttime road environments based on improved CycleGAN", Modern Computer (现代计算机), vol. 29, no. 20, 25 October 2023 (2023-10-25), pages 1-8 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118015010A (en) * 2024-04-10 2024-05-10 中南大学 GPR instance partitioning method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111222434A (en) Method for obtaining evidence of synthesized face image based on local binary pattern and deep learning
CN110659582A (en) Image conversion model training method, heterogeneous face recognition method, device and equipment
CN110119753B (en) Lithology recognition method by reconstructed texture
CN109472817B (en) Multi-sequence magnetic resonance image registration method based on loop generation countermeasure network
CN107729926B (en) Data amplification method and machine identification system based on high-dimensional space transformation
CN108520215B (en) Single-sample face recognition method based on multi-scale joint feature encoder
CN105718866A (en) Visual target detection and identification method
CN112950639B (en) SA-Net-based MRI medical image segmentation method
CN112364851B (en) Automatic modulation recognition method and device, electronic equipment and storage medium
CN114842524B (en) Face false distinguishing method based on irregular significant pixel cluster
CN114329034A (en) Image text matching discrimination method and system based on fine-grained semantic feature difference
CN113379707A (en) RGB-D significance detection method based on dynamic filtering decoupling convolution network
CN113095158A (en) Handwriting generation method and device based on countermeasure generation network
CN113902613A (en) Image style migration system and method based on three-branch clustering semantic segmentation
CN110222217B (en) Shoe print image retrieval method based on segmented weighting
CN112330562B (en) Heterogeneous remote sensing image transformation method and system
CN112560925A (en) Complex scene target detection data set construction method and system
CN114519689A (en) Image tampering detection method, device, equipment and computer readable storage medium
CN117830340A (en) Ground penetrating radar target feature segmentation method, system, equipment and storage medium
CN110321889A (en) Illustration positioning extracting method and system in a kind of picture file
Akther et al. Detection of Vehicle's Number Plate at Nighttime using Iterative Threshold Segmentation (ITS) Algorithm
CN109886212A (en) From the method and apparatus of rolling fingerprint synthesis fingerprint on site
KR102464851B1 (en) Learning method and image cassification method using multi-scale feature map
CN114758123A (en) Remote sensing image target sample enhancement method
CN110287991B (en) Method and device for verifying authenticity of plant crude drug, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination