CN113222898B - Double-navigation SAR image trace detection method based on multivariate statistics and deep learning

Info

Publication number: CN113222898B (granted); other versions: CN113222898A (application, in Chinese)
Application number: CN202110401084.XA
Authority: CN (China)
Prior art keywords: image, CUnet, trace, navigation, network
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Inventors: Xing Mengdao (邢孟道), Shi Xin (石鑫), Zhang Jinsong (张金松), Sun Guangcai (孙光才)
Current and original assignee: Xidian University (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Events: application filed by Xidian University; publication of CN113222898A; application granted; publication of CN113222898B; anticipated expiration pending

Classifications

    • G06T7/0002 Image analysis: inspection of images, e.g. flaw detection
    • G06N3/045 Neural network architectures: combinations of networks
    • G06N3/088 Learning methods: non-supervised learning, e.g. competitive learning
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/10032 Image acquisition modality: satellite or aerial image; remote sensing
    • G06T2207/10044 Image acquisition modality: radar image


Abstract

The invention discloses a double-navigation SAR image trace detection method based on multivariate statistics and deep learning, which comprises the following steps: obtaining a difference image of the double-navigation SAR image by using a complex reflectance change detection estimator; performing water-area and vegetation-area recognition and false-alarm elimination on the difference image by using an unsupervised multivariate statistics method to obtain a false-alarm elimination image; training a CUnet network by utilizing inductive transfer learning and a coarse-to-fine image; and performing trace recognition on the image to be processed by using the trained CUnet network. The method obtains the difference image of the double-navigation SAR image through a complex reflectance change detection estimator, obtains the water and vegetation areas through unsupervised multivariate statistics to produce a false-alarm elimination image, constructs a coarse-to-fine image by combining the original image, the difference image and the false-alarm elimination image, and performs inductive transfer learning with the coarse-to-fine image and a CUnet network, thereby achieving double-navigation SAR image trace detection under small-sample conditions with a good detection effect.

Description

Double-navigation SAR image trace detection method based on multivariate statistics and deep learning
Technical Field
The invention belongs to the technical field of target detection, and particularly relates to a double-navigation SAR image trace detection method based on multivariate statistics and deep learning.
Background
An important application of synthetic aperture radar (SAR) systems is the detection of footprints, wheel tracks and other trace areas, which can serve surveillance and search purposes. A double-navigation SAR image pair consists of two SAR images obtained by flying over the same area repeatedly at different times. Coherent change detection (CCD) has the ability to locate trace areas within large-scale scenes and can therefore be used to achieve double-navigation SAR image trace detection. The CCD model includes two modules: a difference generation module, which generates a difference image from a pair of registered images acquired on repeated passes with repeated geometry; and a difference analysis module, which analyzes the difference image to obtain the unchanged and changed pixel regions of interest.
For the first part of the CCD model, two approaches can be summarized. One approach generates the difference image by designing a suitable statistical model; although this reduces false alarms in the generated image, it cannot completely remove low-correlation-coefficient areas caused by natural environmental variation. The other approach acquires multidimensional images through multiple flights or multiple bandwidths; it can effectively distinguish false alarms from traces, but requires very stringent experimental conditions. How to preserve as much change information as possible while removing interference areas therefore remains an important issue in difference image generation.
For the second part of the CCD model, two approaches can likewise be summarized: non-deep-learning methods, which identify pixels in the difference image with unsupervised techniques such as thresholding and clustering; and deep-learning methods, which identify pixels by constructing a neural network. Owing to natural environmental changes such as wind and flowing water, the difference image generated by CCD contains a large number of low-correlation-coefficient areas caused by the natural environment, which poses a great challenge to detection methods. In addition, trace samples are scarce, so how to train a neural network with small samples is also an important issue.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a double-navigation SAR image trace detection method based on multivariate statistics and deep learning. The technical problems to be solved by the invention are realized by the following technical scheme:
The invention provides a double-navigation SAR image trace detection method based on multivariate statistics and deep learning, which comprises the following steps:
S1: obtaining a difference image of the double-navigation SAR image by using a complex reflectance change detection estimator;
S2: performing water-area and vegetation-area recognition and false-alarm elimination on the difference image by using an unsupervised multivariate statistics method to obtain a false-alarm elimination image;
S3: training a CUnet network by utilizing inductive transfer learning and a coarse-to-fine image;
S4: performing trace recognition on the image to be processed by using the trained CUnet network.
In one embodiment of the present invention, S1 includes:
S11: constructing a mathematical model of the double-navigation SAR images obtained at the same geometric position at different moments;
S12: obtaining a complex reflectance change detection estimator according to the mathematical model;
S13: processing the double-navigation SAR image with the complex reflectance change detection estimator to obtain the difference image.
In one embodiment of the invention, the expression of the complex reflectance change detection estimator is:

$$\hat{\alpha}=\frac{2\left|\sum_{k=1}^{N}x_{1}^{(k)}\left(x_{2}^{(k)}\right)^{*}\right|}{\sum_{k=1}^{N}\left|x_{1}^{(k)}\right|^{2}+\sum_{k=1}^{N}\left|x_{2}^{(k)}\right|^{2}-N\left(\sigma_{n1}^{2}+\sigma_{n2}^{2}\right)}$$

where $x_1^{(k)}$ and $x_2^{(k)}$ respectively denote the k-th complex data of images $X_1$ and $X_2$, $(\cdot)^{*}$ denotes the conjugate operation, N is the number of neighborhood pixel points, and $\sigma_{n1}^{2}$ and $\sigma_{n2}^{2}$ are the additive system thermal-noise estimates of $X_1$ and $X_2$, respectively.
In one embodiment of the present invention, S2 includes:
S21: performing intensity superposition on the image pair formed by the double-navigation SAR images to obtain the water extraction statistic;
S22: acquiring a global threshold of the water extraction statistic based on the OTSU method and extracting the water area from the difference image;
S23: performing intensity subtraction on the image pair to obtain the vegetation extraction statistic;
S24: acquiring a threshold of the vegetation extraction statistic based on a manually determined threshold and extracting the vegetation region from the difference image;
S25: constructing, from the difference image, a false-alarm elimination image in which the water and vegetation areas are eliminated.
In one embodiment of the present invention, S22 includes:
after obtaining the water extraction statistic, obtaining a global threshold $\tau_s$ by the OTSU method and judging whether the current pixel m belongs to a water area, the criterion being:

$$m\in\begin{cases}\text{water area}, & S_{m}\le\tau_{s}\\\text{non-water area}, & S_{m}>\tau_{s}\end{cases}$$

where $S_m$ denotes the water extraction statistic of the current pixel m of the difference image.
In one embodiment of the present invention, S24 includes:
for each pixel m in the image pair, judging whether $D_{m}>\tau_{D}$ and $\hat{\alpha}_{m}<\tau_{\alpha}$ are both satisfied; if so, pixel m belongs to a vegetation region, where $\tau_D$ and $\tau_\alpha$ are respectively the thresholds of the vegetation-identification statistic D and of the estimate $\hat{\alpha}$ obtained by the CRCD estimator, $D_m$ denotes the vegetation extraction statistic corresponding to pixel m, and $\hat{\alpha}_m$ denotes the estimate corresponding to pixel m.
In one embodiment of the present invention, S3 includes:
S31: concatenating the original SAR image, the difference image and the false-alarm elimination image by channels to obtain the coarse-to-fine image;
S32: acquiring a CUnet network;
S33: partitioning the coarse-to-fine image based on the transfer-learning method to obtain a source label and a target label;
S34: pre-training the CUnet network by utilizing the source-domain task;
S35: fine-tuning the CUnet network by utilizing the target task to obtain the trained CUnet network.
In one embodiment of the present invention, S33 includes:
S331: slicing the region of the CTF image excluding trace pixels into a plurality of image blocks to create the source-domain data $\{X_{S}^{(i)}\}$; judging each pixel area as water, vegetation or background according to the unsupervised multivariate statistics method or manual labeling, and establishing the source-domain labels $\{Y_{S}^{(i)}\}$;
S332: slicing the trace region in the CTF image into a plurality of image blocks to create the target data $\{X_{T}^{(i)}\}$; judging each pixel area as water, vegetation or background according to the unsupervised multivariate statistics method or manual labeling, and creating the target labels $\{Y_{T}^{(i)}\}$.
In one embodiment of the present invention, S34 includes:
establishing a prediction function $f_S(\cdot)$ of the source-domain task with the CUnet network, and pre-training the CUnet network with the source-domain data and source-domain labels to obtain the trained source-domain task prediction function $f_S(\cdot)$.
In one embodiment of the present invention, S35 includes:
establishing a prediction function $f_T(\cdot)$ of the trace detection task with the CUnet network, initializing the weights of $f_T(\cdot)$ with the weights of the trained source-domain task prediction function $f_S(\cdot)$, and training $f_T(\cdot)$ with the target data and target labels, thereby obtaining the trained CUnet network.
Compared with the prior art, the invention has the beneficial effects that:
The invention provides a high-resolution SAR image trace detection method based on unsupervised multivariate statistics and small-sample deep learning. A difference image of the SAR image pair is obtained through a complex reflectance change detection estimator; the water and vegetation areas are obtained through unsupervised multivariate statistics, yielding a false-alarm elimination image; a coarse-to-fine image is constructed by combining the original image, the difference image and the false-alarm elimination image; and inductive transfer learning is performed with the coarse-to-fine image and a CUnet network. SAR image trace detection under small-sample conditions is thereby realized with a good detection effect.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
FIG. 1 is a flow chart of a dual navigation SAR image trace detection method based on multivariate statistics and deep learning provided by an embodiment of the present invention;
FIG. 2 is a detailed flow chart of a dual navigation SAR image trace detection method based on multivariate statistics and deep learning provided by an embodiment of the present invention;
FIG. 3 is a graph of the backscattering coefficients of water, sand, fields, hills and mountains over grazing angles of 0°-80°;
FIG. 4 is a flowchart of training the CUnet with inductive transfer learning and coarse-to-fine images provided by an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a CUnet network according to an embodiment of the present invention;
FIG. 6 is a double-navigation SAR image and trace label map thereof provided by an embodiment of the present invention;
FIG. 7 is a graph of the results of different detection estimators provided by an embodiment of the present invention;
FIG. 8 is a graph of ROC for different window sizes and decision thresholds provided by embodiments of the present invention;
FIG. 9 is a diagram of water area detection results for different methods provided by embodiments of the present invention;
FIG. 10 is a graph of vegetation detection results for various methods provided by embodiments of the present invention;
FIG. 11 is a graph of a false alarm removal result provided by an embodiment of the present invention;
FIG. 12 is a graph of CTF results provided by an embodiment of the present invention;
FIG. 13 is a graph showing the change in accuracy of the CUnet pre-training process and the fine tuning process according to an embodiment of the present invention;
FIG. 14 is a diagram of an intermediate process feature of a test sample provided by an embodiment of the present invention;
FIG. 15 is a graph of trace detection results for various methods provided by embodiments of the present invention;
FIG. 16 is a graph of test results for different numbers of training samples provided by an embodiment of the present invention.
Detailed Description
In order to further explain the technical means and effects adopted by the invention to achieve the intended aim, the double-navigation SAR image trace detection method based on multivariate statistics and deep learning is described in detail below with reference to the drawings and specific embodiments.
The foregoing and other features, aspects, and advantages of the present invention will become more apparent from the following detailed description of the preferred embodiments when taken in conjunction with the accompanying drawings. The technical means and effects adopted by the present invention to achieve the intended purpose can be more deeply and specifically understood through the description of the specific embodiments, however, the attached drawings are provided for reference and description only, and are not intended to limit the technical scheme of the present invention.
It should be noted that in this document relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in an article or apparatus that comprises the element.
Referring to fig. 1 and 2, the double-navigation SAR image trace detection method based on multivariate statistics and deep learning of the present embodiment includes the following steps:
s1: and obtaining a difference image of the double-navigation SAR image by using a complex reflection change detection estimator.
Although the existing CCD method can be used for detecting trace changes, its false-alarm rate is high in low clutter-to-noise ratio (CNR) areas. The complex reflectance change detection (CRCD) proposed in this embodiment integrates the clutter and noise energy of the SAR image into a unified statistical model, reduces false alarms in low-CNR regions, and improves the detection probability of trace pixel regions; the generated difference image can then be applied simply and effectively in subsequent analysis. In the present embodiment, the difference image of the original SAR images is obtained using the complex reflectance change estimator.
Specifically, the S1 includes:
s11: and constructing a mathematical model of the double-navigation SAR image obtained at the same geometric position at different moments.
The double-navigation SAR images are two SAR images obtained by repeatedly flying over the same position at different times; the SAR image acquired earlier does not contain the trace area and is called the reference image, while the SAR image acquired later contains the trace area and is called the mission image.
Assume that the two SAR images acquired at different times at the same geometric position are $X_1$ and $X_2$, respectively. The two images can be modeled as:

$$x_{1}^{(k)}=c^{(k)}+n_{1}^{(k)},\qquad x_{2}^{(k)}=e^{j\varphi}\left(\alpha c^{(k)}+d^{(k)}\right)+n_{2}^{(k)}$$

where $x_1^{(k)}$ and $x_2^{(k)}$ respectively denote the k-th complex data of images $X_1$ and $X_2$, $k\in[1,M]$, and M is the total number of complex data in each image; $\alpha$ is the estimate of the complex reflectivity change occurring between $X_1$ and $X_2$, with value range [0,1]; $c^{(k)}$ is the data unchanged between $X_1$ and $X_2$; $d^{(k)}$ is the data changed between $X_1$ and $X_2$; $n_1^{(k)}$ and $n_2^{(k)}$ respectively denote the additive system thermal noise of $X_1$ and $X_2$; and the phase $\varphi$, representing the constant phase difference, is an interference parameter.
S12: performing a small amount of algebraic operations according to the constructed mathematical models of the two SAR images to obtain a complex reflection change detection (Complex reflectance change detection, CRCD) estimator:
wherein,representing +.>Performing conjugate operation, wherein N is the number of neighborhood pixel points, sigma n1 Sum sigma n2 Respectively, are image X 1 And X 2 Is an additive system thermal noise estimate of +.>Known by the system design specifications or measured by the shaded areas of the SAR image pair. />Representing a complete change of the neighboring pixels, 1 represents unchanged, the estimated value obtained by the CRCD estimator compared to the estimation of CCD +. >The effect of clutter and thermal noise energy of adjacent pixels can be reflected. CRCD estimationThe counter not only improves the coherence value of the low CNR area without change around, but also increases the difference between the no-change value and the trace change value in the coherence result.
S13: and processing the double-navigation SAR image by using the CRCD estimator to obtain the difference image.
The two SAR images $X_1$ and $X_2$ are substituted into the CRCD estimator to calculate the difference image. Specifically, the M groups of complex data of images $X_1$ and $X_2$ are substituted into the CRCD estimator to obtain M estimates, and the M estimates form the difference image.
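As a concrete illustration of this step, the following NumPy sketch computes a CRCD difference image with a sliding window. It assumes the estimator form given in S12; the function name, the scipy-based neighborhood summation and the default 9×9 window are illustrative choices, not taken from the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def crcd_difference_image(x1, x2, sigma_n1, sigma_n2, win=9):
    """CRCD estimate per pixel over a win x win neighborhood.

    x1, x2         : registered complex (SLC) dual-pass images.
    sigma_n1/2     : scalar additive thermal-noise power estimates,
                     e.g. measured from shadow regions of the image pair.
    """
    n = win * win
    cross = x1 * np.conj(x2)
    # Neighborhood sums realized as mean filters scaled by the window area.
    # uniform_filter works on real arrays, so real/imag parts go separately.
    num = np.abs(uniform_filter(cross.real, win)
                 + 1j * uniform_filter(cross.imag, win)) * n
    p1 = uniform_filter(np.abs(x1) ** 2, win) * n
    p2 = uniform_filter(np.abs(x2) ** 2, win) * n
    denom = p1 + p2 - n * (sigma_n1 + sigma_n2)
    alpha = 2.0 * num / np.maximum(denom, 1e-12)
    return np.clip(alpha, 0.0, 1.0)   # 0 = completely changed, 1 = unchanged
```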
S2: and carrying out recognition and false alarm elimination of water areas and vegetation areas on the difference image by using an unsupervised multivariate statistic method to obtain a false alarm elimination image.
Specifically, the S2 includes:
s21: and carrying out intensity superposition by using an image pair consisting of the two SAR images to obtain water area extraction statistics.
First, the scattering characteristics of various land covers in SAR images are analyzed with the Morchin model. In the Morchin model, the backscattering coefficient can be expressed as:

$$\sigma^{0}=\frac{A\sigma_{c}\theta_{g}}{\lambda}+\mu\cot\beta_{0}\exp\left[-\frac{\tan^{2}\left(B-\theta_{g}\right)}{\tan^{2}\beta_{0}}\right]$$

where $\lambda$ is the signal wavelength of the radar, $\theta_g$ is the grazing angle, and A, B, $\mu$ and $\beta_0$ are characteristic parameters related to the ground type; $\theta_c=\arcsin(\lambda/4\pi h_c)$, where $\theta_c$ and $h_c$ are intermediate parameters assisting the calculation. When the ground type is desert and $\theta_g<\theta_c$, $\sigma_c=\theta_g/\theta_c$; for other ground types or $\theta_g>\theta_c$, $\sigma_c=1$. Referring to FIG. 3, FIG. 3 shows the backscattering coefficients of water, sand, fields, hills and mountains for grazing angles of 0°-80°, where the water area is approximated by the sea surface of sea state 1. As shown in fig. 3, the backscattering coefficient $\sigma^0$ of water is significantly smaller than that of other land-cover areas, which makes water areas appear at lower intensity than other areas in the SAR image. Based on this analysis, a suitable threshold can be selected to extract the water area from the SAR image pair. Since incoherent summation of the intensities of the double-navigation SAR images helps suppress speckle noise and the energy differences of different land covers, this embodiment takes the incoherent intensity sum of the double-navigation SAR images as the water extraction statistic:

$$S_{m}=\frac{1}{N_{s}}\sum_{k\in\Omega_{m}}\left(\left|x_{1}^{(k)}\right|^{2}+\left|x_{2}^{(k)}\right|^{2}\right)$$

where $x_1^{(k)}$ and $x_2^{(k)}$ are the corresponding pixel values of the SAR image pair in the selected region $\Omega_m$; a mean filter is used in the estimation to suppress speckle noise, and $N_s$ is the number of mean-filtered neighborhood pixels in the selected region.
S22: and obtaining a global threshold of the water area extraction statistic based on an OTSU method and carrying out water area extraction on the difference image.
Specifically, after obtaining the water extraction statistic S, the global threshold $\tau_s$ is obtained by the OTSU method, and whether the current pixel m belongs to a water area is judged by the criterion:

$$m\in\begin{cases}\text{water area}, & S_{m}\le\tau_{s}\\\text{non-water area}, & S_{m}>\tau_{s}\end{cases}$$

where $S_m$ denotes the water extraction statistic of the current pixel m of the difference image.
The OTSU method (maximum between-class variance method) was proposed by the Japanese scholar Nobuyuki Otsu in 1979. It is an adaptive threshold-determination method, also called Otsu's method.
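A minimal sketch of S21-S22 under the statistic defined above, using threshold_otsu from scikit-image for the OTSU step (function names and the 3×3 mean-filter window are assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.filters import threshold_otsu

def water_mask(x1, x2, win=3):
    """S21-S22: mean-filtered intensity sum of the image pair, thresholded
    by OTSU; pixels whose statistic falls below the global threshold are
    declared water."""
    s = uniform_filter(np.abs(x1) ** 2 + np.abs(x2) ** 2, win)
    tau_s = threshold_otsu(s)     # adaptive global threshold (Otsu, 1979)
    return s <= tau_s             # True where the pixel belongs to water
```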
S23: and subtracting intensities by using the image pairs to obtain vegetation extraction statistics.
Specifically, when the thermal noise levels of the double-navigation SAR images are close to each other, the overall observed correlation coefficient can be expressed as:

ρ_total = ρ_spatial · ρ_thermal · ρ_temporal

where ρ_thermal and ρ_spatial respectively denote the thermal decorrelation coefficient and the spatial (baseline) decorrelation coefficient, and ρ_temporal denotes the temporal decorrelation coefficient, which is caused by physical changes of the surface of the observed region during the observation interval (i.e., the time between the two repeated flights) and can be expressed as:

$$\rho_{temporal}=\exp\left[-\frac{1}{2}\left(\frac{4\pi}{\lambda}\right)^{2}\left(\Delta y^{2}\sin^{2}\theta+\Delta z^{2}\cos^{2}\theta\right)\right]$$

where θ is the nominal angle of incidence and Δy and Δz are the position changes of the scatterers in the ground-range and height directions, respectively. For a millimeter-wave SAR system, even a weak scatterer offset produces a low temporal decorrelation coefficient ρ_temporal and, further, a low observed correlation coefficient ρ_total. Both irregular vegetation motion and the creation of traces can produce low correlation coefficients in the difference image, so it is necessary to remove vegetation areas as false alarms.
The movement of vegetation not only affects the observed correlation but also causes the backscattering coefficient to change over time. Detailed ground measurements show that the scattering coefficient changes rapidly with the grazing angle, which means there is a significant intensity difference between the vegetation areas of the reference image and the mission image (i.e., the two SAR images obtained by repeatedly flying over the same area). Unlike the vegetation region, the trace region is not easily affected by natural factors such as wind, so the intensity difference between the trace regions of the image pair is small compared with that of the vegetation regions. Based on this analysis, the intensity difference of the image pair is used as the vegetation extraction statistic for identifying vegetation areas:

$$D_{m}=\frac{\left|\sum_{k\in\Omega_{m}}\left(\left|x_{1}^{(k)}\right|^{2}-\left|x_{2}^{(k)}\right|^{2}\right)\right|}{\sum_{k\in\Omega_{m}}\left(\left|x_{1}^{(k)}\right|^{2}+\left|x_{2}^{(k)}\right|^{2}\right)}$$

where $x_1^{(k)}$ and $x_2^{(k)}$ are the corresponding pixel values of the SAR image pair in the selected region $\Omega_m$; a mean filter with $N_d$ neighborhood pixels is used in the estimation to suppress speckle noise.
S24: and based on a method of manually determining the threshold, acquiring the threshold of the vegetation extraction statistic and extracting the vegetation region from the difference image.
Specifically, a pixel m of the image pair is considered to belong to a vegetation region if it satisfies:

$$D_{m}>\tau_{D}\quad\text{and}\quad\hat{\alpha}_{m}<\tau_{\alpha}$$

where $\tau_D$ and $\tau_\alpha$ are respectively the thresholds, determined manually, of the vegetation-identification statistic D and of the estimate $\hat{\alpha}$ obtained by the CRCD estimator; $D_m$ denotes the vegetation extraction statistic corresponding to pixel m, and $\hat{\alpha}_m$ denotes the estimate corresponding to pixel m. The criterion for $\tau_D$ is to separate as much of the vegetation area as possible from other areas, and the criterion for $\tau_\alpha$ is to classify as little of the trace area as possible into the vegetation area.
S25: and constructing a false alarm elimination image for eliminating the water area and the vegetation area according to the difference image.
Specifically, the pixel values of the water and vegetation areas in the obtained difference image are set to 1, representing unchanged pixel regions, which yields the false-alarm elimination image.
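The following sketch ties S23-S25 together under the assumptions made above: the normalized form of the statistic D, and the thresholds τ_D = 0.73 and τ_α = 0.35 quoted later in the experiments. Names are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def vegetation_mask(x1, x2, alpha, tau_d=0.73, tau_alpha=0.35, win=3):
    """S23-S24: a pixel is vegetation when its intensity-difference statistic
    is large AND its CRCD estimate is low (moving canopy decorrelates)."""
    i1 = uniform_filter(np.abs(x1) ** 2, win)
    i2 = uniform_filter(np.abs(x2) ** 2, win)
    d = np.abs(i1 - i2) / np.maximum(i1 + i2, 1e-12)  # assumed normalized form
    return (d > tau_d) & (alpha < tau_alpha)

def false_alarm_elimination(alpha, water, vegetation):
    """S25: force identified water and vegetation pixels to 1 ('unchanged')."""
    out = alpha.copy()
    out[water | vegetation] = 1.0
    return out
```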
S3: training a CUnet network by utilizing induction transfer learning and coarse-to-fine images;
Referring to fig. 4, fig. 4 is a flowchart of training the CUnet with inductive transfer learning and coarse-to-fine images according to an embodiment of the present invention.
In this embodiment, the S3 includes:
s31: and based on a channel parallel connection method, the original SAR image, the difference image and the false alarm elimination image are connected in parallel according to channels, so that a thick-to-thin image is obtained.
Specifically, the intensity image of the original SAR image containing the trace is first taken as the "coarse image" $I_1$; it reflects the rich land-cover information of the observation area. The difference image generated by the CRCD estimator is then taken as the "intermediate image" $I_2$; it reflects the changes in the double-navigation SAR image caused by traces and natural phenomena. The false-alarm elimination image is taken as the "fine image" $I_3$; it enhances the trace area while reducing false alarms caused by natural phenomena. Finally, $I_1$, $I_2$ and $I_3$ are concatenated into one image along the channel dimension. The combined image is named the coarse-to-fine (CTF) image.
Unlike each individual image, the CTF image not only includes a wide variety of land-cover information but also weakens the false-alarm areas and strengthens the trace areas. Compared with a single image, the land-cover areas and the trace area of the CTF image can be extracted easily. The CTF image builds a bridge between the existing task and the target task in transfer learning and can remarkably improve trace detection performance.
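Building the CTF image then reduces to a channel-wise concatenation; a sketch assuming H×W single-channel inputs:

```python
import numpy as np

def build_ctf(i_coarse, i_mid, i_fine):
    """S31: stack coarse (original intensity), intermediate (CRCD difference)
    and fine (false-alarm-eliminated) images along the channel axis."""
    return np.stack([i_coarse, i_mid, i_fine], axis=-1)   # H x W x 3
```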
S32: and acquiring a CUnet network, wherein the weight number of the CUnet network is smaller than that of the Unet network.
The Unet structure is widely applied to image segmentation of optical, hyperspectral and SAR images, mainly because its encoder can extract important features from the input image, and its skip-connection-based decoder can effectively upsample feature maps of different levels to obtain an accurate segmentation result. When dealing with difference-image analysis, however, the Unet has too many weight parameters to train and too many pooling layers. To address the small number and small size of trace samples, the present embodiment uses a Compressed Unet (CUnet) structure.
Referring to fig. 5, fig. 5 is a schematic structural diagram of the CUnet network according to an embodiment of the present invention. The encoder and decoder of the CUnet each comprise four convolution modules, each with two convolution (Conv) layers with 3×3 kernels. Each convolution layer is followed by an activation layer and a batch normalization (BN) layer. Specifically, LeakyReLU is used as the activation function:

$$y_{i}=\begin{cases}x_{i}, & x_{i}\ge 0\\x_{i}/a_{i}, & x_{i}<0\end{cases}$$

where $a_i\in(1,+\infty)$ and $x_i$ is the input value of neuron i. To avoid overfitting, a dropout (Drop) layer is added after the first convolution layer of each module. The max-pooling (Maxpooling) layers have a pooling size of 2×2, which reduces the size of the feature maps while increasing the receptive field. A convolution layer with a 1×1 kernel is added at the end of the decoder to transform the feature map into class probabilities. The Unet has 7,846,723 training weights in total, and the CUnet has 1,947,763. Fewer parameters mean that the CUnet can use trace samples more efficiently and reduce the risk of overfitting.
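A PyTorch sketch of a CUnet-style network consistent with this description: modules of two 3×3 convolutions, each followed by LeakyReLU and BN, dropout after the first convolution of a module, 2×2 max-pooling, skip connections, and a final 1×1 convolution. The channel widths, dropout rate and LeakyReLU slope are assumptions (the patent does not specify them), so the parameter count will not match the quoted 1,947,763 exactly.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two 3x3 convolutions, each followed by LeakyReLU and BatchNorm,
    with dropout after the first convolution, as described for the CUnet."""
    def __init__(self, c_in, c_out, p_drop=0.2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.BatchNorm2d(c_out),
            nn.Dropout2d(p_drop),
            nn.Conv2d(c_out, c_out, 3, padding=1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.BatchNorm2d(c_out),
        )

    def forward(self, x):
        return self.block(x)

class CUnet(nn.Module):
    def __init__(self, c_in=3, n_classes=2, widths=(16, 32, 64, 128)):
        super().__init__()
        self.enc = nn.ModuleList()
        c = c_in
        for w in widths:                     # four encoder modules
            self.enc.append(ConvBlock(c, w))
            c = w
        self.pool = nn.MaxPool2d(2)          # 2x2 max-pooling
        rev = widths[::-1]
        self.up = nn.ModuleList()
        self.dec = nn.ModuleList()
        for i in range(len(rev) - 1):
            self.up.append(nn.ConvTranspose2d(rev[i], rev[i + 1], 2, stride=2))
            self.dec.append(ConvBlock(rev[i], rev[i + 1]))  # after skip concat
        self.head = nn.Conv2d(widths[0], n_classes, 1)      # final 1x1 conv

    def forward(self, x):
        skips = []
        for i, block in enumerate(self.enc):
            x = block(x)
            if i < len(self.enc) - 1:
                skips.append(x)              # skip connection to the decoder
                x = self.pool(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))
        return self.head(x)                  # logits; softmax gives class probs
```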
S33: dividing the thick image into the thin image based on a migration learning method to obtain a source label and a target label.
In particular, while the construction of the CTF image and the CUnet network helps train the recognition model, the small number of trace samples remains an important issue for model training. In this embodiment, transfer learning is used for small-sample learning: since the training samples for trace detection are insufficient to train a well-performing CUnet network, a source task that has a large number of labeled samples and is related to the trace data is constructed to assist the training of the trace detection task; this learning strategy belongs to transfer learning. The widely used ImageNet, Pascal VOC and COCO datasets are typically used as source domains. However, SAR images are based on sparse scattering centers and are not as intuitive and sharp as optical images. Under such conditions, brute-force transfer is unsuccessful and may harm the learning of the trace detection model.
SAR images are typically obtained by a spotlight-mode SAR system with a resolution of a few centimeters to tens of centimeters. High resolution means a large imaging scene and a large image size. The trace area occupies only a small part of the SAR image; the majority consists of various land-cover areas. Since the trace area and the land-cover areas share the same imaging conditions and imaging algorithm, the correlation between them far exceeds that between SAR images and optical images. Based on these analyses, the land cover in the CTF image is used as the source-domain data, and a transfer-learning training mode for the CUnet is proposed, comprising a pre-training phase and a fine-tuning phase. The details are described below:
(1) Slice the CTF image excluding trace pixels into blocks of 128×128 pixels to create the source-domain data $\{X_{S}^{(i)}\}$; judge each pixel area as water, vegetation or background according to the unsupervised multivariate statistics method or manual labeling, and thereby establish the source-domain labels $\{Y_{S}^{(i)}\}$.
(2) Slice the trace area in the CTF image into 128×128-pixel blocks to establish the target data $\{X_{T}^{(i)}\}$, and correspondingly establish the target labels $\{Y_{T}^{(i)}\}$, where a label value of 0 or 1 denotes a trace pixel region or a background pixel region, respectively.
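Tile extraction for both domains can be sketched as follows; the stride values (112 for the source data, 64 for the trace region) come from the experiments described below:

```python
import numpy as np

def slice_blocks(img, block=128, stride=112):
    """Cut an H x W x C image into block x block tiles with the given stride
    (stride 112 for source-domain data, 64 for the trace fine-tuning set).
    Edge remainders are dropped for simplicity in this sketch."""
    tiles = []
    h, w = img.shape[:2]
    for r in range(0, h - block + 1, stride):
        for c in range(0, w - block + 1, stride):
            tiles.append(img[r:r + block, c:c + block])
    return np.stack(tiles)
```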
S34: and pre-training the CUnet network by utilizing a source domain task.
The migration learning is divided into a source domain task and a target task. The source domain task is a task of performing network training by using source domain data and source domain labels. And migrating the network parameters obtained by training the source domain task to the network of the target task. The target task is a task of performing network training by using target data and target labels.
Specifically, a prediction function $f_S(\cdot)$ of the source-domain task is established with the CUnet network, and the CUnet is trained with the training samples of the source-domain data, which is referred to as "pre-training", to obtain the trained source-domain task prediction function $f_S(\cdot)$.
S35: and fine-tuning the CUnet network by utilizing the target task to obtain the trained CUnet network.
Specifically, a prediction function $f_T(\cdot)$ of the trace detection task is established with the CUnet network; the weights of $f_T(\cdot)$ are initialized with the weights of the trained source-domain task prediction function $f_S(\cdot)$, and $f_T(\cdot)$ is trained with the target data and target labels. This step is referred to as "fine-tuning" and yields the trained CUnet network.
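A compact sketch of the two-stage schedule of S34-S35, using the optimizers and epoch counts quoted in the experiments (SGD with learning rate 0.001 and momentum 0.9 for 50 epochs, then Adam for 60 epochs); in practice the 1×1 output head would be re-initialized when moving from the three-class source task to the two-class trace task:

```python
import torch
import torch.nn as nn

def pretrain_then_finetune(model, src_loader, tgt_loader, device="cuda"):
    """S34-S35: pre-train on the land-cover source task, then fine-tune the
    same weights on the trace-detection target task (inductive transfer)."""
    model.to(device)
    ce = nn.CrossEntropyLoss()

    # Pre-training: SGD, lr 0.001, momentum 0.9, 50 epochs.
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    for _ in range(50):
        for x, y in src_loader:
            opt.zero_grad()
            loss = ce(model(x.to(device)), y.to(device))
            loss.backward()
            opt.step()

    # Fine-tuning: source-task weights kept as initialization; Adam, 60 epochs.
    # (A new output head for the two-class target task would be attached here.)
    opt = torch.optim.Adam(model.parameters())
    for _ in range(60):
        for x, y in tgt_loader:
            opt.zero_grad()
            loss = ce(model(x.to(device)), y.to(device))
            loss.backward()
            opt.step()
    return model
```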
S4: and performing trace recognition on the image to be processed by using the trained CUnet network.
Specifically, the trained CUnet network model is obtained through the above inductive transfer learning. Given an input image block, the model automatically identifies each pixel region of the block as a trace region or a background region. In these steps, the CTF image builds a bridge between the land-cover-identification source-domain task and the trace-detection target task, while the strong correlation between the source-domain task and the target task effectively facilitates the learning of trace detection under small-sample conditions.
The effect of the double-navigation SAR image trace detection method based on the multivariate statistic and the deep learning can be verified through the following experimental data.
Experimental conditions:
Referring to fig. 6, fig. 6 shows the double-navigation SAR images and their trace label map provided by the embodiment of the present invention. The double-navigation SAR image pair comprises two 4096×4096-pixel images obtained by a SAR system in spotlight imaging mode with HH polarization in the Ka band. The time interval between the two acquisitions was 4 hours; the range resolution is 0.13 m and the azimuth resolution is 0.21 m. The obtained SAR images contain various land-cover areas, such as water areas, vegetation areas and other background areas. An automobile traveled from top to bottom through the upper-left region, leaving some tracks in this area. The resulting mission image is shown in fig. 6(b), and the labeled data of the trace area are shown in fig. 6(c), where white pixels denote trace areas and black pixels denote background areas.
Description of experimental evaluation criteria:
Undetected trace pixels are denoted as false negatives (FN), erroneously detected background pixels as false positives (FP), correctly detected trace pixels as true positives (TP), and correctly rejected background pixels as true negatives (TN). The evaluation criteria are as follows:

P_FP = FP/(TN+FP) × 100%
P_FN = FN/(TP+FN) × 100%
P_OE = (FP+FN)/(TP+FP+TN+FN) × 100%
P_CC = (TP+TN)/(TP+FP+TN+FN) × 100%
where P_OE represents the overall error proportion and P_CC the proportion of correct recognition. P_FP, P_FN, P_OE and P_CC take values in [0,1]; the smaller P_FP, P_FN and P_OE, the lower the detection false-alarm rate, and the larger P_CC, the higher the detection accuracy. The Kappa coefficient is calculated as follows:

Kappa = (P_CC - PRE)/(1 - PRE) × 100%

where

$$PRE=\frac{(TP+FP)(TP+FN)+(FN+TN)(FP+TN)}{(TP+FP+TN+FN)^{2}}$$

Kappa generally takes values in [0,1]. The Kappa coefficient is a common measure of consistency among evaluators; the larger the Kappa value, the higher the detection accuracy.
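These criteria reduce to a few lines of Python; the PRE term follows the expected-agreement expression reconstructed above (rates are returned as fractions rather than percentages):

```python
def change_detection_metrics(tp, fp, tn, fn):
    """Evaluation criteria used above, computed from the confusion counts."""
    total = tp + fp + tn + fn
    p_fp = fp / (tn + fp)                 # false-alarm proportion
    p_fn = fn / (tp + fn)                 # miss proportion
    p_oe = (fp + fn) / total              # overall error proportion
    p_cc = (tp + tn) / total              # correct-recognition proportion
    pre = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (p_cc - pre) / (1 - pre)      # chance-corrected agreement
    return dict(P_FP=p_fp, P_FN=p_fn, P_OE=p_oe, P_CC=p_cc, Kappa=kappa)
```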
Analysis of experimental results:
CRCD results and analysis:
The CRCD estimator $\hat{\alpha}$ described in this embodiment is used to produce a difference image with a low false-alarm rate and a high detection rate. The thermal-noise estimates $\sigma_{n1}^{2}$ and $\sigma_{n2}^{2}$ in the CRCD equation are calculated using the shadow areas of fig. 6(a) and fig. 6(b).
Referring to fig. 7, fig. 7 shows the results of the different detection estimators according to an embodiment of the present invention. Figs. 7(a) and 7(b) are the results of the CCD estimator $\hat{\gamma}$ and the CRCD estimator $\hat{\alpha}$, respectively, with a 9×9 neighborhood. The brighter a pixel, the higher the likelihood that it is unchanged; the darker a pixel, the higher the likelihood that it has changed. It can be seen that there are many black pixels beside the changed trace pixels in figs. 7(a) and 7(b). If these images were classified into trace or background areas directly from the pixel values, the vegetation areas, water areas and other low-correlation pixels would cause a serious false-alarm rate.
To compare the CCD estimator $\hat{\gamma}$ and the CRCD estimator $\hat{\alpha}$ in the trace area, the entire trace area is cropped from figs. 7(a) and 7(b), as shown in figs. 7(c) and 7(d), respectively.
Referring to fig. 8, fig. 8 shows the receiver operating characteristic (ROC) curves for different window sizes and decision thresholds according to an embodiment of the present invention, with the false-alarm rate on the abscissa and the detection accuracy on the ordinate. The ROC curves show that, at the same false-alarm rate, the accuracy of both the CCD estimator (denoted "coherence" in fig. 8) and the CRCD estimator increases with the window size. In particular, the best detection accuracy is achieved with the 9×9 and 13×13 windows. Since an excessive window size blurs the boundaries of the trace area and incurs a huge computational load, the 9×9 window rather than the 13×13 window is selected to generate the difference image. At the same false-alarm rate, the CRCD estimator achieves higher accuracy than the CCD estimator; at the same accuracy, it achieves a lower false-alarm rate. The CRCD method can therefore obtain a difference image with a low false-alarm rate and high detection accuracy.
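The ROC curves of fig. 8 can be reproduced by sweeping a decision threshold over the difference image; a sketch assuming that a pixel is declared changed when its estimate falls below the threshold:

```python
import numpy as np

def roc_points(alpha, truth, thresholds=np.linspace(0.0, 1.0, 101)):
    """Sweep a threshold on the CRCD map: pixels with alpha below the
    threshold are declared 'changed'; truth is a boolean trace mask."""
    pts = []
    for t in thresholds:
        det = alpha < t
        tp = np.sum(det & truth)
        fp = np.sum(det & ~truth)
        fn = np.sum(~det & truth)
        tn = np.sum(~det & ~truth)
        pts.append((fp / (fp + tn), tp / (tp + fn)))  # (false-alarm, detection)
    return pts
```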
And (3) detecting and analyzing a water area:
Referring to fig. 9, fig. 9 shows the water detection results of different methods according to the embodiment of the present invention. Figs. 9(a)-(c) show the water detection results obtained by the MRF method using the reference-image intensity, the mission-image intensity, and the image-pair intensity sum as the statistic, respectively; figs. 9(d)-(f) show the corresponding results of the LevelSet method; figs. 9(g)-(i) show the corresponding results of the OTSU method; and fig. 9(j) shows the ground-truth label of the water area in fig. 6(a). The Markov random field (MRF) is a model widely applied to image segmentation, which improves segmentation by integrating image texture, context information and the like in the form of a prior distribution. The level-set method is an energy-based image segmentation method that obtains an expression for the target contour by minimizing an energy functional. The OTSU method here determines a global detection threshold by the OTSU method with the intensity sum of the image pair as the statistic. Note that the upper area of fig. 6(a) is a beach covered by seawater, which rises and falls over time, so that area also changes over time and is not marked in the ground truth of this embodiment. Figs. 9(a)-(c) show that the MRF method erroneously identifies a large number of land pixels as water. Figs. 9(d)-(f) show that, even though most of the water area is identified correctly, there are a large number of isolated false-alarm pixels. Figs. 9(g)-(i) show that the OTSU method greatly reduces the false-alarm pixels; in particular, the image-pair intensity sum as the statistic helps suppress the speckle noise of the reference and mission images. Referring to Table 1, Table 1 shows the quantitative results of the different water detection methods. The method provided by this embodiment achieves the highest P_CC and the highest Kappa. Meanwhile, the method of this embodiment takes only 0.2 s to process the 4096×4096-pixel image pair.
Table 1 quantitative results of different water area detection methods
Vegetation detection results and analysis:
referring to fig. 10, fig. 10 is a graph of vegetation detection results of different methods according to an embodiment of the present invention, where fig. 10 (a) is a detection result of an unsupervised MRF method, and is a taskThe image is taken as input, the detection result of a genetic algorithm (abbreviated as GA-FCM) of fuzzy C-means clustering is taken as input in FIG. 10 (b), the task image is taken as input, the detection result of the method provided by the embodiment of the invention is taken as input in FIG. 10 (C), the intensity difference of the image is taken as input, the window size of a mean filter is 3*3, and the statistic D and the estimated quantity are determined through early experimentsThreshold τ of D Is 0.73 τ α 0.35; fig. 10 (d) is a true label of the vegetation area. The GA-FCM method is a method for analyzing and modeling important data by using a fuzzy theory, establishes uncertainty description of sample class, can objectively reflect the real world, is effectively applied to the field of image segmentation, optimizes clustering results by using a genetic algorithm, and can obtain satisfactory results in time performance and clustering quality. It can be seen that the MRF and GA-CFM methods can extract a large portion of vegetation areas from the mission image. However, there are a large number of false alarms caused by shadow areas and building areas in fig. 10 (a) and (b). Compared with the methods, the method provided by the embodiment of the invention can extract most vegetation areas while eliminating most false alarms.
Referring to Table 2, Table 2 shows the quantitative results of the different vegetation detection methods. As shown in Table 2, the proposed method has a low overall error and a high accuracy compared with the MRF and GA-FCM methods. In addition, the method provided by the embodiment of the present invention has a very fast inference speed, taking 0.68 s to process the 4096×4096-pixel image. Based on this analysis, the method provided by the embodiment of the present invention can effectively extract the vegetation region by exploiting the intensity difference of the image pair and the low-correlation characteristic of the vegetation region.
TABLE 2 Quantitative results of different vegetation detection methods
And (3) eliminating a false alarm result of the difference image and analyzing:
referring to fig. 11, fig. 11 is a false alarm removal result chart provided in the embodiment of the present invention, where fig. 11 (a) is a CRCD result chart without performing false alarm cancellation, fig. 11 (b) is a CRCD result chart with performing false alarm cancellation, and fig. 11 (b) removes most of the low correlation pixels in fig. 11 (a), so that the trace area can be easily identified from the background. It is noted that although these false alarm regions have been removed, the other low correlation regions in fig. 11 (b) still affect the trace detection performance.
CUnet induction and transfer learning results and analysis:
referring to fig. 12, a CTF result graph is provided in an embodiment of the present invention. The CTF image is obtained by connecting the original SAR image containing the trace in fig. 6 (b), the difference image in fig. 7 (b), and the false alarm cancellation image in fig. 11 (b) by channels. As shown in fig. 12 (a), the trace pixels can be easily distinguished from other areas, which is helpful for training a robust neural network. As shown in fig. 12 (b) and 12 (c), the pre-recognition result of the multi-statistics method is that white pixels in fig. 12 (b) represent a water area, and white pixels in fig. 12 (c) represent a vegetation area. For the generalized transfer learning strategy, the pre-training of the network is performed with the pixel region outside the white frame region of fig. 12 (a) as the source domain data and the corresponding labels in fig. 12 (b) and 12 (c) as the source labels. Fine tuning of the network is performed with the pixel area in fig. 12 (a) as target data and the corresponding tags in fig. 12 (b) and 12 (c) as target tags.
For the pre-training step, the source-domain data of the CTF image are first cut into 128×128-pixel blocks with a step size of 112 pixels, giving 1330 blocks of source-domain data. 80% are chosen as training samples and 20% as test samples. Stochastic gradient descent (SGD) is used to optimize the cross-entropy loss of the CUnet. The batch size is 32 and the number of iterations is 50. The learning rate is initialized to 0.001 and the momentum parameter of SGD is 0.9. The weights of the CUnet are randomly initialized. No data augmentation strategy is used during model training. Referring to fig. 13, fig. 13 shows the accuracy curves of the CUnet pre-training and fine-tuning processes according to an embodiment of the present invention. As can be seen from fig. 13(a), with the source-domain dataset as training images and the unsupervised recognition results as training labels, the CUnet has a very stable convergence process.
For the fine-tuning step, the white-frame region of fig. 12(a) is cut into 128×128-pixel blocks with a step size of 64. Unlike the cutting step of the source-domain data, a smaller step yields more training and test samples. To evaluate the detection result over the whole image, the pixel region below the dashed line of the white-frame region is used for training samples, and the pixel region above the dashed line is used for test samples. In addition, some pixel blocks beside the white-frame region are selected as negative samples to increase the robustness of the method described in this embodiment. With this strategy, 138 blocks are obtained (49 of which contain trace areas). Unlike the random initialization used for the source-domain task, the weights of the CUnet for trace detection are initialized with the weights trained on the source-domain task. The batch size is 32 and the number of iterations is 60. The Adam method is used to optimize the cross-entropy loss function. Fig. 13(b) is the training curve of the fine-tuning process; it can be seen that inductive transfer learning based on land-cover pre-recognition facilitates the learning of the small-sample trace detection model.
Referring to fig. 14, fig. 14 is a schematic diagram of an intermediate process of a test sample provided in an embodiment of the present invention, in which a test sample is input as shown in fig. 14 (a), and a feature diagram extracted through a trained CUnet network is shown in fig. 14 (b) and fig. 14 (c). Fig. 14 (b) shows that the CUnet network can extract important and abstract features from the trace area, and fig. 14 (c) shows that the decoder of the CUnet network can discriminate between background pixels and trace pixels using the extracted features very efficiently.
Comparison results and analysis of different detection methods:
in order to confirm the efficiency of the CUnet network according to the embodiment of the present invention, some unsupervised detection models and supervised detection models are used as comparison methods. For the unsupervised-based method, levelSet, OTSU, GA-FCM was used for trace detection of CTF images. These methods take the entire CTF image as input and use different features to detect objects of interest, e.g. intensity features, contour features. For the supervised deep learning method, each pixel of the CTF image is cut into image blocks using a DBN (deep belief network), CNN (convolutional neural network) based on a sliding window, and whether a pixel region is a trace region or a background region is identified, and the size of the cut image block is 45×45 pixels. For each hidden layer in the DBN, the entire training set is used for pre-training 40 times, using a 300-250-100-2 network. In CNN, 3 convolutional layers with a convolution kernel of 3*3 and 2 max-pooling layers with a kernel size of 2 x 2 are used. For the full convolutional neural network, lightweight SegNet and Unet were used as comparison methods. These deep learning methods use the same training set.
Referring to fig. 15, fig. 15 shows the trace detection results of the different methods according to the embodiment of the present invention: fig. 15(a) is the result of the level-set method, fig. 15(b) the OTSU method, fig. 15(c) the GA-FCM method, fig. 15(d) the DBN method, fig. 15(e) the CNN method, fig. 15(f) the SegNet method, fig. 15(g) the Unet method, and fig. 15(h) the CUnet method. The DBN is a classical neural network structure that addresses the optimization of deep networks by layer-by-layer training, after which only fine-tuning is needed to reach a good solution. SegNet and Unet are convolutional networks commonly used for image segmentation, differing in implementation and application scenarios. As shown in figs. 15(a)-(c), although the unsupervised methods can easily extract a large number of trace pixels, their detection results are contaminated by noise points and complicated background areas, mainly because these methods cannot extract effective features to discriminate trace pixels from background pixels. In contrast, the results of figs. 15(d)-(h) demonstrate that the supervised deep-learning methods suppress false alarms caused by background areas well. Among these methods, the CNN, the Unet and the CUnet network of the embodiment of the present invention retain clearer trace contours, and the CUnet network has the detection result with the lowest false alarms. Referring to Table 3, Table 3 shows the quantitative trace detection results of the different methods. The P_CC of the method provided by the embodiment of the present invention is 99.76%, higher than the other methods, and its P_FN is 40.34%, lower than the other methods. Taking Kappa as the overall criterion, the method of the embodiment of the present invention reaches 70.50%, a great improvement over the other methods. In addition, the CUnet of the embodiment of the present invention has a fast inference speed, requiring only 8.23 s for an image of 4096×4096 pixels.
TABLE 3 quantitative results of trace detection for different methods
Comparison results and analysis of different pretraining strategies:
To verify the effect of the unsupervised pre-training model on trace detection, two other strategies are used to pre-train the CUnet network. In the first, the CTF image is still used as the input image of the source task but, unlike the method provided by the embodiment of the present invention, the land-cover labels (water, vegetation and background) are annotated manually as the ground truth of the source task. In the second, the semantic segmentation task of the Pascal VOC dataset is used to pre-train the proposed CUnet network. The detection model is initialized with the CUnet models trained by these pre-training strategies, giving three corresponding task detection models; in addition, a detection model without pre-training (randomly initialized weights) is also used. Referring to Table 4, Table 4 shows the trace detection results of the different pre-training methods. The model without pre-training has the lowest recognition accuracy and Kappa value. After pre-training on the VOC data, the Kappa coefficient improves significantly and the overall detection error decreases. When the land-cover pixels of the CTF image replace the VOC data, both the supervised and the unsupervised pre-training achieve higher detection accuracy. Notably, the unsupervised pre-training model achieves the highest P_CC (99.76%) and the highest Kappa (70.50%) in detecting trace pixels, which confirms the effectiveness of the unsupervised-pre-training-based inductive transfer learning on CTF images.
TABLE 4 Trace detection results for different Pre-training methods
Comparison and analysis of different numbers of training samples:
The CUnet network and inductive transfer learning are used to overcome the small number of trace-area samples in the CTF image. To verify the adaptability of the method, this embodiment randomly selects different numbers of samples from the training set to fine-tune the CUnet network. Referring to fig. 16, fig. 16 shows the detection results for different numbers of training samples according to an embodiment of the present invention. The variation of Kappa and P_OE at different ratios of selected samples to the full trace training set is shown in figs. 16(a) and (b). As the number of training samples increases, Kappa rises and P_OE falls continuously. The CUnet network and the inductive transfer-learning model provided by this embodiment therefore adapt well to the number of training samples: the more training samples, the better the detection result.
Comparison results and analysis of different data expansion strategies:
data augmentation such as image rotation and inversion is a fundamental step for deep learning, as this can provide more training samples and in most cases improve performance. This embodiment also uses different strategies to augment the trace samples, including random rotation by 0 ° to 30 °, random scaling by 0.8 to 1.2, brightness adjustment by 0.8 to 1.2, and random inversion. The number of expanded samples is 0.5 times, 1 time and 2 times that of the original samples. Referring to table 5, table 5 shows the detection results of different data expansion strategies. It can be seen that these expansion strategies have a negative effect on the individual parameters, in particular Kappa, with a great attenuation compared to the unexpanded sample. The more samples that are expanded, the more false alarms and errors rise, so these widely used data expansion methods cannot be directly applied to trace detection tasks. Because of the small number of samples in the trace detection task, expansion of these samples can result in severe test sample overfitting. At the same time, these expansion strategies differ from the double-navigation SAR image, and unreasonable expansion strategies affect the learning process of the method.
TABLE 5 Detection results of different data augmentation strategies
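For concreteness, the augmentation pipeline tested in Table 5 can be written with torchvision transforms as below. The rotation, scaling, brightness and flipping ranges follow the text above; the composition order, flip probabilities and the treatment of each sample as an image tensor are assumptions of this sketch:

```python
import torchvision.transforms as T

# Augmentation pipeline evaluated in this comparison; the parameter ranges
# follow the text, while the ordering and flip probability are assumed.
trace_augment = T.Compose([
    T.RandomRotation(degrees=(0, 30)),             # random rotation 0-30 deg
    T.RandomAffine(degrees=0, scale=(0.8, 1.2)),   # random scaling 0.8-1.2
    T.ColorJitter(brightness=(0.8, 1.2)),          # brightness 0.8-1.2
    T.RandomHorizontalFlip(p=0.5),                 # random flipping
    T.RandomVerticalFlip(p=0.5),
])
# Each trace sample is passed through this pipeline to generate 0.5x, 1x or
# 2x extra samples; as Table 5 shows, on this task the extra samples hurt
# Kappa rather than helping.
```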
In summary, the embodiment of the present invention provides a double-navigation SAR image trace detection method based on multivariate statistics and deep learning. A difference image of the SAR image pair is obtained by a complex reflection change detection estimator; water and vegetation areas are identified by unsupervised multivariate statistics to obtain a false-alarm-elimination image; a coarse-to-fine (CTF) image is constructed by combining the original image, the difference image and the false-alarm-elimination image; and inductive transfer learning is performed using the CTF image and the CUnet network. Double-navigation SAR image trace detection is thereby realized under small-sample conditions with good detection performance.
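As a concrete illustration of the first step, the noise-corrected coherence form of the CRCD estimator (step S12 of claim 1 below) can be sketched in a few lines of NumPy. The boxcar neighborhood, window size and epsilon guard are assumptions of this sketch, not part of the patent text:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def crcd_coherence(x1, x2, win=5, sigma_n1=0.0, sigma_n2=0.0, eps=1e-12):
    """Noise-corrected neighborhood coherence of two co-registered complex
    SAR images; low coherence marks changed (e.g. trace) pixels.

    uniform_filter takes neighborhood means, so the factor N of the summed
    form cancels and the noise powers are subtracted per pixel.
    """
    cross = x1 * np.conj(x2)
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    p1 = uniform_filter(np.abs(x1) ** 2, win) - sigma_n1  # noise-corrected power
    p2 = uniform_filter(np.abs(x2) ** 2, win) - sigma_n2
    den = np.sqrt(np.clip(p1, eps, None) * np.clip(p2, eps, None))
    return np.abs(num) / den   # coherence, roughly in [0, 1]
```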
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.

Claims (6)

1. A double-navigation SAR image trace detection method based on multivariate statistics and deep learning, characterized by comprising the following steps:
S1: obtaining a difference image of the double-navigation SAR images by using a complex reflection change detection (CRCD) estimator;
S2: performing water-area and vegetation-area recognition and false alarm elimination on the difference image by using an unsupervised multivariate statistics method to obtain a false-alarm-elimination image;
S3: training a CUnet network by using inductive transfer learning and coarse-to-fine (CTF) images;
S4: performing trace recognition on an image to be processed by using the trained CUnet network,
wherein S1 comprises:
S11: constructing a mathematical model of the double-navigation SAR images acquired at the same geometric position at different times;
S12: obtaining a complex reflection change detection estimator according to the mathematical model, wherein the expression of the complex reflection change detection estimator is:

$$\hat{\gamma}=\frac{\left|\sum_{k=1}^{N}\hat{X}_{1,k}\hat{X}_{2,k}^{*}\right|}{\sqrt{\left(\sum_{k=1}^{N}\left|\hat{X}_{1,k}\right|^{2}-N\sigma_{n1}\right)\left(\sum_{k=1}^{N}\left|\hat{X}_{2,k}\right|^{2}-N\sigma_{n2}\right)}}$$

wherein $\hat{X}_{1,k}$ and $\hat{X}_{2,k}$ respectively represent the kth complex data of images $X_1$ and $X_2$, $(\cdot)^{*}$ represents the conjugate operation, $N$ is the number of neighborhood pixel points, and $\sigma_{n1}$ and $\sigma_{n2}$ are respectively the additive system thermal noise estimates of images $X_1$ and $X_2$;
S13: processing the double-navigation SAR images by using the complex reflection change detection estimator to obtain the difference image,
wherein S2 comprises:
S21: performing intensity superposition on the image pair consisting of the double-navigation SAR images to obtain a water-area extraction statistic;
S22: acquiring a global threshold of the water-area extraction statistic based on the OTSU method and extracting the water area from the difference image;
S23: performing intensity subtraction on the image pair to obtain a vegetation extraction statistic;
S24: acquiring a threshold of the vegetation extraction statistic based on a manually determined threshold and extracting the vegetation region from the difference image;
S25: constructing, from the difference image, a false-alarm-elimination image in which the water and vegetation areas are eliminated,
wherein S3 comprises:
S31: concatenating the original SAR image, the difference image and the false-alarm-elimination image along the channel dimension to obtain a coarse-to-fine image;
S32: acquiring a CUnet network;
S33: partitioning the coarse-to-fine image based on a transfer learning method to obtain source labels and target labels;
S34: pre-training the CUnet network by using a source domain task;
S35: fine-tuning the CUnet network by using the target task to obtain the trained CUnet network.
2. The double-navigation SAR image trace detection method based on multivariate statistics and deep learning according to claim 1, wherein S22 comprises:
after obtaining the water-area extraction statistic, acquiring a global threshold $\tau_s$ by the OTSU method and judging whether a current pixel m belongs to the water area, the judgment criterion being:

$$m \in \text{water area}, \quad \text{if } S_m \le \tau_s$$

wherein $S_m$ represents the water-area extraction statistic of the current pixel m of the difference image.
3. The double-navigation SAR image trace detection method based on multivariate statistics and deep learning according to claim 2, wherein S24 comprises:
for each pixel m in the image pair, judging whether

$$D_m \ge \tau_D \quad \text{and} \quad \hat{\gamma}_m \le \tau_\alpha$$

are satisfied; if so, the pixel m belongs to the vegetation region, wherein $\tau_D$ and $\tau_\alpha$ are respectively the thresholds, used for vegetation region identification, of the statistic $D$ and of the estimate derived from the CRCD estimator, $D_m$ represents the vegetation extraction statistic corresponding to pixel m, and $\hat{\gamma}_m$ represents the estimate corresponding to pixel m.
4. The double-navigation SAR image trace detection method based on multivariate statistics and deep learning according to claim 3, wherein S33 comprises:
S331: slicing the area of the CTF image excluding trace pixels into a plurality of image blocks to create source domain data; judging each pixel area as a water, vegetation or background pixel according to the unsupervised multivariate statistics method or manual labeling to establish source domain labels;
S332: slicing the trace region in the CTF image into a plurality of image blocks to establish target data; judging each pixel area as a water, vegetation or background pixel according to the unsupervised multivariate statistics method or manual labeling to establish target labels.
5. The double-navigation SAR image trace detection method based on multivariate statistics and deep learning according to claim 4, wherein S34 comprises:
establishing a prediction function $f_S(\cdot)$ of the source domain task by using the CUnet network, and pre-training the CUnet network by using the source domain data and the source domain labels to obtain a trained source-domain-task prediction function $f_S(\cdot)$.
6. The double-navigation SAR image trace detection method based on multivariate statistics and deep learning according to claim 5, wherein S35 comprises:
establishing a prediction function $f_T(\cdot)$ of the trace detection task by using the CUnet network, initializing the weights of $f_T(\cdot)$ with the weights of the trained source-domain-task prediction function $f_S(\cdot)$, and training $f_T(\cdot)$ by using the target data and the target labels, thereby obtaining the trained CUnet network.
GR01 Patent grant