CN113298056A - Multi-mode remote sensing image change detection method, model generation method and terminal equipment - Google Patents


Info

Publication number
CN113298056A
Authority
CN
China
Prior art keywords
domain
image
change detection
remote sensing
objective function
Prior art date
Legal status
Pending
Application number
CN202110847669.4A
Other languages
Chinese (zh)
Inventor
刘力荣
甘宇航
唐新明
尤淑撑
罗征宇
莫凡
何芸
Current Assignee
Ministry Of Natural Resources Land Satellite Remote Sensing Application Center
Original Assignee
Ministry Of Natural Resources Land Satellite Remote Sensing Application Center
Priority date
Filing date
Publication date
Application filed by Ministry Of Natural Resources Land Satellite Remote Sensing Application Center filed Critical Ministry Of Natural Resources Land Satellite Remote Sensing Application Center
Priority to CN202110847669.4A
Publication of CN113298056A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/10: Terrestrial scenes
    • G06V 20/13: Satellite images
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application provides a multi-mode remote sensing image change detection method, a model generation method and terminal equipment, wherein the multi-mode remote sensing image change detection model comprises two domain converters, two condition discriminators and two change decision models; the generation method comprises the following steps: constructing two domain converters and two condition discriminators by using a cyclic consistency countermeasure network for cross-domain conversion among remote sensing images of different modes; constructing respective change decision models for the different image domains by using a twin neural network, so as to carry out change detection on two images converted from different domains into the same image domain; constructing a first objective function corresponding to the cross-domain conversion and a second objective function corresponding to the change detection; and performing model training based on the two objective functions. According to this technical scheme, an integrated model of parallel multiple networks is constructed by combining the cyclic consistency countermeasure network and the twin neural network, so that the image domain differences among multi-mode remote sensing images can be effectively eliminated and the change detection precision improved.

Description

Multi-mode remote sensing image change detection method, model generation method and terminal equipment
Technical Field
The application relates to the technical field of remote sensing images, in particular to a multi-mode remote sensing image change detection method, a model generation method and terminal equipment.
Background
The change detection is a process of determining the change of the earth surface coverage state according to multiple observations at different times, and the real-time and accurate acquisition of the earth surface change information has important significance for natural resource management, homeland space planning, ecological environment protection and the like. As an advanced and mature technical means, the remote sensing earth observation gradually forms a multi-type satellite remote sensing system with optics, hyperspectrum, radar, laser altimetry and the like, has the advantages of large-range, all-time, all-weather and periodic earth observation, can quickly, macroscopically and dynamically acquire earth surface images, and provides important data support for solving the problem of change detection of land coverage. Most of the change detection researches use multi-temporal image data of the same sensor, but due to observation difficulty, cost, coverage period and the like, the homologous remote sensing image cannot obtain proper repeated observation data. With the continuous development of novel sensor technology, the spatial resolution of remote sensing images is gradually improved, the spectrum information is gradually enriched, how to effectively utilize massive multi-mode remote sensing data from different acquisition platforms and various sensors to realize the discovery of the cooperative change of multi-mode spectrum and multi-load satellite images becomes an important research direction for the current detection of the earth surface coverage change.
Due to the fact that the multi-mode remote sensing images are different in imaging mechanism and observation characteristics, certain difficulties and challenges are brought to change detection. The existing remote sensing image change detection methods are mainly divided into two main categories, namely a traditional method and a deep learning-based method. The traditional method can be divided into change detection methods based on differential images, characteristics and targets, the traditional change detection method generally has the problems of incapability of separating from manual control and intervention, low automation degree and the like, and is easily influenced by image imaging conditions, image acquisition cycles, image pair matching quality, noise and the like, so that the change detection result is not ideal.
Most existing deep learning-based change detection methods are post-classification approaches: a neural network model is used to extract high-dimensional features from the two different images respectively, and the extracted high-dimensional features are then compared to obtain the changed area. However, this processing method does not consider the negative influence of the domain difference between the images on the change detection task, so the obtained change detection result is not ideal.
Disclosure of Invention
The embodiment of the application provides a multi-mode remote sensing image change detection method, a multi-mode remote sensing image change detection model generation method and terminal equipment.
The embodiment of the application provides a method for generating a multi-mode remote sensing image change detection model, wherein the multi-mode remote sensing image change detection model comprises two domain converters, two condition discriminators and two change decision-making devices; the generation method comprises the following steps:
constructing the two domain converters and the two condition discriminators by using a cyclic consistency countermeasure network, wherein the domain converters are used for cross-domain conversion among remote sensing images of different modes, and the condition discriminators are used for condition discrimination during the cross-domain conversion;
constructing a first objective function corresponding to the cross-domain conversion;
constructing the two change decision makers of different image domains by using a twin neural network, wherein the change decision maker is used for carrying out pixel-level change detection on two images converted into the same image domain from cross-domain;
constructing a second objective function corresponding to the change detection;
and training the multi-mode remote sensing image change detection model based on the first objective function and the second objective function to obtain the trained multi-mode remote sensing image change detection model.
In some embodiments, the training the multi-modal remote sensing image change detection model based on the first objective function and the second objective function includes:
training the two domain converters based on the first objective function to obtain the two trained domain converters;
training each change decision maker based on the second objective function to respectively obtain the trained change decision makers;
and setting the trained domain converter and the change decision maker in the same image domain in series and then in parallel to obtain the trained multi-mode remote sensing image change detection model.
In some embodiments, the training the multi-modal remote sensing image change detection model based on the first objective function and the second objective function includes:
constructing a combined objective function according to the first objective function, the second objective function and their respective preset weights;
and jointly training the two domain converters and the two change decision makers based on the combined objective function to obtain a trained multi-mode remote sensing image change detection model.
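The combined objective described above can be sketched as a weighted sum of the two objective functions. The function and parameter names below are illustrative placeholders, not from the patent, and the weight values are arbitrary defaults:

```python
def combined_objective(l_conversion, l_change, w_conv=1.0, w_change=1.0):
    """Weighted sum of the first (cross-domain conversion) and second
    (change detection) objective functions; the weights stand in for the
    preset weights mentioned in the text (values here are placeholders)."""
    return w_conv * l_conversion + w_change * l_change
```

Joint training would then minimize this single scalar over the parameters of both domain converters and both change decision makers.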
In some embodiments, the two domain converters comprise a first domain converter of the first image domain and a second domain converter of the second image domain; the training of the two domain converters based on the first objective function and the two conditional discriminators comprises:
converting the respective input image samples into the image domain where the image samples are located by using the first domain converter and the second domain converter to obtain a converted image of each image sample;
converting the respective input converted images into an original image domain again by using the first domain converter and the second domain converter to obtain a reconstructed image of each image sample;
judging, by using the condition discriminator corresponding to the image domain, whether the unchanged areas among the image sample, the converted image and the reconstructed image are consistent, so as to calculate the value of the first objective function;
and judging whether the current value of the first objective function meets a preset condition, if not, adjusting network parameters in the two domain converters according to the value of the first objective function, carrying out next sample training, and stopping training until the obtained value of the first objective function meets the preset condition.
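The stop-or-continue loop above can be sketched as follows. This is a schematic, assuming the "preset condition" is the objective value dropping to a threshold; `train_step` is a hypothetical callable that performs one conversion/reconstruction/discrimination pass and parameter update:

```python
def train_until_converged(samples, train_step, threshold):
    """Run one training step per sample (forward pass through both domain
    converters, condition discrimination on the unchanged areas, parameter
    update) until the first objective function meets the preset condition,
    modeled here as the loss dropping to the threshold or below."""
    loss = float("inf")
    for sample in samples:
        loss = train_step(sample)   # returns the current objective value
        if loss <= threshold:
            break                   # preset condition met: stop training
    return loss
```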
In some embodiments, the first objective function comprises a cycle consistency loss function between the first domain converter and the second domain converter, and a Wasserstein distance metric-based adversarial loss function for each of the first domain converter and the second domain converter; the expression of the first objective function is:
L(G, F, D1X, D2Y) = L_GAN(G, D2Y) + L_GAN(F, D1X) + L_cyc(G, F)
where L(G, F, D1X, D2Y) is the first objective function; L_GAN(G, D2Y) represents the adversarial loss function between the first domain converter G and the second condition discriminator D2Y; L_GAN(F, D1X) represents the adversarial loss function between the second domain converter F and the first condition discriminator D1X; L_cyc(G, F) represents the cycle consistency loss function, in which L_cross(G) and L_cross(F) respectively represent the cross-domain consistency loss functions of the first domain converter G and the second domain converter F, L_self(G, F) represents the self-consistency loss function of the first domain converter G followed by the second domain converter F, and L_self(F, G) represents the self-consistency loss function of the second domain converter F followed by the first domain converter G.
In some embodiments, the twin neural network comprises two feature extraction layers and one decision layer for constructing the variation decision maker; the change detection of the two images converted into the same image domain from the cross-domain by using the change decision device comprises the following steps:
respectively extracting the features of the initial remote sensing image in the same image domain and the converted image obtained after the cross-domain conversion through the two feature extraction layers to obtain two feature maps;
and calculating, through the decision layer, the Euclidean distance between the pixel pair at the same position in the two feature maps, and judging based on the Euclidean distance whether the corresponding pixel belongs to a changed area, so as to obtain the current change detection map in that image domain.
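The decision-layer step above can be sketched with NumPy. This is a minimal illustration, not the patent's implementation; function and variable names are assumptions:

```python
import numpy as np

def change_map(feat_a, feat_b, threshold):
    """Decision layer sketch: feat_a and feat_b are (C, H, W) feature maps
    of the two images now in the same image domain. A pixel is marked as
    changed when the Euclidean distance between its two C-dimensional
    feature vectors exceeds the threshold."""
    dist = np.sqrt(((feat_a - feat_b) ** 2).sum(axis=0))  # (H, W) distances
    return (dist > threshold).astype(np.uint8)            # binary change map
```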
In some embodiments, the second objective function is constructed from a pixel-level contrastive loss function; the expression of the pixel-level contrastive loss function is:
L_S = (1 / (W * H)) * Σ_(i,j) [ w_nc * (1 - y(i,j)) * d(i,j)^2 + w_c * y(i,j) * max(0, m - d(i,j))^2 ]
where S represents the twin neural network; L_S represents the pixel-level contrastive loss function of the current image domain; W and H respectively represent the width and height of the feature map in the current image domain; d(i,j) represents the Euclidean distance between the pixel pair at position (i,j) of the two feature maps; w_nc and w_c respectively represent the pixel distribution weights of the unchanged and changed areas; y(i,j) represents the gray value of the pixel at position (i,j) in the binary reference change label map; m represents a preset pixel-level Euclidean distance threshold.
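A standard pixel-level contrastive loss of this shape (unchanged pairs pulled together, changed pairs pushed beyond a margin) can be written as follows; this is a sketch matching the definitions in the text, with illustrative names:

```python
import numpy as np

def pixel_contrastive_loss(dist, label, w_nc, w_c, m):
    """Pixel-level contrastive loss sketch: dist is the (H, W) map of
    Euclidean distances between feature pairs, label the binary reference
    change map (1 = changed), w_nc / w_c the unchanged / changed pixel
    distribution weights, and m the distance margin (threshold)."""
    h, w = dist.shape
    unchanged = w_nc * (1 - label) * dist ** 2              # pull unchanged pairs together
    changed = w_c * label * np.maximum(0.0, m - dist) ** 2  # push changed pairs beyond m
    return float((unchanged + changed).sum() / (w * h))
```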
The embodiment of the application further provides a multi-mode remote sensing image change detection method, which comprises the following steps:
preprocessing two acquired remote sensing images in different modes to obtain two preprocessed images;
inputting the two preprocessed images into the multi-mode remote sensing image change detection model, and outputting to obtain respective change detection graphs of two image domains; the multimode remote sensing image change detection model is obtained by adopting the multimode remote sensing image change detection model generation method;
and calculating to obtain a change detection image between the two remote sensing images in different modes according to the change detection images of the two image domains.
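The patent does not specify how the two domain-specific change maps are combined into the final result. One plausible choice, offered here purely as an assumption, is a pixel-wise intersection, keeping only pixels flagged as changed in both image domains:

```python
import numpy as np

def fuse_change_maps(map_a, map_b):
    """Hypothetical fusion of the two domain-specific change maps; the
    patent leaves the combination rule open, so this sketch keeps only
    pixels flagged as changed in both domains (intersection)."""
    return np.logical_and(map_a > 0, map_b > 0).astype(np.uint8)
```

A union (logical OR) or an average of soft scores would be equally valid fusion rules depending on whether recall or precision matters more.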
The embodiment of the application further provides a terminal device, the terminal device comprises a processor and a memory, the memory stores a computer program, and the processor is used for executing the computer program to implement the multi-mode remote sensing image change detection model generation method or the multi-mode remote sensing image change detection method.
An embodiment of the present application further provides a readable storage medium, which stores a computer program, and when the computer program is executed on a processor, the method for generating a multi-modal remote sensing image change detection model or the method for detecting a multi-modal remote sensing image change is implemented.
The embodiment of the application has the following beneficial effects:
according to the method for generating the multi-mode remote sensing image change detection model, two domain converters and two condition discriminators are obtained by constructing the cyclic consistency confrontation network, two change decision devices are obtained by constructing the double twin neural networks respectively, and then the parallel multi-network integrated model is obtained by constructing the double twin neural networks, wherein cross-domain conversion is carried out based on cyclic consistency confrontation learning, so that image domain differences among multi-mode remote sensing images can be effectively eliminated, and reliable data are provided for a change detection task; meanwhile, a full convolution twin neural network is introduced to realize the pixel level change decision of the image, so that the precision of change detection can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 shows a schematic flow chart of a method for generating a multi-modal remote sensing image change detection model according to an embodiment of the present application;
fig. 2 is a schematic flowchart illustrating a domain converter training process of a multi-modal remote sensing image change detection model generation method according to an embodiment of the present application;
fig. 3 is a schematic flow chart illustrating the training of a change decision maker of the multi-modal remote sensing image change detection model generation method according to the embodiment of the present application;
fig. 4 shows a schematic structural diagram of a multi-modal remote sensing image change detection model according to an embodiment of the present application;
fig. 5 is a schematic flow chart illustrating a multi-modal remote sensing image change detection method according to an embodiment of the present application;
fig. 6 shows a schematic structural diagram of a multi-modal remote sensing image change detection model generation device according to an embodiment of the present application;
fig. 7 shows a schematic structural diagram of the multi-modal remote sensing image change detection apparatus according to the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
Because the imaging mechanisms and observation characteristics of multi-mode remote sensing images differ, change detection faces certain difficulties and challenges, mainly the following: (1) Obvious geometric deformation exists among remote sensing images acquired of the same geographical position at different times through different sensors, wave bands and observation angles, and image registration is the basis for realizing change detection. (2) Heterogeneous remote sensing images show obvious radiation characteristic differences in color distribution, textural features and context information: even for accurately registered two-phase remote sensing images, differences in illumination conditions, atmospheric conditions, acquisition time and sensors cause such radiation differences in both changed and unchanged areas, and this domain difference among the images can obviously degrade the change detection effect. (3) The domain differences, image noise, information redundancy and the like of multi-mode remote sensing images restrict the universality and application effect of existing change detection methods.
In order to solve at least one of the above problems, an embodiment of the present application provides a multi-modal remote sensing image change detection model, and a cyclic consistency countermeasure network and a twin neural network are introduced to jointly construct a parallel multi-network integrated change detection model, so that not only can domain differences between different modal remote sensing images be effectively eliminated to ensure conversion accuracy of cross-domain conversion, but also a series of operations from cross-domain conversion, feature extraction to change decision and the like can be realized, and thus, an end-to-end change detection task is realized.
Example 1
Referring to fig. 1, the present embodiment provides a method for generating a multi-modal remote sensing image change detection model, where the multi-modal remote sensing image change detection model includes two domain converters and two change decision makers, so that not only can image domain differences between multi-modal remote sensing images be effectively eliminated, but also change detection between integrated multi-modal cross-domain images can be realized.
Exemplarily, the multi-modal remote sensing image change detection model mainly comprises two domain converters, two condition discriminators and two change decision makers, wherein the domain converters and the change decision makers in the same image domain are arranged in series, and different image domains are arranged in parallel, so that an integrated structure is formed. A method for generating the multimodal remote sensing image change detection model will be described below. As shown in fig. 1, the generation method includes:
step S110, two domain converters and two condition discriminators are constructed by using the cyclic consistency countermeasure network, wherein the domain converters are used for cross-domain conversion among the remote sensing images of different modes, and the condition discriminators are used for condition discrimination during the cross-domain conversion.
The cyclic consistency countermeasure network is a generative adversarial network (GAN) built on a cycle consistency mechanism. According to the cycle consistency mechanism, in an ideal cross-domain conversion, the unchanged areas of an input image and its converted output image should be similar; on the other hand, by reconverting the converted image back to the original image domain, the resulting reconstructed image should be identical to the original input image. To ensure that cross-domain conversion is achieved without losing image information, in one embodiment the network is constructed from residual (ResNet) modules to obtain the above-mentioned domain converters; for example, one domain converter can be built from 6-12 ResNet modules.
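The role of the residual modules can be sketched abstractly: each block adds a learned transformation to its input, so content flows through the skip path while the transformation adjusts style. This is a toy illustration (the `transform` callables stand in for learned convolutional layers, which are not shown):

```python
import numpy as np

def residual_block(x, transform):
    """Residual connection: the output is the input plus a learned
    transformation, so domain style can change while the skip path
    preserves image content."""
    return x + transform(x)

def domain_converter(x, blocks):
    """Simplified domain converter body: a chain of residual blocks
    (the text suggests 6-12); the encoder/decoder layers of a real
    converter are omitted, and `blocks` stands in for learned layers."""
    for transform in blocks:
        x = residual_block(x, transform)
    return x
```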
In this embodiment, the domain converter and the condition discriminator of two different image domains in the change detection model are constructed by using the cyclic consistency countermeasure network, which can be used to eliminate the difference between color distribution, texture features and context information between input remote sensing images, so as to avoid the occurrence of "same-object different-spectrum" and "same-spectrum foreign matter".
The remote sensing images of different modes mainly refer to two registered remote sensing images from different shooting periods; because the two images differ obviously in color distribution, texture characteristics and context information, they can be regarded as a pair of heterogeneous remote sensing images with similar content but different domain characteristics.
The above-mentioned cross-domain conversion refers to converting the remote sensing image in one image domain into another image domain. Assuming that the two domain converters are a first domain converter and a second domain converter respectively, exemplarily, the two domain converters are used for performing cross-domain conversion between remote sensing images of different modalities, including: the remote sensing image in the first image domain is converted into the second image domain by the first domain converter, and the remote sensing image in the second image domain is converted into the first image domain by the second domain converter, so that the first converted image in the second image domain and the second converted image in the first image domain can be obtained.
To train the two constructed domain converters, the present embodiment will also construct two condition discriminators for only the unchanged regions corresponding to the two domain converters. For example, the two condition discriminators can be used to discriminate the unchanged regions of the input image and the output image of the domain converter corresponding to the image domain, so as to calculate the loss of the current domain converter during the cross-domain conversion.
For the change detection task, only the unchanged area can serve as the reference true value of the domain conversion. In backward propagation, a general discriminator would also realize pixel-level mapping of the changed area; such a training mode forces the domain converter to modify the inherent information of the changed area while converting the domain, which reduces the training precision of the domain converter and finally harms the change detection effect. Therefore, the two condition discriminators designed in this embodiment discriminate only the unchanged area in the two-phase images with reference to the change label map, so that the domain converter performs domain conversion only according to the image features of the unchanged area, thereby well ensuring the training precision of the domain converter.
Step S120, a first objective function corresponding to cross-domain conversion is constructed.
The objective function is also called a loss function, and is mainly used for evaluating the loss generated by each domain converter during the cross-domain conversion so as to continuously update the network parameters for constructing the two domain converters. It is to be understood that the "first" of the first objective functions is only used to distinguish the loss function corresponding to the change detection after that.
Considering that the native countermeasure network structure usually adopts KL divergence or JS divergence to construct the loss function, but in actual application, problems such as gradient disappearance, gradient instability, mode collapse and the like occur, for this reason, Wasserstein distance measurement is introduced in the embodiment to perform the loss discrimination of cross-domain conversion. The Wasserstein distance has excellent smooth characteristic relative to KL divergence and JS divergence, the continuous difference between two image domains can be calculated, the problems of instability and gradient disappearance of the traditional training are solved, a reliable training progress index is provided, and the training quality is improved.
In this embodiment, the first objective function mainly includes two major types of losses, namely a cyclic consistency loss between the first domain converter and the second domain converter, and a countervailing loss based on the Wasserstein distance metric corresponding to the first domain converter and the second domain converter, respectively. The cyclic consistency loss can prevent the two learned domain converters from contradicting each other, and the antagonistic loss can enable the image generated by the source domain to be closer to the target domain.
In one embodiment, the expression of the first objective function is:
min_(G,F) max_(D1X,D2Y) L(G, F, D1X, D2Y) = L_GAN(G, D2Y) + L_GAN(F, D1X) + L_cyc(G, F)
where L(G, F, D1X, D2Y) is the first objective function; G and D1X (abbreviated as D1) respectively represent the first domain converter and the corresponding first condition discriminator of the first image domain; F and D2Y (abbreviated as D2) respectively represent the second domain converter and the corresponding second condition discriminator of the second image domain; L_GAN(G, D2) represents the adversarial loss function between the first domain converter G and the second condition discriminator D2; L_GAN(F, D1) represents the adversarial loss function between the second domain converter F and the first condition discriminator D1; L_cyc(G, F) represents the cycle consistency loss function between the two domain converters G and F. The notation min_i max_j f means solving for the value of the first variable i when f takes its minimum, and the value of the second variable j when f takes its maximum.
(a) For the adversarial loss functions L_GAN(G, D2) and L_GAN(F, D1) mentioned above: the first domain converter G converts an image x of the first image domain (T1 domain) into the second image domain (T2 domain), and the second domain converter F converts an image y of the T2 domain into the T1 domain. The two condition discriminators D1 and D2 each attempt to distinguish converted images from original images in their respective image domains. During model training, the two domain converters G and F attempt to decrease the loss value of the domain conversion, while the two condition discriminators D1 and D2 attempt to increase it; these two adversarial processes are therefore denoted L_GAN(G, D2) and L_GAN(F, D1).
Exemplarily, the two symmetric Wasserstein-distance-based adversarial loss expressions are:

$$ \mathcal{L}_{GAN}(G, D_2) = \mathbb{E}_{y \sim T_2}\big[D_2(y)\big] - \mathbb{E}_{x \sim T_1}\big[D_2(G(x))\big] $$

$$ \mathcal{L}_{GAN}(F, D_1) = \mathbb{E}_{x \sim T_1}\big[D_1(x)\big] - \mathbb{E}_{y \sim T_2}\big[D_1(F(y))\big] $$

wherein G(x) denotes the converted image of the image x through the first domain converter G; F(y) denotes the converted image of the image y through the second domain converter F; and E[·] denotes the expected value.
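As an illustrative sketch (not the patented implementation), a Wasserstein-style adversarial loss of this form can be evaluated from discriminator (critic) scores as a difference of expectations; the function name and the toy score arrays below are assumptions for illustration:

```python
import numpy as np

def wasserstein_adv_loss(d_real, d_fake):
    """Wasserstein-style adversarial loss: E[D(real)] - E[D(converted)].

    The discriminator tries to maximize this gap; the domain converter
    tries to minimize it by making converted images score like real ones.
    d_real / d_fake: critic scores for real and converted images.
    """
    return float(np.mean(d_real) - np.mean(d_fake))

# Toy scores: the critic rates real T2 images high, converted ones low,
# so the gap (loss) is large and the converter still has work to do.
d2_real = np.array([0.9, 0.8, 1.0])   # D2(y) on real T2 images
d2_fake = np.array([0.1, 0.2, 0.0])   # D2(G(x)) on converted images
loss_G = wasserstein_adv_loss(d2_real, d2_fake)
```

As the converter improves, D2(G(x)) rises toward D2(y) and the gap shrinks toward zero.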
(b) The cyclic consistency loss L_cyc(G, F) between the two domain converters G and F described above comprises two types of terms, namely cross-domain consistency losses and self-consistency losses:

$$ \mathcal{L}_{cyc}(G, F) = \mathcal{L}_{cd}(G) + \mathcal{L}_{cd}(F) + \mathcal{L}_{self}(G, F) + \mathcal{L}_{self}(F, G) $$

wherein L_cd(G) and L_cd(F) respectively denote the cross-domain consistency loss functions of the first domain converter G and the second domain converter F; L_self(G, F) denotes the self-consistency loss function of the first domain converter G to the second domain converter F; and L_self(F, G) denotes the self-consistency loss function of the second domain converter F to the first domain converter G.
It will be appreciated that the above cross-domain consistency losses are pixel-level distance measures that constrain only the unchanged areas of the paired images, while the self-consistency losses ensure that an image can be remapped back to the original input image after passing through the two converters in sequence. In one embodiment, both of these loss functions can be constructed using the L1 loss function.
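A minimal numeric sketch of the two L1-based consistency terms, under the assumption that images are arrays and the converters are given as callables (the function names and the toy doubling/halving converters are illustrative, not from the patent):

```python
import numpy as np

def l1_loss(a, b):
    """Mean absolute (L1) distance between two images."""
    return float(np.mean(np.abs(a - b)))

def self_consistency_loss(x, G, F):
    """||F(G(x)) - x||_1: x must map back to itself through both converters."""
    return l1_loss(F(G(x)), x)

def cross_domain_consistency_loss(x_unchanged, gx_unchanged):
    """L1 distance restricted to the unchanged areas of a paired image
    (the caller is assumed to have masked out the changed areas)."""
    return l1_loss(x_unchanged, gx_unchanged)

# Toy converters: G doubles pixel values, F halves them - a perfect cycle,
# so the self-consistency loss is exactly zero.
G = lambda img: img * 2.0
F = lambda img: img / 2.0
x = np.array([[0.2, 0.4], [0.6, 0.8]])
loss_self = self_consistency_loss(x, G, F)
```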
It will be appreciated that the input image and the output image of the domain converter serve as the basic data for change detection; therefore, a change decision maker needs to be constructed to perform the pixel-level change detection.
Step S130, two change decision makers of different image domains are constructed by using the twin neural network, wherein the change decision maker is used for performing pixel-level change detection on two images converted into the same image domain across domains.
The twin neural network is a coupled architecture built from two neural networks. In this embodiment, a twin neural network is used to construct a change decision maker that performs weight-sharing feature extraction and pixel-level change region determination on two images of the same image domain. It is worth noting that the twin neural network of the present embodiment employs a fully convolutional neural network and a pixel-level distance measure, in place of the conventional twin network that employs a convolutional neural network and a vector-level distance measure, for detecting pixel-level image differences; the pixel-level distance measure is mainly used to measure the similarity between pixel pairs. This can further improve the change detection accuracy.
In one embodiment, the twin neural network used to construct the change decision maker may include two weight-sharing feature extraction layers and one decision layer. Exemplarily, performing change detection with the change decision maker on two images converted across domains into the same image domain includes: extracting features from the initial remote sensing image of that image domain and from the converted image transformed from the other image domain through the two feature extraction layers, respectively, to obtain two feature maps; then calculating, through the decision layer, the Euclidean distance between the pixel pair at each position of the two feature maps, and judging, based on that Euclidean distance, the probability that the corresponding pixel belongs to a changed region, thereby obtaining the change prediction map of the current image domain, i.e., the output change detection map.
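As an illustrative sketch of the weight-sharing idea (the 1×1 "convolution" stand-in for the feature extraction layers and all names are assumptions, not the patented network), the same weights are applied to both inputs and the decision layer compares the resulting feature maps per pixel:

```python
import numpy as np

def extract_features(img, w):
    """Shared feature extraction: a 1x1 convolution as a channel mix.
    img is (C_in, H, W); w is (C_out, C_in); result is (C_out, H, W)."""
    return np.tensordot(w, img, axes=([1], [0]))

def decision_layer(f1, f2):
    """Per-pixel Euclidean distance between two (C, H, W) feature maps."""
    return np.sqrt(np.sum((f1 - f2) ** 2, axis=0))

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 3))         # ONE weight set, used by BOTH branches
img_a = rng.standard_normal((3, 8, 8))  # image already in this domain
img_b = img_a.copy()                    # identical "converted" image
dist = decision_layer(extract_features(img_a, w), extract_features(img_b, w))
# Identical inputs through shared weights yield zero distance everywhere.
```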
The probability that a pixel belongs to the changed region can be calculated as follows:

$$ P(i, j) = \min\!\left(1,\ \frac{d(i, j)}{m}\right), \qquad d(i, j) = \big\lVert f_1(i, j) - f_2(i, j) \big\rVert_2 $$

wherein P(i, j) denotes the probability that the pixel at position (i, j) belongs to a changed region: a value close to 0 indicates that the pixel belongs to an unchanged area, and a value close to 1 indicates that it belongs to a changed area. P1(i, j) and P2(i, j) denote this probability in the T1 and T2 domains, respectively; m denotes a preset pixel-level Euclidean distance threshold; ||·||_2 denotes the L2 distance measure; and f_1 and f_2 denote the two feature maps extracted by the change decision maker from its two inputs. For P1(i, j), the inputs are x, the input image of the T1 domain input to the change decision maker S1, and F(y), the converted image obtained by converting an image originally of the T2 domain into the T1 domain through the second domain converter F. For P2(i, j), the inputs are y, the input image of the T2 domain input to the change decision maker S2, and G(x), the converted image obtained by converting an image originally of the T1 domain into the T2 domain through the first domain converter G.
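The per-pixel computation can be sketched as follows; clipping the m-normalized distance into [0, 1] is an assumption for illustration, since the text only states that the probability approaches 0 for unchanged pixels and 1 for changed pixels:

```python
import numpy as np

def change_probability(feat_a, feat_b, m):
    """Per-pixel change probability from two (C, H, W) feature maps.

    d(i, j) = L2 distance across channels; the probability is d / m
    clipped to [0, 1], so 0 ~ unchanged and 1 ~ changed.
    """
    d = np.sqrt(np.sum((feat_a - feat_b) ** 2, axis=0))  # (H, W) distance map
    return np.clip(d / m, 0.0, 1.0)

# Toy single-channel feature maps: only pixel (0, 1) differs strongly.
f1 = np.array([[[0.0, 0.0], [0.0, 0.0]]])
f2 = np.array([[[0.0, 2.0], [0.0, 0.0]]])
P = change_probability(f1, f2, m=1.0)
# P[0, 1] saturates at 1.0 (changed); the rest stay at 0.0 (unchanged).
```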
Step S140, a second objective function corresponding to the change detection is constructed.
The second objective function is mainly used for evaluating the contrastive loss between the change prediction map output by the change decision maker and the reference change ground truth, so as to continuously update the parameters of the twin neural networks constructed for the two domains and obtain the trained change decision makers.
Taking one change decision maker as an example, in one embodiment, the expression of the second objective function is:

$$ \mathcal{L}(S) = \frac{1}{W H} \sum_{i=1}^{W} \sum_{j=1}^{H} \Big[ w_u \big(1 - y(i, j)\big)\, d(i, j)^{2} + w_c\, y(i, j)\, \max\!\big(0,\ m - d(i, j)\big)^{2} \Big] $$

wherein S denotes the twin neural network, and may also denote the change decision maker; L(S) denotes the pixel-level contrastive loss function of the current image domain; W and H respectively denote the width and height of the feature map images of the current image domain; d(i, j) denotes the Euclidean distance between the pixel pair at position (i, j) of the two feature maps; w_u and w_c denote the pixel distribution weights of the unchanged area and the changed area, respectively, and may be calculated, for example, using a global average balance algorithm; y(i, j) denotes the gray value of the pixel at position (i, j) of the binary reference change label map; and m denotes the preset pixel-level Euclidean distance threshold.
The above Euclidean distance d(i, j) is calculated as:

$$ d(i, j) = \big\lVert f_1(i, j) - f_2(i, j) \big\rVert_2 = \sqrt{\sum_{c=1}^{C} \big( f_1^{c}(i, j) - f_2^{c}(i, j) \big)^{2}} $$

wherein f_1 and f_2 denote the two feature maps and C denotes the number of feature channels.
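A numeric sketch of a weighted pixel-level contrastive loss of this kind; the function signature and the toy distance/label maps are illustrative assumptions:

```python
import numpy as np

def contrastive_loss(d, y, m, w_u=1.0, w_c=1.0):
    """Weighted pixel-level contrastive loss over a distance map.

    d: (H, W) Euclidean distances between feature pairs.
    y: (H, W) binary reference labels (1 = changed, 0 = unchanged).
    m: distance threshold (margin).
    Unchanged pixels are pulled together (d^2 term); changed pixels
    are pushed at least m apart (hinge term).
    """
    unchanged = w_u * (1.0 - y) * d ** 2
    changed = w_c * y * np.maximum(0.0, m - d) ** 2
    return float(np.mean(unchanged + changed))

d = np.array([[0.0, 2.0], [0.5, 0.0]])  # distance map
y = np.array([[0, 1], [0, 0]])          # only pixel (0, 1) is truly changed
loss = contrastive_loss(d, y, m=1.0)
# Only the unchanged pixel at (1, 0) contributes: 0.5^2 / 4 = 0.0625.
```

A perfect prediction (zero distance on unchanged pixels, distance at least m on changed pixels) drives this loss to exactly zero.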
therefore, after the structure and the loss function of the multi-modal remote sensing image change detection model are constructed, the model can be used after being trained.
And S150, training the multi-mode remote sensing image change detection model based on the first objective function and the second objective function to obtain the trained multi-mode remote sensing image change detection model.
The trained multi-mode remote sensing image change detection model is used for being deployed in an actual system so as to carry out change detection on subsequently acquired remote sensing images in different modes, and therefore change detection results among the remote sensing images in different modes are obtained.
In order to obtain the trained multi-mode remote sensing image change detection model, exemplarily, two domain converters at the front stage and two change decision makers at the rear stage can be respectively trained, and then the trained domain converters are combined with the change decision makers to generate the trained multi-mode remote sensing image change detection model.
In one embodiment, the step S150 includes: training the two domain converters based on the first objective function and the two condition discriminators to obtain the two trained domain converters; training each change decision maker based on a second objective function to respectively obtain the trained change decision makers; and connecting the trained domain converter and the trained change decision maker of the same image domain in series and then in parallel to obtain a trained model.
The training process of the above-described domain converter is explained below. If two domain converters are defined as a first domain converter G in a first image domain (T1 domain) and a second domain converter F in a second image domain (T2 domain), respectively, as shown in fig. 2, the training process comprises:
converting the input image samples A and B into the image domain of the opposite side by using a first domain converter G and a second domain converter F to obtain a converted image of each image sample; then, the first domain converter G and the second domain converter F are used to convert the respective input converted images into the original image domain again, so as to obtain a reconstructed image of each image sample.
In each image domain, the condition discriminators D1 and D2 of the corresponding image domain are used to judge whether the unchanged areas among the image sample, the converted image and the reconstructed image are consistent, obtaining a corresponding discrimination map; the value of the first objective function is then calculated according to the constructed first objective function. Here, the Wasserstein distance metric is used for the loss calculation.
Then, whether the current value of the first objective function meets a preset condition is judged, wherein the preset condition can be set according to actual requirements. If the first objective function value does not meet the preset condition, network parameters of the two domain converters are adjusted according to the current first objective function value, next image sample training is carried out, and the training is stopped until the obtained first objective function value meets the preset condition.
The training process of the above-described change decision maker is explained below. Taking an example of a change decision-maker in an image domain, as shown in fig. 3, the change decision-maker is constructed by a full convolution twin neural network, which includes two identical neural networks (CNN) for feature extraction.
The two registered, cross-domain-converted image samples of different time phases are respectively input into the two CNN networks for feature extraction, obtaining two feature maps; the pixel-level Euclidean distances between the feature maps are then calculated, and the probability that each pixel belongs to the changed region is judged to obtain the corresponding probability distribution, thereby obtaining the change prediction map. The contrastive loss between the change prediction map and the reference change annotation map, i.e., the value of the second objective function, is then calculated, and the network parameters of the change decision maker are continuously updated until the finally calculated value of the second objective function meets the preset condition, at which point the trained change decision maker is obtained.
Then, as shown in fig. 4, the trained domain converter and the change decision device in the same image domain are serially arranged, and then the two image domains are arranged in parallel to form a parallel-type multi-network integration model.
As an optional implementation manner, in step S150, the multi-modal remote sensing image change detection model may also be trained as a whole, that is, jointly trained with the constructed first objective function and second objective function: a joint objective function is constructed from the first objective function, the second objective function and their respective corresponding preset weights, and the two domain converters and two change decision makers of the model are then co-trained based on the joint objective function to obtain the trained multi-modal remote sensing image change detection model. For example, in one embodiment, the joint objective function may be set to:

$$ \mathcal{L}_{total} = \lambda_1\, \mathcal{L}(G, F, D_1, D_2) + \lambda_2\, \mathcal{L}(S_1) + \lambda_3\, \mathcal{L}(S_2) $$

wherein S1 and S2 respectively denote the change decision makers (also twin neural networks) of the two image domains, and λ1, λ2 and λ3 denote the weight values corresponding to the respective loss functions, which may be set according to actual requirements and are not limited herein.
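A sketch of combining the component losses with preset weights; the loss names and weight values below are assumptions for illustration:

```python
def joint_objective(losses, weights):
    """Weighted sum of the component losses of the integrated model.

    losses / weights: dicts keyed by loss name, e.g. the domain-conversion
    (first) objective and the two contrastive (second) objectives.
    """
    assert losses.keys() == weights.keys()
    return sum(weights[k] * losses[k] for k in losses)

losses = {"domain_conversion": 1.4, "S1": 0.1, "S2": 0.3}
weights = {"domain_conversion": 1.0, "S1": 10.0, "S2": 10.0}
total = joint_objective(losses, weights)  # -> approximately 5.4
```

In practice the weights trade off how strongly the cross-domain conversion quality versus the change-decision accuracy drives the shared training.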
The multi-modal remote sensing image change detection model generation method of this embodiment constructs a parallel multi-network integrated model by combining a cycle-consistency adversarial network with dual twin neural networks. The cross-domain remote sensing image conversion, based on cycle-consistency adversarial learning and the Wasserstein distance metric, can realize the mapping from the source data domain to the target domain even in the absence of paired annotation data. Meanwhile, the mapping is learned with adversarial losses, condition discriminators acting only on the unchanged areas are designed, and loss discrimination is carried out with the Wasserstein distance metric, so that the data of the source domain and the target domain are aligned in the feature space, the image domain differences among multi-modal remote sensing images are effectively eliminated, and reliable data are provided for the change detection task. In addition, fully convolutional twin neural networks are introduced to realize pixel-level change decisions on the images, which can improve the precision of change detection.
Example 2
Referring to fig. 5, the present embodiment provides a method for detecting a change of a multi-modal remote sensing image, which can be used to detect a change of a multi-modal remote sensing image in different time phases. Exemplarily, the method comprises:
step S210, preprocessing two acquired original remote sensing images in different modes to obtain two preprocessed images.
Considering that geometric deformation often exists among multiple remote sensing images captured of the same geographic location in different periods, a geometric correction operation, also called image registration, is required before change detection to ensure its accuracy. Exemplarily, the preprocessing may include, but is not limited to, geometric correction, radiometric calibration and atmospheric correction. After the geometric correction processing is performed on the collected original remote sensing images, two preprocessed images that are geometrically registered with each other can be obtained.
And S220, inputting the two preprocessed images into the multi-modal remote sensing image change detection model, and outputting to obtain respective change detection graphs of the two image domains.
The multi-modal remote sensing image change detection model can be obtained by adopting the method of the embodiment 1, and the description is not repeated here. Exemplarily, the two registered preprocessed images are input into a trained multi-modal remote sensing image change detection model, and two change detection graphs can be predicted and output.
In one embodiment, the multi-modal remote sensing image change detection model first performs cross-domain conversion on the two preprocessed images through the two domain converters and the two condition discriminators, obtaining the converted image of each preprocessed image in the image domain of the other; then, the change decision maker of each image domain performs change detection on the preprocessed image and the converted image of that same image domain, thereby outputting the change detection results of the two image domains.
And step S230, calculating to obtain a change detection diagram between the two remote sensing images in different modes according to the change detection diagrams of the two image domains.
Considering that there may be some differences between the two change detection maps of the different image domains, the final change detection map may be calculated from the two obtained change detection maps by, for example, averaging.
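One simple fusion, assuming the two change detection maps are probability arrays of equal size, is to average them and binarize; the 0.5 threshold is an illustrative assumption:

```python
import numpy as np

def fuse_change_maps(p1, p2, threshold=0.5):
    """Average the change maps of the two image domains, then binarize."""
    fused = (p1 + p2) / 2.0
    return (fused >= threshold).astype(np.uint8)

p1 = np.array([[0.9, 0.2], [0.6, 0.1]])  # change map from the T1 domain
p2 = np.array([[0.8, 0.1], [0.2, 0.1]])  # change map from the T2 domain
mask = fuse_change_maps(p1, p2)
# Pixel (0, 0): mean 0.85 -> changed; pixel (1, 0): mean 0.4 -> unchanged.
```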
The multi-mode remote sensing image change detection method of the embodiment carries out registration processing and the like on the collected original remote sensing image, and adopts the multi-mode remote sensing image change detection model to carry out change detection on the remote sensing image after registration processing, so that the image domain difference among the remote sensing images can be effectively eliminated, and reliable data are provided for subsequent change detection; and finally, the change detection results in the two image domains are integrated to obtain a final change detection result, so that the change detection accuracy and the like can be further improved.
Example 3
Referring to fig. 6, based on the method of embodiment 1, this embodiment provides a device 100 for generating a multi-modal remote sensing image change detection model, where the multi-modal remote sensing image change detection model includes two domain converters, two condition discriminators, and two change decision makers. Exemplarily, the multi-modal remote sensing image change detection model generation apparatus 100 includes:
a domain conversion construction module 110, configured to construct the two domain converters and the two condition discriminators by using a round robin consistency countermeasure network; the domain converter is used for cross-domain conversion among remote sensing images in different modes, and the condition discriminator is used for condition discrimination during the cross-domain conversion.
An objective function constructing module 120, configured to construct a first objective function corresponding to the cross-domain conversion;
a change decision device constructing module 130, configured to construct the two change decision devices in different image domains by using a twin neural network, where the change decision device is configured to perform change detection on two images converted into the same image domain across domains.
The objective function constructing module 120 is further configured to construct a second objective function corresponding to the change detection.
And the model training module 140 is configured to train the multi-modal remote sensing image change detection model based on the first objective function and the second objective function to obtain a trained multi-modal remote sensing image change detection model.
It is to be understood that the apparatus of the present embodiment corresponds to the method of embodiment 1 described above, and the alternatives of embodiment 1 described above are equally applicable to the present embodiment, and therefore, the description thereof will not be repeated.
Example 4
Referring to fig. 7, based on the method of the embodiment 2, the present embodiment provides a multi-modal remote sensing image change detection apparatus 200, where the multi-modal remote sensing image change detection apparatus 200 performs change detection by using the multi-modal remote sensing image change detection model generated by the method of the embodiment 1. Exemplarily, the multi-modal remote sensing image change detection apparatus 200 includes:
the preprocessing module 210 is configured to preprocess the two acquired remote sensing images in different modalities to obtain two preprocessed images.
And the change detection module 220 is configured to input the two preprocessed images into the multi-modal remote sensing image change detection model, and output a change detection diagram of each of the two image domains.
And the calculating module 230 is configured to calculate a change detection map between the two remote sensing images in different modalities according to the change detection maps of the two image domains.
It is to be understood that the apparatus of the present embodiment corresponds to the method of the above embodiment 2, and the alternatives of the above embodiment 2 are also applicable to the present embodiment, so that the description thereof will not be repeated.
The application also provides a terminal device, for example, the terminal device can be a computer, a server and the like. The terminal device exemplarily includes a memory and a processor, where the memory stores a computer program, and the processor executes the computer program, so that the terminal device executes the functions of the above method or the above modules in the above apparatus.
The application also provides a readable storage medium for storing the computer program used in the terminal equipment.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, each functional module or unit in each embodiment of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a smart phone, a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application.

Claims (10)

1. A multi-mode remote sensing image change detection model generation method is characterized in that the multi-mode remote sensing image change detection model comprises two domain converters, two condition discriminators and two change decision makers; the generation method comprises the following steps:
constructing the two domain converters and the two condition discriminators by using a cyclic consistency countermeasure network, wherein the domain converters are used for cross-domain conversion among remote sensing images of different modes, and the condition discriminators are used for condition discrimination during the cross-domain conversion;
constructing a first objective function corresponding to the cross-domain conversion;
constructing the two change decision makers of different image domains by using a twin neural network, wherein the change decision maker is used for carrying out pixel-level change detection on two images converted into the same image domain from cross-domain;
constructing a second objective function corresponding to the change detection;
and training the multi-mode remote sensing image change detection model based on the first objective function and the second objective function to obtain the trained multi-mode remote sensing image change detection model.
2. The method of claim 1, wherein training the multi-modal remote sensing imagery change detection model based on the first objective function and the second objective function comprises:
training the two domain converters based on the first objective function and the two condition discriminators to obtain the two trained domain converters;
training each change decision maker based on the second objective function to respectively obtain the trained change decision makers;
and setting the trained domain converter and the change decision maker in the same image domain in series and then in parallel to obtain the trained multi-mode remote sensing image change detection model.
3. The method of claim 1, wherein training the multi-modal remote sensing imagery change detection model based on the first objective function and the second objective function comprises:
constructing a combined objective function according to the first objective function, the second objective function and the respective corresponding preset weights to obtain a combined objective function;
and jointly training the two domain converters and the two change decision makers based on the combined objective function to obtain a trained multi-mode remote sensing image change detection model.
4. The method of claim 2, wherein the two domain converters comprise a first domain converter of a first image domain and a second domain converter of a second image domain; the training of the two domain converters based on the first objective function and the two conditional discriminators comprises:
converting the respective input image samples into the image domain where the image samples are located by using the first domain converter and the second domain converter to obtain a converted image of each image sample;
converting the respective input converted images into an original image domain again by using the first domain converter and the second domain converter to obtain a reconstructed image of each image sample;
judging whether unchanged areas between the image sample, the converted image and the reconstructed image are consistent or not by using the condition judger corresponding to the image domain so as to calculate to obtain a value of the first objective function;
and judging whether the current value of the first objective function meets a preset condition, if not, adjusting network parameters in the two domain converters according to the value of the first objective function, carrying out next sample training, and stopping training until the obtained value of the first objective function meets the preset condition.
5. The method of claim 4, wherein the first objective function comprises a cyclic consistency loss function between the first domain converter and the second domain converter, and a Wasserstein-distance-metric-based adversarial loss function for each of the first domain converter and the second domain converter; the expression of the first objective function is:

$$ \mathcal{L}(G, F, D_1, D_2) = \mathcal{L}_{GAN}(G, D_2) + \mathcal{L}_{GAN}(F, D_1) + \mathcal{L}_{cyc}(G, F), \qquad \mathcal{L}_{cyc}(G, F) = \mathcal{L}_{cd}(G) + \mathcal{L}_{cd}(F) + \mathcal{L}_{self}(G, F) + \mathcal{L}_{self}(F, G) $$

wherein L(G, F, D1, D2) is the first objective function; L_GAN(G, D2) denotes the adversarial loss function between the first domain converter G and the second condition discriminator D2Y; L_GAN(F, D1) denotes the adversarial loss function between the second domain converter F and the first condition discriminator D1X; L_cyc(G, F) denotes the cyclic consistency loss function; L_cd(G) and L_cd(F) respectively denote the cross-domain consistency loss functions of the first domain converter G and the second domain converter F; L_self(G, F) denotes the self-consistency loss function of the first domain converter G to the second domain converter F; and L_self(F, G) denotes the self-consistency loss function of the second domain converter F to the first domain converter G.
6. The method of claim 1, wherein the twin neural network comprises two feature extraction layers and one decision layer for constructing the change decision maker; the change detection of the two images converted into the same image domain from the cross-domain by using the change decision device comprises the following steps:
respectively extracting the features of the initial remote sensing image in the same image domain and the converted image obtained after the cross-domain conversion through the two feature extraction layers to obtain two feature maps;
and calculating Euclidean distances between pixel pairs at the same positions in the two feature images through the decision layer, and judging whether the corresponding pixels belong to a change region or not based on the Euclidean distances so as to obtain a current change detection image in the same image region.
7. The method of claim 6, wherein the second objective function is constructed from a pixel-level contrastive loss function; the expression of the pixel-level contrastive loss function is:

$$ \mathcal{L}(S) = \frac{1}{W H} \sum_{i=1}^{W} \sum_{j=1}^{H} \Big[ w_u \big(1 - y(i, j)\big)\, d(i, j)^{2} + w_c\, y(i, j)\, \max\!\big(0,\ m - d(i, j)\big)^{2} \Big] $$

wherein S denotes the twin neural network; L(S) denotes the pixel-level contrastive loss function of the current image domain; W and H respectively denote the width and height of the feature map images of the current image domain; d(i, j) denotes the Euclidean distance between the pixel pair at position (i, j) of the two feature maps; w_u and w_c respectively denote the pixel distribution weights of the unchanged area and the changed area; y(i, j) denotes the gray value of the pixel at position (i, j) of the binary reference change label map; and m denotes a preset pixel-level Euclidean distance threshold.
8. A multi-mode remote sensing image change detection method is characterized by comprising the following steps:
preprocessing two acquired remote sensing images of different modes to obtain two preprocessed images;
inputting the two preprocessed images into the multi-mode remote sensing image change detection model to obtain respective change detection maps of the two image domains, wherein the multi-mode remote sensing image change detection model is generated by the method of any one of claims 1-7;
and calculating a change detection map between the two remote sensing images of different modes from the change detection maps of the two image domains.
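The final step of claim 8 fuses the two per-domain change maps into one result. The claim does not fix the fusion rule, so the sketch below uses a pixel-wise intersection (a pixel is reported as changed only when both domains agree) purely as one plausible choice; the function name and rule are assumptions, not the patented method.

```python
import numpy as np

def fuse_change_maps(map_x: np.ndarray, map_y: np.ndarray) -> np.ndarray:
    """Fuse two per-domain binary change maps (H, W) into a single map.
    Intersection keeps only pixels both domains mark as changed."""
    return np.logical_and(map_x > 0, map_y > 0).astype(np.uint8)
```

A union (`np.logical_or`) would instead favor recall over precision; which trade-off is appropriate depends on the application.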
9. A terminal device, characterized in that the terminal device comprises a processor and a memory, the memory storing a computer program for execution by the processor to carry out the method of any one of claims 1 to 8.
10. A readable storage medium, characterized in that it stores a computer program which, when executed on a processor, implements the method according to any one of claims 1 to 8.
CN202110847669.4A 2021-07-27 2021-07-27 Multi-mode remote sensing image change detection method, model generation method and terminal equipment Pending CN113298056A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110847669.4A CN113298056A (en) 2021-07-27 2021-07-27 Multi-mode remote sensing image change detection method, model generation method and terminal equipment

Publications (1)

Publication Number Publication Date
CN113298056A true CN113298056A (en) 2021-08-24

Family

ID=77331109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110847669.4A Pending CN113298056A (en) 2021-07-27 2021-07-27 Multi-mode remote sensing image change detection method, model generation method and terminal equipment

Country Status (1)

Country Link
CN (1) CN113298056A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111126482A (en) * 2019-12-23 2020-05-08 自然资源部国土卫星遥感应用中心 Remote sensing image automatic classification method based on multi-classifier cascade model
CN111539316A (en) * 2020-04-22 2020-08-14 中南大学 High-resolution remote sensing image change detection method based on double attention twin network
CN111640159A (en) * 2020-05-11 2020-09-08 武汉大学 Remote sensing image change detection method based on twin convolutional neural network
CN112488025A (en) * 2020-12-10 2021-03-12 武汉大学 Double-temporal remote sensing image semantic change detection method based on multi-modal feature fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WEI LIU ET AL.: "An Unsupervised Domain Adaptation Method for Multi-Modal Remote Sensing Image Classification", 2018 26TH INTERNATIONAL CONFERENCE ON GEOINFORMATICS *
FANG BO: "Research on Adversarial Learning Methods in Optical Remote Sensing Image Classification and Change Detection", CHINA MASTER'S AND DOCTORAL DISSERTATIONS FULL-TEXT DATABASE (DOCTORAL), BASIC SCIENCES *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215085A (en) * 2020-09-17 2021-01-12 云南电网有限责任公司昆明供电局 Power transmission corridor foreign matter detection method and system based on twin network
CN112801037A (en) * 2021-03-01 2021-05-14 山东政法学院 Face tampering detection method based on continuous inter-frame difference
CN114419464A (en) * 2022-03-29 2022-04-29 南湖实验室 Twin network change detection model based on deep learning
CN114419464B (en) * 2022-03-29 2022-07-26 南湖实验室 Construction method of twin network change detection model based on deep learning
CN115797163A (en) * 2023-02-13 2023-03-14 中国人民解放军火箭军工程大学 Target data cross-domain inversion augmentation method based on remote sensing image
CN116384494A (en) * 2023-06-05 2023-07-04 安徽思高智能科技有限公司 RPA flow recommendation method and system based on multi-modal twin neural network
CN116384494B (en) * 2023-06-05 2023-08-08 安徽思高智能科技有限公司 RPA flow recommendation method and system based on multi-modal twin neural network

Similar Documents

Publication Publication Date Title
CN113298056A (en) Multi-mode remote sensing image change detection method, model generation method and terminal equipment
US20190294970A1 (en) Systems and methods for polygon object annotation and a method of training an object annotation system
Li et al. Generalizing to the open world: Deep visual odometry with online adaptation
JP6397379B2 (en) CHANGE AREA DETECTION DEVICE, METHOD, AND PROGRAM
JP4689758B1 (en) Image coincidence point detection apparatus, image coincidence point detection method, and recording medium
US11354772B2 (en) Cross-modality image generation
US11740321B2 (en) Visual inertial odometry health fitting
JP6565600B2 (en) Attention detection device and attention detection method
CN108428220A (en) Satellite sequence remote sensing image sea island reef region automatic geometric correction method
JP2014523572A (en) Generating map data
Li et al. Bifnet: Bidirectional fusion network for road segmentation
EP3012781A1 (en) Method and apparatus for extracting feature correspondences from multiple images
US20230206594A1 (en) System and method for correspondence map determination
CN111144213A (en) Object detection method and related equipment
CN111553296B (en) Two-value neural network stereo vision matching method based on FPGA
Walz et al. Uncertainty depth estimation with gated images for 3D reconstruction
El Bouazzaoui et al. Enhancing rgb-d slam performances considering sensor specifications for indoor localization
CN110942097A (en) Imaging-free classification method and system based on single-pixel detector
CN114067251B (en) Method for detecting anomaly of unsupervised monitoring video prediction frame
WO2021051382A1 (en) White balance processing method and device, and mobile platform and camera
Cantrell et al. Practical Depth Estimation with Image Segmentation and Serial U-Nets.
CN117252778A (en) Color constancy method and system based on semantic preservation
CN111275751A (en) Unsupervised absolute scale calculation method and system
CN114913472A (en) Infrared video pedestrian significance detection method combining graph learning and probability propagation
CN108564594A (en) A kind of target object three-dimensional space motion distance calculating method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210824