CN114494154A - Unsupervised change detection method based on content coding - Google Patents
- Publication number: CN114494154A
- Application number: CN202111670468.8A
- Authority: CN (China)
- Prior art keywords: content; change detection; images; function
- Prior art date: 2021-12-30
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0002 — Inspection of images, e.g. flaw detection
- G06F18/23 — Clustering techniques
- G06N3/045 — Combinations of networks
- G06N3/048 — Activation functions
- G06N3/084 — Backpropagation, e.g. using gradient descent
- G06T9/002 — Image coding using neural networks
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
Abstract
The invention discloses an unsupervised change detection method based on content coding, which comprises the following steps: constructing a content encoding network structure; establishing a probability function model for change detection; defining a mask loss function to measure the deviation between content encodings; constructing a content constraint function that constrains the difference result of the images according to content objects; and defining an energy function that combines the mask loss function and the content constraint function, then training to obtain the change detection result. The method avoids the dependence of existing change detection methods on accurate registration and is applicable to most change detection problems, including the unsupervised change detection of multi-view images, which existing methods cannot solve.
Description
Technical Field
The invention belongs to the field of change detection, and particularly relates to an unsupervised change detection method based on content coding.
Background
Change detection aims to detect the changed regions between images of the same scene taken at different times. It is of great interest in many applications, including video surveillance, medical diagnosis and treatment, and particularly remote sensing and land-use analysis. Change detection methods can be classified into supervised, semi-supervised and unsupervised methods according to whether manually annotated samples are required. Supervised methods can adapt to different complex scenes with the help of training samples, but their use is often limited by the variety of images in practical problems: for example, models trained on natural images are difficult to apply directly to remote sensing images. Unsupervised methods are widely used, but their accuracy depends on the effectiveness of preprocessing steps such as geometric adjustment (co-registration) and radiometric correction (de-noising, atmospheric correction, normalization).
According to the comparison unit, change detection methods can be broadly classified into pixel-based, feature-based and object-based methods. Pixel-based change detection is intuitive: it compares the preprocessed multi-temporal images pixel by pixel to generate a degree of difference for each pixel, and then determines the changed pixels with an image segmentation method. Different pixel-based approaches have been devised for different types of data. For example, the log-ratio and mean-ratio operators are widely applied to change detection in Synthetic Aperture Radar (SAR) images because of the influence of speckle noise, whereas for multi/hyperspectral images Change Vector Analysis (CVA) is a classical method that detects multiple kinds of change by analyzing the change vectors obtained from pixel- and channel-wise comparisons. Owing to their intuitive mechanism, pixel-based approaches are usually designed to be unsupervised; this also makes it difficult for them to handle complex scenes such as multi-source images and misaligned images.
Feature-based change detection methods focus more on the comparison process so as to detect significant changes in complex scenes. Some methods learn comparable features of multi-source images; Prendes et al., for instance, propose a physical model based on a multidimensional mixture distribution estimated from available invariant samples. Since change detection can be cast as a classification problem, many supervised methods have been developed based on deep learning, which learns trainable hierarchical features for comparing multi-temporal images. However, training a deep network requires enough labeled samples, which limits widespread use. Feature-based methods can avoid many insignificant changes caused by sensor noise, illumination changes, non-uniform attenuation, atmospheric absorption and even heterogeneous sensors. Because they typically compare local features, however, unsupervised feature-based methods require high alignment accuracy, i.e. an accurate co-registration method.
Given two images, most co-registration methods treat the captured scene as a plane and transform the images using a fixed transformation template such as a shift, rotation or affine transformation. High-resolution images captured from different angles are therefore difficult to align perfectly. Such images are common in many situations, for example Very High Resolution (VHR) optical remote sensing images and images captured by a moving unmanned aerial vehicle (UAV). Consequently, many change detection scenarios call for a method that is robust to co-registration errors. Object-based change detection methods first classify the objects in the images and then compare the objects, which makes them robust to co-registration errors. To generate accurate change regions, the classification method must be specially designed: the intuitive approach is to classify the multi-temporal images separately, compare the corresponding classes, and generate the changed regions. The accuracy of these methods depends on the accuracy of each individual classification, and errors propagate. Moreover, even though they are robust to co-registration errors, object-based methods are typically supervised, since learning an accurate classifier requires labels.
Disclosure of Invention
The invention aims to provide an unsupervised change detection method based on content coding.
The technical solution for realizing the purpose of the invention is as follows: an unsupervised change detection method based on content coding comprises the following steps:
the method comprises the steps of firstly, constructing a content extraction network based on a convolutional neural network, which encodes an input image into a feature vector without requiring a reference label;
secondly, assuming that each element of the network output vector represents a certain content in the input image, obtaining the codes of the two images by feeding both images to the network, and defining a content alignment loss function on the basis of these codes;
thirdly, optimizing an energy model, and defining a content constraint function of the code to meet the content assumption;
fourthly, combining the alignment loss function and the content constraint function and establishing a probability model based on energy;
fifthly, after optimization, comparing the feature vectors of the two images, and solving a probability model;
and sixthly, comparing the feature vectors of the two images, generating the changed regions by optimizing change masks, and finally generating a difference map with the FLICM image clustering algorithm.
An electronic device comprises a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the above unsupervised change detection method based on content encoding.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, implements the above-mentioned content-coding-based unsupervised change detection method.
Compared with the prior art, the invention has the following remarkable characteristics: (1) a content extraction network is defined based on a convolutional neural network; (2) a probability model based on an energy function is established, and the network parameters are learned from the two input images in an unsupervised manner; (3) the energy model is optimized, a mask loss function is defined to measure the deviation between the codes of the two images, and a content constraint function is constructed to realize the content assumption by constraining the difference result of the codes with respect to the input images according to content objects.
The present invention is described in further detail below with reference to the attached drawing figures.
Drawings
FIG. 1 is a flow chart of the unsupervised change detection method based on content coding according to the present invention.
Fig. 2 is a schematic diagram of a content encoding network structure according to the present invention.
Fig. 3(a) and 3(b) are ROC and PR curves of different methods on the drone dataset.
Fig. 4(a) and 4(b) show ROC and PR curves of different methods on a remote sensing data set U2.
FIG. 5 is a schematic diagram of the entire change detection process for two images I1 and I2.
Detailed Description
The invention provides an unsupervised change detection method based on content coding, which is suitable for change detection on multi-view images, a task that is common in recent applications but difficult to handle. A content coding network based on a convolutional neural network is constructed, a probability model driven by an energy function that combines a mask loss and a content constraint function is established, and the network parameters are learned in an unsupervised manner to finally obtain the change detection result. The specific steps are as follows:
the method comprises the steps of firstly, constructing a content extraction network based on a Convolutional Neural Network (CNN), encoding an input image and outputting the encoded input image as a feature vector without a reference label.
Secondly, in order to learn the network, it is assumed that each element of the network output vector represents a certain content in the input image; the codes of the two images are obtained by feeding both images to the network, a content alignment loss function is defined on the basis of these codes, and the distribution of the input images is learned. The specific process is as follows:
1) The probability model is defined as an energy-based model:
P(I1,I2;u,θ)=exp(-E(I1,I2,θ,u))/Z
where Z is the partition function and u is a pseudo probability vector representing the invariant probability of each content object. The probability model is optimized by maximizing the log-likelihood, which increases the probability of the input data I1, I2 while raising the energy (lowering the probability) of all other data; the network can then specifically learn the relationship between the two images.
2) To enable change detection, an energy function E(I1,I2,u,θ) is designed that aligns the same content in the two images: the components of the feature vectors v1=fθ(I1) and v2=fθ(I2) that correspond to unchanged content should be similar. The alignment loss of the two images is therefore defined as follows:
where i is the component index of the feature vector and u is a pseudo probability vector representing the invariant probability of each content object; u is a trainable parameter learned together with the network parameter set θ.
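The exact functional form of the alignment loss is not reproduced in the text; a weighted squared coding difference, with each component weighted by its invariant probability u_i, is one plausible reading, sketched below purely as an assumption.

```python
import numpy as np

def alignment_loss(v1, v2, u):
    """Assumed form of the content-alignment loss: the squared difference of
    each code component is weighted by u_i, the pseudo-probability that
    content object i is unchanged. Changed contents (small u_i) contribute
    little, so the loss only pulls unchanged contents into alignment."""
    return float(np.sum(u * (v1 - v2) ** 2))

v1 = np.array([0.9, 0.1, 0.5])
v2 = np.array([0.9, 0.8, 0.5])
u  = np.array([1.0, 0.0, 1.0])        # content 1 believed changed -> weight 0
print(alignment_loss(v1, v2, u))      # 0.0: only the changed content differs
```

With all weights set to 1, the same call returns a positive loss, illustrating how u gates which components are forced to agree.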
Thirdly, a content constraint function is defined on the codes so that the content assumption is satisfied; through this constraint, each element of the code comes to represent one content. The specific process is as follows:
where (i,j,k) denotes the pixel at (i,j) in the k-th channel and Ω(i,j) denotes a square neighborhood of pixel (i,j); δI denotes the derivative of the output feature vector v with respect to the input image I, expressed as follows:
where ω represents the weight matrix in each pixel neighborhood, computed from the differences between the neighboring pixels and the center pixel, as follows:
where σ represents the standard deviation of the pixels in the neighborhood. The weight matrix is derived with a superpixel segmentation method. A superpixel contains pixels that are similar in color, texture, etc., and therefore likely belong to the same physical object; after superpixel segmentation, the weight matrix is thus used to define the boundaries of different contents. The larger the weight, the more likely the corresponding pixel belongs to the same content as the center pixel. For the content extractor, if two pixels belong to the same content, the derivatives of the output code with respect to those two pixels should also be similar, so that a variation of the feature vector represents a variation of the entire content rather than of some other, randomly learned semantic region; under this constraint, similar pixels within a region are encoded as one content.
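The weight formula itself is not reproduced in the text; a Gaussian of the intensity difference to the center pixel, normalized by the neighborhood's standard deviation σ, is one common choice consistent with the description, and is used below as an assumption.

```python
import numpy as np

def neighborhood_weights(patch):
    """Weights of a square neighborhood around its center pixel.
    Assumed Gaussian form: a larger weight means the pixel is more likely
    to belong to the same content as the center pixel. Sigma is the
    standard deviation of the pixels in the neighborhood, as in the patent."""
    center = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    sigma = patch.std() + 1e-8          # guard against a constant patch
    return np.exp(-((patch - center) ** 2) / (2 * sigma ** 2))

patch = np.array([[0.1, 0.1, 0.9],
                  [0.1, 0.1, 0.9],
                  [0.1, 0.1, 0.9]])     # a content boundary runs down the right
w = neighborhood_weights(patch)
print(w[1, 1])                          # 1.0: the center pixel gets maximal weight
```

Pixels similar to the center (the 0.1 column) also receive weight near 1, while the dissimilar 0.9 column is strongly down-weighted, which is how the matrix marks content boundaries.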
The fourth step combines the alignment loss function and the content constraint function into an energy-based probability model, with the energy function:
E(I1,I2,θ,u)=L(I1,I2,θ,u)+λ[C(I1,θ)+C(I2,θ)]
where λ is a user-defined parameter that controls the relative weight of the two terms. The energy function is then embedded in a probability model for change detection, and an optimization framework is derived from the gradient of its log-likelihood, expressed as follows:
where I′1, I′2 range over all possible data in the data space, which makes the expectation difficult to compute. To increase the probability of the input data efficiently, a contrastive divergence algorithm is used. The parameter update gradient is as follows:
As the derivative above shows, the primary operation of the optimization is the gradient. Throughout the optimization, two types of gradients must be derived: the gradients with respect to the trainable parameters and the gradients with respect to the input data. The energy function contains two terms, and their gradients are derived separately. By back-propagation, the gradient of the alignment loss is obtained as follows:
for content gradients, a similar gradient can also be derived, the formula is as follows:
where l_k represents the output of the k-th layer of the network. The model is updated with the gradients above. Because u is defined as a probability vector, each of its components must lie in the range [0, 1]; however, if u were updated by the gradient without any constraint, the optimized u might leave this range. Therefore u is written as a sigmoid function, u = sigmoid(t), and t can then be updated by the following formula:
After optimization, the feature vectors v1 and v2 represent the contents of the two images I1 and I2; the changed pixels of the input images are marked, highlighting the changed content.
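The sigmoid reparameterization above can be sketched directly: t is updated by unconstrained gradient steps while u = sigmoid(t) automatically stays inside (0, 1); by the chain rule, a gradient on u converts to a gradient on t via the factor u(1 − u).

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# u must be a probability vector, but plain gradient updates would push it
# outside [0, 1]. Reparameterizing u = sigmoid(t) lets t be updated freely;
# by the chain rule, grad_t = grad_u * u * (1 - u).
t = np.zeros(3)
for _ in range(50):
    t += np.array([0.3, -0.3, 0.1])   # arbitrary unconstrained gradient steps
u = sigmoid(t)
print(np.all((u > 0.0) & (u < 1.0)))  # True: u never leaves (0, 1)
```

The same trick is reused in the fifth step for the change masks, Mk = sigmoid(Sk).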
The fifth step, after optimization, compares the feature vectors of the two images and solves the probability model. First, an invariant loss function is defined as follows:
where Lc represents the invariant (unchanged) loss, used here as the energy function of the probability model; ⊙ denotes the element-wise product, and M1 and M2 are change masks with values in [0, 1]. This loss function cannot be minimized directly, because it reaches its optimum when M1 and M2 are both 0; therefore, the probability model is established as follows:
where Pc(I1,I2;M1,M2) is the probability model to be solved, similar to the model P(I1,I2;u,θ) above, and Z is its partition function. Although it differs from P(I1,I2;u,θ), the optimization parameters here are M1 and M2, and the optimization procedure follows the description above. As with u, define Mk = sigmoid(Sk), k = 1, 2. The optimization alternates between sampling and parameter updating: in the sampling step, sample data I′1, I′2 are obtained by gradient; then the parameters S1 and S2 are updated by the following formula:
sixthly, comparing the feature vectors of the two images, generating a change area by optimizing a change mask in the change area, and finally generating a difference map by using an image clustering algorithm of the FLICM, wherein the method specifically comprises the following steps:
after optimization, to highlight the changed regions, the difference image is represented by: dk=1-MkK is 1, 2; then, the FLICM image segmentation method is adopted to divide the pixels into variable pixels and invariable pixels, and a final variable map is generated through an image clustering algorithm.
FIG. 5 shows the entire change detection process for two images I1 and I2.
The invention learns the network parameters in an unsupervised manner using only the two input images, while the content assumption makes the image contents directly comparable. The method effectively avoids the false alarms that unaligned objects cause in existing change detection methods and improves robustness to co-registration errors.
The effect of the present invention can be further illustrated by the following simulation experiments:
simulation conditions
The simulation experiments used 2 data sets: an unmanned aerial vehicle (UAV) data set and a remote sensing data set, both of size 512 × 512 pixels. Because a drone keeps moving, the position and angle of view differ even when it captures the same scene, which makes registration difficult; a satellite or camera drone cannot capture the same scene from exactly the same position at different times. We therefore compared the proposed method with several unsupervised change detection methods. All simulation experiments were run under the Windows operating system on an Intel i7-8700K CPU (3.7 GHz) and an NVIDIA RTX 3090 GPU, and the programs were written in C++ with Visual Studio 2017.
The evaluation indexes adopted by the invention are clustering accuracy (ACC), the precision-recall (PR) curve, the receiver operating characteristic (ROC) curve, average precision (AP) and the kappa coefficient.
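Among these indexes, the kappa coefficient is the least self-explanatory: it measures agreement between the predicted change map and the ground truth, corrected for chance agreement. The sketch below computes it for a toy binary change map (the data are illustrative, not from the patent's experiments).

```python
import numpy as np

def kappa(pred, truth):
    """Cohen's kappa for a binary change map: (po - pe) / (1 - pe), where
    po is the observed accuracy and pe the agreement expected by chance."""
    pred, truth = pred.ravel(), truth.ravel()
    po = np.mean(pred == truth)                          # observed accuracy
    p_change = np.mean(pred) * np.mean(truth)            # both say "changed"
    p_same = (1 - np.mean(pred)) * (1 - np.mean(truth))  # both say "unchanged"
    pe = p_change + p_same                               # chance agreement
    return (po - pe) / (1 - pe)

truth = np.array([0, 0, 0, 0, 1, 1, 1, 1])
pred  = np.array([0, 0, 0, 1, 1, 1, 1, 1])   # one false alarm
print(round(kappa(pred, truth), 3))           # 0.75
```

A kappa of 1 means perfect agreement, 0 means no better than chance; it penalizes trivial predictors (e.g. "everything unchanged") that plain accuracy rewards.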
Simulation content
The invention uses a real unmanned aerial vehicle data set and a remote sensing data set to verify the performance of the algorithm. The proposed unsupervised change detection method based on content coding is compared with currently popular international change detection algorithms: CVA, DCVA, SFA and DCCN.
Analysis of simulation experiment results
Table 1 shows the comparison of different evaluation indexes for the different change detection algorithms on the two data sets. As Table 1 shows, on the UAV data set the proposed content-coding-based change detection method, thanks to its robustness to local position, constrains background objects better, highlights the changed areas, avoids the influence of unchanged buildings, and significantly improves the precision on the different evaluation indexes compared with CVA, DCVA, SFA and DCCN. Table 2 shows the run times of the method. The results of the method on the different data sets are shown in Fig. 3(a), Fig. 3(b), Fig. 4(a) and Fig. 4(b); the simulation results on these 2 groups of real data sets demonstrate the effectiveness of the method.
TABLE 1 Quantitative evaluation of different algorithms on the unmanned aerial vehicle data set (AP, AuR, ACC, Kappa)
TABLE 2 Time cost on each data set
Data set | Unmanned aerial vehicle data set | Remote sensing data set |
Time(s) | 23.9 | 190.5 |
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (8)
1. An unsupervised change detection method based on content coding is characterized by comprising the following steps:
the method comprises the steps of firstly, constructing a content extraction network based on a convolutional neural network, which encodes an input image into a feature vector without requiring a reference label;
secondly, assuming that each element of the network output vector represents a certain content in the input image, obtaining the codes of the two images by feeding both images to the network, and defining a content alignment loss function on the basis of these codes;
thirdly, optimizing an energy model, and defining a content constraint function of the code to meet the content assumption;
fourthly, combining the alignment loss function and the content constraint function and establishing a probability model based on energy;
fifthly, after optimization, comparing the feature vectors of the two images, and solving a probability model;
and sixthly, comparing the feature vectors of the two images, generating the changed regions by optimizing change masks, and finally generating a difference map with the FLICM image clustering algorithm.
2. The unsupervised change detection method based on content coding according to claim 1, wherein in the second step, in order to learn the network, it is assumed that each element of the network output vector represents a certain content in the input image; the codes of the two images are obtained by feeding both images to the network, a content alignment loss function is defined on the basis of these codes, and the distribution of the input images is learned; the specific process is as follows:
(1) the probabilistic model is defined as:
where Z is the partition function and u is a pseudo probability vector representing the invariant probability of each content object; the probability model is optimized by maximizing the log-likelihood, which increases the probability of the input data I1, I2 while raising the energy of all other data, so that the network learns the relationship between the two images;
(2) an energy function E(I1,I2,u,θ) is designed that aligns the same content in the two images: the components of the feature vectors v1=fθ(I1) and v2=fθ(I2) that correspond to invariant content are similar, and the alignment loss of the two images is therefore defined as follows:
where i is the component index of the feature vector and u is a pseudo probability vector, representing the invariant probability of each content object, which is a trainable parameter to be trained with the network parameter set θ.
3. The unsupervised change detection method based on content coding according to claim 1, wherein the third step defines a content constraint function of the coding to satisfy the content assumption, defines a content constraint, and realizes that each coded element represents a content by the constraint, which comprises the following specific processes:
where (i,j,k) denotes the pixel at (i,j) in the k-th channel and Ω(i,j) denotes a square neighborhood of pixel (i,j); δI denotes the derivative of the output feature vector v with respect to the input image I, expressed as follows:
where ω represents the weight matrix in each pixel neighborhood, computed from the differences between the neighboring pixels and the center pixel, as follows:
where σ represents the standard deviation of the pixels in the neighborhood.
4. The unsupervised change detection method based on content coding according to claim 1, wherein the fourth step combines the alignment loss function and the content constraint function into an energy-based probability model; the energy model is defined as the difference between the two feature vectors under the content constraint, giving the following energy function:
E(I1,I2,θ,u)=L(I1,I2,θ,u)+λ[C(I1,θ)+C(I2,θ)]
where λ is a user-defined parameter that controls the relative weight of the two terms; the energy function is then embedded in a probability model for change detection, and an optimization framework is derived from the gradient of its log-likelihood, expressed as follows:
where I′1, I′2 range over all possible data in the data space; to increase the probability of the input data efficiently, a contrastive divergence algorithm is used; the parameter update gradient is as follows:
as the derivative above shows, the basic operation of the optimization is the gradient; throughout the optimization, two types of gradients must be derived: the gradients with respect to the trainable parameters and the gradients with respect to the input data; the energy function contains two terms, and their gradients are derived separately; by back-propagation, the gradient of the alignment loss is obtained as follows:
for the content constraint, a similar gradient can be derived; the formula is as follows:
where l_k represents the output of the k-th layer of the network;
the model is updated with the gradient; taking u as a sigmoid function, u = sigmoid(t), t can then be updated by the following formula:
after optimization, the feature vectors v1 and v2 represent the contents of the two images I1 and I2; the changed pixels of the input images are marked, highlighting the changed content.
5. The unsupervised change detection method based on content coding according to claim 1, wherein in the fifth step, after optimization, the feature vectors of the two images are compared and the probability model is solved; an invariant loss function is first defined as follows:
where Lc represents the invariant loss, used as the energy function of the probability model; ⊙ denotes the element-wise product, and M1 and M2 are change masks with values in [0, 1]; the probability model is established as follows:
where Pc(I1,I2;M1,M2) is the probability model to be solved and Z is its partition function; although it differs from P(I1,I2;u,θ), the optimization parameters here are M1 and M2, and the optimization procedure follows the description above; define Mk = sigmoid(Sk), k = 1, 2; the optimization comprises sampling and parameter updating, and in the sampling step, sample data I′1, I′2 are obtained by gradient; then the parameters S1 and S2 are updated by the following formula:
6. the unsupervised change detection method based on content coding according to claim 1, wherein the sixth step adopts FLICM image segmentation method to divide the pixels into changed pixels and unchanged pixels, and generates the final change map through image clustering algorithm.
7. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the content encoding-based unsupervised change detection method of any of claims 1-6 when executing the program.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a method for unsupervised change detection based on content coding according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111670468.8A CN114494154A (en) | 2021-12-30 | 2021-12-30 | Unsupervised change detection method based on content coding |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114494154A true CN114494154A (en) | 2022-05-13 |
Family
ID=81507549
Legal Events
Date | Code | Title
---|---|---
| PB01 | Publication
| SE01 | Entry into force of request for substantive examination