CN113808180A - Method, system and device for registering different-source images - Google Patents

Method, system and device for registering different-source images Download PDF

Info

Publication number
CN113808180A
Authority
CN
China
Prior art keywords
image
training
sample
matching
matching network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111098634.1A
Other languages
Chinese (zh)
Other versions
CN113808180B (en)
Inventor
陈舒雅
王青松
焦润之
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202111098634.1A priority Critical patent/CN113808180B/en
Publication of CN113808180A publication Critical patent/CN113808180A/en
Application granted granted Critical
Publication of CN113808180B publication Critical patent/CN113808180B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method, a system and a device for registering heterogeneous images, wherein the method comprises the following steps: filtering the SAR image and forming image block pairs with the corresponding optical image; inputting the image block pair samples into deep convolutional generative adversarial networks for training; performing data enhancement on the training samples and dividing them; training a deep twin matching network based on the training set; generating matching point pairs based on the trained matching network; and calculating a transformation matrix according to the matching point pairs and registering the images. The system comprises: an image pair sample module, a training sample module, a dividing module, a training module, a matching module and a registration module. The device comprises a memory and a processor for performing the above heterogeneous image registration method. By using the method, the system and the device, the accuracy of heterogeneous registration can be improved, and they can be widely applied in the field of image registration.

Description

Method, system and device for registering different-source images
Technical Field
The invention relates to the field of image registration, in particular to a method, a system and a device for registering different-source images.
Background
Existing image registration techniques fall broadly into three types: region-based methods, feature-based methods, and the network-based methods that have become popular in recent years. Each of these registration methods has significant shortcomings. Differences in imaging principles and imaging conditions cause nonlinear intensity differences between SAR images and optical images, so gray-scale-based methods yield poor results. The SAR imaging principle introduces severe speckle noise into SAR images, which makes it difficult for point-feature-based methods to extract reliable feature points; consequently, registration methods that perform well on optical images generally fail to achieve the expected results on heterogeneous images. Convolutional-neural-network-based methods require a large amount of training data to obtain a good model and prevent overfitting, but in real applications the available optical and SAR image datasets are often far too small to train a good network.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide a method, a system and a device for registering different-source images, which improve the accuracy of different-source registration.
The first technical scheme adopted by the invention is as follows: a method of heterogeneous image registration, comprising the steps of:
filtering the SAR image and forming an image block pair with the corresponding optical image to obtain an image block pair sample;
inputting the image block pair samples into deep convolutional generative adversarial networks for training to obtain training samples;
performing data enhancement and division on the training samples to obtain a training set and a test set;
training a deep twin matching network based on a training set to obtain a trained matching network;
extracting image blocks of the pictures in the test set and inputting the image blocks into a trained matching network to obtain matching point pairs;
and calculating a transformation matrix according to the matching point pairs and registering the images.
Further, the filtering the SAR image specifically includes the following steps:
reading an image data matrix of the SAR image;
sliding a preset filtering window to process the SAR image, and calculating parameters in the window by combining an image data matrix;
and outputting a filtering result based on the parameters and the filtering equation in the window to obtain the filtered SAR image.
Further, the filter equation is formulated as follows:

$$\hat{x} = \bar{m} + b\,(m - \bar{m})$$

In the above formula, $\hat{x}$ represents the filtered value, b represents a preset parameter, m represents the observed value, and $\bar{m}$ represents the mean of the pixels within the local window.
Further, two deep convolutional generative adversarial networks are provided, and the step of inputting the image block pair samples into the deep convolutional generative adversarial networks for training to obtain training samples specifically includes:
inputting the optical images from the image block pair samples into one deep convolutional generative adversarial network, generating pseudo-SAR images based on its generator, and generating discrimination labels based on its discriminator;
inputting the filtered SAR images from the image block pair samples into the other deep convolutional generative adversarial network, generating pseudo-optical images based on its generator, and generating discrimination labels based on its discriminator;
and forming a training set from the SAR images, pseudo-SAR images, optical images, pseudo-optical images and discrimination labels.
Further, the step of performing data enhancement and division on the training samples to obtain a training set and a test set specifically includes:
carrying out geometric transformation on the training sample to obtain an enhanced training sample;
the geometric transformations comprise flipping, rotation, cropping, translation and scaling;
the enhanced training samples are divided into the training set and the test set at a ratio of 7:3.
Further, the training of the deep twin matching network based on the training set to obtain the trained matching network specifically includes:
training one branch of the deep twin matching network using the pseudo-SAR images generated from the optical images in the training set and the corresponding SAR images;
training the other branch of the deep twin matching network using the pseudo-optical images generated from the SAR images in the training set and the corresponding optical images;
and performing the loss calculation for the deep twin matching network in combination with the discrimination labels to obtain the trained matching network.
Further, the step of extracting image blocks of the pictures in the test set and inputting the image blocks into the trained matching network to obtain matching point pairs specifically includes:
detecting feature points of the pictures in the test set based on the SIFT method;
taking image blocks from the SAR image and the optical image according to the feature points of the pictures and inputting them to the two branches of the trained matching network to obtain matching results;
and processing the matching results based on a progressive sample consensus (PROSAC) method to obtain the matching point pairs.
Further, the transformation matrix is calculated from the matching point pairs according to the following formula:

$$T = \begin{bmatrix} s\cos\theta & -s\sin\theta & t_x \\ s\sin\theta & s\cos\theta & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

In the above formula, T represents the geometric transformation matrix between images $I_1$ and $I_2$, s represents the scale factor of $I_2$ relative to $I_1$, θ represents the rotation angle of $I_2$ relative to $I_1$, $t_x$ represents the horizontal displacement parameter of $I_2$ relative to $I_1$, and $t_y$ represents the vertical displacement parameter of $I_2$ relative to $I_1$.
The second technical scheme adopted by the invention is as follows: a heterogeneous image registration system, comprising:
the image pair sample module is used for filtering the SAR image and forming an image block pair with the corresponding optical image to obtain an image block pair sample;
the training sample module is used for inputting the image block pair samples into the deep convolutional generative adversarial networks for training to obtain training samples;
the dividing module is used for performing data enhancement on the training samples and dividing the training samples to obtain a training set and a test set;
the training module is used for training the deep twin matching network based on a training set to obtain a trained matching network;
the matching module is used for extracting image blocks of the pictures in the test set and inputting the image blocks into a trained matching network to obtain matching point pairs;
and the registration module is used for calculating a transformation matrix according to the matching point pairs and registering the images.
The third technical scheme adopted by the invention is as follows: a heterogeneous image registration apparatus, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the heterogeneous image registration method described above.
The method, the system and the device of the invention have the following beneficial effects: a large number of labeled image block pairs can be generated through data enhancement for network training, which solves the problem of the dataset being too small to train a deep network; and the generative adversarial networks convert the heterogeneous image registration problem into a homogeneous image registration problem, thereby improving the accuracy of heterogeneous registration.
Drawings
FIG. 1 is a flow chart of the steps of a method of registration of heterogeneous images according to the present invention;
fig. 2 is a block diagram of a system for registration of heterogeneous images according to the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments. The step numbers in the following embodiments are provided only for convenience of illustration; they impose no order on the steps, and the execution order of the steps in the embodiments can be adapted according to the understanding of those skilled in the art.
Referring to fig. 1, the present invention provides a method of heterogeneous image registration, the method comprising the steps of:
filtering the SAR image and forming an image block pair with the corresponding optical image to obtain an image block pair sample;
inputting the image block pair samples into deep convolutional generative adversarial networks for training to obtain training samples;
performing data enhancement and division on the training samples to obtain a training set and a test set;
training a deep twin matching network based on a training set to obtain a trained matching network;
extracting image blocks of the pictures in the test set and inputting the image blocks into a trained matching network to obtain matching point pairs;
and calculating a transformation matrix according to the matching point pairs and registering the images.
Further, as a preferred embodiment of the method, the SAR image is filtered with the Lee filtering method, which specifically comprises the following steps:
reading the image data matrix of the SAR image;
sliding a preset filtering window over the SAR image, and calculating the parameters within the window from the image data matrix;
Specifically, the filter window size is set (a 7×7 sliding window), the window is slid, and the parameters within the sliding window are calculated. The prior mean and variance can be derived from the mean and variance of the local area. Let $\bar{x}$ and $\sigma_x^2$ denote the prior mean and variance used by the Lee filtering method; they are calculated by the following two formulas:

$$\bar{x} = \frac{\bar{m}}{\bar{v}}$$

$$\sigma_x^2 = \frac{\sigma_m^2 + \bar{m}^2}{\sigma_v^2 + \bar{v}^2} - \bar{x}^2$$

where v is the noise within the local window, with mean $\bar{v}$ and variance $\sigma_v^2$, and $\bar{m}$ and $\sigma_m^2$ are the mean and variance of the observed pixels m within the window. The linear mathematical model of Lee filtering is

$$\hat{x} = \bar{x} + b\,(m - \bar{v}\,\bar{x})$$

where:

$$b = \frac{\bar{v}\,\sigma_x^2}{\bar{x}^2\,\sigma_v^2 + \bar{v}^2\,\sigma_x^2}$$

The filtering result is output based on the in-window parameters and the filter equation, yielding the filtered SAR image.
In particular, with unit noise mean ($\bar{v} = 1$), the Lee filter equation reduces to

$$\hat{x} = \bar{m} + b\,(m - \bar{m})$$

A minimal code sketch of this filter follows.
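The sketch below is a minimal NumPy implementation of the Lee filter as described above, assuming a 7×7 window and a simplified form of the weight b (the estimated signal variance over the total local variance); the noise-variance estimate sigma_v2 and all names are illustrative assumptions, not the patent's exact implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, win=7, sigma_v2=0.25):
    """Lee-filter a SAR intensity image with a win x win sliding window."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, size=win)             # local window mean
    sq_mean = uniform_filter(img * img, size=win)
    var = sq_mean - mean ** 2                        # local window variance
    # Weight b: estimated signal variance over total variance (simplified
    # multiplicative-speckle form; 0 where the window is homogeneous).
    b = np.maximum(var - sigma_v2 * mean ** 2, 0.0) / np.maximum(var, 1e-12)
    return mean + b * (img - mean)                   # x_hat = mean + b*(m - mean)
```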
Further, as a preferred embodiment of the method, two deep convolutional generative adversarial networks are provided, and the step of inputting the image block pair samples into the deep convolutional generative adversarial networks for training to obtain training samples specifically includes:
inputting the optical images from the image block pair samples into one deep convolutional generative adversarial network, generating pseudo-SAR images based on its generator, and generating discrimination labels based on its discriminator;
inputting the filtered SAR images from the image block pair samples into the other deep convolutional generative adversarial network, generating pseudo-optical images based on its generator, and generating discrimination labels based on its discriminator;
and forming a training set from the SAR images, pseudo-SAR images, optical images, pseudo-optical images and discrimination labels.
Specifically, there are two generative adversarial networks; the two networks have the same structure but different weights. Each network consists of 4 convolutional layers. Regarding the activation functions: the discriminator uses leaky rectified linear units (Leaky ReLU) to prevent overly sparse gradients, the generator still uses the linear rectification function (ReLU), and the final output layer uses the Tanh (hyperbolic tangent) function. Training uses an adaptive-learning-rate optimization algorithm with the learning rate set to 0.0002. The generative adversarial networks are trained according to the following loss functions:

$$L_{DCGAN}(G, D) = \mathbb{E}_{x \sim p_{data}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

$$L_{L1}(G) = \mathbb{E}_{x \sim p_{data}(x),\, z \sim p_z(z)}\big[\lVert x - G(z) \rVert_1\big]$$

$$G^{*} = \arg\min_{G}\max_{D}\; L_{DCGAN}(G, D) + \lambda\, L_{L1}(G)$$

$L_{DCGAN}$ represents the adversarial loss constraining the generator and the discriminator, and $L_{L1}(G)$ represents the pixel-level loss constraint between the generated image and the real image; $p_{data}$ is the real data distribution, $p_z$ is the noise distribution, G(z) is the pseudo image that the generator produces from random noise z to imitate a real image, D(x) is the probability that the discriminator judges a real image to be real, and D(G(z)) is the probability that the discriminator judges the generator's pseudo image to be real. A hedged implementation sketch of these loss terms follows.
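As an illustration of the losses above, here is a PyTorch sketch; the helper names, the λ value, and the optimizer setup are assumptions, and the 4-layer generator/discriminator definitions are omitted.

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()   # adversarial log-loss terms
l1 = nn.L1Loss()     # pixel-level L_L1 term
lam = 100.0          # assumed weight on the L1 term

def discriminator_loss(D, real, fake):
    # Train D to score real images as 1 and generated images as 0.
    p_real, p_fake = D(real), D(fake.detach())
    return bce(p_real, torch.ones_like(p_real)) + \
           bce(p_fake, torch.zeros_like(p_fake))

def generator_loss(D, real, fake):
    # Adversarial term pushes D(G(z)) toward "real"; the L1 term keeps the
    # generated image close to the paired real image at the pixel level.
    p_fake = D(fake)
    return bce(p_fake, torch.ones_like(p_fake)) + lam * l1(fake, real)

# Adaptive learning-rate optimizer with lr = 0.0002, as stated above:
# opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
# opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
```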
Further, as a preferred embodiment of the method, the step of performing data enhancement on the training samples and dividing the training samples to obtain a training set and a test set specifically includes:
carrying out geometric transformation on the training sample to obtain an enhanced training sample;
the geometric transformations comprise flipping, rotation, cropping, translation and scaling;
the enhanced training samples are divided into the training set and the test set at a ratio of 7:3.
Specifically, data enhancement is completed through these five geometric transformations, increasing the number of pictures in the training set, as sketched below.
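The following is a minimal sketch of the five geometric augmentations and the 7:3 split, assuming grayscale patches stored as NumPy arrays; the parameter ranges are illustrative assumptions.

```python
import random
import numpy as np
import cv2

def augment(patch):
    """Apply flip, rotation, scaling, translation, and a center crop."""
    h, w = patch.shape[:2]
    if random.random() < 0.5:
        patch = cv2.flip(patch, 1)                    # horizontal flip
    angle = random.uniform(-30, 30)                   # rotation (degrees)
    scale = random.uniform(0.9, 1.1)                  # scaling factor
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    m[:, 2] += (random.randint(-8, 8), random.randint(-8, 8))  # translation
    patch = cv2.warpAffine(patch, m, (w, h))
    c = int(0.9 * min(h, w))                          # crop, then resize back
    y0, x0 = (h - c) // 2, (w - c) // 2
    return cv2.resize(patch[y0:y0 + c, x0:x0 + c], (w, h))

def split_7_3(samples, seed=0):
    """Shuffle the enhanced samples and divide them 7:3 into train/test."""
    rng = random.Random(seed)
    samples = list(samples)
    rng.shuffle(samples)
    cut = int(0.7 * len(samples))
    return samples[:cut], samples[cut:]
```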
Further, as a preferred embodiment of the method, the step of training the deep twin matching network based on the training set to obtain a trained matching network specifically includes:
training one branch of the deep twin matching network using the pseudo-SAR images generated from the optical images in the training set and the corresponding SAR images;
training the other branch of the deep twin matching network using the pseudo-optical images generated from the SAR images in the training set and the corresponding optical images;
and performing the loss calculation for the deep twin matching network in combination with the discrimination labels to obtain the trained matching network.
Specifically, the two branches of the deep twin (Siamese) network are trained on the optical images and the SAR images respectively; the two branches have the same structure but do not share weights, and the network uses a cross-entropy loss function with the SGD optimization algorithm. During training, the number of unmatched point pairs in the images is clearly far larger than the number of matched pairs; to avoid the drop in accuracy caused by this imbalance between positive and negative samples, a random sampling strategy is adopted to keep the numbers of positive and negative samples equal, as sketched below.
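A small sketch of that random sampling strategy, assuming the matched and unmatched pairs have already been collected into lists; the names are illustrative.

```python
import random

def balance_pairs(pos_pairs, neg_pairs, seed=0):
    """Downsample the far larger negative set to the size of the positive set."""
    rng = random.Random(seed)
    neg = rng.sample(neg_pairs, k=min(len(pos_pairs), len(neg_pairs)))
    return pos_pairs + neg   # shuffling is left to the data loader
```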
Cross-entropy loss function:

$$L = -\frac{1}{N}\sum_{i=1}^{N}\Big[y_i \log \hat{y}_i + (1 - y_i)\log\big(1 - \hat{y}_i\big)\Big]$$

where $y_i$ is the label of the input image pair $x_i$: 1 represents a match and 0 represents a mismatch. $\hat{y}_i$ represents the predicted match probability and can be calculated from the two outputs $v_0(x_i)$ and $v_1(x_i)$ of the FC3 layer. The calculation formula is as follows:

$$\hat{y}_i = \frac{e^{v_1(x_i)}}{e^{v_0(x_i)} + e^{v_1(x_i)}}$$
the matching network can be divided into two parts, namely a feature extraction network and a metric learning network. The feature extraction network adopts a dual-branch structure, and each branch comprises 5 convolutional layers and 3 multiplied by 3 pooling layers. Wherein, the three pooling layers are respectively positioned behind the first, the second and the five convolution layers. A bottleneck layer (actually a full connection layer) is connected between the feature extraction network and the metric learning network and used for reducing the dimensionality of the feature representation vector and avoiding overfitting. The metric learning network is composed of two full-connected layers followed by a ReLU activation function and a full-connected layer followed by a softmax function, and outputs a probability value representing the similarity of the image blocks.
Further, as a preferred embodiment of the method, the step of extracting image blocks of the pictures in the test set and inputting the image blocks into the trained matching network to obtain matching point pairs specifically includes:
detecting feature points of the pictures in the test set based on the SIFT method;
taking image blocks from the SAR image and the optical image according to the feature points of the pictures and inputting them to the two branches of the trained matching network to obtain matching results;
and processing the matching results based on a progressive sample consensus (PROSAC) method to obtain the matching point pairs. A short sketch of the patch-extraction step follows.
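An illustrative sketch of the patch-extraction step using OpenCV's SIFT detector; the 64-pixel patch size and the keypoint cap are assumptions, and the network scoring and PROSAC filtering happen downstream.

```python
import cv2
import numpy as np

def keypoint_patches(img, size=64, max_kp=500):
    """Detect SIFT keypoints and cut a fixed-size patch around each one."""
    sift = cv2.SIFT_create(nfeatures=max_kp)
    half = size // 2
    pts, patches = [], []
    for kp in sift.detect(img, None):
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        # Keep only keypoints whose patch lies fully inside the image.
        if half <= x < img.shape[1] - half and half <= y < img.shape[0] - half:
            pts.append((x, y))
            patches.append(img[y - half:y + half, x - half:x + half])
    return np.array(pts), np.stack(patches)

# Patches from the SAR image feed one branch of the trained matching network
# and patches from the optical image feed the other; pairs with a high
# predicted match probability are then filtered by the robust sampling step.
```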
Further, as a preferred embodiment of the method, the transformation matrix is calculated from the matching point pairs according to the following formula:

$$T = \begin{bmatrix} s\cos\theta & -s\sin\theta & t_x \\ s\sin\theta & s\cos\theta & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

In the above formula, T represents the geometric transformation matrix between images $I_1$ and $I_2$, s represents the scale factor of $I_2$ relative to $I_1$, θ represents the rotation angle of $I_2$ relative to $I_1$, $t_x$ represents the horizontal displacement parameter of $I_2$ relative to $I_1$, and $t_y$ represents the vertical displacement parameter of $I_2$ relative to $I_1$. A sketch of this final registration step follows.
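A sketch of the final step under stated assumptions: OpenCV's estimateAffinePartial2D fits exactly this 4-parameter similarity model (scale, rotation, translation); RANSAC stands in here for the progressive sample consensus step, which OpenCV does not expose for this estimator.

```python
import cv2
import numpy as np

def register(i1, i2, pts1, pts2):
    """Fit T from matching point pairs and warp I2 into I1's frame."""
    m, inliers = cv2.estimateAffinePartial2D(
        pts2.astype(np.float32), pts1.astype(np.float32), method=cv2.RANSAC)
    # m = [[s*cos(t), -s*sin(t), tx], [s*sin(t), s*cos(t), ty]]
    s = float(np.hypot(m[0, 0], m[1, 0]))                    # scale factor s
    theta = float(np.degrees(np.arctan2(m[1, 0], m[0, 0])))  # rotation angle
    registered = cv2.warpAffine(i2, m, (i1.shape[1], i1.shape[0]))
    return registered, s, theta, (float(m[0, 2]), float(m[1, 2]))
```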
The method also has the following beneficial effects: the training samples all have known labels; by learning from images subjected to different geometric transformations, the matching network becomes robust to rotation, translation and scale changes; and the network learns the metric function itself, integrating the similarity metric into the matching network, which directly outputs the matching labels.
As shown in fig. 2, a heterogeneous image registration system includes:
the image pair sample module is used for filtering the SAR image and forming an image block pair with the corresponding optical image to obtain an image block pair sample;
the training sample module is used for inputting the image block pair samples into the deep convolutional generative adversarial networks for training to obtain training samples;
the dividing module is used for performing data enhancement on the training samples and dividing the training samples to obtain a training set and a test set;
the training module is used for training the deep twin matching network based on a training set to obtain a trained matching network;
the matching module is used for extracting image blocks of the pictures in the test set and inputting the image blocks into a trained matching network to obtain matching point pairs;
and the registration module is used for calculating a transformation matrix according to the matching point pairs and registering the images.
A heterogeneous image registration apparatus, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the heterogeneous image registration method described above.
The contents in the above method embodiments are all applicable to the present apparatus embodiment, the functions specifically implemented by the present apparatus embodiment are the same as those in the above method embodiments, and the advantageous effects achieved by the present apparatus embodiment are also the same as those achieved by the above method embodiments.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method of heterogeneous image registration, comprising the steps of:
filtering the SAR image and forming an image block pair with the corresponding optical image to obtain an image block pair sample;
inputting the image block pair samples into deep convolutional generative adversarial networks for training to obtain training samples;
performing data enhancement and division on the training samples to obtain a training set and a test set;
training a deep twin matching network based on a training set to obtain a trained matching network;
extracting image blocks of the pictures in the test set and inputting the image blocks into a trained matching network to obtain matching point pairs;
and calculating a transformation matrix according to the matching point pairs and registering the images.
2. The heterogeneous image registration method according to claim 1, wherein the filtering of the SAR image specifically comprises the following steps:
reading an image data matrix of the SAR image;
sliding a preset filtering window to process the SAR image, and calculating parameters in the window by combining an image data matrix; and outputting a filtering result based on the parameters and the filtering equation in the window to obtain the filtered SAR image.
3. The heterogeneous image registration method according to claim 2, wherein the filter equation is formulated as follows:

$$\hat{x} = \bar{m} + b\,(m - \bar{m})$$

In the above formula, $\hat{x}$ represents the filtered value, b represents a preset parameter, m represents the observed value, and $\bar{m}$ represents the mean of the pixels within the local window.
4. The heterogeneous image registration method according to claim 3, wherein two deep convolutional generative adversarial networks are provided, and the step of inputting the image block pair samples into the deep convolutional generative adversarial networks for training to obtain training samples specifically comprises:
inputting the optical images from the image block pair samples into one deep convolutional generative adversarial network, generating pseudo-SAR images based on its generator, and generating discrimination labels based on its discriminator;
inputting the filtered SAR images from the image block pair samples into the other deep convolutional generative adversarial network, generating pseudo-optical images based on its generator, and generating discrimination labels based on its discriminator;
and forming a training set from the SAR images, pseudo-SAR images, optical images, pseudo-optical images and discrimination labels.
5. The heterogeneous image registration method according to claim 4, wherein the step of performing data enhancement on the training samples and dividing them to obtain a training set and a test set specifically comprises:
carrying out geometric transformation on the training samples to obtain enhanced training samples;
the geometric transformations comprise flipping, rotation, cropping, translation and scaling;
the enhanced training samples are divided into the training set and the test set at a ratio of 7:3.
6. The heterogeneous image registration method according to claim 5, wherein the step of training the deep twin matching network based on the training set to obtain the trained matching network specifically comprises:
training one branch of the deep twin matching network using the pseudo-SAR images generated from the optical images in the training set and the corresponding SAR images;
training the other branch of the deep twin matching network using the pseudo-optical images generated from the SAR images in the training set and the corresponding optical images;
and performing the loss calculation for the deep twin matching network in combination with the discrimination labels to obtain the trained matching network.
7. The heterogeneous image registration method according to claim 6, wherein the step of extracting image blocks of the pictures in the test set and inputting the image blocks into the trained matching network to obtain matching point pairs specifically comprises:
detecting feature points of the pictures in the test set based on the SIFT method;
taking image blocks from the SAR image and the optical image according to the feature points of the pictures and inputting them to the two branches of the trained matching network to obtain matching results;
and processing the matching results based on a progressive sample consensus (PROSAC) method to obtain the matching point pairs.
8. The heterogeneous image registration method according to claim 7, wherein the transformation matrix is calculated from the matching point pairs according to the following formula:

$$T = \begin{bmatrix} s\cos\theta & -s\sin\theta & t_x \\ s\sin\theta & s\cos\theta & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

In the above formula, T represents the geometric transformation matrix between images $I_1$ and $I_2$, s represents the scale factor of $I_2$ relative to $I_1$, θ represents the rotation angle of $I_2$ relative to $I_1$, $t_x$ represents the horizontal displacement parameter of $I_2$ relative to $I_1$, and $t_y$ represents the vertical displacement parameter of $I_2$ relative to $I_1$.
9. A heterogeneous image registration system, comprising:
the image pair sample module is used for filtering the SAR image and forming an image block pair with the corresponding optical image to obtain an image block pair sample;
the training sample module is used for inputting the image block pair samples into the deep convolutional generative adversarial networks for training to obtain training samples;
the dividing module is used for performing data enhancement on the training samples and dividing the training samples to obtain a training set and a test set;
the training module is used for training the deep twin matching network based on a training set to obtain a trained matching network;
the matching module is used for extracting image blocks of the pictures in the test set and inputting the image blocks into a trained matching network to obtain matching point pairs;
and the registration module is used for calculating a transformation matrix according to the matching point pairs and registering the images.
10. A heterogeneous image registration apparatus, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the heterogeneous image registration method according to any one of claims 1-8.
CN202111098634.1A 2021-09-18 2021-09-18 Heterologous image registration method, system and device Active CN113808180B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111098634.1A CN113808180B (en) 2021-09-18 2021-09-18 Heterologous image registration method, system and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111098634.1A CN113808180B (en) 2021-09-18 2021-09-18 Heterologous image registration method, system and device

Publications (2)

Publication Number Publication Date
CN113808180A true CN113808180A (en) 2021-12-17
CN113808180B CN113808180B (en) 2023-10-17

Family

ID=78939711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111098634.1A Active CN113808180B (en) 2021-09-18 2021-09-18 Heterologous image registration method, system and device

Country Status (1)

Country Link
CN (1) CN113808180B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108510532A (en) * 2018-03-30 2018-09-07 西安电子科技大学 Optics and SAR image registration method based on depth convolution GAN
CN111462012A (en) * 2020-04-02 2020-07-28 武汉大学 SAR image simulation method for generating countermeasure network based on conditions

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114565653A (en) * 2022-03-02 2022-05-31 哈尔滨工业大学 Heterogeneous remote sensing image matching method with rotation change and scale difference
CN114972453A (en) * 2022-04-12 2022-08-30 南京雷电信息技术有限公司 Improved SAR image region registration method based on LSD and template matching
CN115356599A (en) * 2022-10-21 2022-11-18 国网天津市电力公司城西供电分公司 Multi-mode urban power grid fault diagnosis method and system
CN116563569A (en) * 2023-04-17 2023-08-08 昆明理工大学 Hybrid twin network-based heterogeneous image key point detection method and system
CN116563569B (en) * 2023-04-17 2023-11-17 昆明理工大学 Hybrid twin network-based heterogeneous image key point detection method and system

Also Published As

Publication number Publication date
CN113808180B (en) 2023-10-17

Similar Documents

Publication Publication Date Title
CN113808180A (en) Method, system and device for registering different-source images
CN109035149B (en) License plate image motion blur removing method based on deep learning
Wang et al. Dehazing for images with large sky region
CN111340738B (en) Image rain removing method based on multi-scale progressive fusion
CN111709909B (en) General printing defect detection method based on deep learning and model thereof
CN112184577B (en) Single image defogging method based on multiscale self-attention generation countermeasure network
CN107301661A (en) High-resolution remote sensing image method for registering based on edge point feature
CN106709964B (en) Sketch generation method and device based on gradient correction and multidirectional texture extraction
CN112233129B (en) Deep learning-based parallel multi-scale attention mechanism semantic segmentation method and device
CN110992366B (en) Image semantic segmentation method, device and storage medium
CN102169581A (en) Feature vector-based fast and high-precision robustness matching method
CN111310508B (en) Two-dimensional code identification method
Mei et al. Illumination-invariance optical flow estimation using weighted regularization transform
CN116310095A (en) Multi-view three-dimensional reconstruction method based on deep learning
CN113139904A (en) Image blind super-resolution method and system
CN113538246A (en) Remote sensing image super-resolution reconstruction method based on unsupervised multi-stage fusion network
CN115272303A (en) Textile fabric defect degree evaluation method, device and system based on Gaussian blur
CN114119987A (en) Feature extraction and descriptor generation method and system based on convolutional neural network
CN116721216A (en) Multi-view three-dimensional reconstruction method based on GCF-MVSNet network
Kwok et al. Adaptive scale adjustment design of unsharp masking filters for image contrast enhancement
CN112529081A (en) Real-time semantic segmentation method based on efficient attention calibration
Ooi et al. Enhanced dense space attention network for super-resolution construction from single input image
CN113012072A (en) Image motion deblurring method based on attention network
Zhu et al. Rgb-d saliency detection based on cross-modal and multi-scale feature fusion
Shi et al. Comparative Study of Digital Instrument Image Enhancement in Complex Industrial Environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant