CN113838104A - Registration method based on multispectral and multi-mode image consistency enhancement network - Google Patents


Info

Publication number
CN113838104A
Authority
CN
China
Prior art keywords
image
consistency
network
phase
enhancement
Prior art date
Legal status
Granted
Application number
CN202110890638.7A
Other languages
Chinese (zh)
Other versions
CN113838104B (en)
Inventor
肖泽琪
曹思源
沈会良
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202110890638.7A
Publication of CN113838104A
Application granted
Publication of CN113838104B
Status: Active

Classifications

    • G06T 7/30: Image analysis; determination of transform parameters for the alignment of images, i.e. image registration
    • G06F 18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 3/045: Neural networks; combinations of networks
    • G06T 5/94: Image enhancement; dynamic range modification based on local image properties, e.g. for local contrast enhancement
    • G06T 2207/20081: Indexing scheme; training, learning
    • G06T 2207/20084: Indexing scheme; artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a registration method based on a multispectral and multimodal image consistency enhancement network. The method comprises the following steps. First, the image consistency enhancement network is trained: a plurality of training image pairs are acquired; the training image pairs are respectively input into the multispectral and multimodal image consistency enhancement network, images are output after forward-propagation processing, and a consistency enhancement loss function is calculated; the network parameters of the image consistency enhancement network are updated using the loss function until a preset condition is met, at which point training ends and the trained image consistency enhancement network is obtained. Test images are then enhanced by the image consistency enhancement network and registered using an image registration algorithm based on multi-scale motion estimation. The image consistency enhancement network can effectively extract the consistent structure of multispectral and multimodal images, so that they achieve excellent results in the subsequent registration.

Description

Registration method based on multispectral and multi-mode image consistency enhancement network
Technical Field
The invention relates to the technical field of image processing, and in particular to a registration method based on a multispectral and multimodal image consistency enhancement network.
Background
Multispectral and multimodal images are important data in computer vision and computational photography. Because multispectral and multimodal data are typically misaligned due to translation or movement of the imaging device, image registration is essential. Since multispectral and multimodal images exhibit nonlinear brightness and gradient variations, traditional image registration techniques cannot obtain good results on them. A solution that improves image consistency is therefore needed to address the registration challenge caused by these nonlinear variations.
Currently, the mainstream consistency enhancement algorithms include LAT (local area transform), EI (entropy image), CT (census transform), and the like.
LAT exploits the intensity statistics of a local area, assuming that the intensity variation within the area follows a functional relationship expressed through the indicator τ(x, y), where τ(x, y) = 1 when x = y and 0 otherwise. Such algorithms rely heavily on the richness of the intensity levels of the input image and are less effective on images with small intensity fluctuations.
EI obtains structure using the Shannon entropy, defined as

H(X) = −Σ_{i=1}^{n} P(x_i) log P(x_i)

where X is a discrete random variable taking values {x_1, ..., x_n} with probability distribution P(X). In a local area, EI computes a local histogram to estimate the probability of each gray level, and uses the resulting entropy to represent the structural information of the area.
CT encodes consistent structure with a fixed comparison pattern. The most commonly used CT is defined, for a center pixel p and a neighboring pixel p′, as

ξ(p, p′) = 1 if I(p′) < I(p), and 0 otherwise

CT typically operates over a 3×3 window, producing an 8-channel feature per pixel as the similarity-enhanced structure, which reduces nonlinear variations in intensity and gradient.
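For illustration, a minimal census transform consistent with the 3×3, 8-channel description above (an illustrative sketch, not code from the patent):

```python
import numpy as np

def census_transform(img: np.ndarray) -> np.ndarray:
    """3x3 census transform: for each pixel, one binary channel per neighbor,
    set to 1 where the neighbor is darker than the center (8 channels total)."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    channels = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue  # skip the center pixel itself
            neighbor = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            channels.append((neighbor < img).astype(np.uint8))
    return np.stack(channels, axis=0)  # (8, H, W) similarity-enhanced structure
```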
However, the above algorithms extract image features from hand-crafted statistical assumptions that lack a solid theoretical basis, and therefore cannot achieve stable performance on multispectral and multimodal datasets.
Disclosure of Invention
The invention aims to provide a registration method based on a multispectral and multimodal image consistency enhancement network. The image consistency enhancement network can effectively extract the consistent structure of multispectral and multimodal images, so that excellent results are obtained in the subsequent registration.
The invention is realized by adopting the following technical scheme:
a registration method based on a multispectral and multi-modal image consistency enhancement network comprises the following steps:
(1) a plurality of training image pairs are acquired. Each training image pair comprises a first image of a certain wave band or mode and a second image of another wave band or mode;
(2) respectively inputting the training image pairs into an image consistency enhancement network, outputting images after forward propagation processing, and calculating a consistency enhancement module loss function;
(3) updating network parameters of the image consistency enhancement network by using the loss function until the network parameters meet a preset condition (namely the loss function is lower than a threshold value or the iteration times reach an upper limit), finishing training and obtaining a trained image consistency enhancement network;
(4) obtaining a plurality of test image pairs, enhancing the test image pairs through an image consistency enhancing network, and then carrying out registration by using an image registration algorithm based on multi-scale motion estimation. The invention can achieve excellent registration result.
In the above technical solution, the training image pairs and test image pairs are obtained as follows: samples of different spectral bands or different modalities are collected; an image of one spectral band or modality is selected as the first image and an image of another band or modality as the second image, one serving as the reference image and the other as the image to be registered. Sub-images of a preset size are cropped from the same positions of the reference image and the image to be registered to form a training image pair or a test image pair.
Further, the image consistency enhancement network comprises two modules: a low-brightness enhancement module and a consistency enhancement module.
The low-brightness enhancement module raises the brightness of low-brightness image areas and balances the overall brightness of the image, so that the structural information extracted by the consistency enhancement module is richer; the consistency enhancement module enhances structural similarity and alleviates the nonlinear intensity and gradient variations of multispectral and multimodal images.
The low-brightness enhancement module processes an image as follows: the image is passed through a lightweight convolutional network that outputs N nonlinear mapping parameters, and the image is then iteratively mapped N times to obtain the low-brightness-enhanced image. Each mapping is computed as

I_out = I_in + α·I_in·(1 − I_in)   (1)

where I_in and I_out are respectively the input and output images of each iteration, and α is the nonlinear mapping parameter.
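A minimal PyTorch sketch of this module, assuming a Zero-DCE-style curve-estimation network; the patent fixes only the mapping of equation (1) and, in the embodiment, N = 8, so the layer configuration below is an assumption:

```python
import torch
import torch.nn as nn

class LowLightEnhance(nn.Module):
    """Sketch of the low-brightness enhancement module: a lightweight CNN
    predicts N nonlinear mapping parameters (one alpha map per iteration),
    then equation (1) is applied N times."""
    def __init__(self, n_iters: int = 8):
        super().__init__()
        self.n_iters = n_iters
        self.net = nn.Sequential(          # lightweight convolutional network
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, n_iters, 3, padding=1), nn.Tanh(),
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:  # img in [0, 1]
        alphas = self.net(img)             # (B, N, H, W): N mapping parameters
        out = img
        for i in range(self.n_iters):
            a = alphas[:, i:i + 1]         # alpha map for iteration i
            out = out + a * out * (1.0 - out)   # equation (1)
        return out
```

With α ∈ (−1, 1) from the Tanh, each iteration brightens (α > 0) or darkens (α < 0) the mid-tones while leaving the endpoints 0 and 1 fixed, since I(1 − I) vanishes there.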
The consistency enhancement module consists of two parts: an improved learnable Gabor kernel and a phase consistency structure. The improved learnable Gabor kernel is obtained by point-wise multiplication of a Gabor filter with a learnable convolution kernel. The Gabor filter is defined as

ψ_{u,v}(z) = (‖k_{u,v}‖²/σ²)·exp(−‖k_{u,v}‖²‖z‖²/(2σ²))·[exp(i·k_{u,v}·z) − exp(−σ²/2)]   (2)

where k_{u,v} is the Gabor wave vector, with subscripts u and v indexing the orientation and scale of the filter, σ is the standard deviation of the Gaussian envelope, related to the bandwidth of the wavelet and taken as a constant here, and z is the image coordinate substituted into the filter.
The improved learnable Gabor kernel extracts multi-scale, multi-orientation, odd and even frequency components, improving the generality of the network while markedly reducing the difficulty of network training.
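The construction can be sketched as follows; the kernel size, scale spacing, and modulation initialization are assumptions, since the patent states only that a Gabor bank is point-multiplied with a learnable convolution kernel (the embodiment uses 6 orientations):

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def gabor_bank(ksize=15, n_orient=6, n_scale=4, sigma=math.pi):
    """Even (real) and odd (imaginary) parts of the Gabor kernels of Eq. (2)."""
    r = torch.arange(ksize, dtype=torch.float32) - ksize // 2
    ys, xs = torch.meshgrid(r, r, indexing="ij")
    even, odd = [], []
    for v in range(n_scale):
        k = (math.pi / 2) / (math.sqrt(2) ** v)               # wave-vector magnitude
        for u in range(n_orient):
            phi = u * math.pi / n_orient                      # orientation angle
            x = k * (xs * math.cos(phi) + ys * math.sin(phi))  # k_{u,v} . z
            env = (k ** 2 / sigma ** 2) * torch.exp(
                -k ** 2 * (xs ** 2 + ys ** 2) / (2 * sigma ** 2))
            dc = math.exp(-sigma ** 2 / 2)                    # DC-compensation term
            even.append(env * (torch.cos(x) - dc))
            odd.append(env * torch.sin(x))
    return torch.stack(even), torch.stack(odd)                # (S*O, k, k) each

class LearnableGabor(nn.Module):
    """Fixed Gabor bank point-multiplied by a learnable kernel of equal shape."""
    def __init__(self, ksize=15, n_orient=6, n_scale=4):
        super().__init__()
        even, odd = gabor_bank(ksize, n_orient, n_scale)
        self.register_buffer("even", even)
        self.register_buffer("odd", odd)
        self.mod = nn.Parameter(torch.ones_like(even))        # learnable modulation

    def forward(self, img):                                   # img: (B, 1, H, W)
        pad = self.even.shape[-1] // 2
        e = F.conv2d(img, (self.even * self.mod).unsqueeze(1), padding=pad)
        o = F.conv2d(img, (self.odd * self.mod).unsqueeze(1), padding=pad)
        return e, o      # even/odd frequency component channels, (B, S*O, H, W)
```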
After the image is processed by the improved learnable Gabor kernel, multi-scale, multi-orientation, odd and even frequency component channels are generated. The amplitude spectrum and phase spectrum of the frequency component channels at the same scale are respectively computed as

A_s(x, y) = √(e_s(x, y)² + o_s(x, y)²)   (3)
φ_s(x, y) = arctan(o_s(x, y)/e_s(x, y))   (4)

where o_s(x, y) is the odd frequency component channel at image point (x, y), e_s(x, y) is the even frequency component channel, and s is the scale. The local energy of the amplitude spectrum and the corresponding mean phase spectrum are respectively computed as

E(x, y) = √((Σ_s e_s(x, y))² + (Σ_s o_s(x, y))²)   (5)
φ̄(x, y) = arctan(Σ_s o_s(x, y) / Σ_s e_s(x, y))   (6)
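Given the even/odd responses per scale for one orientation, equations (3) to (6) reduce to a few tensor operations; a sketch (the ε guard value is an assumption):

```python
import torch

def spectra(e: torch.Tensor, o: torch.Tensor, eps: float = 1e-8):
    """Amplitude/phase spectra (Eqs. 3-4) and local energy / mean phase
    (Eqs. 5-6). e, o: even/odd channels of one orientation, (B, S, H, W)."""
    amp = torch.sqrt(e ** 2 + o ** 2 + eps)              # A_s, Eq. (3)
    phase = torch.atan2(o, e)                            # phi_s, Eq. (4); atan2 keeps the quadrant
    e_sum = e.sum(dim=1, keepdim=True)                   # sums over scales
    o_sum = o.sum(dim=1, keepdim=True)
    energy = torch.sqrt(e_sum ** 2 + o_sum ** 2 + eps)   # E, Eq. (5)
    mean_phase = torch.atan2(o_sum, e_sum)               # phi-bar, Eq. (6)
    return amp, phase, energy, mean_phase
```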
The phase consistency structure is guided by phase congruency theory: a phase consistency architecture is established, and 3 trainable layers are adopted to extract trainable features. The phase consistency of the image is preliminarily computed as

PC = (Σ_s A_s ⊙ ΔΦ_s) ⊘ (Σ_s A_s + ξ₁)   (7)

where bold variables are vectorized representations (e.g., Φ_s is the vectorized form of φ_s(x, y)), ⊙ denotes element-wise multiplication of matrices, ⊘ denotes element-wise division of matrices, and ξ₁ is a small quantity that avoids division by zero. The phase deviation ΔΦ_s is computed as

ΔΦ_s = cos(Φ_s − Φ̄) − |sin(Φ_s − Φ̄)|   (8)

where Φ_s is the phase spectrum of the image and Φ̄ is its mean phase spectrum.
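In code, equations (7) and (8) reduce to a few lines; a sketch continuing the notation above (the value of ξ₁ is an assumption, the text only calls it a small quantity):

```python
def phase_consistency(amp, phase, mean_phase, xi1: float = 1e-4):
    """Preliminary phase consistency of Eqs. (7)-(8) for one orientation.
    amp, phase: (B, S, H, W); mean_phase: (B, 1, H, W)."""
    d = phase - mean_phase
    delta_phi = torch.cos(d) - torch.abs(torch.sin(d))          # Delta Phi_s, Eq. (8)
    pc = (amp * delta_phi).sum(dim=1) / (amp.sum(dim=1) + xi1)  # Eq. (7)
    return pc, delta_phi
```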
Furthermore, the phase consistency architecture includes three trainable layers: a noise estimation layer, an improved phase deviation estimation layer, and a frequency abundance estimation layer.
The computation of equation (7) is very sensitive to noise, and the noise estimation layer is used to estimate the image noise. The local-energy response of noise follows a Rayleigh distribution, so the noise threshold is computed as

T = N_s·τ·(√(π/2) + α·√((4 − π)/2))   (9)

where N_s is the number of frequency scales, τ is derived from the local energy, and α is a learnable quantity. The noise term T is subtracted from the A_s ⊙ ΔΦ_s term of equation (7), and the result is passed through a ReLU activation to remove regions drowned by noise.
The improved phase deviation estimation layer introduces a learnable quantity β to control the significance of the output structure. The improved phase deviation ΔΦ′_s is computed as

ΔΦ′_s = cos(Φ_s − Φ̄) − β·|sin(Φ_s − Φ̄)|   (10)
The computation of equation (7) does not take into account the number of frequency scales over which the phases coincide; the more scales that coincide, the higher the weight the computed consistency should receive. The weight is computed as

W = 1 ⊘ (1 + exp(γ·(c₀ − s̄))),  with  s̄ = (Σ_s A_s) ⊘ (S·(A_max + ξ₂))   (11)

where s̄ is the spread of the filter responses over scales, A_max is the maximum amplitude over scales, c₀ is a cut-off constant, ξ₂ is a small quantity, γ is the learnable quantity, and S represents the total number of scales.
The final phase consistency is computed as

PC = Σ_o (Σ_s W ⊙ ⌊A_s ⊙ ΔΦ′_s − T̃⌋) ⊘ (Σ_s A_s + ξ₁)   (12)

where the subscript s denotes the scale, the subscript o denotes the orientation, ⌊·⌋ truncates negative values to zero, and T̃ is the noise threshold T repeatedly expanded (broadcast) to the same dimensions as A_s ⊙ ΔΦ′_s.
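A sketch of the three trainable layers wired together as in equations (9) to (12). α, β, γ are the learnable quantities named in the text; the τ estimate, the spread term of equation (11), and all initial values are assumptions:

```python
import math
import torch
import torch.nn as nn

class TrainablePC(nn.Module):
    """Noise estimation (Eq. 9), improved phase deviation (Eq. 10),
    frequency-abundance weighting (Eq. 11) and final consistency (Eq. 12)
    for one orientation; summing the output over orientations gives PC."""
    def __init__(self, n_scales: int, xi1: float = 1e-4):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(2.0))    # noise layer
        self.beta = nn.Parameter(torch.tensor(1.0))     # phase-deviation layer
        self.gamma = nn.Parameter(torch.tensor(10.0))   # frequency-abundance layer
        self.n_scales, self.xi1 = n_scales, xi1

    def forward(self, amp, phase, mean_phase):          # amp/phase: (B, S, H, W)
        d = phase - mean_phase
        # Eq. (10): learnable significance beta in the phase deviation.
        delta = torch.cos(d) - self.beta * torch.abs(torch.sin(d))
        # Eq. (9): Rayleigh-based threshold; tau from the smallest-scale response.
        tau = amp[:, :1].flatten(2).median(dim=2).values.view(-1, 1, 1, 1)
        T = self.n_scales * tau * (math.sqrt(math.pi / 2)
                                   + self.alpha * math.sqrt((4 - math.pi) / 2))
        # Eq. (11): sigmoid weight on the spread of responses across scales.
        spread = amp.sum(1, keepdim=True) / (
            self.n_scales * (amp.max(dim=1, keepdim=True).values + self.xi1))
        w = torch.sigmoid(self.gamma * (spread - 0.5))  # 0.5 cut-off: assumption
        # Eq. (12): noise-suppressed, weighted, normalized consistency.
        num = (w * torch.relu(amp * delta - T)).sum(dim=1)
        return num / (amp.sum(dim=1) + self.xi1)
```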
Further, the consistency enhancement module loss function is computed as

L = −(1/N_o)·Σ_o SSIM(PC_o(P_1), PC_o(P_2)) + c·Σ_{l∈{x,y}} ( ‖∇_l PC(P_1) − ∇_l P_1‖₁ + ‖∇_l PC(P_2) − ∇_l P_2‖₁ )   (13)

where SSIM is the structural similarity measure, N_o is the total number of orientations, P_1 and P_2 are respectively the reference image and the registered image, ∇_l with l ∈ {x, y} denotes the gradient in the transverse and longitudinal directions, and c is a hyperparameter for balancing consistency enhancement with structure retention.
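Equation (13) can be sketched as follows; the SSIM window size and the exact form of the structure-retention term are assumptions:

```python
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Mean SSIM over 11x11 windows (standard constants for inputs in [0, 1])."""
    mu_x, mu_y = F.avg_pool2d(x, 11, 1, 5), F.avg_pool2d(y, 11, 1, 5)
    var_x = F.avg_pool2d(x * x, 11, 1, 5) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 11, 1, 5) - mu_y ** 2
    cov = F.avg_pool2d(x * y, 11, 1, 5) - mu_x * mu_y
    s = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return s.mean()

def grad_xy(t):
    """Finite-difference gradients along x (columns) and y (rows)."""
    return t[..., :, 1:] - t[..., :, :-1], t[..., 1:, :] - t[..., :-1, :]

def consistency_loss(pc1, pc2, p1, p2, c=0.8):
    """Sketch of Eq. (13): SSIM consistency between the enhanced pair per
    orientation, plus a gradient structure-retention term against the inputs.
    pc1, pc2: (B, N_o, H, W) enhanced outputs; p1, p2: (B, 1, H, W) inputs."""
    n_o = pc1.shape[1]
    sim = sum(ssim(pc1[:, i:i + 1], pc2[:, i:i + 1]) for i in range(n_o)) / n_o
    retain = 0.0
    for pc, p in ((pc1, p1), (pc2, p2)):
        gx, gy = grad_xy(pc.mean(dim=1, keepdim=True))
        px, py = grad_xy(p)
        retain = retain + (gx - px).abs().mean() + (gy - py).abs().mean()
    return -sim + c * retain
```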
Further, registration is performed with the image registration method based on multi-scale motion estimation. Specifically, the sum of squared differences (SSD) is used as the registration metric, computed as

â = argmin_a Σ_{p∈Ω(a)} [I_R(p) − I_F(W(p; a))]²   (14)

where I_R is the reference image, I_F is the image to be registered, Ω(a) is the effective area where the two overlap, a denotes the parameters of the affine transformation, W(p; a) is the affine warp of point p, â is the optimal parameter value, and p is an element of the overlap area. The multi-scale motion estimation is a hierarchical motion parameter estimation over a Gaussian image pyramid.
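A minimal coarse-to-fine sketch of the SSD-based affine registration of equation (14) over a Gaussian pyramid; the optimizer, pyramid depth, and overlap handling are assumptions, as the patent fixes only the SSD metric and the hierarchical estimation:

```python
import numpy as np
from scipy import ndimage, optimize

def ssd_cost(params, ref, mov):
    """Eq. (14): SSD between ref and the affine-warped mov.
    params = [a11, a12, a21, a22, tx, ty]."""
    A = np.array([[params[0], params[1]], [params[2], params[3]]])
    warped = ndimage.affine_transform(mov, A, offset=params[4:6], order=1)
    mask = warped > 0  # crude overlap region Omega(a); assumes positive intensities
    return float(np.sum((ref[mask] - warped[mask]) ** 2))

def register_multiscale(ref, mov, n_levels=4):
    """Hierarchical affine motion estimation on a Gaussian pyramid:
    solve at the coarsest level, then refine level by level."""
    params = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0])
    for i, lvl in enumerate(reversed(range(n_levels))):
        f = 2 ** lvl
        r = ndimage.zoom(ndimage.gaussian_filter(ref, f / 2), 1 / f, order=1)
        m = ndimage.zoom(ndimage.gaussian_filter(mov, f / 2), 1 / f, order=1)
        if i > 0:
            params[4:6] *= 2  # translations double at each finer level
        params = optimize.minimize(ssd_cost, params, args=(r, m),
                                   method="Powell").x
    return params             # estimated affine parameters of Eq. (14)
```

In practice the consistency-enhanced outputs of the two images would be passed as ref and mov, so that the SSD is computed on structures that are already comparable across bands or modalities.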
The invention has the following beneficial effects:
The registration method based on a multispectral and multimodal image consistency enhancement network proposes and realizes an image consistency enhancement network that effectively reduces the influence of the nonlinear intensity and gradient variations of multispectral and multimodal images on registration; combined with the proposed image registration algorithm based on hierarchical motion estimation, it achieves excellent registration results.
Drawings
FIG. 1 is a flow chart of image enhancement and registration using the registration method based on the multispectral and multimodal image consistency enhancement network.
FIG. 2 shows the low-brightness enhancement module of the image consistency enhancement network.
FIG. 3 shows the consistency enhancement module of the image consistency enhancement network.
FIG. 4 shows results of enhancing images with the image consistency enhancement network: (a) multispectral image 1; (b) enhancement result of multispectral image 1; (c) multispectral image 2; (d) enhancement result of multispectral image 2.
Detailed Description
The following description of the embodiments of the present invention will be made with reference to the accompanying drawings.
As shown in FIG. 1, the training and registration procedure of the present invention includes the following steps:
(1) a plurality of training image pairs are acquired. Each training image pair comprises a first image of a certain wave band or mode and a second image of another wave band or mode;
(2) respectively inputting the training image pairs into an image consistency enhancement network, outputting images after forward propagation processing, and calculating a consistency enhancement module loss function;
(3) updating network parameters of the image consistency enhancement network by using a loss function until the network parameters meet a preset condition, finishing training and obtaining a trained image consistency enhancement network;
(4) a plurality of test image pairs are acquired, and the first and second images are enhanced with the trained image consistency enhancement network; registration is then performed using the image registration method based on multi-scale motion estimation. The invention achieves excellent registration results.
As shown in FIG. 2, the low-brightness enhancement module processes an image as follows: the image is passed through a lightweight convolutional network that outputs N nonlinear mapping parameters, and the image is then iteratively mapped N times to obtain the low-brightness-enhanced image, where N is preferably 8. The mapping is computed as

I_out = I_in + α·I_in·(1 − I_in)

where I_in and I_out are respectively the input and output images of each iteration, and α is the nonlinear mapping parameter.
FIG. 3 shows the consistency enhancement module, which consists of two parts: an improved learnable Gabor kernel and a phase consistency structure. The improved learnable Gabor kernel is obtained by point-wise multiplication of a Gabor filter with a learnable convolution kernel. The Gabor filter is defined as

ψ_{u,v}(z) = (‖k_{u,v}‖²/σ²)·exp(−‖k_{u,v}‖²‖z‖²/(2σ²))·[exp(i·k_{u,v}·z) − exp(−σ²/2)]

where k_{u,v} is the Gabor wave vector, with subscripts u and v indexing the orientation and scale of the filter, σ is the standard deviation of the Gaussian envelope, related to the bandwidth of the wavelet and taken as a constant here, and z is the image coordinate.
Preferably, the number of orientations N_o of the multi-orientation Gabor filter is taken as 6.
After passing through the Gabor kernel, the image yields multi-scale, multi-orientation, odd and even frequency component channels.
The phase consistency structure is guided by phase congruency theory: a phase consistency architecture is established, and 3 trainable layers are adopted for trainable feature extraction, namely a noise estimation layer, an improved phase deviation estimation layer, and a frequency abundance estimation layer. The final phase consistency is computed as

PC = Σ_o (Σ_s W ⊙ ⌊A_s ⊙ ΔΦ′_s − T̃⌋) ⊘ (Σ_s A_s + ξ₁)

where the subscript s denotes the scale, o denotes the orientation, ⌊·⌋ truncates negative values to zero, and T̃ is the noise threshold T broadcast to the same dimensions as A_s ⊙ ΔΦ′_s.
The loss function of the consistency enhancement network is computed as

L = −(1/N_o)·Σ_o SSIM(PC_o(P_1), PC_o(P_2)) + c·Σ_{l∈{x,y}} ( ‖∇_l PC(P_1) − ∇_l P_1‖₁ + ‖∇_l PC(P_2) − ∇_l P_2‖₁ )

where l ∈ {x, y} denotes the gradient in the transverse and longitudinal directions, and c is a hyperparameter for balancing consistency enhancement with structure retention. Preferably, c is 0.8.
FIG. 4 shows how the enhanced images differ from the originals. As can be seen, after low-brightness enhancement and consistency enhancement are applied to two images with a large brightness difference, the obtained consistency structures are much closer to each other. The final registration results on the multispectral dataset are shown in Table 1, which compares image enhancement and registration using the image consistency enhancement network and the registration method based on hierarchical motion estimation against mainstream algorithms. Compared with traditional consistency enhancement algorithms, the network with only the consistency enhancement module already reduces errors markedly; after the low-brightness enhancement module is added, many values improve further over the network without it, demonstrating that the combination of the two has a positive effect on image registration.
TABLE 1. Registration effect of each algorithm on the images (the table is rendered as an image in the original).
The above description is only one embodiment of the present invention and should not limit the scope of the invention; all equivalent changes and modifications made by those skilled in the art according to the present invention shall still fall within the scope of the invention.

Claims (7)

1. A registration method based on a multispectral and multi-modal image consistency enhancement network is characterized by comprising the following steps:
(1) acquiring a plurality of training image pairs, each training image pair comprising a first image of one spectral band or modality and a second image of another spectral band or modality;
(2) respectively inputting the training image pairs into an image consistency enhancement network, outputting images after forward-propagation processing, and calculating a consistency enhancement module loss function; the image consistency enhancement network comprises a low-brightness enhancement module and a consistency enhancement module, wherein the low-brightness enhancement module is used for enhancing the brightness of low-brightness areas of an image and balancing the overall brightness of the image, and the consistency enhancement module is used for enhancing structural similarity and alleviating the nonlinear intensity and gradient variations of multispectral and multimodal images;
(3) updating network parameters of the image consistency enhancement network by using a loss function until the network parameters meet a preset condition, finishing training and obtaining a trained image consistency enhancement network;
(4) obtaining a plurality of test image pairs, enhancing the test image pairs through an image consistency enhancing network, and then carrying out registration by using an image registration algorithm based on multi-scale motion estimation.
2. The multi-spectral and multi-modal image consistency enhancement network based registration method of claim 1, wherein the step of obtaining a plurality of training image pairs and test image pairs is as follows:
collecting samples of different spectral bands or different modalities, selecting an image of one spectral band or modality as a first image and an image of another spectral band or modality as a second image, taking one image as a reference image and the other image as the image to be registered; and respectively cropping sub-images of a preset size from the same positions of the reference image and the image to be registered to form a training image pair or a test image pair.
3. The multi-spectral and multi-modal image consistency enhancement network based registration method according to claim 1, wherein the low brightness enhancement module processes the image by:
the image is passed through a lightweight convolutional network that outputs N nonlinear mapping parameters, and the image is iteratively mapped N times to obtain the low-brightness-enhanced image, the mapping being computed as

I_out = I_in + α·I_in·(1 − I_in)   (1)

wherein I_in and I_out are respectively the input image and the output image of each iteration, and α is the nonlinear mapping parameter.
4. The multi-spectral and multi-modal image consistency enhancement network based registration method according to claim 1, wherein the consistency enhancement module comprises an improved learnable Gabor kernel and a phase consistency structure; the improved learnable Gabor kernel is obtained by point-wise multiplication of a Gabor filter with a learnable convolution kernel; the Gabor filter is defined as

ψ_{u,v}(z) = (‖k_{u,v}‖²/σ²)·exp(−‖k_{u,v}‖²‖z‖²/(2σ²))·[exp(i·k_{u,v}·z) − exp(−σ²/2)]   (2)

wherein k_{u,v} is the Gabor wave vector, with subscripts u and v indexing the orientation and scale of the filter, σ is the standard deviation of the Gaussian envelope, related to the bandwidth of the wavelet and taken as a constant here, and z is the image coordinate;
after the image is processed by the improved learnable Gabor kernel, multi-scale, multi-orientation, odd and even frequency component channels are generated, and the amplitude spectrum and the phase spectrum of the frequency component channels at the same scale are respectively computed as

A_s(x, y) = √(e_s(x, y)² + o_s(x, y)²)   (3)
φ_s(x, y) = arctan(o_s(x, y)/e_s(x, y))   (4)

wherein o_s(x, y) is the odd frequency component channel at image point (x, y), e_s(x, y) is the even frequency component channel, and s is the scale;
the local energy of the amplitude spectrum and the corresponding mean phase spectrum are respectively computed as

E(x, y) = √((Σ_s e_s(x, y))² + (Σ_s o_s(x, y))²)   (5)
φ̄(x, y) = arctan(Σ_s o_s(x, y) / Σ_s e_s(x, y))   (6)
the phase consistency structure calculates the phase consistency of the image according to formula (7):

PC = (Σ_s A_s ⊙ ΔΦ_s) ⊘ (Σ_s A_s + ξ₁)   (7)

wherein bold variables are vectorized representations, ⊙ denotes element-wise multiplication of matrices, ⊘ denotes element-wise division of matrices, and ξ₁ is a small quantity that avoids division by zero; the phase deviation ΔΦ_s is computed as

ΔΦ_s = cos(Φ_s − Φ̄) − |sin(Φ_s − Φ̄)|   (8)

wherein Φ_s represents the phase spectrum of the image and Φ̄ represents the mean phase spectrum of the image.
5. The method according to claim 4, wherein the phase consistency structure comprises three trainable layers, namely a noise estimation layer, an improved phase deviation estimation layer, and a frequency abundance estimation layer;
the noise estimation layer is used for estimating the image noise; the local-energy response of noise follows a Rayleigh distribution, so the noise threshold is computed as

T = N_s·τ·(√(π/2) + α·√((4 − π)/2))   (9)

wherein N_s represents the number of frequency scales, τ is obtained from the local energy, and α is the learnable quantity; the noise term T is subtracted from the A_s ⊙ ΔΦ_s term of formula (7), and the result is passed through a ReLU activation to remove areas drowned by noise;
the improved phase deviation estimation layer introduces a learnable quantity β to control the significance of the output structure, and the improved phase deviation ΔΦ′_s is computed as

ΔΦ′_s = cos(Φ_s − Φ̄) − β·|sin(Φ_s − Φ̄)|   (10)

the weight of phase coincidence is computed in consideration of the number of frequency scales over which the phases coincide, the weight W being computed as

W = 1 ⊘ (1 + exp(γ·(c₀ − s̄))),  with  s̄ = (Σ_s A_s) ⊘ (S·(A_max + ξ₂))   (11)

wherein γ is the learnable quantity, S represents the total number of scales, s̄ is the spread of the filter responses over scales, A_max is the maximum amplitude over scales, c₀ is a cut-off constant, and ξ₂ is a small quantity;
the final phase consistency is computed as

PC = Σ_o (Σ_s W ⊙ ⌊A_s ⊙ ΔΦ′_s − T̃⌋) ⊘ (Σ_s A_s + ξ₁)   (12)

wherein the subscript s denotes the scale, the subscript o denotes the orientation, ⌊·⌋ truncates negative values to zero, and T̃ is the noise threshold T repeatedly expanded to the same dimensions as A_s ⊙ ΔΦ′_s.
6. The multi-spectral and multi-modal image consistency enhancement network based registration method according to claim 1, wherein the consistency enhancement module loss function is computed as

L = −(1/N_o)·Σ_o SSIM(PC_o(P_1), PC_o(P_2)) + c·Σ_{l∈{x,y}} ( ‖∇_l PC(P_1) − ∇_l P_1‖₁ + ‖∇_l PC(P_2) − ∇_l P_2‖₁ )   (13)

wherein SSIM is the structural similarity measure, N_o is the total number of orientations, P_1 and P_2 are respectively the reference image and the registered image, ∇_l with l ∈ {x, y} denotes the gradient in the transverse and longitudinal directions, and c is a hyperparameter for balancing consistency enhancement with structure retention.
7. The registration method based on the multispectral and multi-modal image consistency enhancement network according to claim 1, wherein the image registration method based on multi-scale motion estimation performs registration as follows: the sum of squared differences (SSD) is used as the registration metric, computed as

â = argmin_a Σ_{p∈Ω(a)} [I_R(p) − I_F(W(p; a))]²   (14)

wherein I_R represents the reference image, I_F represents the image to be registered, Ω(a) represents the effective area where the two overlap, a represents the parameters of the affine transformation, W(p; a) is the affine warp of point p, â is the optimal value, and p is an element of the overlap area; the multi-scale motion estimation is a hierarchical motion parameter estimation over a Gaussian image pyramid.
CN202110890638.7A 2021-08-04 2021-08-04 Registration method based on multispectral and multimodal image consistency enhancement network Active CN113838104B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110890638.7A CN113838104B (en) 2021-08-04 2021-08-04 Registration method based on multispectral and multimodal image consistency enhancement network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110890638.7A CN113838104B (en) 2021-08-04 2021-08-04 Registration method based on multispectral and multimodal image consistency enhancement network

Publications (2)

Publication Number Publication Date
CN113838104A true CN113838104A (en) 2021-12-24
CN113838104B CN113838104B (en) 2023-10-27

Family

ID=78963181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110890638.7A Active CN113838104B (en) 2021-08-04 2021-08-04 Registration method based on multispectral and multimodal image consistency enhancement network

Country Status (1)

Country Link
CN (1) CN113838104B (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107833280A (en) * 2017-11-09 2018-03-23 交通运输部天津水运工程科学研究所 A kind of outdoor moving augmented reality method being combined based on geographic grid with image recognition
US20200184660A1 (en) * 2018-12-11 2020-06-11 Siemens Healthcare Gmbh Unsupervised deformable registration for multi-modal images
RU2706891C1 (en) * 2019-06-06 2019-11-21 Самсунг Электроникс Ко., Лтд. Method of generating a common loss function for training a convolutional neural network for converting an image into an image with drawn parts and a system for converting an image into an image with drawn parts
CN110533620A (en) * 2019-07-19 2019-12-03 西安电子科技大学 The EO-1 hyperion and panchromatic image fusion method of space characteristics are extracted based on AAE
CN111105432A (en) * 2019-12-24 2020-05-05 中国科学技术大学 Unsupervised end-to-end driving environment perception method based on deep learning
CN112288663A (en) * 2020-09-24 2021-01-29 山东师范大学 Infrared and visible light image fusion method and system
CN112330724A (en) * 2020-10-15 2021-02-05 贵州大学 Unsupervised multi-modal image registration method based on integrated attention enhancement

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANG Chao; YANG Bin; HUANG Guoyu: "Remote sensing image fusion based on multispectral image super-resolution processing", Laser & Optoelectronics Progress, No. 02
SHEN Huiliang; ZHANG Zhechao; XIN Haozhong: "Stepwise selection of representative colors in spectral reflectance reconstruction", Spectroscopy and Spectral Analysis, No. 04

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117670753A (en) * 2024-01-30 2024-03-08 浙江大学金华研究院 Infrared image enhancement method based on depth multi-brightness mapping non-supervision fusion network

Also Published As

Publication number Publication date
CN113838104B (en) 2023-10-27

Similar Documents

Publication Publication Date Title
US11551333B2 (en) Image reconstruction method and device
Thakur et al. State‐of‐art analysis of image denoising methods using convolutional neural networks
Zhang et al. Learning multiple linear mappings for efficient single image super-resolution
Yuan et al. Factorization-based texture segmentation
Ma et al. Efficient and fast real-world noisy image denoising by combining pyramid neural network and two-pathway unscented Kalman filter
Ren et al. Single image super-resolution using local geometric duality and non-local similarity
Chen et al. Convolutional neural network based dem super resolution
Ren et al. Adjusted non-local regression and directional smoothness for image restoration
CN112634149B (en) Point cloud denoising method based on graph convolution network
CN112419153A (en) Image super-resolution reconstruction method and device, computer equipment and storage medium
CN110176023B (en) Optical flow estimation method based on pyramid structure
CN111340697B (en) Image super-resolution method based on clustered regression
Geng et al. Truncated nuclear norm minimization based group sparse representation for image restoration
CN114897728A (en) Image enhancement method and device, terminal equipment and storage medium
Ren et al. Learning image profile enhancement and denoising statistics priors for single-image super-resolution
CN109961435B (en) Brain image acquisition method, device, equipment and storage medium
Liang et al. Multi-scale hybrid attention graph convolution neural network for remote sensing images super-resolution
CN113838104B (en) Registration method based on multispectral and multimodal image consistency enhancement network
Wang et al. Global aligned structured sparsity learning for efficient image super-resolution
CN114049491A (en) Fingerprint segmentation model training method, fingerprint segmentation device, fingerprint segmentation equipment and fingerprint segmentation medium
Luo et al. Piecewise linear regression-based single image super-resolution via Hadamard transform
Luo et al. A fast denoising fusion network using internal and external priors
CN108596831B (en) Super-resolution reconstruction method based on AdaBoost example regression
CN113269812B (en) Training and application method, device, equipment and storage medium of image prediction model
CN112801908B (en) Image denoising method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant