CN109461115A - Automatic image registration method based on a deep convolutional network - Google Patents

Automatic image registration method based on a deep convolutional network Download PDF

Info

Publication number
CN109461115A
CN109461115A (application CN201810847008.XA)
Authority
CN
China
Prior art keywords
layer
sub
image
matching
nearest neighbor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810847008.XA
Other languages
Chinese (zh)
Inventor
Kfir Aberman
Baoquan Chen
Mingyi Shi
Dani Lischinski
Daniel Cohen-Or
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING FILM ACADEMY
Original Assignee
BEIJING FILM ACADEMY
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING FILM ACADEMY filed Critical BEIJING FILM ACADEMY
Priority to CN201810847008.XA priority Critical patent/CN109461115A/en
Publication of CN109461115A publication Critical patent/CN109461115A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/14: Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G06F18/232: Non-hierarchical techniques
    • G06F18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213: Non-hierarchical techniques using statistics or function optimisation, with fixed number of clusters, e.g. K-means clustering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks


Abstract

The invention discloses an automatic image registration method based on a deep convolutional network. The method is applicable to images that differ greatly in color and appearance, and its robustness is good. Using the tensor information of the convolutional, pooling, or activation layers at different levels of a deep convolutional network, feature-point matching is refined progressively from the bottommost level upward in an inverted-pyramid fashion; the matching points of the topmost layer are then used to register the images with an image registration method such as moving least squares. Meanwhile, tensor response values are used to screen out nearest-neighbor matching pairs whose semantic information is weak, so that the retained nearest-neighbor matching pairs are semantically correlated.

Description

Automatic image registration method based on deep convolutional network
Technical Field
The invention relates to the technical field of image processing, in particular to an automatic image registration method based on a deep convolutional network.
Background
Image registration belongs to the field of image processing; its aim is to spatially align two or more images of the same scene, providing a foundation for subsequent processing of the images.
At present, image registration methods based on external image features are widely applied in vision applications, medicine, remote sensing, and other scenarios. Correspondence between images is established by extracting feature points and then searching for matches among the feature points of the different images; a transformation equation between the source and target images is then obtained by solving the feature-matching correspondence. The image content does not change in this process, so the quality of the matching points determines the quality of the registration result, and an error in any single matching point can strongly affect the result.
Liu Xiaojun et al. proposed an image registration method based on SIFT features, but for images with large differences in color and appearance it is difficult to find suitable matching points from external features alone. Moreover, matching points based on appearance features do not always agree with human intuition: people attend more to the semantic correlation between images, and traditional feature-extraction methods struggle to obtain the semantic distribution of an image.
Disclosure of Invention
In view of this, the invention provides an automatic image registration method based on a deep convolutional network, which is applicable to images with large color and appearance difference and has good robustness.
The automatic image registration method based on a deep convolutional network disclosed by the invention comprises the following steps:
step 1, constructing a deep convolutional network, and training the deep convolutional network to obtain a trained deep convolutional network capable of extracting image characteristics;
step 2, inputting the two images A, B to be matched into the deep convolutional network trained in step 1, and extracting the output of a convolutional, pooling, or activation layer at each level of the network; one output is extracted per level, and the levels may all use the same kind of layer (for example, all activation layers) or different kinds;
step 3, for the extracted output of each layer, setting the search sub-regions of that layer; starting from the bottommost layer and proceeding upward, performing nearest-neighbor matching within the corresponding search sub-regions of each layer; the nearest-neighbor matching process is as follows:
for the nth search sub-region of layer l: for each sub-tensor F_A^l(p) in the nth search sub-region of layer l of image A, search within the matching region centered on the sub-tensor F_B^l(p) at the same position p of layer l of image B for the sub-tensor nearest to F_A^l(p), where the matching region is smaller than the search sub-region; similarly, for each sub-tensor F_B^l(q) in the nth search sub-region of layer l of image B, search within the matching region centered on the sub-tensor F_A^l(q) at the same position q of layer l of image A for the sub-tensor nearest to F_B^l(q); if two sub-tensors in the nth search sub-region of layer l of A and B are each other's nearest neighbors, the two sub-tensors are called a nearest-neighbor matching pair;
the nth search sub-region of layer l is the mapping of the nth matching pair of the layer below (layer l+1) onto layer l; the search sub-region of the bottommost layer l = L (L being the total number of extracted layers in step 2) is the entire plane of the bottommost layer;
and so on, until the nearest-neighbor matching pairs of the topmost layer l = 1 are obtained;
step 4, using the topmost nearest-neighbor matching pairs, registering the images A, B with an image registration method.
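The control flow of steps 2 to 4 (a single search sub-region at the bottom layer, then refinement within the mapped sub-regions one level up at a time) can be sketched as follows. This is a minimal NumPy illustration, assuming a caller-supplied helper `match_region` and a 2x scale factor between adjacent levels; neither assumption comes from the patent text:

```python
import numpy as np

def coarse_to_fine(pyramid_A, pyramid_B, match_region):
    """pyramid_*: per-level feature tensors, index 0 = topmost (finest) layer,
    index -1 = bottommost (coarsest); adjacent levels assumed to differ by 2x.
    match_region(FA, FB, ca, cb) -> list of matched position pairs found
    around the sub-region centres ca (in A) and cb (in B)."""
    HL, WL, _ = pyramid_A[-1].shape
    # bottommost layer: a single search sub-region covering the whole plane
    pairs = match_region(pyramid_A[-1], pyramid_B[-1],
                         (HL // 2, WL // 2), (HL // 2, WL // 2))
    for l in range(len(pyramid_A) - 2, -1, -1):   # climb toward the top layer
        refined = []
        for pa, pb in pairs:
            # each coarse match maps to a search sub-region one level up
            ca = (2 * pa[0], 2 * pa[1])
            cb = (2 * pb[0], 2 * pb[1])
            refined.extend(match_region(pyramid_A[l], pyramid_B[l], ca, cb))
        pairs = refined
    return pairs
```

With a trivial matcher that simply returns the sub-region centres, a three-level pyramid propagates the bottom-plane centre up to the top layer; a real matcher would perform the mutual nearest-neighbor test of step 3 inside each sub-region.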
Further, in step 1, the deep convolutional network is a network having a function of extracting image features, such as an image classification network.
Further, in step 1, a public data set such as ImageNet is used for network training or an existing public pre-trained deep convolutional network is directly used.
Further, in step 2, only the outputs of the convolutional, pooling, or activation layers of the first 4 or 5 levels are extracted for the subsequent steps.
Further, in step 3, the size of the matching area is determined according to the search distance and the network structure.
Further, in step 3, for the nth search sub-region l_n of layer l, the nearest-neighbor matching process is as follows:
for each sub-tensor F_A^l(p) of image A in search sub-region l_n, search within the matching region centered on the sub-tensor F_B^l(p) at the same position in image B for the nearest sub-tensor F_B^l(q); then determine whether the sub-tensor nearest to F_B^l(q) within the matching region centered on the sub-tensor F_A^l(q) at the same position in image A is F_A^l(p); if so, F_A^l(p) and F_B^l(q) are considered a nearest-neighbor matching pair.
Further, in step 3, for each layer, a tensor response value of the nearest neighbor matching pair of the layer is calculated, the nearest neighbor matching pair of which the response value is greater than or equal to a set threshold is selected as a final nearest neighbor matching pair of the layer, and the final nearest neighbor matching pair is used to perform subsequent steps.
Further, in step 3, if the difference between the source image and the target image is large, the search sub-regions corresponding to the images A, B are first style-converted into a uniform common style, and the distance calculation is then performed on the converted features to obtain the nearest-neighbor matching pairs.
Further, in step 4, before the registration of the images A, B, the topmost nearest-neighbor matching pairs obtained in step 3 are clustered with an unsupervised clustering method, and the registration of the images A, B is then performed using the pairs of cluster centers.
Furthermore, the unsupervised clustering method is a K-Means clustering method, a DBSCAN clustering method or a Mean-Shift clustering method.
Further, in step 4, the registration of the images A, B is realized with moving least squares, a rigidity-preserving image deformation method, or a differential image deformation method.
Advantageous effects:
according to the method, tensor information of convolutional layers, pooling layers or activation layers of different levels in a deep convolutional network is utilized, feature point matching is performed gradually and accurately from the bottommost layer level to the top in an inverted pyramid mode, then the matching points of the topmost layer are utilized, image registration is performed by using methods such as minimum motion two-multiplication and the like, the method is suitable for images with large appearance feature changes, and robustness is good.
Before the nearest neighbor matching is carried out on the search subareas, the styles of the search subareas are unified, and the influence of style differences such as colors of images on appearance characteristics is eliminated.
The nearest-neighbor matching pairs are screened with tensor response values, and pairs whose semantic information is weak are discarded, so that the screened nearest-neighbor matching pairs are semantically correlated.
The nearest-neighbor matching pairs of the topmost activation layer are clustered with an unsupervised clustering method, and image matching is performed with the pairs of cluster centers, improving matching efficiency; moreover, the number of matching points can be changed flexibly according to the application's requirements.
Drawings
FIG. 1 is an exemplary diagram of the progressively more exact matching, layer by layer, from the bottommost activation layer upward.
Fig. 2 is an exemplary diagram of extending a matching pair of one layer to the corresponding region of the layer above it.
Fig. 3 shows matching points and their registration results according to an embodiment of the present invention.
Detailed Description
The invention is described in detail below by way of example with reference to the accompanying drawings.
The invention provides an automatic image registration method based on a deep convolutional network. The images are projected into the feature space of a deep convolutional network; as shown in Fig. 1, matching points are searched progressively and more precisely, pyramid-fashion, from the bottom level of the feature space to the top; then, based on the matching points at the original-image level, a traditional image registration method such as moving least squares warps the images toward an intermediate matching state as the registration result. The whole process consists of two parts: finding matching points and warping.
(one) finding matching points
(1) And constructing a deep convolutional network, and training the deep convolutional network to obtain a trained deep convolutional network capable of extracting image features.
A deep convolutional network with an image-feature-extraction capability, such as an image classification network, can be chosen, reducing the amount of training and the network complexity; during training, the constructed network can be pre-trained on a large data set such as ImageNet; alternatively, a publicly available pre-trained image classification network such as VGG19 can be used directly.
(2) Take the two images A, B to be matched as inputs, feed each into the deep convolutional network trained in step 1, and record the output of the convolutional, pooling, or activation layer of each level of the network, denoting the recorded layers by l = 1, 2, …, L, where L is the total number of recorded layers; one output is recorded per level, and the levels may all use the same kind of layer (for example, all activation layers) or different kinds. The description below takes the case where the activation layer of each level is extracted.
Here, images A and B are each scaled to the same size with an image processing library, to ensure that the outputs of the corresponding layers of the network have the same shape.
To improve computational efficiency, only the output results of the first N levels need be extracted; extracting the activation-layer outputs of the first 4 or 5 levels keeps the amount of computation small while preserving matching accuracy.
(3) And each activation layer is respectively provided with a search subarea of the layer, and from the activation layer of the bottom layer, nearest neighbor matching is respectively carried out on each layer from bottom to top to obtain nearest neighbor matching pairs of each layer.
The output of each activation layer is a tensor of shape (height x width x channels); starting from the bottommost activation layer, the search sub-regions of each activation layer are set on the (height x width) plane of that layer; for the bottommost activation layer l = L, the search sub-region is the whole (height x width) plane of that layer.
First, images A and B are nearest-neighbor matched at the bottommost activation layer l = L: a) construct a matching region, smaller than the search sub-region, whose size is determined flexibly according to the search distance and the network structure; the matching-region sizes of different activation layers may be equal or different, but all matching regions within one layer must have the same size; b) for the bottommost activation-layer output F_A^L of image A, for the sub-tensor F_A^L(p) at each position p of its tensor plane, search within the matching region P^L(p) centered on the sub-tensor F_B^L(p) at the same position of the bottommost activation-layer output F_B^L of image B for the sub-tensor nearest to F_A^L(p) in Euclidean distance; similarly, for the bottommost activation layer of image B, for the sub-tensor F_B^L(q) at each position q of its tensor plane, search within the matching region P^L(q) centered on the sub-tensor F_A^L(q) at the same position of F_A^L for the sub-tensor nearest to F_B^L(q) in Euclidean distance; if a pair of sub-tensors in F_A^L and F_B^L are each other's nearest neighbors, the two sub-tensors are called a nearest-neighbor matching pair.
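The mutual nearest-neighbor test above can be sketched in NumPy as follows (a minimal illustration; the window radius and toy feature tensors are invented for the example, not taken from the patent):

```python
import numpy as np

def nearest_in_window(F, center, query, radius):
    """Position in F (H, W, C) whose channel vector is Euclidean-closest to
    `query`, searched only inside the window of `radius` around `center`."""
    H, W, _ = F.shape
    y0, y1 = max(center[0] - radius, 0), min(center[0] + radius + 1, H)
    x0, x1 = max(center[1] - radius, 0), min(center[1] + radius + 1, W)
    d = np.linalg.norm(F[y0:y1, x0:x1] - query, axis=-1)  # per-position distance
    iy, ix = np.unravel_index(np.argmin(d), d.shape)
    return (y0 + iy, x0 + ix)

def mutual_nn_pairs(FA, FB, positions, radius):
    """Pairs (p, q): q is the nearest neighbour of A's sub-tensor at p inside
    B's window, and p is in turn the nearest neighbour of B's sub-tensor at q
    inside A's window (the mutual, 'each other's nearest' condition)."""
    pairs = []
    for p in positions:
        q = nearest_in_window(FB, p, FA[p], radius)        # A -> B
        p_back = nearest_in_window(FA, q, FB[q], radius)   # B -> A back-check
        if p_back == p:
            pairs.append((p, q))
    return pairs
```

When the two feature maps are identical, every queried position is trivially its own mutual nearest neighbor, which gives a quick sanity check of the symmetry condition.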
Nearest-neighbor matching is realized by traversal within the search sub-region; alternatively, for faster nearest-neighbor matching, the following method can be used:
For the nth search sub-region l_n of activation layer l, the nearest-neighbor matching process is as follows:
for each sub-tensor F_A^l(p) of image A in search sub-region l_n, search within the matching region centered on the sub-tensor F_B^l(p) at the same position in image B for the nearest sub-tensor F_B^l(q); then determine whether the sub-tensor nearest to F_B^l(q) within the matching region centered on the sub-tensor F_A^l(q) at the same position in image A is F_A^l(p); if so, F_A^l(p) and F_B^l(q) are considered a nearest-neighbor matching pair.
When computing Euclidean distances between sub-tensors, differences in appearance features such as color are taken into account: if the style difference between image A and image B is large, the search sub-regions corresponding to A and B can first be style-converted into a common style, and the Euclidean distances are then computed on the converted features to obtain the nearest-neighbor matching pairs.
The style conversion is given by:

C_A^l(p) = σ_m · (F_A^l(p) − μ_A) / σ_A + μ_m
C_B^l(p) = σ_m · (F_B^l(p) − μ_B) / σ_B + μ_m

where: C_A^l(p) is the value of F_A^l(p) after conversion within the region P^l; μ_A, μ_B, σ_A, σ_B are the per-channel means and standard deviations of the designated regions of images A and B; and μ_m, σ_m, the common statistics of the corresponding regions, are calculated by:

μ_m = (μ_A + μ_B) / 2,  σ_m = (σ_A + σ_B) / 2

The distance between the sub-tensors at positions p and q is then formulated as:

d(p, q) = ‖ C_A^l(p) − C_B^l(q) ‖
in addition, by using the response value characteristics of the output tensor of the active layer, a part of nearest neighbor matching pairs with tensor response values lower than a set threshold value can be screened out, so that the nearest neighbor matching pairs retained by the active layer are rich in semantic information.
The response value of each sub-tensor is obtained by:

R^l(p) = ‖ F^l(p) ‖ / max_i ‖ F^l(i) ‖

where F^l(p) and F^l(i) are the outputs of layer l at positions p and i (i ranging over all positions of the layer), and "‖ ‖" denotes the norm (absolute value).
After all nearest-neighbor matching pairs of the bottommost activation layer are obtained, they are mapped into the activation layer above according to the network structure, as shown in Fig. 2; the mapped region of each nearest-neighbor matching pair on the layer-(L−1) activation layer is a search sub-region of that layer; nearest-neighbor matching is then performed with the same method within each search sub-region of the layer-(L−1) activation layer to obtain its nearest-neighbor matching pairs; repeating this procedure yields the nearest-neighbor matching pairs of the topmost (l = 1) activation layer.
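The mapping of a matching pair into the layer above depends on the network's pooling strides and convolution kernels; a minimal sketch, assuming one 2x pooling between adjacent levels plus a fixed convolutional margin (`conv_pad` is an invented illustrative parameter, not specified by the patent):

```python
def map_to_upper_layer(pos, pool_stride=2, conv_pad=2):
    """Map a match position at layer l+1 to the bounding box (y0, x0, y1, x1)
    of its search sub-region at layer l: undo one pooling step, then dilate
    by the receptive-field growth of the intervening convolutions."""
    y, x = pos
    y0, x0 = pool_stride * y, pool_stride * x
    return (y0 - conv_pad, x0 - conv_pad,
            y0 + pool_stride - 1 + conv_pad, x0 + pool_stride - 1 + conv_pad)
```

In practice the returned box would also be clipped to the upper layer's plane, and the margin would be derived from the actual kernel sizes between the two recorded layers.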
If the topmost activation layer has many nearest-neighbor matching pairs, they can be clustered with an unsupervised clustering method (such as K-Means, DBSCAN, or Mean-Shift), dividing the matching pairs of image A and image B into 5, 10, 15, 20, or some other number of classes and obtaining the pairs of cluster centers, which are then taken as the final matching pairs.
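The clustering step can be sketched with a small hand-rolled K-Means on match coordinates (a minimal NumPy illustration; the deterministic initialization and the choice to represent each cluster by its nearest member pair are implementation choices for the example):

```python
import numpy as np

def cluster_representatives(pairs, k, iters=20):
    """K-Means on the coordinates of the matches in image A; each cluster is
    represented by the member pair closest to its centre."""
    pts = np.array([pa for pa, _ in pairs], dtype=float)
    # deterministic init: k points spread evenly through the list
    centres = pts[np.linspace(0, len(pts) - 1, k).astype(int)].copy()
    for _ in range(iters):
        label = np.argmin(np.linalg.norm(pts[:, None] - centres[None], axis=-1), axis=1)
        for c in range(k):
            if np.any(label == c):
                centres[c] = pts[label == c].mean(axis=0)   # update centre
    reps = []
    for c in range(k):
        members = np.flatnonzero(label == c)
        if len(members):
            best = members[np.argmin(np.linalg.norm(pts[members] - centres[c], axis=-1))]
            reps.append(pairs[best])                        # class representative
    return reps
```

Two well-separated groups of matches collapse to two representative pairs, which keeps the subsequent registration step small while still covering distinct image regions.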
(II) Warping transformation
Image registration is performed with the matching pairs obtained in part (I), using a traditional image registration method such as moving least squares, a rigidity-preserving image deformation algorithm, or a differential image deformation algorithm.
The following is a detailed description with reference to a specific example.
Step one: after images A and B are each scaled to 224 × 224 with an image processing library, they are input into a trained VGG19 image classification network, and the output results of the activation layers of the first 5 levels of the network are extracted.
step two, fromA search for matches is initiated. The output result of the fifth layer is a 14 × 14 × 512 tensor, the first dimension and the second dimension are used as search planes, and the fifth layer has only one search subarea, namely the fifth layer search plane itself. For each sub-tensor of the image A with the shape of 1 multiplied by 512, finding one sub-tensor with the closest Euclidean distance in a matching area with the radius of 5 and taking the sub-tensor at the same position as the center in the image B; similarly, for each of the sub-tensors of the shape of 1 × 1 × 512 of the image B, the one nearest in euclidean distance is found in the matching region of radius 5 centered on the same-position sub-tensor in the image a. If there is a pair of sub-tensors that are both closest to each other, then they are a pair of nearest neighbor matching pairs.
During the search, since the differences in appearance features such as the color of the original images are large, the styles of the search sub-regions are converted before the distances are computed.
The style conversion of the search sub-regions follows the style conversion proposed by Johnson et al. in 2016, which uses the mean and variance of deep features as normalization parameters. The objects of the style conversion are the search sub-regions corresponding to images A and B, which are converted to a common style by:

C_A^5(p) = σ_m · (F_A^5(p) − μ_A) / σ_A + μ_m,  C_B^5(p) = σ_m · (F_B^5(p) − μ_B) / σ_B + μ_m    (1)

where: μ_A, σ_A and μ_B, σ_B are the per-channel mean and standard deviation of the search sub-regions of A and B, and μ_m = (μ_A + μ_B) / 2, σ_m = (σ_A + σ_B) / 2.

The target of the style conversion is a specific search sub-region, so the distance between the sub-tensors at two positions p, q within the search sub-region is likewise defined with respect to that sub-region:

d(p, q) = ‖ C_A^5(p) − C_B^5(q) ‖    (2)
and searching in the sub-searching area after the style conversion to obtain the nearest neighbor matching pair.
Then a response function is constructed to obtain the response degree of the feature vector at each position, describing how semantically rich that position is:

R(p) = ‖ F(p) ‖ / max_i ‖ F(i) ‖
after the response values of all the nearest neighbor matching pairs are obtained, the nearest neighbor matching pairs lower than a set threshold value are screened out, and the points do not have strong semantic information. And meanwhile, nearest neighbor matching pairs at the edges of the tensor are screened out, and the corresponding search areas are difficult to find in the upper activation layer.
Step three: as shown in fig. 1, after the positions of the nearest-neighbor matching pairs are obtained at a lower layer, the coordinates of those pairs can be expanded into the space of the layer above using the network structure, giving the search sub-region pairs of the upper layer. Step two is repeated within each search sub-region pair to generate that layer's nearest-neighbor matching pairs.
Step three is repeated, expanding upward layer by layer, until the nearest-neighbor matching pairs of the original images are finally obtained at the topmost layer.
Step four: at the topmost (original-image) layer, more than a hundred nearest-neighbor matching pairs can often be found. So that the matching pairs express as much distinct semantic information as possible, the K-Means clustering algorithm is applied, clustering on the x and y coordinates of the matching pairs as point features to obtain 5, 10, 30, or some other number of classes; the point of each class closest to its center is kept in the list of nearest-neighbor matching pairs as the class representative.
Step five: with the final nearest-neighbor matching pairs obtained in step four, image registration is performed using moving least squares.
Specifically:
(1) For each pixel point v of the original images A, B, construct corresponding affine transformations l_vA(x) and l_vB(x).
(2) With the center point q_i of each matching-point pair of images A and B as the target position of l_vA(x) and l_vB(x) after transformation, a minimization function is constructed for each image, taking the least-squares distance between the transformed position given by the affine transformation and the target position as the error; for either transformation:

E(l_v) = Σ_i w_i | l_v(p_i) − q_i |²    (3)

where: w_i = 1 / |p_i − v|^(2α); p_i are the source matching points of the image; q_i are the target matching points; v is the point to be transformed; and α = 1.
(3) In minimizing the error, the affine transformation can be decomposed into a transformation matrix M and a translation T, l_v(x) = xM + T. Setting the derivative of the error formula (equation (3)) with respect to T to zero gives T = q* − p*·M, where p* = Σ_i w_i p_i / Σ_i w_i and q* = Σ_i w_i q_i / Σ_i w_i. The error can then be simplified to:

E = Σ_i w_i | p̂_i M − q̂_i |²,  with p̂_i = p_i − p*, q̂_i = q_i − q*    (4)
(4) Solving the error (equation (4)) directly with the classical normal equation yields:

M = ( Σ_i p̂_iᵀ w_i p̂_i )⁻¹ Σ_j w_j p̂_jᵀ q̂_j    (5)
from the expression of the new transformation matrix (equation (5)), the transformation equation can be derived:
finally, substituting all the matching points into equation (6) can obtain the final result. From this, the automatic registration of the two images is completed.
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An automatic image registration method based on a deep convolutional network is characterized by comprising the following steps:
step 1, constructing a deep convolutional network, and training the deep convolutional network to obtain a trained deep convolutional network capable of extracting image characteristics;
step 2, inputting two images A, B to be matched into the deep convolutional network trained in the step 1 respectively, and extracting the output of a convolutional layer, a pooling layer or an activation layer in each level in the deep convolutional network;
step 3, aiming at the output of each extracted layer, respectively setting a search subarea of the layer; respectively performing nearest neighbor matching in the corresponding search subareas of each layer from bottom to top from the bottommost layer; the nearest neighbor matching process is as follows:
for the nth search sub-region of the l-th layer: for each sub-tensor F_A^l(p) in the nth search sub-region of the l-th layer of image A, search within the matching region centered on the sub-tensor F_B^l(p) at the same position in the l-th layer of image B for the sub-tensor nearest to F_A^l(p), wherein the matching region is smaller than the search sub-region; similarly, for each sub-tensor F_B^l(q) in the nth search sub-region of the l-th layer of image B, search within the matching region centered on the sub-tensor F_A^l(q) at the same position in the l-th layer of image A for the sub-tensor nearest to F_B^l(q); if two sub-tensors in the nth search sub-region of the l-th layer of A and B are each other's nearest neighbors, the two sub-tensors are called a nearest-neighbor matching pair;
the nth search subarea of the l layer is the mapping of the nth matching pair of the l +1 layer on the l layer; the searching subarea at the bottommost layer is the whole area at the bottommost layer;
by analogy, obtaining the nearest neighbor matching pair of the topmost layer;
and 4, registering the image A, B by using the nearest neighbor matching of the topmost layer and adopting an image registration method.
2. The automatic image registration method based on the deep convolutional network as claimed in claim 1, wherein in step 1, the deep convolutional network is an image classification network.
3. The method for automatic image registration based on deep convolutional network as claimed in claim 1 or 2, wherein in step 1, the public data set is used for network training or the existing public pre-trained deep convolutional network is directly used.
4. The automatic image registration method based on the deep convolutional network of claim 1, wherein in step 2, the subsequent steps are performed by extracting only the outputs of the convolutional layers, the pooling layers or the activation layers of the top 4 or 5 levels.
5. The method of claim 1, wherein in step 4, the registration of the images A, B is achieved by moving least squares, a rigidity-preserving image deformation method, or a differential image deformation method.
6. The automatic image registration method based on a deep convolutional network of claim 1, wherein in step 3, for the nth search sub-region l_n of the l-th layer, the nearest-neighbor matching process is as follows:
for each sub-tensor F_A^l(p) of image A in search sub-region l_n, search within the matching region centered on the sub-tensor F_B^l(p) at the same position in image B for the nearest sub-tensor F_B^l(q); then determine whether the sub-tensor nearest to F_B^l(q) within the matching region centered on the sub-tensor F_A^l(q) at the same position in image A is F_A^l(p); if so, F_A^l(p) and F_B^l(q) are considered a nearest-neighbor matching pair.
7. The automatic image registration method based on the deep convolutional network of claim 1, wherein in step 3, for each layer, tensor response values of nearest neighbor matching pairs of the layer are calculated, the nearest neighbor matching pair with a response value greater than or equal to a set threshold is selected as a final nearest neighbor matching pair of the layer, and the subsequent steps are performed by using the final nearest neighbor matching pair.
8. The automatic image registration method based on the deep convolutional network as claimed in claim 1, wherein in step 3, during the distance calculation, the search sub-region corresponding to the image A, B is subjected to style conversion to be converted into a uniform common style, and then the distance calculation is performed to obtain the nearest neighbor matching pair.
9. The method as claimed in claim 1, wherein in step 4, before the registration of the image A, B, the top-most nearest neighbor matching pair obtained in step 3 is clustered by an unsupervised clustering method, and then the images A, B are registered with each cluster center pair.
10. The automatic image registration method based on the deep convolutional network of claim 9, wherein the unsupervised clustering method is a K-Means clustering method, a DBSCAN clustering method or a Mean-Shift clustering method.
CN201810847008.XA 2018-07-27 2018-07-27 A kind of automatic Image Registration Method based on depth convolutional network Pending CN109461115A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810847008.XA CN109461115A (en) 2018-07-27 2018-07-27 A kind of automatic Image Registration Method based on depth convolutional network

Publications (1)

Publication Number Publication Date
CN109461115A true CN109461115A (en) 2019-03-12

Family

ID=65606316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810847008.XA Pending CN109461115A (en) 2018-07-27 2018-07-27 A kind of automatic Image Registration Method based on depth convolutional network

Country Status (1)

Country Link
CN (1) CN109461115A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020048393A1 (en) * 2000-09-19 2002-04-25 Fuji Photo Film Co., Ltd. Method of registering images
US20070047840A1 (en) * 2005-08-24 2007-03-01 Siemens Corporate Research Inc System and method for salient region feature based 3d multi modality registration of medical images
CN105718960A (en) * 2016-01-27 2016-06-29 北京工业大学 Image ordering model based on convolutional neural network and spatial pyramid matching
CN106227851A (en) * 2016-07-29 2016-12-14 汤平 Based on the image search method searched for by depth of seam division that degree of depth convolutional neural networks is end-to-end

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
JING LIAO et al.: "Visual attribute transfer through deep image analogy", ACM Transactions on Graphics, vol. 36, no. 4, 31 July 2017 (2017-07-31), pages 1-4 *
KFIR ABERMAN et al.: "Neural Best-Buddies: Sparse Cross-Domain Correspondence", pages 1-3, Retrieved from the Internet <URL:https://arxiv.org/abs/1805.04140v1> *
ORON S et al.: "Best-Buddies Similarity - Robust Template Matching Using Mutual Nearest Neighbors", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 8, 9 August 2017 (2017-08-09) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110717538A (en) * 2019-10-08 2020-01-21 广东工业大学 Color picture clustering method based on non-negative tensor ring
CN110717538B (en) * 2019-10-08 2022-06-24 广东工业大学 Color picture clustering method based on non-negative tensor ring
CN111524170A (en) * 2020-04-13 2020-08-11 中南大学 Lung CT image registration method based on unsupervised deep learning
CN111524170B (en) * 2020-04-13 2023-05-26 中南大学 Pulmonary CT image registration method based on unsupervised deep learning
CN111724424A (en) * 2020-06-24 2020-09-29 上海应用技术大学 Image registration method
CN111724424B (en) * 2020-06-24 2024-05-14 上海应用技术大学 Image registration method
CN114037913A (en) * 2022-01-10 2022-02-11 成都国星宇航科技有限公司 Automatic deviation rectifying method and device for remote sensing image, electronic equipment and storage medium
CN114037913B (en) * 2022-01-10 2022-04-26 成都国星宇航科技有限公司 Automatic deviation rectifying method and device for remote sensing image, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN107767328B (en) Migration method and system of any style and content generated based on small amount of samples
CN109461115A (en) A kind of automatic Image Registration Method based on depth convolutional network
CN108009559B (en) Hyperspectral data classification method based on space-spectrum combined information
Tuia et al. Semisupervised manifold alignment of multimodal remote sensing images
CN112036447B (en) Zero-sample target detection system and learnable semantic and fixed semantic fusion method
CN111062885B (en) Mark detection model training and mark detection method based on multi-stage transfer learning
JP2021517330A (en) A method for identifying an object in an image and a mobile device for carrying out the method.
CN111507334B (en) Instance segmentation method based on key points
CN110532920A (en) Smallest number data set face identification method based on FaceNet method
JP2016057918A (en) Image processing device, image processing method, and program
CN106815323B (en) Cross-domain visual retrieval method based on significance detection
CN113361542A (en) Local feature extraction method based on deep learning
CN111709317B (en) Pedestrian re-identification method based on multi-scale features under saliency model
CN112364747B (en) Target detection method under limited sample
CN115063526A (en) Three-dimensional reconstruction method and system of two-dimensional image, terminal device and storage medium
CN111127353B (en) High-dynamic image ghost-removing method based on block registration and matching
CN109145770B (en) Automatic wheat spider counting method based on combination of multi-scale feature fusion network and positioning model
CN113496149B (en) Cross-view gait recognition method for subspace learning based on joint hierarchy selection
CN109460773A (en) A kind of cross-domain image sparse matching process based on depth convolutional network
Boujemaa On competitive unsupervised clustering
Dong et al. Scene-oriented hierarchical classification of blurry and noisy images
JP6486084B2 (en) Image processing method, image processing apparatus, and program
CN108765384B (en) Significance detection method for joint manifold sequencing and improved convex hull
Chen et al. Illumination-invariant video cut-out using octagon sensitive optimization
Li et al. A robust incremental learning framework for accurate skin region segmentation in color images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20190312)