CN112164100B - Image registration method based on graph convolution neural network - Google Patents


Info

Publication number
CN112164100B
CN112164100B (application CN202011025452.7A)
Authority
CN
China
Prior art keywords
matching
point
graph
layer
pairs
Prior art date
Legal status
Active
Application number
CN202011025452.7A
Other languages
Chinese (zh)
Other versions
CN112164100A (en)
Inventor
肖国宝
郑伟
钟振
刘鑫
Current Assignee
Minjiang University
Original Assignee
Minjiang University
Priority date
Filing date
Publication date
Application filed by Minjiang University filed Critical Minjiang University
Priority to CN202011025452.7A priority Critical patent/CN112164100B/en
Publication of CN112164100A publication Critical patent/CN112164100A/en
Application granted granted Critical
Publication of CN112164100B publication Critical patent/CN112164100B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an image registration method based on a graph convolution neural network, which comprises the following steps: obtaining key points of the image pairs and initializing key point matching; inputting the initial matching pairs into a multi-layer perceptron to obtain feature point information of each matching pair; inputting the initial matching pairs into a graph convolution neural network to obtain local graph spatial feature information of each matching point pair; combining the feature point information of the matching point pairs with the local graph spatial feature information, inputting the result into a multi-layer perceptron to learn the combined features, and outputting the final features; and calculating a loss value from the output final features and adjusting the network parameters with a back propagation algorithm. The application effectively improves registration accuracy.

Description

Image registration method based on graph convolution neural network
Technical Field
The application relates to the technical field of computer vision, in particular to an image registration method based on a graph convolution neural network.
Background
More and more computer vision products are incorporated into our daily lives, and the complex data encountered in real life place ever higher demands on computer vision algorithms. Estimating the geometric relationship between two images is a fundamental problem in the field of computer vision and plays an important role in Structure from Motion and Simultaneous Localization and Mapping. In image registration, conventional outlier removal algorithms such as RANSAC (Fischler MA, Bolles RC. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM. 1981 Jun 1;24(6):381-95) are the standard and still the most popular choice. GMS (Bian J, Lin WY, Matsushita Y, Yeung SK, Nguyen TD, Cheng MM. GMS: Grid-based motion statistics for fast, ultra-robust feature correspondence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2017 (pp. 4181-4190)) is a representative recent alternative. As the performance of deep learning has improved in recent years, algorithms that perform image matching with deep learning have multiplied. However, there is as yet no solution that uses a graph convolution neural network (GCN) for image registration: first, because no effective graph convolution has been available for image registration, and second, because non-rigid transformations prevent an effective neighbor-selection strategy from being adopted.
Disclosure of Invention
Therefore, the application aims to provide an image registration method based on a graph convolution neural network, which effectively improves registration accuracy.
The application is realized by adopting the following scheme: an image registration method based on a graph convolution neural network specifically comprises the following steps:
obtaining key points of the image pairs and initializing key point matching;
inputting the initial matching pairs into a multi-layer perceptron to obtain feature point information of each matching pair;
inputting the initial matching pairs into a graph convolution neural network to obtain local graph spatial feature information of each matching point pair;
combining the feature point information of the matching point pairs with the local graph spatial feature information, inputting the result into a multi-layer perceptron to learn the combined features, and outputting the final features;
and calculating a loss value from the output final features and adjusting the network parameters with a back propagation algorithm.
Further, obtaining the key points of the image pair and initializing the key point matching is specifically:
extracting hypothesis matching pairs of key point coordinates from the 2D images to be registered with a feature extraction algorithm, obtaining $P = [p_1; p_2; \ldots; p_i; \ldots; p_N] \in \mathbb{R}^{N \times 4}$, $p_i = (x_i^1, y_i^1, x_i^2, y_i^2)$, where $N$ is the number of matching point pairs, $p_i$ is one of the hypothesis matching pairs, and $(x_i^1, y_i^1)$ and $(x_i^2, y_i^2)$ are the key point coordinates of the matching pair in the two images.
Further, inputting the initial matching pairs into the multi-layer perceptron to obtain the feature point information of each matching pair specifically comprises the following steps:
step S21: a layer of shared perceptron maps the set of initial matching pairs $P = [p_1; p_2; \ldots; p_i; \ldots; p_N]$ to $P' \in \mathbb{R}^{N \times M}$, where $N$ is the number of matching point pairs and $M$ is the number of feature channels;
step S22: the $N$ matching point pairs are input into a basic residual network structure to obtain the mapped feature output of each matching pair, $out_1 \in \mathbb{R}^{N \times C_1}$, where $C_1$ is the feature channel dimension.
Further, the basic residual network structure includes a layer of shared perceptron MLP, an instance normalization layer IN, a batch normalization layer BN, and a rectified linear unit ReLU.
Further, the graph convolution neural network comprises one or more graph convolution modules, one or more pooling layers, and a shared perceptron network MLP; each graph convolution module comprises a graph convolution layer and a ReLU layer;
graph convolution features of the input data are extracted by the graph convolution layers and pooled by the pooling layers; after each pooling, the nearest neighbors of each matching pair are rebuilt before input to the next graph convolution layer; the graph convolution features under the different spatial relationships are merged along the channel dimension; and finally a shared perceptron network MLP learns the combined features to obtain the local graph spatial feature information of each matching point pair, $out_2 \in \mathbb{R}^{N \times C_2}$, where $N$ is the number of input matching point pairs and $C_2$ is the feature channel dimension.
Further, combining the feature point information of the matching point pairs with the local graph spatial feature information, inputting the result into a multi-layer perceptron to learn the combined features, and outputting the final features is specifically:
the feature point information $out_1$ of each matching point pair and the local graph spatial feature information $out_2$ are concatenated along the feature channel dimension to obtain $out \in \mathbb{R}^{N \times (C_1 + C_2)}$;
$out$ is then input sequentially into one or more basic residual networks and a one-layer perceptron layer to obtain the final logit output $logits \in \mathbb{R}^{N}$.
Further, the calculating the loss value by using the output final feature and adjusting the network parameter by adopting a back propagation algorithm specifically comprises:
calculation of final output by weighted eight-point algorithmIs>
The loss value is calculated using the loss function and the network parameters are adjusted by a back propagation algorithm.
Further, the loss value $loss$ is calculated as the sum of an essential matrix loss and a classification loss:
$$loss = l_{ess}(\hat{E}, E) + l_{cls}(z, s)$$
wherein $l_{ess}$ is the essential matrix error between the essential matrix $\hat{E}$, predicted from the final output by the weighted eight-point algorithm, and the ground-truth essential matrix $E$; $l_{cls}$ is the binary cross entropy loss function, $z$ denotes the ground-truth label, and $s$ denotes the logit value output by the network, i.e. $s = logits$.
Compared with the prior art, the application has the following beneficial effects: the method introduces spatially based graph convolution into an image registration method, combines the advantages of graph convolution with the neural network modules commonly used in present-stage image registration, and extracts features accordingly. The application can therefore ultimately improve matching accuracy. Experimental results show that the application achieves state-of-the-art performance on the benchmark data set.
Drawings
FIG. 1 is a flow chart of a method according to an embodiment of the application.
Fig. 2 is a schematic diagram of a graph convolution network according to an embodiment of the application.
Fig. 3 is a schematic diagram of an overall network structure according to an embodiment of the present application.
Detailed Description
The application will be further described with reference to the accompanying drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit exemplary embodiments according to the present application. As used herein, the singular is intended to include the plural unless the context clearly indicates otherwise; furthermore, it is to be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
As shown in fig. 1 and 3, the present embodiment provides an image registration method based on a graph convolution neural network, which specifically includes the following steps:
obtaining key points of the image pairs and initializing key point matching;
inputting the initial matching pairs into a multi-layer perceptron to obtain feature point information of each matching pair;
inputting the initial matching pairs into a graph convolution neural network to obtain local graph spatial feature information of each matching point pair;
combining the feature point information of the matching point pairs with the local graph spatial feature information, inputting the result into a multi-layer perceptron to learn the combined features, and outputting the final features;
and calculating a loss value from the output final features and adjusting the network parameters with a back propagation algorithm.
In this embodiment, obtaining the key points of the image pair and initializing the key point matching is specifically:
extracting hypothesis matching pairs of key point coordinates from the 2D images to be registered with a feature extraction algorithm, obtaining $P = [p_1; p_2; \ldots; p_i; \ldots; p_N] \in \mathbb{R}^{N \times 4}$, $p_i = (x_i^1, y_i^1, x_i^2, y_i^2)$, where $N$ is the number of matching point pairs, $p_i$ is one of the hypothesis matching pairs, and $(x_i^1, y_i^1)$ and $(x_i^2, y_i^2)$ are the key point coordinates of the matching pair in the two images.
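The feature extraction algorithm is not fixed by the text. A minimal sketch of this step, assuming OpenCV SIFT keypoints with brute-force nearest-neighbor matching (both assumed choices), could build the hypothesis set P as follows:

```python
import cv2
import numpy as np

def build_hypothesis_set(img1_path: str, img2_path: str) -> np.ndarray:
    """Build the N x 4 hypothesis set P, one row (x1, y1, x2, y2) per match.

    SIFT + brute-force matching is an assumption; the patent only requires
    "a feature extraction algorithm" to propose initial matches.
    """
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # One nearest neighbor per keypoint yields the hypothesis matching pairs;
    # cross-checking removes the most obvious asymmetric matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des1, des2)

    P = np.array([[*kp1[m.queryIdx].pt, *kp2[m.trainIdx].pt] for m in matches],
                 dtype=np.float32)  # shape (N, 4)
    return P
```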
In this embodiment, inputting the initial matching pairs into the multi-layer perceptron to obtain the feature point information of each matching pair specifically comprises the following steps:
step S21: a layer of shared perceptron maps the set of initial matching pairs $P = [p_1; p_2; \ldots; p_i; \ldots; p_N]$ to $P' \in \mathbb{R}^{N \times M}$, where $N$ is the number of matching point pairs and $M$ is the number of feature channels;
step S22: the $N$ matching point pairs are input into a basic residual network structure to obtain the mapped feature output of each matching point pair, $out_1 \in \mathbb{R}^{N \times C_1}$, where $C_1$ is the feature channel dimension.
In this embodiment, the basic residual network structure includes a layer of shared perceptron MLP, an instance normalization layer IN, a batch normalization layer BN, and a rectified linear unit ReLU.
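A minimal PyTorch sketch of such a block, treating the input as a (B, C, N, 1) tensor so that a 1×1 convolution acts as a perceptron shared across all N matching pairs (the channel width is an assumption; the patent fixes only the MLP + IN + BN + ReLU composition):

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Shared perceptron (1x1 conv) + IN + BN + ReLU, with a skip connection."""
    def __init__(self, channels: int = 128):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)  # shared MLP
        self.inorm = nn.InstanceNorm2d(channels)  # normalize across the N pairs
        self.bnorm = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, channels, N, 1), one column per matching pair
        out = self.relu(self.bnorm(self.inorm(self.conv(x))))
        return x + out  # residual connection
```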
As shown in fig. 2, in this embodiment the graph convolution neural network includes one or more graph convolution modules (GCN), one or more pooling layers, and a shared perceptron network MLP; each graph convolution module comprises a graph convolution layer and a ReLU layer;
wherein the graph convolution layer conv computes the cosine similarity between the convolution kernel $K_S$ and the receptive field of each matching pair. Specifically:
the receptive field of a matching pair $p_n$ is a local graph structure with $p_n$ as the receptive field center and its $M$ Euclidean-distance-based nearest neighbors, which are given by a pre-computation; $\langle\cdot\rangle$ denotes the dot product; $w(\cdot)$ denotes the weight parameters learned by the network; here $f(x) = w \cdot x + b$, where $w$ is a weight matrix and $b$ is a bias. The graph convolution kernel $K_S = \{k_C, k_1, k_2, \ldots, k_S\}$ consists of $S + 1$ convolution kernel nodes, with the center node $k_C = (0, 0, 0, 0)$. The layer computes the cosine similarity between the features of each neighbor and the weight of each convolution kernel node, where $d_{m,n} = p_m - p_n$ denotes the direction from the receptive field center to the neighbor.
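The exact similarity formula is not preserved above; the following PyTorch sketch is one plausible reading, in which each neighbor's transformed feature f(x) = w·x + b is compared with the S + 1 learnable kernel nodes by cosine similarity and the similarities weight the aggregation (the kernel parameterization, the softmax over kernel nodes, and the sum aggregation are all assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineGraphConv(nn.Module):
    """Graph convolution that weights neighbors by cosine similarity between
    their transformed features and S + 1 learnable kernel nodes."""
    def __init__(self, in_ch: int, out_ch: int, num_kernels: int = 8):
        super().__init__()
        self.f = nn.Linear(in_ch, out_ch)  # f(x) = w * x + b, shared transform
        # Kernel nodes K_S = {k_C, k_1, ..., k_S}; the extra row plays the
        # role of the center node k_C.
        self.kernel = nn.Parameter(torch.randn(num_kernels + 1, out_ch))

    def forward(self, feats: torch.Tensor, knn_idx: torch.Tensor) -> torch.Tensor:
        # feats: (N, C_in); knn_idx: (N, M) precomputed Euclidean neighbors
        neigh = self.f(feats[knn_idx])                        # (N, M, C_out)
        sim = F.cosine_similarity(                            # (N, M, S + 1)
            neigh.unsqueeze(2),                               # (N, M, 1, C_out)
            self.kernel.view(1, 1, *self.kernel.shape),       # (1, 1, S+1, C_out)
            dim=-1)
        # Aggregate neighbor features, weighted by kernel-node similarity.
        return torch.einsum('nms,nmc->nc', sim.softmax(dim=-1), neigh)
```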
Graph convolution features of the input data are extracted by the graph convolution layers, and the extracted features are pooled by the pooling layers, pooling an input of $N$ matching pairs down to $N/r$, where $r$ is the pooling rate. It should be noted that, because pooling changes the number, scale, and spatial relationships of the original matching pairs, the nearest neighbors of each matching pair are rebuilt after each pooling before input to the next graph convolution layer; the graph convolution features under the different spatial relationships are then merged along the channel dimension to obtain multi-scale features; and finally a small shared perceptron network MLP learns the combined (multi-scale) features to obtain the local graph spatial feature information of each matching point pair, $out_2 \in \mathbb{R}^{N \times C_2}$, where $N$ is the number of input matching point pairs and $C_2$ is the feature channel dimension.
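A sketch of the pool-then-rebuild-neighbors flow, reusing the CosineGraphConv sketch above (stride-based subsampling as the pooling operator and index-based lifting for the channel-wise merge are assumptions; only the rebuild-kNN-after-pooling and multi-scale merge logic comes from the description):

```python
import torch

def knn_indices(coords: torch.Tensor, m: int) -> torch.Tensor:
    """M Euclidean-distance nearest neighbors per matching pair, self excluded."""
    dist = torch.cdist(coords, coords)                        # (N, N)
    return dist.topk(m + 1, largest=False).indices[:, 1:]     # (N, M)

def multi_scale_graph_features(coords, feats, gconv_blocks, mlp, r=2, m=8):
    """Graph conv -> pool -> rebuild kNN -> next graph conv, then merge.

    coords: (N0, 4) matching pairs; feats: (N0, C). Returns out2: (N0, C2).
    """
    n0 = coords.size(0)
    scales = []
    for level, gconv in enumerate(gconv_blocks):
        idx = knn_indices(coords, min(m, coords.size(0) - 1))
        feats = torch.relu(gconv(feats, idx))                 # graph conv + ReLU
        # Lift this scale back to all N0 pairs so channels can be merged.
        scales.append(feats[torch.arange(n0) // (r ** level)])
        keep = torch.arange(0, coords.size(0), r)             # pooling, rate r
        coords, feats = coords[keep], feats[keep]             # kNN rebuilt next loop
    multi_scale = torch.cat(scales, dim=-1)                   # channel-dim merge
    return mlp(multi_scale)                                   # small shared MLP
```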
In this embodiment, combining the feature point information of the matching point pairs with the local graph spatial feature information, inputting the result into a multi-layer perceptron to learn the combined features, and outputting the final features is specifically:
the feature point information $out_1$ of each matching point pair and the local graph spatial feature information $out_2$ are concatenated along the feature channel dimension to obtain $out \in \mathbb{R}^{N \times (C_1 + C_2)}$;
$out$ is then input sequentially into one or more basic residual networks and a one-layer perceptron layer to obtain the logit value $logits \in \mathbb{R}^{N}$, where $N$ is the number of input matching pairs.
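A sketch of this fusion stage, reusing the BasicResidualBlock sketch above (the hidden width and block count are assumptions):

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Concatenate out1 (N, C1) and out2 (N, C2), refine with residual blocks,
    and emit one logit per matching pair."""
    def __init__(self, c1: int, c2: int, hidden: int = 128, n_blocks: int = 2):
        super().__init__()
        self.proj = nn.Linear(c1 + c2, hidden)
        self.blocks = nn.ModuleList(
            BasicResidualBlock(hidden) for _ in range(n_blocks))
        self.head = nn.Linear(hidden, 1)  # the one-layer perceptron

    def forward(self, out1: torch.Tensor, out2: torch.Tensor) -> torch.Tensor:
        x = self.proj(torch.cat([out1, out2], dim=-1))  # (N, hidden)
        x4d = x.t().unsqueeze(0).unsqueeze(-1)          # (1, hidden, N, 1)
        for blk in self.blocks:
            x4d = blk(x4d)
        x = x4d.squeeze(-1).squeeze(0).t()              # (N, hidden)
        return self.head(x).squeeze(-1)                 # logits: (N,)
```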
In this embodiment, the calculating the loss value by using the output final feature and adjusting the network parameter by using the back propagation algorithm specifically includes:
calculation of final output by weighted eight-point algorithmIs>
The loss value is calculated using the loss function and the network parameters are adjusted by a back propagation algorithm.
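The weighted eight-point algorithm is standard in learned correspondence pruning; a sketch along the usual lines (logit-derived weights, then the eigenvector of the weighted normal matrix with the smallest eigenvalue; the weight mapping relu(tanh(·)) is an assumption) might be:

```python
import torch

def weighted_eight_point(pts1, pts2, logits):
    """Essential matrix from weighted correspondences.

    pts1, pts2: (N, 2) normalized image coordinates; logits: (N,).
    """
    w = torch.relu(torch.tanh(logits))      # suppress likely outliers
    w = w / (w.sum() + 1e-8)
    x1, y1 = pts1[:, 0], pts1[:, 1]
    x2, y2 = pts2[:, 0], pts2[:, 1]
    ones = torch.ones_like(x1)
    # Each row encodes the epipolar constraint p2^T E p1 = 0.
    X = torch.stack([x2 * x1, x2 * y1, x2, y2 * x1, y2 * y1, y2, x1, y1, ones],
                    dim=-1)                 # (N, 9)
    XwX = X.t() @ (w.unsqueeze(-1) * X)     # (9, 9) weighted normal matrix
    _, eigvecs = torch.linalg.eigh(XwX)     # eigenvalues in ascending order
    E = eigvecs[:, 0].reshape(3, 3)         # smallest-eigenvalue eigenvector
    # A rank-2 projection (singular values 1, 1, 0) is usually applied next;
    # it is omitted here for brevity.
    return E
```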
In this embodiment, the loss value $loss$ is calculated as the sum of an essential matrix loss and a classification loss:
$$loss = l_{ess}(\hat{E}, E) + l_{cls}(z, s)$$
wherein $l_{ess}$ is the essential matrix error between the essential matrix $\hat{E}$, predicted from the logit values by the weighted eight-point algorithm, and the ground-truth essential matrix $E$. The geometric loss is specifically adopted:
$$l_{ess} = \frac{(P_2^{T} \hat{E} P_1)^2}{(E P_1)_{[1]}^2 + (E P_1)_{[2]}^2 + (E^{T} P_2)_{[1]}^2 + (E^{T} P_2)_{[2]}^2}$$
Here $P_1, P_2$ are corresponding matching points and $t_{[i]}$ is the $i$-th element of vector $t$; the $(E P_1)_{[i]}$ terms represent the distance of the ground-truth essential matrix $E$ to the point $P_1$, the $(E^{T} P_2)_{[i]}$ terms represent the distance of $E$ to the point $P_2$, and the numerator represents the distance of the network-predicted essential matrix $\hat{E}$ to $P_1$. $l_{cls}$ is the binary cross entropy loss function, $z$ denotes the ground-truth label, and $s$ denotes the logit value output by the network, i.e. $s = logits$.
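Under the reconstruction above, a sketch of the combined loss (the balance weight lam and the numerical clamp are assumptions) is:

```python
import torch
import torch.nn.functional as F

def registration_loss(E_hat, E_gt, pts1, pts2, logits, labels, lam=1.0):
    """loss = l_ess(E_hat, E_gt) + lam * l_cls(labels, logits)."""
    def to_h(p):  # homogeneous coordinates
        return torch.cat([p, torch.ones_like(p[:, :1])], dim=-1)
    p1, p2 = to_h(pts1), to_h(pts2)                 # (N, 3)
    num = (p2 * (p1 @ E_hat.t())).sum(-1) ** 2      # (P2^T E_hat P1)^2
    Ep1 = p1 @ E_gt.t()                             # rows: E P1
    Etp2 = p2 @ E_gt                                # rows: E^T P2
    den = Ep1[:, 0]**2 + Ep1[:, 1]**2 + Etp2[:, 0]**2 + Etp2[:, 1]**2
    l_ess = (num / den.clamp(min=1e-8)).mean()      # geometric (epipolar) loss
    l_cls = F.binary_cross_entropy_with_logits(logits, labels.float())
    return l_ess + lam * l_cls
```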
This example performed quantitative and qualitative comparisons of the proposed method against the current most advanced matching methods on a public data set (YFCC100M), and the results show that the method of this example is significantly superior to the other algorithms. The following table is a quantitative comparison of F-score, precision, and recall between this example and several other matching algorithms: RANSAC, LPM, and PointCN. As the table shows, the application significantly improves detection accuracy and achieves the best results among the 4 methods.
Method | F-score | Precision | Recall
RANSAC | 0.1914 | 0.2222 | 0.1879
LPM | 0.2213 | 0.2415 | 0.2579
PointCN | 0.3319 | 0.2745 | 0.5588
Method of the present embodiment | 0.3975 | 0.3237 | 0.6234
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present application and is not intended to limit the application in any way; any person skilled in the art may use the disclosed technical content to make modified or equivalent embodiments. However, any simple modification, equivalent variation, or alteration of the above embodiments made according to the technical substance of the present application still falls within the protection scope of the technical solution of the present application.

Claims (6)

1. An image registration method based on a graph convolution neural network is characterized by comprising the following steps:
obtaining key points of the image pairs and initializing key point matching;
inputting the initial matching pairs into a multi-layer perceptron to obtain feature point information of each matching pair;
inputting the initial matching pairs into a graph convolution neural network to obtain local graph spatial feature information of each matching point pair;
combining the feature point information of the matching point pairs with the local graph spatial feature information, inputting the result into a multi-layer perceptron to learn the combined features, and outputting the final features;
calculating a loss value from the output final features and adjusting the network parameters with a back propagation algorithm;
the method for calculating the loss value by utilizing the output final characteristics and adopting a back propagation algorithm to adjust network parameters comprises the following steps:
calculation of final output by weighted eight-point algorithmIs>
Calculating a loss value by using a loss function, and adjusting network parameters by using a back propagation algorithm;
the loss value loss is calculated as follows:
wherein, I ess To predict final output by weighted eight-point algorithmIs of the matrix of the nature of (2)/>And an essential matrix error between the true essential matrix E; l (L) cls For a binary cross entropy loss function, z represents a truth value label, and s represents the weight of network output;
the geometric loss is adopted specifically:here P 1 ,P 2 Is a corresponding matching point, t [i] Is the i-th element of vector t, +.>Representing the true value essence matrix E and the point P 1 Distance of->Representing the true value essence matrix E and the point P 2 Distance of->Essence matrix representing network predictions->To P 1 Is a distance of (2); l (L) cls For the binary cross entropy loss function, z represents the truth label, s represents the logic value of the network output, i.e.>
2. The image registration method based on a graph convolution neural network according to claim 1, wherein obtaining the key points of the image pair and initializing the key point matching is specifically:
extracting hypothesis matching pairs of key point coordinates from the 2D images to be registered with a feature extraction algorithm to obtain $P = [p_1; p_2; \ldots; p_i; \ldots; p_N] \in \mathbb{R}^{N \times 4}$, $p_i = (x_i^1, y_i^1, x_i^2, y_i^2)$, where $N$ is the number of matching point pairs, $p_i$ is one of the hypothesis matching pairs, and $(x_i^1, y_i^1)$ and $(x_i^2, y_i^2)$ are the key point coordinates of the matching pair.
3. The image registration method based on a graph convolution neural network according to claim 1, wherein inputting the initial matching pairs into the multi-layer perceptron to obtain the feature point information of each matching pair specifically comprises the following steps:
step S21: a layer of shared perceptron maps the set of initial matching pairs $P = [p_1; p_2; \ldots; p_i; \ldots; p_N]$ to $P' \in \mathbb{R}^{N \times M}$, where $N$ is the number of matching point pairs and $M$ is the number of feature channels;
step S22: the $N$ matching point pairs are input into a basic residual network structure to obtain the mapped feature output of each matching point pair, $out_1 \in \mathbb{R}^{N \times C_1}$, where $C_1$ is the feature channel dimension.
4. The image registration method based on a graph convolution neural network according to claim 3, wherein the basic residual network structure comprises a layer of shared perceptron MLP, an instance normalization layer IN, a batch normalization layer BN, and a rectified linear unit ReLU.
5. The image registration method based on a graph convolution neural network according to claim 1, wherein the graph convolution neural network comprises one or more graph convolution modules, one or more pooling layers, and a shared perceptron network MLP; each graph convolution module comprises a graph convolution layer and a ReLU layer;
graph convolution features of the input data are extracted by the graph convolution layers and pooled by the pooling layers; after each pooling, the nearest neighbors of each matching pair are rebuilt before input to the next graph convolution layer; the graph convolution features under the different spatial relationships are merged along the channel dimension; and finally a shared perceptron network MLP learns the combined features to obtain the local graph spatial feature information of each matching point pair, $out_2 \in \mathbb{R}^{N \times C_2}$, where $N$ is the number of input matching point pairs and $C_2$ is the feature channel dimension.
6. The image registration method based on a graph convolution neural network according to claim 1, wherein combining the feature point information of the matching point pairs with the local graph spatial feature information, inputting the result into a multi-layer perceptron to learn the combined features, and outputting the final features is specifically:
the feature point information $out_1$ of each matching point pair and the local graph spatial feature information $out_2$ are concatenated along the feature channel dimension to obtain $out \in \mathbb{R}^{N \times (C_1 + C_2)}$;
$out$ is input sequentially into one or more basic residual networks and a one-layer perceptron layer to obtain the logit value $logits \in \mathbb{R}^{N}$.
CN202011025452.7A 2020-09-25 2020-09-25 Image registration method based on graph convolution neural network Active CN112164100B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011025452.7A CN112164100B (en) 2020-09-25 2020-09-25 Image registration method based on graph convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011025452.7A CN112164100B (en) 2020-09-25 2020-09-25 Image registration method based on graph convolution neural network

Publications (2)

Publication Number Publication Date
CN112164100A CN112164100A (en) 2021-01-01
CN112164100B true CN112164100B (en) 2023-12-12

Family

ID=73864043

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011025452.7A Active CN112164100B (en) 2020-09-25 2020-09-25 Image registration method based on graph convolution neural network

Country Status (1)

Country Link
CN (1) CN112164100B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801206B (en) * 2021-02-23 2022-10-14 中国科学院自动化研究所 Image key point matching method based on depth map embedded network and structure self-learning
CN114677502B (en) * 2022-05-30 2022-08-12 松立控股集团股份有限公司 License plate detection method with any inclination angle
CN117253060A (en) * 2023-09-04 2023-12-19 江苏势通生物科技有限公司 Image matching method, image matching device, storage medium and computer equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064502A (en) * 2018-07-11 2018-12-21 西北工业大学 The multi-source image method for registering combined based on deep learning and artificial design features
CN110335337A (en) * 2019-04-28 2019-10-15 厦门大学 A method of based on the end-to-end semi-supervised visual odometry for generating confrontation network
CN111462867A (en) * 2020-04-05 2020-07-28 武汉诶唉智能科技有限公司 Intelligent mobile medical method and system based on 5G network and block chain
CN111488937A (en) * 2020-04-15 2020-08-04 闽江学院 Image matching method based on multi-scale neighbor deep neural network
CN111488938A (en) * 2020-04-15 2020-08-04 闽江学院 Image matching method based on two-step switchable normalized depth neural network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064502A (en) * 2018-07-11 2018-12-21 西北工业大学 The multi-source image method for registering combined based on deep learning and artificial design features
CN110335337A (en) * 2019-04-28 2019-10-15 厦门大学 A method of based on the end-to-end semi-supervised visual odometry for generating confrontation network
CN111462867A (en) * 2020-04-05 2020-07-28 武汉诶唉智能科技有限公司 Intelligent mobile medical method and system based on 5G network and block chain
CN111488937A (en) * 2020-04-15 2020-08-04 闽江学院 Image matching method based on multi-scale neighbor deep neural network
CN111488938A (en) * 2020-04-15 2020-08-04 闽江学院 Image matching method based on two-step switchable normalized depth neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Robust geometric model fitting method based on preference statistics data representation; Guo Hanlin et al.; Chinese Journal of Computers (《计算机学报》); Vol. 43, No. 7; pp. 1199-1214 *

Also Published As

Publication number Publication date
CN112164100A (en) 2021-01-01

Similar Documents

Publication Publication Date Title
CN112164100B (en) Image registration method based on graph convolution neural network
US20220027603A1 (en) Fast, embedded, hybrid video face recognition system
CN111695415B (en) Image recognition method and related equipment
Dai et al. MS2DG-Net: Progressive correspondence learning via multiple sparse semantics dynamic graph
CN107871103B (en) Face authentication method and device
Guo et al. JointPruning: Pruning networks along multiple dimensions for efficient point cloud processing
CN112070058A (en) Face and face composite emotional expression recognition method and system
CN116580257A (en) Feature fusion model training and sample retrieval method and device and computer equipment
CN111968150A (en) Weak surveillance video target segmentation method based on full convolution neural network
CN112232134A (en) Human body posture estimation method based on hourglass network and attention mechanism
CN112308128B (en) Image matching method based on attention mechanism neural network
CN112084895B (en) Pedestrian re-identification method based on deep learning
CN110083734B (en) Semi-supervised image retrieval method based on self-coding network and robust kernel hash
CN109993070B (en) Pedestrian re-identification method based on global distance scale loss function
He et al. Patch tracking-based streaming tensor ring completion for visual data recovery
Zhao et al. Single-branch self-supervised learning with hybrid tasks
CN114519863A (en) Human body weight recognition method, human body weight recognition apparatus, computer device, and medium
He et al. Classification of metro facilities with deep neural networks
CN112990356A (en) Video instance segmentation system and method
CN115619822A (en) Tracking method based on object-level transformation neural network
CN115205554A (en) Retrieval method based on semantic concept extraction
CN115100694A (en) Fingerprint quick retrieval method based on self-supervision neural network
CN112529081A (en) Real-time semantic segmentation method based on efficient attention calibration
Li et al. Spatial frequency enhanced salient object detection
Norouzi et al. VGG16-based Feature Fusion For Image Kyepoint Description

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant