CN109344845B - Feature matching method based on triple deep neural network structure


Info

Publication number
CN109344845B
Authority
CN
China
Prior art keywords
feature
matching
neural network
deep neural
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811112938.7A
Other languages
Chinese (zh)
Other versions
CN109344845A (en)
Inventor
王滨
王栋
刘宏
赵京东
柳强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN201811112938.7A priority Critical patent/CN109344845B/en
Publication of CN109344845A publication Critical patent/CN109344845A/en
Application granted granted Critical
Publication of CN109344845B publication Critical patent/CN109344845B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 - Local feature extraction by analysis of parts of the pattern by matching or filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A feature matching method based on a triple deep neural network structure belongs to the technical field of image processing. The invention aims to solve the problem of poor matching performance of SIFT and of deep-learning-based feature description methods such as TFeat and HardNet in the prior art. The invention designs a novel loss function that constrains the mean and variance of the training samples and combines it with a Triplet deep neural network to obtain feature descriptions with excellent performance: the distance distributions of matched and unmatched feature pairs are described by Gaussian distributions, and the novel loss function constraining the mean and variance of the training samples is derived from the principle that reducing the feature matching error is equivalent to reducing the overlapping area of the two distance distributions. Experimental results show that, compared with existing feature description methods, the matching performance of the method is improved.

Description

Feature matching method based on triple deep neural network structure
Technical Field
The invention belongs to the technical field of image processing, and relates to an image feature matching method based on deep learning.
Background
In computer vision applications, a well-computed feature description is an important component of image matching, target localization, three-dimensional reconstruction, and the like, and plays a critical role in the accuracy of the final algorithm. Over the last decade, computing feature descriptions has been a research focus in the field of image processing. In general, methods for computing feature descriptions can be divided into hand-crafted and learning-based methods. When a hand-crafted method is used for feature extraction, it is difficult to comprehensively account for all the factors needed to obtain an effective description; good performance is hard to achieve in complex situations, and tuning requires a lot of time. Computing feature descriptions with a learning-based method allows good features to be learned automatically and removes the manual design process. However, the traditional SIFT descriptor and deep-learning-based feature description methods such as TFeat and HardNet still show poor matching performance, which affects the application value of image matching.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: the invention aims to solve the problem of poor matching performance of SIFT and of deep-learning-based feature description methods such as TFeat and HardNet in the prior art, and provides a feature matching method based on a Triplet deep neural network structure.
The technical scheme adopted by the invention to solve the above technical problem is as follows:
A feature matching method based on a Triplet deep neural network structure comprises the following steps:
Step one: training a deep neural network based on the Triplet structure
The input to the Triplet-structured deep neural network is a triple consisting of three samples: a sample randomly selected from the training data set, called the reference sample; a randomly selected sample of the same class as the reference sample, called the similar sample; and a randomly selected sample of a different class, called the heterogeneous sample. The three samples form a triple, and the whole network is trained through a loss function;
The training process of the deep neural network based on the Triplet structure is as follows:
Generate triplets (x_i^a, x_i^p, x_i^n) from a training database, where x_i^a denotes the reference sample, x_i^p denotes the same-class sample, x_i^n denotes the heterogeneous sample, and λ denotes the training interval (margin);
Train the deep neural network; the network outputs the feature expression corresponding to each sample of the triple, denoted f(x_i^a), f(x_i^p) and f(x_i^n) respectively. The distance between f(x_i^a) and f(x_i^p) should be as small as possible, the distance between f(x_i^a) and f(x_i^n) should be as large as possible, and the distance between f(x_i^a) and f(x_i^n) should exceed the distance between f(x_i^a) and f(x_i^p) by at least the minimum margin λ. This is expressed in the following inequality form:

||f(x_i^a) - f(x_i^p)|| + λ < ||f(x_i^a) - f(x_i^n)||    (1)
The inequality defines the distance relationship between homogeneous and heterogeneous samples: the distance between similar samples, plus the minimum margin λ, must be smaller than the distance between heterogeneous samples. When the distance relationship does not satisfy the above inequality, the following loss is incurred:

L_tri = Σ_i [ ||f(x_i^a) - f(x_i^p)|| - ||f(x_i^a) - f(x_i^n)|| + λ ]_+    (2)

where [·]_+ means that when the value inside the brackets is greater than zero it is taken as the error, and when it is less than zero the error is zero;
According to the analysis of the distance distributions of the positive and negative matching pairs, the mean and the variance of the two distributions are constrained so as to reduce the area of their overlapping region, where the mean adopts the following constraint:

L_mean = [ m - (μ_neg - μ_pos) ]_+    (3)

where m is the spacing factor specifying the minimum distance between the two distribution means, μ_neg is the distance mean of the negative matching pairs, and μ_pos is the distance mean of the positive matching pairs;
The variance of the distributions is constrained as follows:

L_var = σ_pos + σ_neg    (4)

where σ_pos is the variance of the positive matching pairs and σ_neg is the variance of the negative matching pairs;
combining the triple error function, the mean constraint and the variance constraint to obtain a final loss function:
L_loss = L_tri + L_mean + L_var    (5)
Using the loss function L_loss, the partial derivatives with respect to the feature expressions f(x_i^a), f(x_i^p) and f(x_i^n) are computed, giving the gradients of L_loss in the directions of those feature expressions, and the parameters of the deep neural network are adjusted by the back-propagation algorithm until the network converges to a stable state;
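As a concrete illustration of equations (1) to (5), the following is a minimal sketch of the combined loss in PyTorch, assuming the hinge forms reconstructed above; the function name, margin value and spacing factor are illustrative and not taken from the patent.

```python
import torch

def triplet_mean_var_loss(f_a, f_p, f_n, lam=1.0, m=0.5):
    """Sketch of L_loss = L_tri + L_mean + L_var for a batch of triplets.

    f_a, f_p, f_n: (B, 128) descriptors of the reference, same-class and
    heterogeneous samples. `lam` plays the role of the training interval
    lambda and `m` of the mean spacing factor; both values are illustrative.
    """
    d_pos = torch.norm(f_a - f_p, dim=1)   # distances of positive pairs
    d_neg = torch.norm(f_a - f_n, dim=1)   # distances of negative pairs

    # (2) triplet hinge: positive distance + lambda should stay below negative distance
    l_tri = torch.relu(d_pos - d_neg + lam).mean()

    # (3) mean constraint: keep the two distance means at least m apart
    l_mean = torch.relu(m - (d_neg.mean() - d_pos.mean()))

    # (4) variance constraint: shrink the spread of both distance distributions
    l_var = d_pos.var() + d_neg.var()

    # (5) final loss
    return l_tri + l_mean + l_var
```

After a forward pass of the three shared-weight branches, calling backward() on this loss yields the gradients with respect to f(x_i^a), f(x_i^p) and f(x_i^n) that drive the back-propagation step described above.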
step two: feature point detection of images
Respectively detecting the characteristic points of the target image and the image to be matched,
using the FAST (Features from Accelerated Segment Test) algorithm to detect image feature points: firstly, possible interest points are quickly screened out by judging whether the difference between each pixel on the image and the pixels on the corresponding circle meets a set threshold; then a decision tree is trained with the ID3 (Iterative Dichotomiser 3) algorithm, the 16 pixels on the circle around each feature point are input into the decision tree, and the best feature points are further screened out;
removing locally dense feature points using a non-maxima suppression (NMS) algorithm to reduce local feature point clustering; calculating the response size of each feature point, comparing adjacent feature points, reserving the feature points with large response values, and deleting the rest feature points; respectively obtaining the characteristic points of a target image and an image to be matched;
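A minimal sketch of this detection step with OpenCV is given below; the threshold value and file names are illustrative. OpenCV's FAST detector already embeds the machine-learned corner test and non-maximum suppression, so it is a stand-in for the FAST + ID3 + NMS pipeline described above rather than a re-implementation of it.

```python
import cv2

def detect_keypoints(gray, threshold=20):
    """Detect FAST corners with built-in non-maximum suppression.

    `threshold` is the intensity difference against the circle of 16
    surrounding pixels; the value here is illustrative.
    """
    fast = cv2.FastFeatureDetector_create(threshold=threshold,
                                          nonmaxSuppression=True)
    return fast.detect(gray, None)

# feature points of the target image and of the image to be matched
target_kp = detect_keypoints(cv2.imread("target.png", cv2.IMREAD_GRAYSCALE))
query_kp = detect_keypoints(cv2.imread("query.png", cv2.IMREAD_GRAYSCALE))
```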
step three: calculating the feature descriptors of the feature points on the target image and the image to be matched by using the trained neural network,
extracting a square image block with the resolution of 32 × 32 by taking each feature point as the center, and inputting the square image block into a trained deep neural network to obtain a feature descriptor with 128-dimensional output;
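A sketch of this patch extraction and description step follows, assuming `net` is the trained Triplet branch from step one; the helper name and the border handling are assumptions for illustration.

```python
import numpy as np
import torch

def describe(gray, keypoints, net, patch=32):
    """Crop a 32x32 patch around each keypoint and return 128-d descriptors.

    Keypoints whose patch would fall outside the image are skipped in
    this sketch; `net` is the trained descriptor network.
    """
    half = patch // 2
    patches, kept = [], []
    for kp in keypoints:
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if half <= x < gray.shape[1] - half and half <= y < gray.shape[0] - half:
            patches.append(gray[y - half:y + half, x - half:x + half])
            kept.append(kp)
    batch = torch.from_numpy(np.stack(patches)).float().unsqueeze(1) / 255.0
    with torch.no_grad():
        desc = net(batch)            # (N, 128) descriptors
    return kept, desc.numpy()
```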
step four: fast matching using approximate nearest neighbor (FLANN) algorithm for high dimensional data
The FLANN (Fast Library for Approximate Nearest Neighbors) algorithm is used to compute the Euclidean distance between the 128-dimensional feature descriptor of each feature point on the target image and the descriptors of all feature points on the image to be matched, realizing fast matching. The smaller the Euclidean distance, the higher the similarity; when the Euclidean distance is less than or equal to the set threshold, the match is judged to be successful.
The FLANN algorithm is a fast nearest neighbor search algorithm implemented with a k-d tree and is suitable for fast matching of high-dimensional features. Feature point matching in the invention is realized by using the FLANN algorithm to compute the Euclidean distances between the 128-dimensional feature descriptors of the two groups of feature points;
step five: calculating affine transformation matrix to complete feature matching
Because the result of the feature matching often contains some wrong matching pairs, on the basis of the feature matching, an affine transformation matrix of the two images is calculated by using a random sample consensus (RANSAC) algorithm.
In step four, feature point matching is realized by using the FLANN algorithm to calculate the Euclidean distances between the 128-dimensional feature descriptors of the two groups of feature points; the specific process is as follows:
(1) calculating the variance of each dimension of the feature points to be matched, selecting the dimension with the largest variance to divide the feature set into two parts, and repeating the same process for each subset, thereby establishing k-d tree storage features;
(2) and when the features are matched, performing feature search based on the k-d tree, and finding out nearest neighbor matching through binary search and backtracking operation.
The concrete implementation process of the fifth step is as follows:
(1) randomly selecting 3 groups of non-collinear point pairs from all feature matching results of a target image and an image to be matched each time, calculating an affine transformation matrix, testing errors of all other matching results under the affine transformation matrix, and counting the number of matches smaller than a set error threshold;
(2) repeating the step (1) n times, and selecting a group of parameters with the largest matching number from the final result as a final affine transformation matrix.
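The sample-and-score loop described in (1) and (2) above is what OpenCV's RANSAC-based affine estimator performs internally; a minimal sketch follows, where the reprojection error threshold is illustrative.

```python
import cv2
import numpy as np

def ransac_affine(kp_target, kp_query, matches, err_thresh=3.0):
    """Estimate the affine transform between matched keypoints with RANSAC.

    `matches` are cv2.DMatch objects from the FLANN step (queryIdx indexes
    the target keypoints, trainIdx the query keypoints); `err_thresh`
    in pixels is an illustrative error threshold.
    """
    src = np.float32([kp_target[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_query[m.trainIdx].pt for m in matches])
    A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                      ransacReprojThreshold=err_thresh)
    good = [m for m, keep in zip(matches, inliers.ravel()) if keep]
    return A, good    # 2x3 affine matrix and the surviving matches
```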
The invention has the beneficial effects that:
the method of the invention uses the distance distribution of the Gaussian distribution approximate matching feature pair (positive matching pair) and the unmatched feature pair (negative matching pair) to obtain the relationship between the area of the overlapping area of the two matching features and the two distribution statistical information through analysis. Since the overlapping region of the two matching feature pairs in the distance distribution is a part which is easy to generate misjudgment, that is, the distance in the interval cannot be accurately judged to be matched or not matched, the reduction of the confusion region is beneficial to improving the accuracy of feature measurement. In order to reduce the overlapping area of the distance distribution of the two matching features, the invention provides a new loss function, and combines with a triple deep neural network, so that the feature description with excellent performance can be obtained, and the feature matching accuracy of the image is improved.
The method provided by the invention makes up for the shortcomings of hand-crafted feature descriptions in image matching, namely that it is difficult to account for all relevant factors and that tuning requires a large amount of time. With the novel loss function constraining the mean and variance of the training samples and the Triplet deep neural network, feature descriptions of image feature points with excellent performance can be learned automatically, the matching accuracy of images can be greatly improved, and the method can be applied to practical image matching.
The feature matching method based on the Triplet deep neural network structure provided by the invention designs a novel loss function that constrains the mean and variance of the training samples and combines it with the Triplet deep neural network to obtain feature descriptions with excellent performance. The method describes the distance distributions of matched and unmatched feature pairs with Gaussian distributions, and derives the novel loss function constraining the mean and variance of the training samples from the principle that reducing the feature matching error is equivalent to reducing the overlapping area of the two distance distributions. Experimental results show that, compared with the traditional SIFT descriptor and deep-learning-based feature description methods such as TFeat and HardNet, the matching performance of the feature description based on the Triplet deep neural network structure is improved, and the method has practical value in image matching.
Through testing, the deep neural network trained with the Triplet structure of the invention outperforms existing methods (see Table 1). The FPR95 metric is used to evaluate network performance. After the network is trained, matching pairs generated from the test data set are input, the features of each matching pair are computed and the pair's distance is calculated, and the FPR95 metric is evaluated over the distances of all matching pairs: all computed distances are sorted from small to large and a distance threshold μ is set; as μ moves from the minimum to the maximum, all pairs below the threshold are regarded as positive matches and all pairs above it as negative matches, so the recall of the positive matching pairs gradually increases from 0 to 1. When the recall reaches 0.95, the fraction of negative matching pairs whose distance falls below the threshold μ (the false positive rate) is the FPR95 value. The smaller this value, the fewer the misclassified samples and the more accurately the network measures distance.
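A minimal NumPy sketch of the FPR95 computation described above follows; it assumes a 1/0 label per matching pair (1 for positive, 0 for negative) and ignores ties and edge cases.

```python
import numpy as np

def fpr95(distances, labels):
    """False positive rate at 95% recall of the positive matching pairs."""
    order = np.argsort(distances)
    labels = np.asarray(labels)[order]          # 1 = positive pair, 0 = negative pair
    n_pos = labels.sum()
    cum_pos = np.cumsum(labels)
    # first threshold index at which 95% of the positive pairs are recalled
    k = np.searchsorted(cum_pos, 0.95 * n_pos)
    false_positives = (k + 1) - cum_pos[k]      # negatives below the threshold
    return false_positives / (labels == 0).sum()
```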
Drawings
FIG. 1 is a flow chart of image matching of the present invention;
FIG. 2 is a Triplet deep neural network architecture of the present invention;
FIG. 3 is a comparison graph of feature matching according to the present invention, in which (a) is SIFT algorithm matching and (b) is triple method matching.
Detailed Description
The present invention will be described below with reference to the accompanying drawings.
As shown in FIG. 1 to FIG. 3, a feature matching method based on a Triplet deep neural network structure includes the following steps:
Step one: a large amount of triplet training data is generated from the UBC image block matching database, and the deep neural network of the Triplet structure is trained and tested with the target loss function L_loss proposed by the invention; the Triplet deep neural network structure is shown in FIG. 2. The network architecture used is: three convolutional layers followed by a fully connected layer, with a nonlinear transformation and a max pooling layer after each of the first two convolutional layers, and the features are normalized to unit vectors after the last fully connected layer.
Step two: feature points are detected in the original image and the target image respectively with the FAST algorithm; a decision tree is trained with the ID3 algorithm, the 16 pixels on the circle around each candidate are input into the decision tree to screen out the best feature points, and locally dense feature points are removed with the NMS algorithm.
Step three: taking each feature point obtained on the original image and the target image as a center, a square image block of size 32 x 32 is extracted and input into the trained deep neural network to obtain the corresponding 128-dimensional feature descriptor.
Step four: feature matching is realized by using the FLANN algorithm to calculate the Euclidean distances between the 128-dimensional feature descriptors of the two groups of feature points.
Step five: the affine transformation matrix between the two images is calculated with the random sample consensus (RANSAC) algorithm, and the correct feature matches are obtained.
Figure 3 and table 1 show the results of qualitative and quantitative comparisons, respectively, of the method of the invention with other methods.
As shown in the feature matching comparison in FIG. 3, the correctness of the image matching achieved by the Triplet method of the invention is significantly higher than that of the matching based on the SIFT descriptor.
As shown in Table 1, on the Notredame, Yosemite and Liberty public data sets of the UBC database, the matching accuracy of the Triplet method of the invention is improved compared with the traditional SIFT descriptor and deep-learning-based feature description methods such as TFeat and HardNet. The values indicate the false positive rate when the true positive rate reaches 95% (FPR95); a smaller value indicates better performance.
Experiments show that the feature description with excellent performance is obtained by the novel loss function which is used for constraining the mean value and the variance of the training sample and the Triplet deep neural network, so that the matching accuracy of the images is greatly improved, and the feature matching method based on the Triplet deep neural network structure has a certain application value in actual image matching.
TABLE 1
[Table 1 is reproduced as an image in the original publication; it lists FPR95 values of SIFT, TFeat, HardNet and the proposed method on the Notredame, Yosemite and Liberty data sets.]
Table 1 is a comparison of the performance of the feature matching of the present invention.

Claims (3)

1. A feature matching method based on a Triplet deep neural network structure, characterized by comprising the following steps:
Step one: training a deep neural network based on the Triplet structure
The input to the Triplet-structured deep neural network is a triple consisting of three samples: a sample randomly selected from the training data set, called the reference sample; a randomly selected sample of the same class as the reference sample, called the similar sample; and a randomly selected sample of a different class, called the heterogeneous sample; the three samples form a triple, and the whole network is trained through a loss function;
The training process of the deep neural network based on the Triplet structure is as follows:
Generate triplets (x_i^a, x_i^p, x_i^n) from a training database, where x_i^a denotes the reference sample, x_i^p denotes the same-class sample, x_i^n denotes the heterogeneous sample, and λ denotes the training interval (margin);
Train the deep neural network; the network outputs the feature expression corresponding to each sample of the triple, denoted f(x_i^a), f(x_i^p) and f(x_i^n) respectively. The distance between f(x_i^a) and f(x_i^p) should be as small as possible, the distance between f(x_i^a) and f(x_i^n) should be as large as possible, and the distance between f(x_i^a) and f(x_i^n) should exceed the distance between f(x_i^a) and f(x_i^p) by at least the minimum margin λ. This is expressed in the following inequality form:

||f(x_i^a) - f(x_i^p)|| + λ < ||f(x_i^a) - f(x_i^n)||    (1)
The inequality defines the distance relationship between homogeneous and heterogeneous samples: the distance between similar samples, plus the minimum margin λ, must be smaller than the distance between heterogeneous samples. When the distance relationship does not satisfy the above inequality, the following loss is incurred:

L_tri = Σ_i [ ||f(x_i^a) - f(x_i^p)|| - ||f(x_i^a) - f(x_i^n)|| + λ ]_+    (2)

where [·]_+ means that when the value inside the brackets is greater than zero it is taken as the error, and when it is less than zero the error is zero;
According to the analysis of the distance distributions of the positive and negative matching pairs, the mean and the variance of the two distributions are constrained so as to reduce the area of their overlapping region, where the mean adopts the following constraint:

L_mean = [ m - (μ_neg - μ_pos) ]_+    (3)

where m is the spacing factor specifying the minimum distance between the two distribution means, μ_neg is the distance mean of the negative matching pairs, and μ_pos is the distance mean of the positive matching pairs;
The variance of the distributions is constrained as follows:

L_var = σ_pos + σ_neg    (4)

where σ_pos is the variance of the positive matching pairs and σ_neg is the variance of the negative matching pairs;
combining the triple error function, the mean constraint and the variance constraint to obtain a final loss function:
L_loss = L_tri + L_mean + L_var    (5)
Using the loss function L_loss, the partial derivatives with respect to the feature expressions f(x_i^a), f(x_i^p) and f(x_i^n) are computed, giving the gradients of L_loss in the directions of those feature expressions, and the parameters of the deep neural network are adjusted by the back-propagation algorithm until the network converges to a stable state;
step two: feature point detection of images
Respectively detecting the characteristic points of the target image and the image to be matched,
detecting image characteristic points by using a FAST algorithm: firstly, quickly screening out possible interest points by judging whether the difference value between each pixel point on the image and the pixel on the corresponding circumference meets a set threshold value, then training a decision tree by using an ID3 algorithm, inputting 16 pixels on the circumference of the feature point into the decision tree, and further screening out the optimal feature point;
removing local dense feature points by using a non-maximum suppression algorithm to reduce local feature point aggregation; calculating the response size of each feature point, comparing adjacent feature points, reserving the feature points with large response values, and deleting the rest feature points; respectively obtaining the characteristic points of a target image and an image to be matched;
step three: calculating the feature descriptors of the feature points on the target image and the image to be matched by using the trained neural network,
extracting a square image block with the resolution of 32 × 32 by taking each feature point as the center, and inputting the square image block into a trained deep neural network to obtain a feature descriptor with 128-dimensional output;
step four: approximate nearest neighbor algorithm using high dimensional data for fast matching
The FLANN algorithm is used to compute the Euclidean distance between the 128-dimensional feature descriptor of each feature point on the target image and the descriptors of all feature points on the image to be matched, realizing fast matching; the smaller the Euclidean distance, the higher the similarity, and when the Euclidean distance is less than or equal to the set threshold, the match is judged to be successful;
step five: calculating affine transformation matrix to complete feature matching
On the basis of feature matching, an affine transformation matrix of the two images is calculated by using a random sampling consistency algorithm.
2. The feature matching method based on the Triplet deep neural network structure as claimed in claim 1, wherein in step four, feature point matching is implemented by using the FLANN algorithm to calculate the Euclidean distances between the 128-dimensional feature descriptors of the two groups of feature points, and the specific process is as follows:
(1) calculating the variance of each dimension of the feature points to be matched, selecting the dimension with the largest variance to divide the feature set into two parts, and repeating the same process for each subset, thereby establishing k-d tree storage features;
(2) and when the features are matched, performing feature search based on the k-d tree, and finding out nearest neighbor matching through binary search and backtracking operation.
3. The feature matching method based on the Triplet deep neural network structure according to claim 1 or 2, wherein the step five is implemented by the following steps:
(1) randomly selecting 3 groups of non-collinear point pairs from all feature matching results of a target image and an image to be matched each time, calculating an affine transformation matrix, testing errors of all other matching results under the affine transformation matrix, and counting the number of matches smaller than a set error threshold;
(2) repeating the step (1) n times, and selecting a group of parameters with the largest matching number from the final result as a final affine transformation matrix.
CN201811112938.7A 2018-09-21 2018-09-21 Feature matching method based on triple deep neural network structure Active CN109344845B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811112938.7A CN109344845B (en) 2018-09-21 2018-09-21 Feature matching method based on triple deep neural network structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811112938.7A CN109344845B (en) 2018-09-21 2018-09-21 Feature matching method based on triple deep neural network structure

Publications (2)

Publication Number Publication Date
CN109344845A CN109344845A (en) 2019-02-15
CN109344845B true CN109344845B (en) 2020-06-09

Family

ID=65306260

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811112938.7A Active CN109344845B (en) 2018-09-21 2018-09-21 Feature matching method based on triple deep neural network structure

Country Status (1)

Country Link
CN (1) CN109344845B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109948577B (en) * 2019-03-27 2020-08-04 无锡雪浪数制科技有限公司 Cloth identification method and device and storage medium
CN110135474A (en) * 2019-04-26 2019-08-16 武汉市土地利用和城市空间规划研究中心 A kind of oblique aerial image matching method and system based on deep learning
CN110148120B (en) * 2019-05-09 2020-08-04 四川省农业科学院农业信息与农村经济研究所 Intelligent disease identification method and system based on CNN and transfer learning
CN111915480B (en) * 2020-07-16 2023-05-23 抖音视界有限公司 Method, apparatus, device and computer readable medium for generating feature extraction network
CN113505929B (en) * 2021-07-16 2024-04-16 中国人民解放军军事科学院国防科技创新研究院 Topological optimal structure prediction method based on embedded physical constraint deep learning technology
CN114972740A (en) * 2022-07-29 2022-08-30 上海鹰觉科技有限公司 Automatic ship sample collection method and system
CN115546521B (en) * 2022-11-07 2024-05-07 佳木斯大学 Point matching method based on key point response constraint

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574215B (en) * 2016-03-04 2019-11-12 哈尔滨工业大学深圳研究生院 A kind of instance-level image search method indicated based on multilayer feature
CN106845330A (en) * 2016-11-17 2017-06-13 北京品恩科技股份有限公司 A kind of training method of the two-dimension human face identification model based on depth convolutional neural networks
CN106780906B (en) * 2016-12-28 2019-06-21 北京品恩科技股份有限公司 A kind of testimony of a witness unification recognition methods and system based on depth convolutional neural networks
CN106980641B (en) * 2017-02-09 2020-01-21 上海媒智科技有限公司 Unsupervised Hash quick picture retrieval system and unsupervised Hash quick picture retrieval method based on convolutional neural network
CN108399428B (en) * 2018-02-09 2020-04-10 哈尔滨工业大学深圳研究生院 Triple loss function design method based on trace ratio criterion

Also Published As

Publication number Publication date
CN109344845A (en) 2019-02-15

Similar Documents

Publication Publication Date Title
CN109344845B (en) Feature matching method based on triple deep neural network structure
WO2021134871A1 (en) Forensics method for synthesized face image based on local binary pattern and deep learning
CN109697692B (en) Feature matching method based on local structure similarity
CN103403704B (en) For the method and apparatus searching arest neighbors
CN108346154B (en) Method for establishing lung nodule segmentation device based on Mask-RCNN neural network
CN107633226B (en) Human body motion tracking feature processing method
CN110909643B (en) Remote sensing ship image small sample classification method based on nearest neighbor prototype representation
CN105528638A (en) Method for grey correlation analysis method to determine number of hidden layer characteristic graphs of convolutional neural network
CN112633382A (en) Mutual-neighbor-based few-sample image classification method and system
CN109903282B (en) Cell counting method, system, device and storage medium
CN114091606A (en) Tunnel blasting blast hole half-hole mark identification and damage flatness evaluation classification method
CN108898269A (en) Electric power image-context impact evaluation method based on measurement
CN107301643A (en) Well-marked target detection method based on robust rarefaction representation Yu Laplce's regular terms
CN110211127A (en) Image partition method based on bicoherence network
CN112365511A (en) Point cloud segmentation method based on overlapped region retrieval and alignment
CN116366313A (en) Small sample abnormal flow detection method and system
CN112529908A (en) Digital pathological image segmentation method based on cascade convolution network and model thereof
CN113033345B (en) V2V video face recognition method based on public feature subspace
CN113470085B (en) Improved RANSAC-based image registration method
CN110083724A (en) A kind of method for retrieving similar images, apparatus and system
CN113762151A (en) Fault data processing method and system and fault prediction method
CN103823889B (en) L1 norm total geometrical consistency check-based wrong matching detection method
CN104123382B (en) A kind of image set abstraction generating method under Social Media
CN102609732A (en) Object recognition method based on generalization visual dictionary diagram
CN113177602B (en) Image classification method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant