CN104281572B - A target matching method and system based on mutual information - Google Patents


Publication number
CN104281572B
CN104281572B (application CN201310271950.3A)
Authority
CN
China
Prior art keywords
category
feature
mutual information
target matching
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310271950.3A
Other languages
Chinese (zh)
Other versions
CN104281572A (en)
Inventor
秦磊
刘昊
黄庆明
成仲炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS
Priority: CN201310271950.3A
Publication of CN104281572A
Application granted
Publication of CN104281572B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464 Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a target matching method and system based on mutual information. The method includes: step 1, concatenating the features of a query image and a reference image; step 2, grouping the concatenated feature pairs by category into the SET feature set under each category, where each category corresponds to one SET feature set containing the feature pairs formed by the query image and the reference images of that category; step 3, using mutual information to characterize the relationship between each SET feature set and its category label, and obtaining the target matching category by computing the mutual information. The method makes full use of the information in the multiple gallery images to improve matching accuracy and performance.

Description

A Target Matching Method and System Based on Mutual Information

Technical Field

The present invention relates to multi-category target matching techniques for image databases in the field of digital image processing, and in particular to a target matching method and system based on mutual information.

Background Art

Target matching has many existing applications, for example in object recognition and classification and in image retrieval. In multi-category image database retrieval, for a given query image, the query image is scored against the reference images of each category in the database using a distance measure and the results are ranked; the highest-ranked match gives the final result. Target matching is generally applied to object retrieval and recognition, and can also be applied to tracking. Here, Person Re-Identification is taken as an example to introduce the application of target matching.

Although computer vision researchers have devoted much effort to person re-identification in recent years, the problem remains very challenging, for several reasons:

First, in complex and hard-to-control camera environments, it is difficult to verify a person's identity through biometric cues such as the face or gait.

Second, the many uncertainties across camera viewpoints make it hard to obtain robust spatio-temporal information, so person re-identification is difficult to model through features.

Furthermore, visual appearance features, such as features extracted from a person's clothing or silhouette, are fairly direct but not very discriminative. In addition, a person's appearance varies greatly across multiple cameras, because each camera's imaging depends on illumination, viewing angle, and background occlusion, so the same person can look quite different under different viewpoints.

Given a query image, finding the matching reference images from other viewpoints requires the following two steps:

First, compute the feature representations of the query image and of the database images;

Second, compute the distance between them with some distance measure and sort the results. The top-1 result is the answer; when the requirement is less strict, the top-k results can all be returned as candidates.
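The two retrieval steps above can be sketched as follows. This is an illustrative outline, not code from the patent: plain feature vectors and the Euclidean distance stand in for whatever descriptor and metric a concrete system would use.

```python
import numpy as np

def rank_gallery(query_feat, gallery_feats, k=3):
    """Rank gallery entries by Euclidean distance to the query (smaller = better)."""
    dists = np.linalg.norm(gallery_feats - query_feat, axis=1)  # step 2: distance measure
    order = np.argsort(dists)                                   # sort ascending
    return order[:k], dists[order[:k]]                          # top-k candidate indices

# Toy example: 4 gallery feature vectors (step 1 would produce these from images).
gallery = np.array([[1.0, 0.0], [0.0, 1.0], [0.8, 0.1], [5.0, 5.0]])
query = np.array([1.0, 0.1])
top_idx, top_dist = rank_gallery(query, gallery, k=2)
print(top_idx)  # nearest gallery entries first
```

The top-1 index is the strict answer; returning the whole `top_idx` slice corresponds to the relaxed top-k candidate list described above.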

Current research methods can be divided into three categories: methods based on feature description, machine-learning methods based on metric learning, and methods based on classification. Feature-description methods design descriptors suited to the image characteristics and the task, seeking descriptors that are discriminative and stable across viewpoints. For example, reference 1 "M. Farenzena, L.B., A. Perina, V. Murino, M. Cristani (2010). Person Re-Identification by Symmetry-Driven Accumulation of Local Features. CVPR." uses different features to describe the same person: a symmetry-driven method first extracts the person's skeleton, and on this basis the color features of the person's appearance are mined from the image. This method depends strongly on preprocessing, namely human body segmentation, so the resulting feature representation is not robust. Reference 2 "Bingpeng Ma, Y.S., Frederic Jurie (2012). BiCov: a novel image representation for person re-identification and face verification. BMVC." applies Gabor filters to the three HSV color channels of the image, fuses the features across adjacent scales, and finally uses a covariance descriptor to measure the distance between features. As another example, reference 3 "Bingpeng Ma, Y.S. (2012). Local Descriptors encoded by Fisher Vectors for Person Re-identification. Workshop in ECCV." abstracts low-level features into a mid-level representation of so-called Attributes, extracting visual or semantic features, for instance dividing the horizontal stripes of a person's image into different clothing types such as short sleeves, trousers, and so on.

In contrast, many researchers have turned to metric learning: instead of directly using a simple Euclidean distance after feature extraction, the metric is learned with machine-learning methods. The Mahalanobis distance is first introduced, so that person re-identification is formalized as a matching problem, and metric learning learns the matrix M in that formulation. Reference 4 "Zheng, W.S. (2012). Re-identification by Relative Distance Comparison. PAMI." introduces a method that minimizes a relative distance comparison; the authors formalize person re-identification, in the spirit of LDA, as an optimization problem and then solve and prove it, so the whole process is relatively complicated. Reference 5 "Martin Kostinger, M.H., Paul Wohlhart, Peter M. Roth, Horst Bischof (2012). Large Scale Metric Learning from Equivalence Constraints. CVPR." views the relationship between pairs as a likelihood ratio test and from that derivation solves for M directly, so the method is fast. Similarly, reference 6 "Martin Hirzer, Peter M. Roth, Martin Kostinger, and Horst Bischof (2012). Relaxed Pairwise Learned Metric for Person Re-identification. ECCV." uses a similar approach, writing the metric with the Frobenius norm so that the final derivation can be obtained using the trace of a matrix; the advantages of this method are likewise a small amount of computation and high speed. In addition, reference 7 "Prosser, B. (2010). Person Re-Identification by Support Vector Ranking. BMVC." treats similar versus dissimilar as a binary classification problem, and reference 8 "Tamar Avraham, I.G., Michael Lindenbaum, and Shaul Markovitch (2012). Learning Implicit Transfer for Person Re-identification. ECCV Workshop." likewise concatenates two feature vectors into one pair and uses an SVM for binary classification.

In addition, for the person re-identification task, three public datasets are mainly used for experimental training and testing: VIPeR, i-LIDS, and ETHZ. ETHZ was originally designed for human detection and tracking, shot with moving cameras in complex street scenes from multiple viewpoints; it contains 146 people and 8555 images, and in typical experiments the images are normalized to 64*128. The i-LIDS dataset was collected at a busy airport and contains 119 people and 476 images, about 4 images per person on average. The VIPeR dataset contains 632 people, with only two images per person.

The same issues arise in target matching technology in general. The problems of the mainstream target matching methods are summarized below:

The first is feature-based matching, which is quite common: extract some image features, such as color features and local features like SIFT, SURF, or HOG, then measure similarity by Euclidean distance, i.e., nearest-neighbor matching between features, and obtain the matching result with a fixed threshold. Programs that use feature matching alone run slowly, so such features were later built into a BoW (bag-of-words) model: features with similar characteristics are quantized to the same cluster center, which speeds up matching, though this trades accuracy for time. The limitation of appearance-feature methods is that the matching result depends strongly on the feature representation, and the choice of features differs across databases, depending largely on the characteristics of the images in each database.
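The BoW quantization just described can be sketched as follows. This is an illustrative toy, not the patent's code: local descriptors are assigned to their nearest cluster center and the image is summarized by a normalized histogram of center counts.

```python
import numpy as np

def bow_histogram(descriptors, centers):
    """Quantize each local descriptor to its nearest cluster center and count."""
    # pairwise distances, shape (n_descriptors, n_centers)
    d = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    assignment = d.argmin(axis=1)                      # nearest center per descriptor
    hist = np.bincount(assignment, minlength=len(centers))
    return hist / hist.sum()                           # normalized bag-of-words histogram

centers = np.array([[0.0, 0.0], [10.0, 10.0]])         # a tiny 2-word "codebook"
descs = np.array([[0.1, 0.2], [9.8, 10.1], [0.0, 0.4], [10.2, 9.9]])
print(bow_histogram(descs, centers))
```

Matching then compares the short histograms instead of all raw descriptors, which is the speed-for-accuracy trade-off the text mentions.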

The other mainstream approach is based on distance metric learning. Starting from the Mahalanobis distance and grounded in machine learning, it exploits the fact that the Mahalanobis matrix is in effect a weighting of the different dimensions of the data features, or equivalently a linear transformation of the features. According to the characteristics of the data, objective functions and data-dependent constraints are designed; these constraints strengthen the discriminative feature dimensions and weaken the non-discriminative ones, and the Mahalanobis matrix is finally solved for. Such machine-learning methods require a training stage during data preprocessing.
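The role of the Mahalanobis matrix described above can be illustrated with a small sketch (not the patent's code; the matrix M here is a hand-picked positive semi-definite example, not a learned one): writing M = L^T L shows the metric is just a Euclidean distance after the linear transformation L, i.e., a re-weighting of feature dimensions.

```python
import numpy as np

def mahalanobis(x, y, M):
    """d_M(x, y) = sqrt((x - y)^T M (x - y)) for positive semi-definite M."""
    diff = x - y
    return float(np.sqrt(diff @ M @ diff))

# M = L^T L: the metric equals the plain Euclidean distance after applying L.
L = np.array([[2.0, 0.0],
              [0.0, 0.5]])        # weight dimension 0 up, dimension 1 down
M = L.T @ L
x = np.array([1.0, 4.0])
y = np.array([0.0, 0.0])
d1 = mahalanobis(x, y, M)
d2 = float(np.linalg.norm(L @ x - L @ y))  # same value via the linear transform
print(d1, d2)
```

Metric learning replaces the hand-picked L with a matrix solved from the training objective and constraints described above.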

Potential problems in this line of research: methods based on low-level feature matching have hit a performance bottleneck, because the visual disparity caused by multiple views of the same object always hampers local feature representation; metric learning methods such as LMNN, ITML, and LDML are rather complex optimization models that are hard to solve and computationally expensive. At this stage, effectively combining suitable features with a relatively fast metric-learning step is a promising way to address target matching.

Summary of the Invention

The object of the present invention is to provide a target matching method and system based on mutual information, to solve the low-accuracy problem of existing target matching techniques.

To achieve the above object, the present invention provides a target matching method based on mutual information, characterized in that it comprises:

Step 1, concatenating the features of a query image and a reference image;

Step 2, grouping the concatenated feature pairs by category into the SET feature set under each category, where each category corresponds to one SET feature set containing the feature pairs formed by the query image and the reference images of that category;

Step 3, using mutual information to characterize the relationship between each SET feature set and its category label, and obtaining the target matching category by processing the mutual information.

In the above target matching method based on mutual information, in step 1 the image feature is a color feature or a texture histogram.

In the above target matching method based on mutual information, step 3 includes characterizing the relationship between the SET feature set and its category label with the following formula:

where c denotes the value of a category label, N is the maximum value of the category labels, S denotes the SET feature set corresponding to category label c, C denotes the category label, and MI is the mutual information between the SET feature set and the category label.
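The formula itself did not survive in this text-only record (it appears as an image in the original publication). Based on the symbol definitions in the preceding sentence and the argmax-over-mutual-information decision rule described later in the document, the selection rule it expresses plausibly has the following form; this is a hedged reconstruction, not the patent's exact typography:

```latex
% Assumed reconstruction: the query is assigned the category label c whose
% SET feature set S_c has the largest mutual information with the label C.
c^{*} = \operatorname*{arg\,max}_{c \in \{1, \dots, N\}} \; \mathrm{MI}(S_{c};\, C)
```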

In the above target matching method based on mutual information, step 3 includes obtaining the target matching category by a nearest-neighbor approach.

In the above target matching method based on mutual information, step 3 includes sorting the values of the mutual information between all SET feature sets and the category labels, and assigning the category label corresponding to the maximum mutual information to the query image, to obtain the target matching category.

To achieve the above object, the present invention also provides a target matching system based on mutual information, characterized in that it comprises:

a feature concatenation module, for concatenating the features of a query image and a reference image;

a category correspondence module, connected to the feature concatenation module, for grouping the concatenated feature pairs by category into the SET feature set under each category, where each category corresponds to one SET feature set containing the feature pairs formed by the query image and the reference images of that category;

a target matching module, connected to the category correspondence module, for using mutual information to characterize the relationship between each SET feature set and its category label, and obtaining the target matching category by processing the mutual information.

In the above target matching system based on mutual information, the image feature is a color feature or a texture histogram.

In the above target matching system based on mutual information, the target matching module characterizes the relationship between the SET feature set and its category label with the following formula:

where c denotes the value of a category label, N is the maximum value of the category labels, S denotes the SET feature set corresponding to category label c, C denotes the category label, and MI is the mutual information between the SET feature set and the category label.

In the above target matching system based on mutual information, the target matching module obtains the target matching category by a nearest-neighbor approach.

In the above target matching system based on mutual information, the target matching module sorts the values of the mutual information between all SET feature sets and the category labels, and assigns the category label corresponding to the maximum mutual information to the query image, to obtain the target matching category.

Compared with the prior art, the beneficial technical effects of the present invention are:

The present invention solves the low-accuracy problem of existing target matching techniques. By concatenating and pairing images, mutual information is used to represent the relationship between feature pairs and their categories, yielding an effective target matching method that makes full use of the information in the multiple gallery images to improve matching accuracy and performance.

First, for feature construction, the features extracted from the query image and a reference image are concatenated, so that the information of both images is effectively fused into a pairwise feature. To make full use of the multiple reference images in each category, these pairwise features are used to construct a SET feature set corresponding to each category, and the relationship between the SET feature sets and their category labels is modeled. To characterize this relationship, the concept of mutual information from information theory is introduced; by computing and sorting the mutual information for each category, and approaching the task from a classification perspective, an effective surveillance-oriented, robust person re-identification algorithm is proposed. Experimental results show that the method greatly improves performance.

Brief Description of the Drawings

Fig. 1 is a flowchart of the target matching method based on mutual information of the present invention;

Fig. 2 is a structural diagram of the target matching system based on mutual information of the present invention;

Figs. 3A-3D show an embodiment of target matching based on mutual information according to the present invention.

Detailed Description

The present invention is described in detail below with reference to the accompanying drawings and specific embodiments, which are not intended to limit the invention. Fig. 1 is a flowchart of the target matching method based on mutual information of the present invention. The process specifically includes the following steps:

Step 101, concatenating the features of the query image and of the images in the gallery.

In this step, concatenating the query image features with the gallery image features strengthens the matching feature dimensions, so the reference images contribute more information.

In image retrieval, one or several images are selected as reference images for each image category; the collection of these images forms the image database, called the gallery.

Step 102, grouping the concatenated feature pairs by category into the SET feature set under each category, where each category corresponds to one SET feature set containing the feature pairs formed by the query image and the reference images of that category.

In this step, the SET feature set corresponding to each category is obtained from the pairwise features, fusing the visual information of the query image with that of the candidate image database and making full use of the multiple images in the gallery.

A pairwise feature is the concatenation of the features of two images into one feature pair, which enhances the expressive power of the image features. For example, the query image features and the candidate image features in the gallery are concatenated to form a feature pair, called a pairwise feature.

Step 103, using mutual information to characterize the relationship between each SET feature set and its category; by computing and sorting the mutual information, the target matching category is finally obtained.

In this step, mutual information is used to compute the relationship between the SET feature set and the category, that is, how similar the concatenation of the query image and the reference images is to the category. With this similarity measure, the final category of the image can be obtained by the nearest-neighbor method, so that the contribution of the multiple reference images to the different categories is exploited to the greatest extent.
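The decision in step 103 can be sketched as follows. This is a simplified stand-in, not the patent's estimator: the per-class score here is a plain negative nearest-neighbor distance from the query to that class's reference features, playing the role of the mutual-information value (which the patent likewise approximates with a nearest-neighbor computation) before the classes are ranked and the best one is returned.

```python
import numpy as np

def match_category(query_feat, gallery_by_class):
    """Assign the query to the class with the highest similarity score.

    Score (assumption, standing in for the mutual-information value):
    negative distance from the query to its nearest reference of each class.
    """
    scores = {}
    for label, refs in gallery_by_class.items():
        refs = np.asarray(refs, dtype=float)
        scores[label] = -np.min(np.linalg.norm(refs - query_feat, axis=1))
    return max(scores, key=scores.get)  # rank the scores, take the top class

gallery = {
    "A": [[0.0, 0.0], [0.2, 0.1]],
    "B": [[5.0, 5.0], [5.1, 4.9]],
}
print(match_category(np.array([0.1, 0.1]), gallery))
```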

Fig. 2 is a structural diagram of the target matching system based on mutual information of the present invention. With reference to Fig. 1, the target matching system 200 includes:

a feature concatenation module 201, for concatenating the features of the query image and of the images in the gallery.

Further, by concatenating the query image features with the gallery image features through the feature concatenation module 201, the matching feature dimensions are strengthened, so the reference images contribute more information.

a category correspondence module 202, connected to the feature concatenation module 201, for grouping the concatenated feature pairs by category into the SET feature set under each category, where each category corresponds to one SET feature set containing the feature pairs formed by the query image and the reference images of that category.

Further, the category correspondence module 202 obtains the SET feature set corresponding to each category from the pairwise features, fusing the visual information of the query image with that of the candidate image database and making full use of the multiple images in the gallery.

The category correspondence module 202 concatenates the query image features and the candidate image features into feature pairs, thereby enhancing the discriminative power of the image features.

a target matching module 203, connected to the category correspondence module 202, for using mutual information to characterize the relationship between each SET feature set and its category; by computing and sorting the mutual information, the target matching category is finally obtained.

Further, the target matching module 203 uses mutual information to compute the relationship between the SET feature set and the category, that is, how similar the concatenation of the query image and the reference images is to the category. With this similarity measure, the final category of the image can be obtained by the nearest-neighbor method, so that the contribution of the multiple reference images to the different categories is exploited to the greatest extent.

Figs. 3A-3D show an embodiment of target matching based on mutual information according to the present invention. The embodiment is described below with reference to Figs. 1 and 2.

In Fig. 3A, a set-classification approach is proposed that solves image matching by using mutual information. As shown in the figure, given a query image, the image category most similar to the query image is searched for in the database, thereby obtaining the category label of the query image. To overcome the drawbacks of traditional target matching, the present invention first constructs feature sets: for each category, such as categories A, B, and C, there is a SET-based feature set SET A, SET B, SET C, which contains the feature pairs formed by concatenating the query image features with all the image features of that category in the gallery. The SET feature set thus represents the new pairwise features formed by concatenating the query image features with all the images in the gallery. After the SET feature sets are constructed, the concatenated features are applied to a SET-CLASS model, which represents the relationship between a SET feature set and its corresponding category label. The present invention uses mutual information to characterize this relationship; in the figure, MI A, MI B, and MI C denote the mutual information values between the SET feature sets and the category labels. Finally, the mutual information values are approximated by the nearest-neighbor method, and by sorting, the category label corresponding to the maximum mutual information is assigned to the query image, thereby obtaining the category label of the query image.

The specific steps are as follows:

S0: Data representation, extracting image features, as shown in Fig. 3B.

Given a query image, measuring its similarity to the images in the database first requires extracting image features. The invention extracts features from the query image and all gallery images. Since image sizes vary in practical scenarios, the images must be normalized to a common size before feature extraction (for example, all images are rescaled to a resolution of 64*128). After size normalization, the image features are extracted; these can be color features or texture histograms, and the different textures in Fig. 3B represent different image feature channels. The features of the query image and of the gallery images are then concatenated, which serves to strengthen the feature representation.
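As an illustrative sketch (not part of the patent text), step S0 could look like the following. The nearest-neighbor resize and the 16-bin per-channel intensity histogram are assumptions standing in for whatever color or texture histograms the patent's figures actually use.

```python
import numpy as np

def resize_nearest(image, out_h=128, out_w=64):
    """Rescale an H x W x C image to a fixed resolution (the patent's
    example uses 64*128) with simple nearest-neighbor sampling."""
    h, w = image.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return image[rows][:, cols]

def extract_features(image, bins=16):
    """Per-channel intensity histograms, concatenated into one vector:
    a stand-in for the color/texture histogram features of step S0."""
    hists = []
    for ch in range(image.shape[2]):
        h, _ = np.histogram(image[:, :, ch], bins=bins, range=(0, 256))
        hists.append(h / max(h.sum(), 1))  # normalize each channel histogram
    return np.concatenate(hists)
```

A query feature and a gallery feature extracted this way can then be concatenated into a single pairwise vector, as described in step S1.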

S1: Construct the SET feature sets, as shown in Fig. 3C.

In the gallery, each category, such as pedestrian A, pedestrian B, and pedestrian C (also referred to as category A, category B, and category C), corresponds to one or more images serving as reference images. Given a query image, its similarity to each gallery category is compared, and the query is judged to belong to the most similar category. As described in step S0, the query image feature x_q is concatenated with the feature of each gallery image of a category; concatenating the two features together forms the pairwise feature for that image pair. Each category thus has multiple concatenated feature pairs, so pedestrians B and C in the figure likewise have their own corresponding pairs. The invention treats each SET feature set, i.e. the concatenations of the query image feature with the gallery image features of one category, as a hypothesis: if the query image belongs to that category, the concatenated pairs should be more tightly related to the category and exhibit a higher degree of similarity.
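A minimal sketch of step S1 follows: grouping the pairwise features x_qj into one SET feature set per category. The dict-of-lists gallery layout is an assumption made for illustration.

```python
import numpy as np

def build_set_features(x_query, gallery):
    """gallery maps a category label to the feature vectors of its
    reference images. Returns a dict mapping each label c to its SET
    feature set S_c: the pairwise features [x_q ; x_j] for every
    gallery image j of category c."""
    return {c: [np.concatenate([x_query, x_j]) for x_j in feats]
            for c, feats in gallery.items()}
```

Each S_c is the hypothesis "the query belongs to category c", encoded as the set of its query-gallery feature pairs.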

S3: Compute the mutual information.

Mutual information measures the mutual dependence between two sets of variables, here a feature set and its category label. For the target matching problem, the invention uses mutual information to characterize the relationship between the SET feature set of a category and that category's label:

ĉ = arg max_{c ∈ {1, 2, …, N}} MI(C = c; S_c)

In the formula, c denotes the value of the category label, e.g. c ∈ {1, 2, 3, …, N}, with N the maximum label value; S_c denotes the SET feature set corresponding to label c; C denotes the category label; and MI is the mutual information between the SET feature set and the category label. The target matching problem thus becomes a problem of maximizing mutual information. For pedestrians A, B, and C, the corresponding label values c are 1, 2, and 3, respectively. Under the assumption that all concatenated features within each SET feature set are mutually independent, the mutual information MI can be written as the following formulas, where P denotes a probability distribution and x_qj denotes the feature pair obtained by concatenating query image q with gallery image j:

MI(C = c; S_c) = Σ_{x_qj ∈ S_c} MI(C = c; x_qj)

as well as

MI(C = c; x_qj) = log [ P(x_qj | C = c) / P(x_qj) ]

It can be seen from the above that computing the mutual information finally reduces to solving:

ĉ = arg max_{c} Σ_{x_qj ∈ S_c} [ log P(x_qj | C = c) − log P(x_qj) ]

In the usual sense, evaluating these probability expressions requires their probability density functions, which is computationally complex, so the invention adopts an existing algorithm to approximate the probability values, as follows:

log P(x_qj | C) ∝ −||x_qj − x_c||^2

where x_c denotes any feature pair belonging to category c, so the final mutual information expression takes a nearest-neighbor form in which ω(x_qj) denotes a weighting term; NN+ denotes the free combinations of same-category gallery images, as shown in Fig. 3D; |C| denotes the number of categories (i.e. how many people are in the database); x_qj denotes the feature pair formed from the features of query image q and gallery image j; and x_ij denotes the feature pair formed from the features of gallery images i and j. Specifically, NN+ is the target set: feature pairs formed by freely combining the features of multiple pedestrian images within the same category, so that two images of the same category concatenated together act as a same-target feature. NN− is the free combination of features from two different categories, acting as a non-target feature. With this nearest-neighbor construction, the value of the mutual information can be approximated.
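The patent's exact weighted final expression is given only as a figure, but the nearest-neighbor idea can be sketched as follows: log P(x_qj | C = c) is replaced by the negative squared distance to the nearest same-category pair, and P(x_qj) by a uniform mixture over the |C| categories. The uniform-mixture marginal is an assumption made to keep the sketch well-defined; it is not claimed to reproduce the patent's ω(x_qj) weighting.

```python
import numpy as np

def nn_log_prob(x_qj, pairs):
    """Nearest-neighbor surrogate: log P(x_qj | class) ~ -||x_qj - x_c||^2,
    with x_c the closest feature pair of that class."""
    return -min(float(np.sum((x_qj - x_c) ** 2)) for x_c in pairs)

def approx_mutual_information(set_features, class_pairs):
    """Approximate MI(C=c; S_c) = sum_j [log P(x_qj|C=c) - log P(x_qj)],
    taking P(x_qj) as the uniform mixture over the |C| categories."""
    labels = list(class_pairs)
    mi = {}
    for c, s_c in set_features.items():
        total = 0.0
        for x_qj in s_c:
            log_p_c = nn_log_prob(x_qj, class_pairs[c])
            # uniform mixture over all categories as the marginal P(x_qj)
            mix = np.mean([np.exp(nn_log_prob(x_qj, class_pairs[k]))
                           for k in labels])
            total += log_p_c - float(np.log(mix))
        mi[c] = total
    return mi
```

Pairs lying close to the hypothesized category's same-class (NN+) combinations receive high mutual information; pairs closer to cross-class (NN−) combinations receive low or negative values.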

S4: Nearest-neighbor classification.

For each SET feature set, the final result depends on the values of the mutual information between all SET feature sets and the category labels. These values are sorted, and the category label with the maximum value is assigned to the query image, which thereby obtains its category label.
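Step S4 then reduces to ranking the per-category mutual information values; a minimal sketch:

```python
def classify_by_mi(mi_values):
    """Sort the per-category mutual information values in descending
    order and assign the top-ranked category label to the query image."""
    ranking = sorted(mi_values.items(), key=lambda kv: kv[1], reverse=True)
    return ranking[0][0], ranking
```

For example, given hypothetical values {"A": 0.69, "B": -5.31, "C": -1.2}, the query image would be assigned label "A".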

The present invention addresses the low accuracy of existing target matching techniques. By concatenating and pairing images, and using mutual information to represent the relationship between feature pairs and their category labels, it proposes an effective target matching method that makes full use of the information in multiple gallery images to improve matching accuracy and performance.

Of course, the present invention may also have various other embodiments. Without departing from the spirit and essence of the invention, those skilled in the art can make various corresponding changes and variations according to the invention, but all such changes and variations shall fall within the protection scope of the appended claims of the invention.

Claims (10)

1. A target matching method based on mutual information, characterized by comprising:
Step 1: concatenating the features of a query image and of reference images;
Step 2: grouping the concatenated feature pairs by category into the SET feature set of each category, each category corresponding to one SET feature set that contains the feature pairs formed by the query image and that category's reference images;
Step 3: using mutual information to characterize the relationship between each SET feature set and its category label, and obtaining the target matching category by processing the mutual information.

2. The mutual-information-based target matching method according to claim 1, characterized in that in step 1 the image feature is a color feature or a texture histogram.

3. The mutual-information-based target matching method according to claim 1 or 2, characterized in that step 3 comprises characterizing the relationship between a SET feature set and its category label with the following formula:

ĉ = arg max_{c ∈ {1, 2, …, N}} MI(C = c; S_c)

where c denotes the value of the category label, N is the maximum label value, S_c denotes the SET feature set corresponding to label c, C denotes the category label, and MI is the mutual information between the SET feature set and the category label.

4. The mutual-information-based target matching method according to claim 1 or 2, characterized in that step 3 comprises obtaining the target matching category by a nearest-neighbor method.

5. The mutual-information-based target matching method according to claim 4, characterized in that step 3 comprises sorting the values of the mutual information between all SET feature sets and category labels, and assigning the category label with the maximum mutual information to the query image to obtain the target matching category.

6. A target matching system based on mutual information, characterized by comprising:
a feature concatenation module for concatenating the features of a query image and of reference images;
a category correspondence module, connected to the feature concatenation module, for grouping the concatenated feature pairs by category into the SET feature set of each category, each category corresponding to one SET feature set that contains the feature pairs formed by the query image and that category's reference images;
a target matching module, connected to the category correspondence module, for using mutual information to characterize the relationship between each SET feature set and its category label, and obtaining the target matching category by processing the mutual information.

7. The mutual-information-based target matching system according to claim 6, characterized in that the image feature is a color feature or a texture histogram.

8. The mutual-information-based target matching system according to claim 6 or 7, characterized in that the target matching module characterizes the relationship between a SET feature set and its category label with the following formula:

ĉ = arg max_{c ∈ {1, 2, …, N}} MI(C = c; S_c)

where c denotes the value of the category label, N is the maximum label value, S_c denotes the SET feature set corresponding to label c, C denotes the category label, and MI is the mutual information between the SET feature set and the category label.

9. The mutual-information-based target matching system according to claim 6 or 7, characterized in that the target matching module obtains the target matching category by a nearest-neighbor method.

10. The mutual-information-based target matching system according to claim 9, characterized in that the target matching module sorts the values of the mutual information between all SET feature sets and category labels, and assigns the category label with the maximum mutual information to the query image to obtain the target matching category.
CN201310271950.3A 2013-07-01 2013-07-01 A kind of target matching method and its system based on mutual information Expired - Fee Related CN104281572B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310271950.3A CN104281572B (en) 2013-07-01 2013-07-01 A kind of target matching method and its system based on mutual information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310271950.3A CN104281572B (en) 2013-07-01 2013-07-01 A kind of target matching method and its system based on mutual information

Publications (2)

Publication Number Publication Date
CN104281572A CN104281572A (en) 2015-01-14
CN104281572B true CN104281572B (en) 2017-06-09

Family

ID=52256457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310271950.3A Expired - Fee Related CN104281572B (en) 2013-07-01 2013-07-01 A kind of target matching method and its system based on mutual information

Country Status (1)

Country Link
CN (1) CN104281572B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106557533B (en) * 2015-09-24 2020-03-06 杭州海康威视数字技术股份有限公司 Single-target multi-image joint retrieval method and device
CN105701501B (en) * 2016-01-04 2019-01-18 北京大学 A kind of trademark image recognition methods
CN107292365B (en) * 2017-06-27 2021-01-08 百度在线网络技术(北京)有限公司 Method, device and equipment for binding commodity label and computer readable storage medium
CN109492601A (en) * 2018-11-21 2019-03-19 泰康保险集团股份有限公司 Face comparison method and device, computer-readable medium and electronic equipment
CN109993221B (en) * 2019-03-25 2021-02-09 新华三大数据技术有限公司 Image classification method and device
CN110059755B (en) * 2019-04-22 2023-10-13 中国石油大学(华东) A seismic attribute optimization method based on the fusion of multi-feature evaluation criteria
CN111368934B (en) * 2020-03-17 2023-09-19 腾讯科技(深圳)有限公司 Image recognition model training method, image recognition method and related device
CN112906557B (en) * 2021-02-08 2023-07-14 重庆兆光科技股份有限公司 Multi-granularity feature aggregation target re-identification method and system under multi-view angle
CN113343069B (en) * 2021-06-10 2024-08-23 北京字节跳动网络技术有限公司 User information processing method, device, medium and electronic equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101303767A (en) * 2007-11-15 2008-11-12 复旦大学 Registration Method of Digital Silhouette Image Based on Adaptive Classification of Block Image Content

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8805038B2 (en) * 2011-06-30 2014-08-12 National Taiwan University Longitudinal image registration algorithm for infrared images for chemotherapy response monitoring and early detection of breast cancers

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101303767A (en) * 2007-11-15 2008-11-12 复旦大学 Registration Method of Digital Silhouette Image Based on Adaptive Classification of Block Image Content

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Discriminative Video Pattern Search for Efficient Action Detection";Junsong Yuan等;《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》;20110217;第33卷;第1728-1743页 *
"基于互信息的高性能遥感图像配准算法研究与实现";周静;《中国优秀硕士学位论文全文数据库 信息科技辑》;20111215(第S2期);第I140-1283页 *
"基于图像匹配的目标跟踪算法研究";李琼;《中国优秀硕士学位论文全文数据库 信息科技辑》;20130415(第4期);第I138-1086页 *

Also Published As

Publication number Publication date
CN104281572A (en) 2015-01-14

Similar Documents

Publication Publication Date Title
CN104281572B (en) A kind of target matching method and its system based on mutual information
Liu et al. Localization guided learning for pedestrian attribute recognition
Siméoni et al. Local features and visual words emerge in activations
Zheng et al. SIFT meets CNN: A decade survey of instance retrieval
Hinterstoisser et al. Gradient response maps for real-time detection of textureless objects
Tabia et al. Covariance-based descriptors for efficient 3D shape matching, retrieval, and classification
Sivic et al. Video Google: A text retrieval approach to object matching in videos
Lin et al. Discriminatively trained and-or graph models for object shape detection
Kim et al. Boundary preserving dense local regions
Lisanti et al. Group re-identification via unsupervised transfer of sparse features encoding
Zhang et al. Hierarchical building recognition
Lee et al. Place recognition using straight lines for vision-based SLAM
Zhao et al. Strategy for dynamic 3D depth data matching towards robust action retrieval
Candemir et al. Rsilc: rotation-and scale-invariant, line-based color-aware descriptor
Morales-González et al. Simple object recognition based on spatial relations and visual features represented using irregular pyramids
He et al. Deep feature embedding learning for person re-identification based on lifted structured loss
Tseng et al. Person retrieval in video surveillance using deep learning–based instance segmentation
Cai et al. Beyond photo-domain object recognition: Benchmarks for the cross-depiction problem
Liao et al. Multi-scale saliency features fusion model for person re-identification
Shen et al. Gestalt rule feature points
CN108170729A (en) Utilize the image search method of hypergraph fusion multi-modal information
Bhattacharjee et al. Query-adaptive small object search using object proposals and shape-aware descriptors
Che et al. Image retrieval by information fusion based on scalable vocabulary tree and robust Hausdorff distance
Cao et al. Learning multi-scale features and batch-normalized global features for person re-identification
Nguyen et al. Video instance search via spatial fusion of visual words and object proposals

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170609