CN107122795A - A pedestrian re-identification method based on kernelized features and random subspace integration - Google Patents

A pedestrian re-identification method based on kernelized features and random subspace integration Download PDF

Info

Publication number
CN107122795A
Authority
CN
China
Prior art keywords
pedestrian
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710212251.XA
Other languages
Chinese (zh)
Other versions
CN107122795B (en)
Inventor
赵才荣
陈亦鹏
王学宽
卫志华
苗夺谦
田元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN201710212251.XA priority Critical patent/CN107122795B/en
Publication of CN107122795A publication Critical patent/CN107122795A/en
Application granted granted Critical
Publication of CN107122795B publication Critical patent/CN107122795B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention relates to a pedestrian re-identification method based on kernelized features and random subspace integration, comprising the following steps: S1, obtaining a training sample set and a test sample set of pedestrian images and determining the kernel function between two samples; S2, converting the original features of the two sample sets into kernelized features; S3, in the kernelized feature space of the training sample set, randomly selecting multiple different subspaces and, in each subspace, computing the covariance matrices of the kernelized feature differences of different-pedestrian and same-pedestrian image pairs and their inverses, so as to obtain the distribution functions of the kernelized feature differences of image pairs; S4, in each subspace, computing the probability that a sample pair shows the same pedestrian and the probability that it shows different pedestrians, and taking the ratio of the two probabilities as the distance between the samples; S5, integrating the distances to obtain the final distance between each sample pair. Compared with the prior art, the present invention has good pedestrian re-identification ability, is applicable to a variety of features, and has strong robustness.

Description

A Pedestrian Re-identification Method Based on Kernelized Features and Random Subspace Integration

Technical Field

The present invention relates to feature extraction and distance metric learning methods in the intelligent analysis of surveillance video, and in particular to a pedestrian re-identification method based on kernelized features and random subspace integration.

Background Art

Pedestrian re-identification refers to the problem of matching images of pedestrians captured from different camera views in a multi-camera system. It provides key support for analyzing pedestrian identity, behavior, and other aspects, and has become a key component of intelligent video surveillance.

The main methods in the field of pedestrian re-identification can be divided into the following two categories: 1) pedestrian re-identification methods based on feature representation; 2) pedestrian re-identification methods based on metric learning.

In feature-representation-based pedestrian re-identification methods, low-level visual features are the most commonly used. Typical low-level visual features include color histograms and texture. A color histogram describes the color distribution of the whole image or of a small region by accumulating color statistics over the image; it is relatively robust to viewpoint changes but is easily affected by brightness changes such as illumination, so it is usually extracted in a specific color space. Texture features describe the structural information of the whole image or of a small region and are a good complement to the color information described by color histograms. Most pedestrian re-identification algorithms are built on low-level visual features. However, when humans perform the re-identification task themselves, they rarely rely only on low-level visual cues; they judge whether two images belong to the same pedestrian more by semantic attributes such as hairstyle, the type of shirt, the type of coat, and the shoes. Compared with low-level visual features, methods based on semantic attributes have natural advantages: semantic attributes are more robust to the differences in pedestrian appearance across different surveillance videos, since the semantic description of the same pedestrian usually stays unchanged from one video to another; semantic attributes are closer to human understanding, so the results obtained by attribute-based methods better match human needs; and attribute-based methods are more convenient for human interaction.

After feature representation, how to measure the distance between images of different pedestrians is another key problem in pedestrian re-identification. When computing the similarity of feature vectors, feature-based methods usually adopt classic distance functions such as the Euclidean, cosine, and geodesic distances. These classic distance functions do not take the characteristics of the samples into account and therefore often perform poorly. In recent years, many works have adopted distance metric learning: by training on labeled samples, a distance function that better fits the sample characteristics is obtained, thereby improving performance. These methods work by learning a distance function of Mahalanobis form. Among them, the pedestrian re-identification method based on the simple and direct strategy (KISS) is among the best in terms of performance. However, this method is built on the theoretical assumption that the sample distribution is Gaussian; in reality the samples not only fail to follow a Gaussian distribution perfectly but may even deviate from it severely, which degrades performance. In addition, in practice the sample size is often far smaller than the feature dimension, which makes the computation of the Mahalanobis distance in metric learning difficult or even infeasible.

Summary of the Invention

The purpose of the present invention is to overcome the above defects of the prior art by providing a pedestrian re-identification method based on kernelized features and random subspace integration, so that the extracted features follow a Gaussian distribution more closely, the conflict between sample size and feature dimensionality is reconciled, and the SSS (small sample size) problem is avoided, thereby improving performance.

The purpose of the present invention can be achieved through the following technical solution:

A pedestrian re-identification method based on kernelized features and random subspace integration, comprising the following steps:

S1: obtain a training sample set and a test sample set of pedestrian images and determine the kernel function between two samples; the output of the kernel function is a one-dimensional real number. In each sample set, the same pedestrian has multiple images; in the training sample set the correspondence between each pedestrian and its images is known, while in the test sample set it is unknown.

S2: convert the original features of the two sample sets into kernelized features; the dimension of the kernelized features equals the number of samples in the training sample set.

S3: in the kernelized feature space of the training sample set, randomly select multiple different subspaces; in each subspace, compute the covariance matrix of the kernelized feature differences of image pairs of different pedestrians and its inverse, and the covariance matrix of the kernelized feature differences of image pairs of the same pedestrian and its inverse, so as to obtain the distribution functions of the kernelized feature differences of image pairs. The purpose of using the training samples is to learn suitable distribution functions from them.

S4: in each subspace (the subspaces selected in step S3), compute the differences of the kernelized features of sample pairs in the test sample set; based on the difference covariance matrices, their inverses, and the distribution functions, compute the probability that a sample pair shows the same pedestrian and the probability that it shows different pedestrians, and take the ratio of the two probabilities as the distance between the two samples.

S5: integrate the distances computed in the different subspaces to obtain the final distance between each sample pair in the test sample set, which is used for pedestrian identification; the smaller the final distance, the more likely the sample pair shows the same pedestrian. In this way, the images in the test sample set that belong to the same pedestrian can be identified.

In step S1, the kernel function is a Gaussian kernel function, and the distribution functions obtained in step S3 are Gaussian distribution functions.

In step S1, the kernel function is $k(x_i, x_j) = \exp\left(-\dfrac{\|x_i - x_j\|^2}{2\sigma^2}\right)$, where $\sigma = 1$ and $x_i$, $x_j$ denote the $i$-th and $j$-th training samples, respectively.

In step S2, the specific process of converting the original features into kernelized features includes:

converting the training set $X$ into the kernelized features

$\tilde{X} = [\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_m] = [K(x_1, X), K(x_2, X), \ldots, K(x_m, X)] \in R^{m \times m}$,

where $m$ is the number of training samples, $X \in R^{d \times m}$, $d$ is the sample feature dimension, and $\tilde{x}_i = K(x_i, X) = [k(x_i, x_1), k(x_i, x_2), \ldots, k(x_i, x_m)]^T \in R^{m \times 1}$;

and converting the test set $Z$ into the kernelized features

$\tilde{Z} = [\tilde{z}_1, \tilde{z}_2, \ldots, \tilde{z}_n] = [K(z_1, X), K(z_2, X), \ldots, K(z_n, X)] \in R^{m \times n}$,

where $n$ is the number of test samples and $\tilde{z}_i = K(z_i, X) = [k(z_i, x_1), k(z_i, x_2), \ldots, k(z_i, x_m)]^T \in R^{m \times 1}$.
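As an illustration of steps S1 and S2, the kernelization could be sketched as follows (a minimal NumPy sketch assuming the Gaussian kernel with $\sigma = 1$; the function and variable names are chosen here for illustration and are not part of the patent):

```python
import numpy as np

def gaussian_kernel_matrix(A, B, sigma=1.0):
    """Pairwise Gaussian kernel k(a, b) = exp(-||a - b||^2 / (2 * sigma^2)).

    A: d x p matrix (p samples as columns), B: d x q matrix (q samples as columns).
    Returns the p x q matrix of kernel values between columns of A and columns of B.
    """
    sq_dists = (np.sum(A ** 2, axis=0)[:, None]
                + np.sum(B ** 2, axis=0)[None, :]
                - 2.0 * A.T @ B)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def kernelize(X, Z, sigma=1.0):
    """Convert original features into kernelized features.

    X: d x m training features, Z: d x n test features (columns are samples).
    Column i of X_tilde is K(x_i, X); column i of Z_tilde is K(z_i, X).
    """
    X_tilde = gaussian_kernel_matrix(X, X, sigma)   # m x m
    Z_tilde = gaussian_kernel_matrix(X, Z, sigma)   # m x n
    return X_tilde, Z_tilde
```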

In step S3, the covariance matrix $\Sigma_0$ of the kernelized feature differences of image pairs of different pedestrians is computed as

$\Sigma_0 = \dfrac{1}{N_0} \sum_{y_{ij}=0} (\tilde{x}_i - \tilde{x}_j)(\tilde{x}_i - \tilde{x}_j)^T$,

where $y_{ij} = 0$ denotes all sample pairs that do not belong to the same pedestrian and $N_0$ is the total number of such pairs;

the covariance matrix $\Sigma_1$ of the kernelized feature differences of image pairs of the same pedestrian is computed as

$\Sigma_1 = \dfrac{1}{N_1} \sum_{y_{ij}=1} (\tilde{x}_i - \tilde{x}_j)(\tilde{x}_i - \tilde{x}_j)^T$,

where $y_{ij} = 1$ denotes all sample pairs that belong to the same pedestrian and $N_1$ is the total number of such pairs.

In step S4, the distance between two samples is computed as

$d(z_i, z_j) = (\tilde{z}_i - \tilde{z}_j)^T \left(\Sigma_1^{-1} - \Sigma_0^{-1}\right) (\tilde{z}_i - \tilde{z}_j)$.
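For illustration, the ratio-based distance of step S4 could be computed as in the following sketch (a minimal NumPy sketch; the function name is illustrative, and the inverse covariance matrices are assumed to have been obtained as in step S3):

```python
import numpy as np

def pair_distance(z_i, z_j, inv_sigma1, inv_sigma0):
    """d(z_i, z_j) = (z_i - z_j)^T (Sigma1^{-1} - Sigma0^{-1}) (z_i - z_j).

    z_i, z_j: kernelized feature vectors of two test samples (within one subspace).
    inv_sigma1 / inv_sigma0: inverse covariance matrices of the feature differences
    of same-pedestrian / different-pedestrian training pairs.
    A smaller value means the pair is more likely to show the same pedestrian.
    """
    delta = z_i - z_j
    return float(delta @ (inv_sigma1 - inv_sigma0) @ delta)
```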

Compared with the prior art, the present invention has the following advantages:

(1) By converting the original features into kernelized features, the distribution of differences between feature pairs follows a Gaussian distribution more closely, which is the basic theoretical assumption of distance metric learning based on the simple and direct strategy;

(2) In the nonlinear space, kernelized features usually have stronger discriminative power;

(3) Complex feature vectors are projected into several subspaces of smaller dimension in which distances are computed separately; replacing the traditional distance computation with randomly selected subspace projections noticeably improves the performance of the algorithm, optimizes the sample distance computation, and reduces the cost of matrix operations;

(4) The dimensions of the randomly selected subspaces are far smaller than the sample size, which reconciles the conflict that in real applications the sample size is far smaller than the feature dimension, making the distance computation more accurate and efficient.

Brief Description of the Drawings

Fig. 1 is a flowchart of the method of the present invention;

Figs. 2(a)-2(d) show the probability distributions of the feature differences between sample pairs before and after using the method of this embodiment, where: 2(a) shows the difference distribution of same-pedestrian sample pairs under the original features; 2(b) shows the difference distribution of different-pedestrian sample pairs under the original features; 2(c) shows the difference distribution of same-pedestrian sample pairs under the kernelized features; 2(d) shows the difference distribution of different-pedestrian sample pairs under the kernelized features.

Figs. 3(a) and 3(b) show the CMC curves of the method of this embodiment on the VIPeR (P=316) public pedestrian re-identification dataset with different parameters and different features, where: 3(a) shows the CMC curves with different parameters under the LOMO feature; 3(b) shows the CMC curves with different parameters under the kCCA feature.

Detailed Description of the Embodiments

The present invention is described in detail below with reference to the accompanying drawings and a specific embodiment. This embodiment is carried out on the basis of the technical solution of the present invention and gives a detailed implementation and a specific operation process, but the protection scope of the present invention is not limited to the following embodiment.

Embodiment

A pedestrian re-identification method based on kernelized features and random subspace integration, comprising the following steps:

Step 1: Convert the original features into a kernelized feature representation, as described below:

Obtain the training set $X \in R^{d \times m}$ and the test set $Z \in R^{d \times n}$ of pedestrian images, where $d$ is the sample feature dimension, $m$ is the number of training samples, and $n$ is the number of test samples; let $x_i$ denote the $i$-th training sample and $z_i$ the $i$-th test sample. Using the kernel function $k(x_i, x_j) = \exp\left(-\dfrac{\|x_i - x_j\|^2}{2\sigma^2}\right)$ with $\sigma = 1$, convert the training set $X$ into the kernelized features $\tilde{X}$ and the test set $Z$ into the kernelized features $\tilde{Z}$. The specific conversion is $\tilde{X} = [K(x_1, X), \ldots, K(x_m, X)] \in R^{m \times m}$ and $\tilde{Z} = [K(z_1, X), \ldots, K(z_n, X)] \in R^{m \times n}$.

Step 2: In the kernelized feature space, randomly select $L$ different subspaces. Specifically, after Step 1 the dimension of the kernelized feature space equals the number of training samples, i.e. $m$. From these $m$ dimensions, randomly select $L$ different subspaces $D_k$ ($k = 1, 2, \ldots, L$), each with a prescribed subspace dimension.
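One possible way to draw the $L$ random subspaces of Step 2 is sketched below (illustrative only; the subspace dimension $r$ is a parameter of the method, and sampling the indices without replacement within each subspace is an assumption of this sketch):

```python
import numpy as np

def draw_random_subspaces(m, L, r, seed=0):
    """Randomly select L subspaces, each given by r distinct dimension indices
    out of the m dimensions of the kernelized feature space."""
    rng = np.random.default_rng(seed)
    return [rng.choice(m, size=r, replace=False) for _ in range(L)]
```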

Step 3: In each of the different subspaces, compute the covariance matrix $\Sigma_0$ of the feature differences between image pairs of different pedestrians and its inverse $\Sigma_0^{-1}$, and compute the covariance matrix $\Sigma_1$ of the feature differences between image pairs of the same pedestrian and its inverse $\Sigma_1^{-1}$. Specifically, in each subspace, $\Sigma_0$ is computed as

$\Sigma_0 = \dfrac{1}{N_0} \sum_{y_{ij}=0} (\tilde{x}_i - \tilde{x}_j)(\tilde{x}_i - \tilde{x}_j)^T$,

where $y_{ij} = 0$ denotes all sample pairs that do not belong to the same pedestrian and $N_0$ is the total number of such pairs. The covariance matrix $\Sigma_1$ of the feature differences between image pairs of the same pedestrian is computed as

$\Sigma_1 = \dfrac{1}{N_1} \sum_{y_{ij}=1} (\tilde{x}_i - \tilde{x}_j)(\tilde{x}_i - \tilde{x}_j)^T$,

where $y_{ij} = 1$ denotes all sample pairs that belong to the same pedestrian and $N_1$ is the total number of such pairs.
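Step 3 could be realized for a single subspace roughly as follows (a sketch only; the use of pseudo-inverses to guard against singular covariance matrices is an implementation assumption and not part of the patent text):

```python
import numpy as np

def difference_covariances(X_tilde_sub, labels):
    """Covariance matrices of kernelized-feature differences in one subspace.

    X_tilde_sub: r x m kernelized training features restricted to the selected
                 subspace dimensions (columns are samples).
    labels:      length-m array of pedestrian identities (y_ij = 1 iff equal).
    Returns (Sigma0, Sigma1, inv_Sigma0, inv_Sigma1).
    """
    r, m = X_tilde_sub.shape
    sigma0 = np.zeros((r, r))
    sigma1 = np.zeros((r, r))
    n0 = n1 = 0
    for i in range(m):
        for j in range(i + 1, m):
            delta = X_tilde_sub[:, i] - X_tilde_sub[:, j]
            outer = np.outer(delta, delta)
            if labels[i] == labels[j]:   # y_ij = 1: same pedestrian
                sigma1 += outer
                n1 += 1
            else:                        # y_ij = 0: different pedestrians
                sigma0 += outer
                n0 += 1
    sigma0 /= max(n0, 1)
    sigma1 /= max(n1, 1)
    return sigma0, sigma1, np.linalg.pinv(sigma0), np.linalg.pinv(sigma1)
```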

Step 4: In each of the different subspaces, based on the Gaussian distribution function and the two inverse matrices $\Sigma_0^{-1}$ and $\Sigma_1^{-1}$, compute the probability that the difference between two features corresponds to the same pedestrian and the probability that it corresponds to different pedestrians, and regard the ratio of the two probabilities as the distance between the samples. Specifically, let $\Delta_{ij} = \tilde{z}_i - \tilde{z}_j$ denote the difference between the $i$-th and the $j$-th test sample in the subspace, let $H_0$ denote the hypothesis that the two test samples belong to different pedestrians, and let $H_1$ denote the hypothesis that they belong to the same pedestrian. When the differences between sample pairs follow a Gaussian distribution, the probabilities of the difference $\Delta_{ij}$ under the two hypotheses are

$p(\Delta_{ij} \mid H_0) \propto \exp\left(-\tfrac{1}{2}\Delta_{ij}^T \Sigma_0^{-1} \Delta_{ij}\right)$ and $p(\Delta_{ij} \mid H_1) \propto \exp\left(-\tfrac{1}{2}\Delta_{ij}^T \Sigma_1^{-1} \Delta_{ij}\right)$.

Let $\delta(\Delta_{ij})$ denote the logarithm of the ratio of the probabilities under the two hypotheses:

$\delta(\Delta_{ij}) = \log \dfrac{p(\Delta_{ij} \mid H_0)}{p(\Delta_{ij} \mid H_1)}$.

According to the Bayes formula and the Gaussian form of the two densities, this can be transformed into

$\delta(\Delta_{ij}) = \tfrac{1}{2}\Delta_{ij}^T\left(\Sigma_1^{-1} - \Sigma_0^{-1}\right)\Delta_{ij} + \text{const}$.

Removing the constant gives the distance between the $i$-th and the $j$-th test sample in the $k$-th subspace,

$d_k(z_i, z_j) = \left(\tilde{z}_i - \tilde{z}_j\right)^T\left(\Sigma_1^{-1} - \Sigma_0^{-1}\right)\left(\tilde{z}_i - \tilde{z}_j\right)$.

Step 5: Integrate the distances computed in the $L$ different subspaces to obtain the final distance. Specifically, the $L$ distances obtained in Step 4 are integrated by weighted averaging; the final distance can be expressed as

$d(z_i, z_j) = \sum_{k=1}^{L} w_k\, d_k(z_i, z_j)$,

where $w_k$ is the weight assigned to the $k$-th subspace.
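Putting Steps 2 to 5 together, the subspace integration could look roughly like the sketch below (equal weights $w_k = 1/L$ are assumed here for the averaging, and the helper difference_covariances from the sketch above is reused; all names are illustrative):

```python
import numpy as np

def ensemble_distance_matrix(X_tilde, Z_tilde, labels, subspaces):
    """Final n x n distance matrix between test samples, averaged over subspaces.

    X_tilde:   m x m kernelized training features.
    Z_tilde:   m x n kernelized test features.
    labels:    length-m array of training identities.
    subspaces: list of index arrays, one per random subspace.
    """
    n = Z_tilde.shape[1]
    total = np.zeros((n, n))
    for dims in subspaces:
        _, _, inv_s0, inv_s1 = difference_covariances(X_tilde[dims, :], labels)
        M = inv_s1 - inv_s0
        Z_sub = Z_tilde[dims, :]
        for i in range(n):
            for j in range(n):
                delta = Z_sub[:, i] - Z_sub[:, j]
                total[i, j] += delta @ M @ delta
    return total / len(subspaces)   # equal-weight average over the L subspaces
```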

Fig. 1 shows the flowchart of this embodiment; the specific implementation is as follows:

1) Determine the kernel function;

2) Convert the original features of the training samples into kernelized features;

3) Convert the original features of the test samples into kernelized features;

4) To judge whether $z_i$ and $z_j$ belong to the same pedestrian, let $H_0$ denote that they are dissimilar, i.e. do not belong to the same pedestrian, and $H_1$ that they are similar, i.e. belong to the same pedestrian;

5) Randomly select $L$ subspaces $D_k$ ($k = 1, 2, \ldots, L$) in the kernelized feature space;

6) In each subspace, compute the covariance matrix $\Sigma_0$ of the feature differences between image pairs of different pedestrians and its inverse $\Sigma_0^{-1}$;

7) In each subspace, compute the covariance matrix $\Sigma_1$ of the feature differences between image pairs of the same pedestrian and its inverse $\Sigma_1^{-1}$;

8) In each subspace, compute the probability distribution functions of the feature difference $\Delta_{ij}$ under the two hypotheses, and take the logarithm of the probability ratio as the distance between samples;

9) In each subspace, convert the probability-ratio distance into the Mahalanobis-form distance $d_k(z_i, z_j)$;

10) Integrate the distances computed in the $L$ subspaces to obtain the final sample distance.

Figs. 2(a)-2(d) show the probability distributions of the feature differences between sample pairs before and after using the method of this embodiment; the histograms show the actual probability distributions, and the curves show Gaussian distributions plotted from the data variance. The four original features compared are LOMO, kCCA, SCNCD, and ELF18, all of which are widely used in pedestrian re-identification, where: 2(a) shows the difference distribution of same-pedestrian sample pairs under the original features; 2(b) shows the difference distribution of different-pedestrian sample pairs under the original features; 2(c) shows the difference distribution of same-pedestrian sample pairs under the kernelized features; 2(d) shows the difference distribution of different-pedestrian sample pairs under the kernelized features.

Figs. 3(a) and 3(b) show the rank-1 matching rates of the method of this embodiment on the VIPeR (P=316) public pedestrian re-identification dataset with different parameters and different features, compared with the traditional regularization method, where: 3(a) shows the rank-1 matching rates with different parameters under the LOMO feature; 3(b) shows the rank-1 matching rates with different parameters under the kCCA feature.

Table 1

Table 1 compares the performance of the method of this embodiment with other metric-learning-based algorithms on the VIPeR (P=316) public pedestrian re-identification dataset.

Table 2

Table 2 compares the performance of the method of this embodiment with other metric-learning-based algorithms on the PRID 450S (P=225) public pedestrian re-identification dataset.

Table 3

Method        | KRKISS | NFST | MLAPG | XQDA | MFA  | kLFDA | KISS | LFDA
Training time | 5.04   | 2.48 | 40.9  | 3.86 | 2.58 | 2.74  | 7.41 | 229.3

Table 3 compares the training time overhead of the method of this embodiment with other metric-learning-based algorithms.

Claims (6)

1. A pedestrian re-identification method based on kernelized features and random subspace integration, characterized by comprising the following steps:
S1, obtaining a training sample set and a test sample set of pedestrian images and determining a kernel function between two samples, wherein in each sample set the same pedestrian has multiple images;
S2, converting the original features of the two sample sets into kernelized features respectively, wherein the dimension of the kernelized features is the number of samples in the training sample set;
S3, in the kernelized feature space of the training sample set, randomly selecting a plurality of different subspaces, and in each subspace computing the covariance matrix of the kernelized feature differences of image pairs of different pedestrians and its inverse, and the covariance matrix of the kernelized feature differences of image pairs of the same pedestrian and its inverse, so as to obtain the distribution functions of the kernelized feature differences of image pairs;
S4, in each subspace, computing the differences of the kernelized features of sample pairs in the test sample set, computing, based on the difference covariance matrices, their inverses, and the distribution functions, the probability that a sample pair shows the same pedestrian and the probability that it shows different pedestrians, and taking the ratio of the two probabilities as the distance between the two samples;
S5, integrating the distances computed in the different subspaces to obtain the final distance between each sample pair in the test sample set for pedestrian identification, wherein the smaller the final distance, the more likely the sample pair shows the same pedestrian.
2. The pedestrian re-identification method based on kernelized features and random subspace integration according to claim 1, characterized in that, in step S1, the kernel function is a Gaussian kernel function, and the distribution functions obtained in step S3 are Gaussian distribution functions.
3. The pedestrian re-identification method based on kernelized features and random subspace integration according to claim 1 or 2, characterized in that, in step S1, the kernel function is $k(x_i, x_j) = \exp\left(-\dfrac{\|x_i - x_j\|^2}{2\sigma^2}\right)$, where $\sigma = 1$ and $x_i$, $x_j$ denote the $i$-th and $j$-th training samples, respectively.
4. The pedestrian re-identification method based on kernelized features and random subspace integration according to claim 3, characterized in that, in step S2, the specific process of converting the original features into kernelized features includes:
converting the training set $X$ into the kernelized features
$\tilde{X} = [\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_m] = [K(x_1, X), K(x_2, X), \ldots, K(x_m, X)] \in R^{m \times m}$,
where $m$ is the number of training samples, $X \in R^{d \times m}$, $d$ is the sample feature dimension, and
$\tilde{x}_i = K(x_i, X) = [k(x_i, x_1), k(x_i, x_2), \ldots, k(x_i, x_m)]^T \in R^{m \times 1}$;
converting the test set $Z$ into the kernelized features
$\tilde{Z} = [\tilde{z}_1, \tilde{z}_2, \ldots, \tilde{z}_n] = [K(z_1, X), K(z_2, X), \ldots, K(z_n, X)] \in R^{m \times n}$,
where $n$ is the number of test samples and $\tilde{z}_i = K(z_i, X) = [k(z_i, x_1), k(z_i, x_2), \ldots, k(z_i, x_m)]^T \in R^{m \times 1}$.
5. The pedestrian re-identification method based on kernelized features and random subspace integration according to claim 4, characterized in that, in step S3, the covariance matrix $\Sigma_0$ of the kernelized feature differences of image pairs of different pedestrians is computed as
$\Sigma_0 = \dfrac{1}{N_0} \sum_{y_{ij}=0} (\tilde{x}_i - \tilde{x}_j)(\tilde{x}_i - \tilde{x}_j)^T$,
where $y_{ij} = 0$ denotes all sample pairs that do not belong to the same pedestrian and $N_0$ is the total number of such pairs;
and the covariance matrix $\Sigma_1$ of the kernelized feature differences of image pairs of the same pedestrian is computed as
$\Sigma_1 = \dfrac{1}{N_1} \sum_{y_{ij}=1} (\tilde{x}_i - \tilde{x}_j)(\tilde{x}_i - \tilde{x}_j)^T$,
where $y_{ij} = 1$ denotes all sample pairs that belong to the same pedestrian and $N_1$ is the total number of such pairs.
6. The pedestrian re-identification method based on kernelized features and random subspace integration according to claim 5, characterized in that, in step S4, the distance between two samples is computed as
$d(z_i, z_j) = (\tilde{z}_i - \tilde{z}_j)^T \left(\Sigma_1^{-1} - \Sigma_0^{-1}\right) (\tilde{z}_i - \tilde{z}_j)$.
CN201710212251.XA 2017-04-01 2017-04-01 A Pedestrian Re-Identification Method Based on Kernelized Feature and Random Subspace Integration Expired - Fee Related CN107122795B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710212251.XA CN107122795B (en) 2017-04-01 2017-04-01 A Pedestrian Re-Identification Method Based on Kernelized Feature and Random Subspace Integration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710212251.XA CN107122795B (en) 2017-04-01 2017-04-01 A Pedestrian Re-Identification Method Based on Kernelized Feature and Random Subspace Integration

Publications (2)

Publication Number Publication Date
CN107122795A true CN107122795A (en) 2017-09-01
CN107122795B CN107122795B (en) 2020-06-02

Family

ID=59724629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710212251.XA Expired - Fee Related CN107122795B (en) 2017-04-01 2017-04-01 A Pedestrian Re-Identification Method Based on Kernelized Feature and Random Subspace Integration

Country Status (1)

Country Link
CN (1) CN107122795B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909049A (en) * 2017-11-29 2018-04-13 广州大学 Pedestrian re-identification method based on least squares discriminant analysis distance learning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103295242A (en) * 2013-06-18 2013-09-11 南京信息工程大学 Multi-feature united sparse represented target tracking method
CN103885049A (en) * 2014-03-06 2014-06-25 西安电子科技大学 Meter-wave radar low elevation estimating method based on minimum redundancy linear sparse submatrix
CN104008380A (en) * 2014-06-16 2014-08-27 武汉大学 Pedestrian detection method and system based on salient regions
CN104408705A (en) * 2014-09-23 2015-03-11 西安电子科技大学 Anomaly detection method of hyperspectral image
CN104616319A (en) * 2015-01-28 2015-05-13 南京信息工程大学 Multi-feature selection target tracking method based on support vector machine
WO2016026370A1 (en) * 2014-08-22 2016-02-25 Zhejiang Shenghui Lighting Co., Ltd. High-speed automatic multi-object tracking method and system with kernelized correlation filters

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103295242A (en) * 2013-06-18 2013-09-11 南京信息工程大学 Multi-feature united sparse represented target tracking method
CN103885049A (en) * 2014-03-06 2014-06-25 西安电子科技大学 Meter-wave radar low elevation estimating method based on minimum redundancy linear sparse submatrix
CN104008380A (en) * 2014-06-16 2014-08-27 武汉大学 Pedestrian detection method and system based on salient regions
WO2016026370A1 (en) * 2014-08-22 2016-02-25 Zhejiang Shenghui Lighting Co., Ltd. High-speed automatic multi-object tracking method and system with kernelized correlation filters
CN104408705A (en) * 2014-09-23 2015-03-11 西安电子科技大学 Anomaly detection method of hyperspectral image
CN104616319A (en) * 2015-01-28 2015-05-13 南京信息工程大学 Multi-feature selection target tracking method based on support vector machine

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909049A (en) * 2017-11-29 2018-04-13 Pedestrian re-identification method based on least squares discriminant analysis distance learning
CN107909049B (en) * 2017-11-29 2020-07-31 广州大学 Pedestrian re-identification method based on least square discriminant analysis distance learning

Also Published As

Publication number Publication date
CN107122795B (en) 2020-06-02

Similar Documents

Publication Publication Date Title
CN106778604B (en) Pedestrian re-identification method based on matching convolutional neural network
CN104966085B (en) A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features
CN109101938B (en) Multi-label age estimation method based on convolutional neural network
CN107977661B (en) Region-of-interest detection method based on FCN and low-rank sparse decomposition
CN111709311A (en) A pedestrian re-identification method based on multi-scale convolutional feature fusion
CN109410184B (en) Live broadcast pornographic image detection method based on dense confrontation network semi-supervised learning
CN104077613A (en) Crowd density estimation method based on cascaded multilevel convolution neural network
CN115984850A (en) Lightweight remote sensing image semantic segmentation method based on improved Deeplabv3+
CN109684922A (en) A kind of recognition methods based on the multi-model of convolutional neural networks to finished product dish
CN110706294A (en) Method for detecting color difference degree of colored textile fabric
CN105447532A (en) Identity authentication method and device
CN105809672A Super pixels and structure constraint based image's multiple targets synchronous segmentation method
CN101930547A (en) An Automatic Classification Method of Remote Sensing Images Based on Object-Oriented Unsupervised Classification
CN107977660A (en) Region of interest area detecting method based on background priori and foreground node
CN109543546B (en) Gait age estimation method based on depth sequence distribution regression
CN105046714A (en) Unsupervised image segmentation method based on super pixels and target discovering mechanism
CN109657715A (en) A kind of semantic segmentation method, apparatus, equipment and medium
CN105740915A (en) Cooperation segmentation method fusing perception information
CN116740384B (en) Intelligent control method and system of floor washing machine
CN114581761A (en) Remote sensing image recognition method, device, equipment and computer readable storage medium
CN107644203B (en) A Feature Point Detection Method for Shape Adaptive Classification
CN110458064B (en) Combining data-driven and knowledge-driven low-altitude target detection and recognition methods
CN110795995A (en) Data processing method, device and computer readable storage medium
CN102156879A (en) Human target matching method based on weighted terrestrial motion distance
CN104778478A (en) Handwritten numeral identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200602