CN112069929A - Unsupervised pedestrian re-identification method and device, electronic equipment and storage medium - Google Patents

Unsupervised pedestrian re-identification method and device, electronic equipment and storage medium

Info

Publication number
CN112069929A
CN112069929A (application CN202010842782.9A)
Authority
CN
China
Prior art keywords
training
pedestrian
prototype
samples
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010842782.9A
Other languages
Chinese (zh)
Other versions
CN112069929B (en)
Inventor
陆易
叶喜勇
王军
徐晓刚
何鹏飞
张文广
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202010842782.9A priority Critical patent/CN112069929B/en
Publication of CN112069929A publication Critical patent/CN112069929A/en
Application granted granted Critical
Publication of CN112069929B publication Critical patent/CN112069929B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an unsupervised pedestrian re-identification method, a device, an electronic device and a storage medium. The method comprises the following steps: pre-training a person re-identification model on a labeled source-domain dataset; extracting, with the model, the training features of the training set in the unlabeled target domain; dividing the target-domain training set into several clusters by an adaptive clustering method according to the training features, and assigning corresponding pseudo-labels; setting each cluster as a prototype, selecting from the prototype the samples whose distance to the prototype center is smaller than a set threshold, and retraining the model with the training features and pseudo-labels of these samples to obtain a person re-identification model with updated parameters; and inputting the query set and the candidate set of the target domain into the model, obtaining the test features of the query set and of the candidate set respectively, and selecting, according to the similarity of the test features, the candidate images that match the query images. The method effectively alleviates the domain-gap problem and improves the accuracy of cross-domain person re-identification.

Description

Unsupervised pedestrian re-identification method, device, electronic device and storage medium

Technical Field

The invention belongs to the technical field of artificial intelligence and computer vision, and in particular relates to an unsupervised pedestrian re-identification method, device, electronic device and storage medium.

Background

With the acceleration of urbanization, public safety has become a growing focus of public attention and demand. Many important public areas such as university campuses, theme parks, hospitals and streets are now widely covered by surveillance cameras, which creates good objective conditions for automated surveillance based on computer vision.

In recent years, person re-identification, as an important research direction in the field of video surveillance, has received increasing attention. Specifically, person re-identification refers to the use of computer vision techniques to determine whether a specific pedestrian appears in images or video sequences captured across different cameras and scenes. As an important complement to face recognition, it can recognize pedestrians from their clothing, posture, hairstyle and other cues, and can continuously track, across cameras, pedestrians whose faces cannot be clearly captured in real surveillance scenarios, enhancing the spatiotemporal continuity of the data. This helps save considerable manpower and material resources and is of significant research value.

Thanks to the rapid development of deep neural networks, person re-identification based on supervised deep learning can already achieve very high recognition rates on mainstream public datasets. On the public Market-1501 dataset, rank-1 accuracy (first-hit rate) has exceeded 95%, surpassing the recognition accuracy of the human eye. However, as an important vision task, person re-identification still faces many challenges. In real, open application scenarios, the distribution of pedestrian data varies greatly with season, clothing, illumination and camera. If a model trained on a labeled source-domain dataset is transferred directly to a new application scenario, a domain gap arises, so that a recognition model learned from the specific data of a specific scene does not generalize: in an open environment the model's generalization ability is poor, recognition performance drops significantly, and the person re-identification task may even fail entirely.

Summary of the Invention

The purpose of the embodiments of the present invention is to provide an unsupervised pedestrian re-identification method, device, electronic device and storage medium, so as to solve the domain-gap problem that arises when a model trained on a labeled source-domain dataset is directly transferred to a new application scenario.

To achieve the above object, the technical solution adopted by the present invention is as follows:

In a first aspect, an embodiment of the present invention provides an unsupervised pedestrian re-identification method, including:

a pre-training step for pre-training a deep person re-identification model by supervised learning on a labeled source-domain dataset;

a training feature extraction step for extracting, with the pre-trained deep person re-identification model, the training features of the training-set samples in the unlabeled target domain;

a dividing step for dividing the target-domain training-set samples into several clusters by an adaptive clustering method according to the training features, and assigning corresponding pseudo-labels;

a retraining step for setting each cluster as a prototype, with the samples in the cluster taken as the visible samples of the prototype, computing the distance between each visible sample and the prototype center, selecting the visible samples whose distance is smaller than a set threshold, filtering the training features according to the selected visible samples to obtain the filtered training features, and retraining the pre-trained deep person re-identification model with the filtered training features and the assigned pseudo-labels to obtain a deep person re-identification model with updated parameters;

an identification step for inputting the query set and the candidate (gallery) set of the target domain into the deep person re-identification model with updated parameters, obtaining the test features of the query-set images and of the candidate-set images respectively, computing their similarity in the metric space, and identifying, according to the similarity, the candidate images that match the query image.

Further, the method also includes:

an iterative convergence step for repeating the training feature extraction step, the dividing step and the retraining step, and updating the iteration weights until convergence.

Further, dividing the target-domain pedestrian images into several clusters by the adaptive clustering method according to the training features and assigning corresponding pseudo-labels includes:

computing the pairwise distances between the pedestrian images of the target-domain training set according to the training features to form a distance matrix;

performing, based on the distance matrix, unsupervised clustering of the pedestrian images of the target-domain training set with a density-based adaptive clustering algorithm to generate several clusters;

after the unsupervised clustering, assigning a corresponding pseudo-label to each sample in the target-domain training set.
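As an illustration of this dividing step, the following is a minimal sketch that clusters the extracted training features with scikit-learn's DBSCAN over a precomputed distance matrix and returns the cluster ids as pseudo-labels; the function name and the `eps`/`min_samples` values are assumptions for illustration, not values fixed by the patent.

```python
from scipy.spatial.distance import cdist
from sklearn.cluster import DBSCAN

def assign_pseudo_labels(features, eps, min_samples=4):
    """Cluster target-domain training features and return pseudo-labels.

    features: (N, D) array of training features from the pre-trained model.
    eps, min_samples: DBSCAN parameters (illustrative, not fixed by the patent).
    Returns an (N,) array of cluster ids; DBSCAN marks noise samples with -1.
    """
    dist = cdist(features, features)        # pairwise distance matrix
    return DBSCAN(eps=eps, min_samples=min_samples,
                  metric="precomputed").fit_predict(dist)
```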

Further, setting each cluster as a prototype, with the samples in the cluster as the visible samples of the prototype, computing the distance between the visible samples and the prototype center, selecting the visible samples whose distance is smaller than the set threshold, filtering the training features according to the selected visible samples to obtain the filtered training features, and retraining the pre-trained deep person re-identification model with the filtered training features and the assigned pseudo-labels to obtain the deep person re-identification model with updated parameters includes:

setting the $k$-th cluster as a prototype $P_k$, where $k = 1, \dots, K$ and $K$ is the number of clusters, taking each object in cluster $C_k$ as a visible sample of prototype $P_k$, and computing the corresponding prototype center $c_k$;

computing the distance between each visible sample in prototype $P_k$ and the prototype center $c_k$ to form a distance vector $d_k$, where the $i$-th element $d_{k,i}$ of $d_k$ denotes the distance between sample $x_i$ and the prototype center $c_k$;

selecting from $P_k$ the visible samples whose distance $d_{k,i}$ is smaller than the threshold, as follows:

$$S_k = \{\, x_i \in P_k \mid \mathbb{1}(d_{k,i} < \tau) = 1 \,\}$$

where $S_k$ denotes the visible samples selected from prototype $P_k$, $\tau$ is the set distance threshold, and $\mathbb{1}(\cdot)$ is the indicator function, which equals 1 if the condition holds and 0 otherwise;

filtering the training features according to the selected samples to obtain the filtered training features, and then training the pre-trained deep person re-identification model to obtain the deep person re-identification model with updated parameters.

Further, the prototype center $c_k$ is computed as:

$$c_k = \frac{1}{n_k} \sum_{i=1}^{n_k} x_i$$

where $n_k$ is the number of visible samples of prototype $P_k$, $x_i$ is the $i$-th sample of prototype $P_k$, and $x_i \in P_k$.

Further, inputting the query set and the candidate set of the target domain into the deep person re-identification model with updated parameters, obtaining their respective features, computing their similarity in the metric space, and identifying, according to the similarity, the candidate images that match the query image includes:

inputting the query set and the candidate set of the target domain into the deep person re-identification model, and obtaining the test features of the query-set images and of the candidate-set images respectively;

computing the Euclidean distance between the test features of the query set and those of the candidate set in the metric space to obtain a similarity matrix between them, and identifying, according to the similarity matrix, the candidate images that match the query image.

In a second aspect, an embodiment of the present invention provides an unsupervised pedestrian re-identification device, including:

a pre-training unit for pre-training a deep person re-identification model by supervised learning on a labeled source-domain dataset;

a training feature extraction unit for extracting, with the pre-trained deep person re-identification model, the training features of the training-set samples in the unlabeled target domain;

a dividing unit for dividing the target-domain training-set samples into several clusters by an adaptive clustering method according to the training features, and assigning corresponding pseudo-labels;

a retraining unit for setting each cluster as a prototype, computing the distance between the visible samples in the prototype and the prototype center, selecting the visible samples whose distance is smaller than the set threshold, filtering the training features according to the selected visible samples to obtain the filtered training features, and retraining the pre-trained deep person re-identification model with the filtered training features and the assigned pseudo-labels to obtain a deep person re-identification model with updated parameters;

an identification unit for inputting the query set and the candidate set of the target domain into the deep person re-identification model with updated parameters, obtaining the test features of the query-set images and of the candidate-set images respectively, computing their similarity in the metric space, and identifying, according to the similarity, the candidate images that match the query image.

Further, the device also includes:

an iterative convergence unit for repeatedly executing the training feature extraction unit, the dividing unit and the retraining unit, and updating the iteration weights until convergence.

In a third aspect, an embodiment of the present invention provides an electronic device, including:

one or more processors;

a memory for storing one or more programs;

when the one or more programs are executed by the one or more processors, the one or more processors implement the unsupervised pedestrian re-identification method described in the first aspect.

In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements the unsupervised pedestrian re-identification method described in the first aspect.

In the unsupervised pedestrian re-identification method and device of the embodiments of the present invention, an adaptive clustering method is used to assign pseudo-labels to the unlabeled target domain. Because adaptive clustering does not require the number of clusters to be set in advance, it not only overcomes the difficulty that the number of identity classes in the target domain is unknown, but also fully mines the visual information within the target domain, providing a richer context for cross-domain transfer. Since the pseudo-labels assigned by adaptive clustering cannot represent the true labels, they contain a certain amount of noise; the embodiments therefore adopt prototype-based selection, deliberately choosing trustworthy samples to participate in training and discarding the noisy samples that could degrade model performance, which improves recognition accuracy.

Brief Description of the Drawings

The accompanying drawings described here are provided for a further understanding of the present invention and constitute a part of it; the exemplary embodiments of the present invention and their descriptions are used to explain the invention and do not constitute an improper limitation of it. In the drawings:

Fig. 1 is a flowchart of an unsupervised pedestrian re-identification method provided by an embodiment of the present invention;

Fig. 2 is a block diagram of an unsupervised pedestrian re-identification device provided by an embodiment of the present invention.

Detailed Description of the Embodiments

To make the purposes, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions and specific operation processes in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings, but the scope of protection of the present invention is not limited to the following embodiments.

Embodiment 1:

As shown in Fig. 1, an embodiment of the present invention discloses an unsupervised pedestrian re-identification method, which includes the following steps:

a pre-training step S101 for pre-training a deep person re-identification model by supervised learning on a labeled source-domain dataset;

In this step, the deep person re-identification model $M$ adopts a deep residual neural network and is trained in a supervised manner on the labeled source-domain dataset, so that $M$ obtains relatively robust performance on the source domain.
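A minimal sketch of such a pre-training setup, assuming a PyTorch ResNet-50 backbone with an identity-classification head trained by cross-entropy on the source identities; the backbone depth, head and optimizer settings are illustrative assumptions rather than details fixed by the patent.

```python
import torch
import torch.nn as nn
from torchvision import models

class ReIDModel(nn.Module):
    """Deep residual re-identification model M: backbone + identity classifier."""
    def __init__(self, num_source_ids):
        super().__init__()
        backbone = models.resnet50(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the fc layer
        self.classifier = nn.Linear(2048, num_source_ids)

    def forward(self, x):
        f = self.features(x).flatten(1)   # (B, 2048) pooled embedding
        return f, self.classifier(f)      # features and ID logits

model = ReIDModel(num_source_ids=751)     # set to the number of source-domain identities
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def pretrain_step(images, labels):
    """One supervised training step on a labeled source-domain batch."""
    _, logits = model(images)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```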

a training feature extraction step S103 for extracting, with the pre-trained deep person re-identification model, the training features $F$ of the training-set samples in the unlabeled target domain;
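Continuing the sketch above, the training features of the unlabeled target training set can be extracted in evaluation mode; the data loader and L2 normalization are assumptions.

```python
import torch

@torch.no_grad()
def extract_features(model, loader, device="cuda"):
    """Run the pre-trained model over the target training set and stack the features F."""
    model.eval().to(device)
    feats = []
    for images, _ in loader:   # any placeholder labels are ignored; the target domain is unlabeled
        f, _ = model(images.to(device))
        feats.append(torch.nn.functional.normalize(f, dim=1).cpu())
    return torch.cat(feats).numpy()        # (N, 2048) feature matrix
```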

a dividing step S105 for dividing the target-domain training-set samples into several clusters by an adaptive clustering method according to the training features, and assigning corresponding pseudo-labels;

Specifically, this includes the following sub-steps:

Step S1052: based on the existing distance matrix $D$, perform unsupervised clustering of the pedestrian images of the target-domain training set with the density-based adaptive clustering algorithm DBSCAN. The specific steps are as follows:

First, initialize the core-object sample set $\Omega = \varnothing$, the cluster index $k = 0$, the set of unvisited samples $\Gamma = F$, and the cluster partition $\mathcal{C} = \varnothing$;

Next, for each sample $x_j$ in the sample set $F$, find its $\varepsilon$-neighborhood sub-sample set $N_\varepsilon(x_j)$ according to the distance matrix $D$; if $|N_\varepsilon(x_j)| \geq \text{MinPts}$, add $x_j$ to the core-object sample set, $\Omega = \Omega \cup \{x_j\}$, where $\varepsilon$ denotes the clustering scan radius and MinPts denotes the minimum number of samples per cluster. If $\Omega = \varnothing$ after the scan, the procedure ends. Here MinPts is set to a fixed value, and the scan radius $\varepsilon$ is computed as follows: unfold the upper-right triangle of the distance matrix $D$ to obtain the distances between all distinct sample pairs, sort them in ascending order, and take the $t$-top-th distance as $\varepsilon$.

Next, randomly select a core object $o$ from the core-object sample set $\Omega$, initialize the current cluster's core-object queue $\Omega_{cur} = \{o\}$, the cluster index $k = k + 1$, the current cluster sample set $C_k = \{o\}$, and update the set of unvisited samples $\Gamma = \Gamma \setminus \{o\}$;

Then take a core object $o'$ out of $\Omega_{cur}$, find its $\varepsilon$-neighborhood sub-sample set $N_\varepsilon(o')$ according to the distance matrix $D$, let $\Delta = N_\varepsilon(o') \cap \Gamma$, and update $C_k = C_k \cup \Delta$, $\Gamma = \Gamma \setminus \Delta$, $\Omega_{cur} = \Omega_{cur} \cup (\Delta \cap \Omega) \setminus \{o'\}$. If the current cluster's core-object queue $\Omega_{cur} = \varnothing$, the current cluster $C_k$ is complete; update the cluster partition $\mathcal{C} = \{C_1, \dots, C_k\}$ and the core-object set $\Omega = \Omega \setminus C_k$. If $\Omega = \varnothing$, the procedure ends;

Finally, output the cluster partition $\mathcal{C} = \{C_1, C_2, \dots, C_K\}$, where $K$ is updated to the total number of clusters obtained.
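The clustering procedure above can also be sketched directly over the precomputed distance matrix, as a compact reading of step S1052; samples reached by no cluster (noise) are simply left out of the returned partition.

```python
import numpy as np

def dbscan_from_distance(dist, eps, min_pts):
    """DBSCAN over a precomputed distance matrix D; returns clusters as index arrays."""
    n = dist.shape[0]
    neighbors = [np.where(dist[j] <= eps)[0] for j in range(n)]      # eps-neighborhoods
    omega = {j for j in range(n) if len(neighbors[j]) >= min_pts}    # core-object set
    unvisited = set(range(n))
    clusters = []
    while omega:
        o = next(iter(omega))                       # pick a core object
        queue, cluster = {o}, {o}
        unvisited.discard(o)
        while queue:                                # expand the current cluster C_k
            o2 = queue.pop()
            delta = set(neighbors[o2]) & unvisited  # unvisited neighbors of o2
            cluster |= delta
            unvisited -= delta
            queue |= delta & omega                  # only core objects keep expanding
        omega -= cluster                            # remove this cluster's core objects
        clusters.append(np.array(sorted(cluster)))
    return clusters
```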

The density-based DBSCAN clustering method does not require the number of clusters to be determined in advance, which better matches the real scenario of unsupervised person re-identification in an open environment.

Step S1053: after the adaptive clustering, assign a corresponding pseudo-label to each sample of the target-domain training set.
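A small sketch of this assignment, in which the cluster indices are reused directly as pseudo identity labels; treating noise points (cluster id -1) as excluded from retraining is an assumption, since the patent does not state how they are handled.

```python
import numpy as np

def pseudo_labels_from_clusters(cluster_ids):
    """Map cluster ids to consecutive pseudo-labels 0..K-1; noise (-1) is masked out."""
    valid = cluster_ids >= 0
    labels = np.full_like(cluster_ids, -1)
    _, labels[valid] = np.unique(cluster_ids[valid], return_inverse=True)
    return labels, valid
```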

a retraining step S107 for setting each cluster as a prototype, computing the distance between the visible samples in the prototype and the prototype center, selecting the visible samples whose distance is smaller than the set threshold, filtering the training features according to the selected visible samples to obtain the filtered training features, and retraining the pre-trained deep person re-identification model with the filtered training features and the assigned pseudo-labels to obtain a deep person re-identification model with updated parameters;

Specifically, this includes the following sub-steps:

Step S1071: set each cluster $C_k$ in the cluster partition $\mathcal{C}$ as a prototype $P_k$, with each object in cluster $C_k$ taken as a visible sample of prototype $P_k$. The corresponding prototype center is computed as:

$$c_k = \frac{1}{n_k} \sum_{i=1}^{n_k} x_i$$

where $n_k$ is the number of visible samples of prototype $P_k$.

Step S1072: compute the distance between each visible sample in prototype $P_k$ and the prototype center $c_k$, forming a distance vector $d_k$; each element $d_{k,i}$ of $d_k$ denotes the distance between visible sample $x_i$ and the prototype center $c_k$, and $n_k$ is the number of visible samples of prototype $P_k$. $d_{k,i}$ is computed as:

$$d_{k,i} = \lVert x_i - c_k \rVert_2$$

Step S1073: following the principle of self-paced learning, set a threshold and automatically select, for training, the target-domain training samples that are close enough to the center of the prototype they belong to; the selection is performed as follows:

$$S_k = \{\, x_i \in P_k \mid \mathbb{1}(d_{k,i} < \tau) = 1 \,\}$$

where $\tau$ is the set distance threshold and $\mathbb{1}(\cdot)$ is the indicator function, which equals 1 if the condition holds and 0 otherwise.

Step S1074: train with the selected samples $S$. The training features $F$ obtained in step S103 are filtered accordingly to obtain the selected features $F_S$, and the triplet loss $L_{tri}$ is minimized. It is computed as follows:

Let $f_a$ be an element of $F_S$ with pseudo-label $y_a$; take its nearest negative sample $f_n$ with pseudo-label $y_n$ and its farthest positive sample $f_p$ with pseudo-label $y_p$, and write:

$$d_{a,p} = \lVert f_a - f_p \rVert_2, \qquad d_{a,n} = \lVert f_a - f_n \rVert_2$$

Let $m$ be the margin (boundary value); the metric loss $L_{tri}$ is then computed as:

$$L_{tri} = \sum_{f_a \in F_S} \big[\, d_{a,p} - d_{a,n} + m \,\big]_+$$

where $[\cdot]_+$ is the hinge function.
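A PyTorch sketch of this hardest-positive / hardest-negative triplet loss over a batch of selected features and their pseudo-labels; averaging over the batch and the margin value are illustrative choices.

```python
import torch

def hard_triplet_loss(feats, pseudo_labels, margin=0.3):
    """Triplet loss with the farthest positive and nearest negative per anchor."""
    dist = torch.cdist(feats, feats)                       # pairwise Euclidean distances
    same = pseudo_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)
    d_ap = dist.masked_fill(~same, float("-inf")).max(dim=1).values  # farthest positive
    d_an = dist.masked_fill(same, float("inf")).min(dim=1).values    # nearest negative
    return torch.clamp(d_ap - d_an + margin, min=0).mean()           # hinge [.]_+
```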

an identification step S109 for inputting the query set and the candidate set of the target domain into the deep person re-identification model with updated parameters, obtaining the test features of the query-set images and of the candidate-set images respectively, computing their similarity in the metric space, and identifying, according to the similarity, the candidate images that match the query image.

Specifically, this includes the following sub-steps:

Step S1091: take the query set $Q = \{q_1, \dots, q_{N_q}\}$ and the candidate set $G = \{g_1, \dots, g_{N_g}\}$, where $N_q$ is the number of query-set elements and $N_g$ is the number of candidate-set elements. The elements of $Q$ and $G$ are all RGB images resized to 256×128×3; they are fed into the deep person re-identification model obtained after step S107 to obtain the corresponding feature sets:

$$F_Q = \{f_{q_1}, \dots, f_{q_{N_q}}\}, \qquad F_G = \{f_{g_1}, \dots, f_{g_{N_g}}\}$$

Step S1092: compute the Euclidean distances between $F_Q$ and $F_G$ to build a distance matrix $D_{QG}$; for each query image, sort the candidate images by distance, set the number of returned results $s$, and take the $s$ candidate images with the smallest distances as the retrieval candidate list for that query image; the accuracy of the results is evaluated with mAP and Rank@1.
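A minimal sketch of this retrieval step, assuming NumPy feature matrices from the extractor above; the Rank-1 helper ignores the camera-filtering conventions of specific benchmarks.

```python
import numpy as np
from scipy.spatial.distance import cdist

def retrieve(query_feats, gallery_feats, s=10):
    """Rank candidate (gallery) images for each query by Euclidean feature distance."""
    dist_qg = cdist(query_feats, gallery_feats)   # (N_q, N_g) distance matrix
    order = np.argsort(dist_qg, axis=1)           # ascending distance per query
    return order[:, :s]                           # top-s candidate indices per query

def rank1_accuracy(order, query_ids, gallery_ids):
    """Fraction of queries whose top-ranked candidate shares the query identity."""
    return float((gallery_ids[np.asarray(order)[:, 0]] == query_ids).mean())
```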

To improve the robustness and accuracy of the person re-identification model, the method further includes:

an iterative convergence step S108 for repeating the training feature extraction step S103, the dividing step S105 and the retraining step S107, updating the weights of the iterated model $M$ until convergence.
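A sketch of this iterative loop, tying together the helpers from the earlier sketches (`extract_features`, `scan_radius`, `select_reliable_samples`); `retrain_fn` is a hypothetical caller-supplied routine that fine-tunes the model with the triplet loss on the kept samples, and the fixed round count and parameter defaults are placeholders for a convergence test and tuned values.

```python
from scipy.spatial.distance import cdist
from sklearn.cluster import DBSCAN

def adapt_to_target_domain(model, target_loader, retrain_fn, t_top, tau,
                           min_samples=4, max_rounds=20):
    """Repeat feature extraction, adaptive clustering, sample selection and retraining."""
    for _ in range(max_rounds):
        feats = extract_features(model, target_loader)                   # step S103
        dist = cdist(feats, feats)
        labels = DBSCAN(eps=scan_radius(dist, t_top), min_samples=min_samples,
                        metric="precomputed").fit_predict(dist)          # step S105
        keep = select_reliable_samples(feats, labels, tau)               # prototype selection
        retrain_fn(model, labels, keep)                                  # step S107 retraining
    return model
```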

Table 1 below shows the recognition accuracy obtained with the method provided by the above embodiment of the present invention; the baseline methods used for comparison are listed from top to bottom, and it can be seen that the recognition performance of the above embodiment is clearly improved.

Table 1: Recognition accuracy results


In summary, the embodiment of the present invention discloses an unsupervised pedestrian re-identification method that exploits the strengths of existing deep learning, extracting features with a deep residual neural network, and builds an unsupervised person re-identification model based on adaptive clustering and prototype-based selection. The adaptive clustering method automatically assigns pseudo-labels to target-domain pedestrians; it not only overcomes the difficulty that the number of identity classes in the target domain is unknown, but also fully mines the visual information within the target domain, providing a richer context for cross-domain transfer. Since the pseudo-labels assigned by adaptive clustering cannot represent the true labels, the target-domain training samples contain a certain amount of noise; the prototype-based selection automatically picks trustworthy samples for the target-domain training process and discards the unreliable samples that could degrade model performance. In summary, the present invention effectively alleviates the domain-gap problem, improves the accuracy of cross-domain transfer for person re-identification, and has good robustness and broad applicability.

Embodiment 2:

As shown in Fig. 2, this embodiment provides an unsupervised pedestrian re-identification device, which is the virtual device corresponding to the unsupervised pedestrian re-identification method provided in Embodiment 1 and has the functional modules and beneficial effects corresponding to executing that method. The device includes:

a pre-training unit 901 for pre-training a deep person re-identification model by supervised learning on a labeled source-domain dataset;

a training feature extraction unit 903 for extracting, with the pre-trained deep person re-identification model, the training features of the training-set samples in the unlabeled target domain;

a dividing unit 905 for dividing the target-domain training-set samples into several clusters by an adaptive clustering method according to the training features, and assigning corresponding pseudo-labels;

a retraining unit 907 for setting each cluster as a prototype, computing the distance between the visible samples in the prototype and the prototype center, selecting the visible samples whose distance is smaller than the set threshold, filtering the training features according to the selected visible samples to obtain the filtered training features, and retraining the pre-trained deep person re-identification model with the filtered training features and the assigned pseudo-labels to obtain a deep person re-identification model with updated parameters;

an identification unit 909 for inputting the query set and the candidate set of the target domain into the deep person re-identification model with updated parameters, obtaining the test features of the query-set images and of the candidate-set images respectively, computing their similarity in the metric space, and identifying, according to the similarity, the candidate images that match the query image.

Further, the device also includes:

an iterative convergence unit 908 for repeatedly executing the training feature extraction unit, the dividing unit and the retraining unit, and updating the iteration weights until convergence.

Embodiment 3:

This embodiment provides an electronic device, including:

one or more processors;

a memory for storing one or more programs;

when the one or more programs are executed by the one or more processors, the one or more processors implement the unsupervised pedestrian re-identification method described in Embodiment 1.

Embodiment 4:

This embodiment provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements the unsupervised pedestrian re-identification method described in Embodiment 1.

The serial numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.

In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the relevant descriptions of other embodiments.

In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units may be a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, units or modules, and may be electrical or in other forms.

The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disk.

The above descriptions are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within its scope of protection.

Claims (10)

1. An unsupervised pedestrian re-identification method, characterized by comprising:

a pre-training step for pre-training a deep person re-identification model by supervised learning on a labeled source-domain dataset;

a training feature extraction step for extracting, with the pre-trained deep person re-identification model, the training features of the training-set samples in the unlabeled target domain;

a dividing step for dividing the target-domain training-set samples into several clusters by an adaptive clustering method according to the training features, and assigning corresponding pseudo-labels;

a retraining step for setting each cluster as a prototype, with the samples in the cluster taken as the visible samples of the prototype, computing the distance between each visible sample and the prototype center, selecting the visible samples whose distance is smaller than a set threshold, filtering the training features according to the selected visible samples to obtain the filtered training features, and retraining the pre-trained deep person re-identification model with the filtered training features and the assigned pseudo-labels to obtain a deep person re-identification model with updated parameters;

an identification step for inputting the query set and the candidate set of the target domain into the deep person re-identification model with updated parameters, obtaining the test features of the query-set images and of the candidate-set images respectively, computing their similarity in the metric space, and identifying, according to the similarity, the candidate images that match the query image.

2. The unsupervised pedestrian re-identification method according to claim 1, characterized by further comprising:

an iterative convergence step for repeating the training feature extraction step, the dividing step and the retraining step, and updating the iteration weights until convergence.

3. The unsupervised pedestrian re-identification method according to claim 1, characterized in that dividing the target-domain pedestrian images into several clusters by the adaptive clustering method according to the training features and assigning corresponding pseudo-labels comprises:

computing the pairwise distances between the pedestrian images of the target-domain training set according to the training features to form a distance matrix;

performing, based on the distance matrix, unsupervised clustering of the pedestrian images of the target-domain training set with a density-based adaptive clustering algorithm to generate several clusters;

after the unsupervised clustering, assigning a corresponding pseudo-label to each sample in the target-domain training set.

4. The unsupervised pedestrian re-identification method according to claim 1, characterized in that setting each cluster as a prototype, with the samples in the cluster as the visible samples of the prototype, computing the distance between the visible samples in the prototype and the prototype center, selecting the visible samples whose distance is smaller than the set threshold, filtering the training features according to the selected visible samples to obtain the filtered training features, and retraining the pre-trained deep person re-identification model with the filtered training features and the assigned pseudo-labels to obtain the deep person re-identification model with updated parameters comprises:

setting the $k$-th cluster $C_k$ as a prototype $P_k$, where $k = 1, \dots, K$ and $K$ is the number of clusters, taking each object in cluster $C_k$ as a visible sample of prototype $P_k$, and computing the corresponding prototype center $c_k$;

computing the distance between each visible sample in prototype $P_k$ and the prototype center $c_k$ to form a distance vector $d_k$, where the $i$-th element $d_{k,i}$ of $d_k$ denotes the distance between sample $x_i$ and the prototype center $c_k$;

selecting from $P_k$ the visible samples whose distance $d_{k,i}$ is smaller than the threshold, as follows:

$$S_k = \{\, x_i \in P_k \mid \mathbb{1}(d_{k,i} < \tau) = 1 \,\}$$

where $S_k$ denotes the visible samples selected from prototype $P_k$, $\tau$ is the set distance threshold, and $\mathbb{1}(\cdot)$ is the indicator function, which equals 1 if the condition holds and 0 otherwise;

filtering the training features according to the selected samples to obtain the filtered training features, and then training the pre-trained deep person re-identification model to obtain the deep person re-identification model with updated parameters.

5. The unsupervised pedestrian re-identification method according to claim 4, characterized in that the prototype center $c_k$ is computed as:

$$c_k = \frac{1}{n_k} \sum_{i=1}^{n_k} x_i$$

where $n_k$ is the number of visible samples of prototype $P_k$, $x_i$ is the $i$-th sample of prototype $P_k$, and $x_i \in P_k$.
6. The unsupervised pedestrian re-identification method according to claim 1, characterized in that inputting the query-set images and the candidate-set images of the target domain into the deep person re-identification model with updated parameters, obtaining their respective features, computing their similarity in the metric space, and selecting, according to the similarity, the images that meet the requirements from the candidate set comprises:

inputting the query set and the candidate set of the target domain into the deep person re-identification model, and obtaining the test features of the query-set images and of the candidate-set images respectively;

computing the Euclidean distance between the test features of the query set and those of the candidate set in the metric space to obtain a similarity matrix between them, and identifying, according to the similarity matrix, the candidate images that match the query image.

7. An unsupervised pedestrian re-identification device, characterized by comprising:

a pre-training unit for pre-training a deep person re-identification model by supervised learning on a labeled source-domain dataset;

a training feature extraction unit for extracting, with the pre-trained deep person re-identification model, the training features of the training-set samples in the unlabeled target domain;

a dividing unit for dividing the target-domain training-set samples into several clusters by an adaptive clustering method according to the training features, and assigning corresponding pseudo-labels;

a retraining unit for setting each cluster as a prototype, computing the distance between the visible samples in the prototype and the prototype center, selecting the visible samples whose distance is smaller than the set threshold, filtering the training features according to the selected visible samples to obtain the filtered training features, and retraining the pre-trained deep person re-identification model with the filtered training features and the assigned pseudo-labels to obtain a deep person re-identification model with updated parameters;

an identification unit for inputting the query set and the candidate set of the target domain into the deep person re-identification model with updated parameters, obtaining the test features of the query-set images and of the candidate-set images respectively, computing their similarity in the metric space, and identifying, according to the similarity, the candidate images that match the query image.
8. The unsupervised pedestrian re-identification device according to claim 7, further comprising: an iterative convergence unit for repeatedly executing the training feature extraction unit, the dividing unit and the retraining unit, and updating the iteration weights until convergence. 9. An electronic device, comprising: one or more processors; a memory for storing one or more programs; wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the unsupervised pedestrian re-identification method according to any one of claims 1-6. 10. A computer-readable storage medium on which a computer program is stored, wherein, when the program is executed by a processor, the unsupervised pedestrian re-identification method according to any one of claims 1-6 is implemented.
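Again only as a hedged illustration of the retraining unit of claim 7 and the iterative convergence unit of claim 8: the sketch below uses DBSCAN as a stand-in for the unspecified adaptive clustering, takes the cluster mean as the prototype center, stops when the pseudo labels no longer change, and assumes `model.extract_features`, `model.retrain`, and an array-like `target_images` as hypothetical interfaces; none of these choices are prescribed by the claims.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def filter_by_prototype_distance(feats, labels, threshold):
    """Keep only samples lying within `threshold` of their cluster
    (prototype) center; returns a boolean mask over the training set."""
    keep = np.zeros(len(feats), dtype=bool)
    for lab in np.unique(labels):
        if lab < 0:                                # DBSCAN marks noise as -1
            continue
        idx = np.where(labels == lab)[0]
        center = feats[idx].mean(axis=0)           # prototype center = cluster mean (assumption)
        dists = np.linalg.norm(feats[idx] - center, axis=1)
        keep[idx[dists < threshold]] = True
    return keep

def train_until_convergence(model, target_images, max_iters=20, threshold=0.5):
    """Extraction -> clustering -> filtering -> retraining cycle, repeated
    until the pseudo labels stop changing (a simple stand-in criterion)."""
    prev_labels = None
    for _ in range(max_iters):
        feats = model.extract_features(target_images)               # hypothetical interface
        labels = DBSCAN(eps=0.6, min_samples=4).fit_predict(feats)  # pseudo labels
        keep = filter_by_prototype_distance(feats, labels, threshold)
        model.retrain(target_images[keep], labels[keep])            # hypothetical interface
        if prev_labels is not None and np.array_equal(labels, prev_labels):
            break                                                   # pseudo labels stable
        prev_labels = labels
    return model
```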
CN202010842782.9A 2020-08-20 2020-08-20 Unsupervised pedestrian re-identification method and device, electronic equipment and storage medium Active CN112069929B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010842782.9A CN112069929B (en) 2020-08-20 2020-08-20 Unsupervised pedestrian re-identification method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010842782.9A CN112069929B (en) 2020-08-20 2020-08-20 Unsupervised pedestrian re-identification method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112069929A true CN112069929A (en) 2020-12-11
CN112069929B CN112069929B (en) 2024-01-05

Family

ID=73662357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010842782.9A Active CN112069929B (en) 2020-08-20 2020-08-20 Unsupervised pedestrian re-identification method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112069929B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200226421A1 (en) * 2019-01-15 2020-07-16 Naver Corporation Training and using a convolutional neural network for person re-identification
CN109948561A (en) * 2019-03-25 2019-06-28 广东石油化工学院 Method and system for unsupervised image and video person re-identification based on transfer network
CN111126360A (en) * 2019-11-15 2020-05-08 西安电子科技大学 Cross-domain person re-identification method based on unsupervised joint multi-loss model
CN111242064A (en) * 2020-01-17 2020-06-05 山东师范大学 Pedestrian re-identification method and system based on camera style migration and single marking

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
单纯; 王敏: "Semi-supervised single-sample deep pedestrian re-identification method", 计算机系统应用 (Computer Systems &amp; Applications), no. 01 *
张晓伟; 吕明强; 李慧: "Cross-domain pedestrian re-identification based on local semantic feature invariance", 北京航空航天大学学报 (Journal of Beijing University of Aeronautics and Astronautics), no. 09 *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112507901B (en) * 2020-12-14 2022-05-24 华南理工大学 Unsupervised pedestrian re-identification method based on pseudo tag self-correction
CN112507901A (en) * 2020-12-14 2021-03-16 华南理工大学 Unsupervised pedestrian re-identification method based on pseudo tag self-correction
CN112597871A (en) * 2020-12-18 2021-04-02 中山大学 Unsupervised vehicle re-identification method and system based on two-stage clustering and storage medium
CN112597871B (en) * 2020-12-18 2023-07-18 中山大学 Unsupervised vehicle re-identification method, system and storage medium based on two-stage clustering
CN112766218A (en) * 2021-01-30 2021-05-07 上海工程技术大学 Cross-domain pedestrian re-identification method and device based on asymmetric joint teaching network
CN112861825A (en) * 2021-04-07 2021-05-28 北京百度网讯科技有限公司 Model training method, pedestrian re-identification method, device and electronic equipment
CN112861825B (en) * 2021-04-07 2023-07-04 北京百度网讯科技有限公司 Model training method, pedestrian re-recognition method, device and electronic equipment
WO2022213717A1 (en) * 2021-04-07 2022-10-13 北京百度网讯科技有限公司 Model training method and apparatus, person re-identification method and apparatus, and electronic device
CN113536928A (en) * 2021-06-15 2021-10-22 清华大学 An efficient method and device for unsupervised person re-identification
CN113536928B (en) * 2021-06-15 2024-04-19 清华大学 Efficient unsupervised pedestrian re-identification method and device
CN113590852A (en) * 2021-06-30 2021-11-02 北京百度网讯科技有限公司 Training method of multi-modal recognition model, multi-modal recognition method and device
CN113553970A (en) * 2021-07-29 2021-10-26 广联达科技股份有限公司 Pedestrian re-identification method, device, equipment and readable storage medium
CN113822262B (en) * 2021-11-25 2022-04-15 之江实验室 A Pedestrian Re-identification Method Based on Unsupervised Learning
CN113822262A (en) * 2021-11-25 2021-12-21 之江实验室 Pedestrian re-identification method based on unsupervised learning
CN114399724A (en) * 2021-12-03 2022-04-26 清华大学 Pedestrian re-identification method, device, electronic device and storage medium
CN114399724B (en) * 2021-12-03 2024-06-28 清华大学 Pedestrian re-recognition method and device, electronic equipment and storage medium
CN114299480A (en) * 2021-12-22 2022-04-08 杭州海康威视数字技术股份有限公司 A target detection model training method, target detection method and device
WO2023115911A1 (en) * 2021-12-24 2023-06-29 上海商汤智能科技有限公司 Object re-identification method and apparatus, electronic device, storage medium, and computer program product
CN114550091A (en) * 2022-02-24 2022-05-27 以萨技术股份有限公司 Unsupervised pedestrian re-identification method and unsupervised pedestrian re-identification device based on local features
CN115273148A (en) * 2022-08-03 2022-11-01 北京百度网讯科技有限公司 Pedestrian re-recognition model training method and device, electronic equipment and storage medium
CN115273148B (en) * 2022-08-03 2023-09-05 北京百度网讯科技有限公司 Pedestrian re-recognition model training method and device, electronic equipment and storage medium
CN116030502A (en) * 2023-03-30 2023-04-28 之江实验室 Pedestrian re-recognition method and device based on unsupervised learning
CN116912535A (en) * 2023-09-08 2023-10-20 中国海洋大学 Unsupervised target re-identification method, device and medium based on similarity screening
CN116912535B (en) * 2023-09-08 2023-11-28 中国海洋大学 An unsupervised target re-identification method, device and medium based on similarity screening

Also Published As

Publication number Publication date
CN112069929B (en) 2024-01-05

Similar Documents

Publication Publication Date Title
CN112069929A (en) Unsupervised pedestrian re-identification method and device, electronic equipment and storage medium
CN109948561B (en) Method and system for unsupervised image and video pedestrian re-identification based on transfer network
CN108960080B (en) Face recognition method based on active defense against image adversarial attack
CN110263697A (en) Pedestrian based on unsupervised learning recognition methods, device and medium again
CN111611880B (en) Efficient pedestrian re-recognition method based on neural network unsupervised contrast learning
CN104915351B (en) Picture sort method and terminal
CN112819065B (en) Unsupervised pedestrian sample mining method and unsupervised pedestrian sample mining system based on multi-clustering information
CN113642547B (en) A method and system for unsupervised domain-adaptive person re-identification based on density clustering
CN111178208A (en) Pedestrian detection method, device and medium based on deep learning
CN108491766B (en) End-to-end crowd counting method based on depth decision forest
CN109711366A (en) A Pedestrian Re-identification Method Based on Group Information Loss Function
CN112633071B (en) Data Domain Adaptation Method for Person Re-ID Based on Data Style Decoupling Content Transfer
CN113221663A (en) Real-time sign language intelligent identification method, device and system
Wu et al. Decentralised learning from independent multi-domain labels for person re-identification
CN115641613A (en) An unsupervised cross-domain person re-identification method based on clustering and multi-scale learning
CN110929679A (en) An unsupervised adaptive person re-identification method based on GAN
CN106559645A (en) Based on the monitoring method of video camera, system and device
CN110414376A (en) Update method, face recognition cameras and the server of human face recognition model
CN113076963B (en) Image recognition method and device and computer readable storage medium
CN113052150B (en) Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium
Sheeba et al. Hybrid features-enabled dragon deep belief neural network for activity recognition
CN111291780A (en) Cross-domain network training and image recognition method
CN118628813A (en) Passive domain adaptive image recognition method based on transferable semantic knowledge
CN112860936A (en) Visual pedestrian re-identification method based on sparse graph similarity migration
CN111950352A (en) Hierarchical face clustering method, system, device and storage medium

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant