CN112069929A - Unsupervised pedestrian re-identification method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN112069929A (application number CN202010842782.9A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Abstract
Description
Technical Field

The present invention belongs to the technical field of artificial intelligence and computer vision, and in particular relates to an unsupervised pedestrian re-identification method, apparatus, electronic device, and storage medium.

Background

With the acceleration of urbanization, public safety has become a growing focus of public concern. Surveillance cameras now widely cover many important public areas such as university campuses, theme parks, hospitals, and streets, creating favorable objective conditions for automated surveillance based on computer vision technology.

In recent years, pedestrian re-identification, an important research direction in the field of video surveillance, has received increasing attention. Specifically, pedestrian re-identification refers to using computer vision technology to determine whether a specific pedestrian appears in an image or video sequence across cameras and across scenes. As an important complement to face recognition, it can recognize pedestrians from their clothing, posture, hairstyle, and other cues, and continuously track, across cameras, pedestrians whose faces cannot be captured clearly in real surveillance scenarios. This enhances the spatiotemporal continuity of the data, helps save a great deal of manpower and material resources, and is of significant research value.

Thanks to the rapid development of deep neural networks, pedestrian re-identification based on supervised deep learning can already achieve very high recognition rates on mainstream public datasets. On the public Market-1501 dataset, rank-1 accuracy (the first-hit rate) has exceeded 95%, surpassing human recognition accuracy. However, as an important vision task, pedestrian re-identification still faces many challenges. In real, open application scenarios, the distribution of pedestrian data varies greatly with season, clothing, lighting, and camera. If a model trained on a labeled source-domain dataset is transferred directly to a new application scenario, a domain gap arises: a recognition model learned from the specific data of a specific scene does not generalize, so in open environments the model generalizes poorly, recognition performance drops significantly, and the re-identification task may even fail entirely.
Summary of the Invention

The purpose of the embodiments of the present invention is to provide an unsupervised pedestrian re-identification method, apparatus, electronic device, and storage medium, so as to solve the domain-gap problem that arises when a model trained on a labeled source-domain dataset is transferred directly to a new application scenario.

In order to achieve the above object, the technical solution adopted by the present invention is as follows:

In a first aspect, an embodiment of the present invention provides an unsupervised pedestrian re-identification method, including:

a pre-training step, for pre-training a deep pedestrian re-identification model with supervised learning on a labeled source-domain dataset;

a training-feature extraction step, for extracting training features of the training-set samples in the unlabeled target domain using the pre-trained deep pedestrian re-identification model;

a dividing step, for dividing the target-domain training-set samples into several clusters by an adaptive clustering method according to the training features, and assigning corresponding pseudo-labels;

a retraining step, for designating each cluster as a prototype whose samples are the prototype's visible samples, computing the distance from each visible sample to the prototype center, selecting the visible samples whose distance is below a set threshold, filtering the training features according to the selected visible samples, and retraining the pre-trained deep pedestrian re-identification model with the filtered training features and the assigned pseudo-labels to obtain a deep pedestrian re-identification model with updated parameters;

an identification step, for inputting the query set and the candidate set of the target domain into the updated deep pedestrian re-identification model to obtain test features of the query images and of the candidate images respectively, computing their similarity in the metric space, and identifying, according to the similarity, the candidate images that match the query image.

Further, the method also includes:

an iterative convergence step, for repeating the training-feature extraction step, the dividing step, and the retraining step, updating the model weights at each iteration until convergence.
Further, dividing the target-domain pedestrian images into several clusters by the adaptive clustering method according to the training features and assigning corresponding pseudo-labels includes:

computing the pairwise distances between the pedestrian images of the target-domain training set according to the training features, forming a distance matrix;

performing, based on the distance matrix, unsupervised clustering of the target-domain training-set pedestrian images with a density-based adaptive clustering algorithm, generating several clusters;

after the unsupervised clustering, assigning a corresponding pseudo-label to each sample in the target-domain training set.
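The distance-matrix computation described above can be sketched in plain NumPy; the function name and the toy features below are illustrative and not part of the patent:

```python
import numpy as np

def pairwise_distance_matrix(features: np.ndarray) -> np.ndarray:
    """(N, d) training features -> (N, N) matrix of pairwise Euclidean distances."""
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b; clip guards against tiny negatives
    sq = np.sum(features ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    return np.sqrt(np.clip(d2, 0.0, None))

feats = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])  # toy 2-D "features"
D = pairwise_distance_matrix(feats)
```

The resulting symmetric matrix D is what the clustering step consumes.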
Further, designating each cluster as a prototype whose samples are the prototype's visible samples, computing the distance from each visible sample to the prototype center, selecting the visible samples whose distance is below a set threshold, filtering the training features according to the selected visible samples, and retraining the pre-trained deep pedestrian re-identification model with the filtered training features and the assigned pseudo-labels to obtain a deep pedestrian re-identification model with updated parameters, includes:

designating the k-th cluster C_k as a prototype P_k, where k = 1, …, K and K is the number of clusters, designating each object in cluster C_k as a visible sample of prototype P_k, and computing the corresponding prototype center c_k;

computing the distance between each visible sample x_i^k of prototype P_k and the prototype center c_k, forming a distance vector d_k whose i-th element represents the distance between x_i^k and c_k;

selecting from P_k the visible samples whose distance is below the threshold, in the following way:

S_k = { x_i^k | 1[d_k[i] < τ] = 1 }, where S_k denotes the visible samples selected from prototype P_k, τ is the set distance threshold, and 1[·] is the indicator function, equal to 1 if its argument holds and 0 otherwise;

filtering the training features according to the selected samples S_k to obtain the filtered training features, and then training the pre-trained deep pedestrian re-identification model to obtain a deep pedestrian re-identification model with updated parameters.

Further, the prototype center c_k is computed as follows:

c_k = (1/n_k) Σ_{i=1}^{n_k} x_i^k, where n_k is the number of visible samples of prototype P_k and x_i^k is its i-th sample, i = 1, …, n_k.
Further, inputting the query set and the candidate set of the target domain into the updated deep pedestrian re-identification model, obtaining their respective features, computing their similarity in the metric space, and identifying, according to the similarity, the candidate images that match the query image from the candidate set, includes:

inputting the query set and the candidate set of the target domain into the deep pedestrian re-identification model, obtaining the test features of the query-set images and the test features of the candidate-set images respectively;

computing the Euclidean distances between the test features of the query set and the test features of the candidate set in the metric space to obtain a similarity matrix of the two, and identifying from the candidate set, according to the similarity matrix, the candidate images that match the query image.
In a second aspect, an embodiment of the present invention provides an unsupervised pedestrian re-identification apparatus, including:

a pre-training unit, for pre-training a deep pedestrian re-identification model with supervised learning on a labeled source-domain dataset;

a training-feature extraction unit, for extracting training features of the training-set samples in the unlabeled target domain using the pre-trained deep pedestrian re-identification model;

a dividing unit, for dividing the target-domain training-set samples into several clusters by an adaptive clustering method according to the training features, and assigning corresponding pseudo-labels;

a retraining unit, for designating each cluster as a prototype, computing the distance from each visible sample in the prototype to the prototype center, selecting the visible samples whose distance is below a set threshold, filtering the training features according to the selected visible samples, and retraining the pre-trained deep pedestrian re-identification model with the filtered training features and the assigned pseudo-labels to obtain a deep pedestrian re-identification model with updated parameters;

an identification unit, for inputting the query set and the candidate set of the target domain into the updated deep pedestrian re-identification model to obtain test features of the query images and of the candidate images respectively, computing their similarity in the metric space, and identifying, according to the similarity, the candidate images that match the query image.

Further, the apparatus also includes:

an iterative convergence unit, for repeatedly executing the training-feature extraction unit, the dividing unit, and the retraining unit, updating the iteration weights until convergence.

In a third aspect, an embodiment of the present invention provides an electronic device, including:

one or more processors;

a memory for storing one or more programs;

when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the unsupervised pedestrian re-identification method according to the first aspect.

In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the unsupervised pedestrian re-identification method according to the first aspect is implemented.
In the unsupervised pedestrian re-identification method and apparatus of the embodiments of the present invention, an adaptive clustering method is used to assign pseudo-labels to the unlabeled target domain. Since adaptive clustering does not require the number of clusters to be set in advance, it not only overcomes the difficulty that the number of identity classes in the target domain is unknown, but also fully mines the visual information within the target domain, providing a richer context for cross-domain transfer. Because the pseudo-labels assigned by adaptive clustering do not necessarily represent the true labels, they contain a certain amount of noise; the embodiments therefore adopt a prototype-based optimal selection scheme that deliberately selects trustworthy samples to participate in training and discards noisy samples that could degrade model performance, improving recognition accuracy.
Description of Drawings

The accompanying drawings described here are provided for a further understanding of the present invention and constitute a part of it; the exemplary embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of it. In the drawings:

FIG. 1 is a flowchart of an unsupervised pedestrian re-identification method provided by an embodiment of the present invention;

FIG. 2 is a block diagram of an unsupervised pedestrian re-identification apparatus provided by an embodiment of the present invention.

Detailed Description

To make the purposes, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions and specific operation processes in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings; the scope of protection of the present invention is not limited to the following embodiments.
Embodiment 1:

As shown in FIG. 1, an embodiment of the present invention discloses an unsupervised pedestrian re-identification method, including the following steps:

Pre-training step S101: pre-train a deep pedestrian re-identification model M with supervised learning on a labeled source-domain dataset.

In this step, the deep pedestrian re-identification model M adopts a deep residual neural network and is trained in a supervised fashion on the labeled source-domain dataset, so that M achieves relatively robust performance on the source domain.

Training-feature extraction step S103: extract the training features of the training-set samples in the unlabeled target domain using the pre-trained deep pedestrian re-identification model M.

Dividing step S105: divide the target-domain training-set samples into several clusters by an adaptive clustering method according to the training features, and assign corresponding pseudo-labels.

Specifically, this includes the following sub-steps:
Step S1052: based on the existing distance matrix D, perform unsupervised clustering of the target-domain training-set pedestrian images with the density-based adaptive clustering algorithm DBSCAN. The specific steps are as follows:

First, initialize the core-object set Ω = ∅, the cluster counter k = 0, the unvisited sample set Γ = X (the full sample set), and the cluster division C = ∅;

Second, for each sample x_j in the sample set X, find its ε-neighborhood sub-sample set N_ε(x_j); if |N_ε(x_j)| ≥ MinPts, add x_j to the core-object set Ω, where ε denotes the clustering scan radius and MinPts denotes the minimum number of samples per cluster. If Ω = ∅ after the scan, the algorithm ends. Here MinPts is set to a fixed value, and the scan radius ε is computed as follows: expand the upper-right triangle of the distance matrix D to obtain the distances between all distinct sample pairs, sort them in ascending order, and take the t-top-th distance as ε.
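The scan-radius rule just described (sort the unique pairwise distances and take the t-top-th smallest as ε) can be sketched as follows; `scan_radius` and `t_top` are illustrative names, not from the patent:

```python
import numpy as np

def scan_radius(D: np.ndarray, t_top: int) -> float:
    """Pick DBSCAN's eps as the t_top-th smallest pairwise distance.

    D is the (N, N) distance matrix; only the strict upper triangle is
    used so that each unordered sample pair is counted exactly once.
    """
    iu = np.triu_indices_from(D, k=1)
    dists = np.sort(D[iu])          # ascending order
    return float(dists[t_top - 1])  # 1-indexed "t-top"

D = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 2.0],
              [4.0, 2.0, 0.0]])
eps = scan_radius(D, t_top=2)  # second-smallest pairwise distance
```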
Next, randomly select a core object o from the core-object set Ω, initialize the current-cluster core-object queue Ω_cur = {o}, the class index k = k + 1, and the current cluster sample set C_k = {o}, and update the unvisited sample set Γ = Γ − {o};

Then, take a core object o′ out of Ω_cur, find its ε-neighborhood sub-sample set N_ε(o′), let Δ = N_ε(o′) ∩ Γ, and update C_k = C_k ∪ Δ, Γ = Γ − Δ, Ω_cur = Ω_cur ∪ (Δ ∩ Ω) − {o′}. If the current-cluster core-object queue Ω_cur = ∅, the current cluster C_k is complete; update the cluster division C = {C_1, …, C_k} and the core-object set Ω = Ω − C_k. If Ω = ∅, the algorithm ends; otherwise, return to the previous step;

Finally, output the cluster division C = {C_1, …, C_K}, and update K to the total number of clusters obtained.

The density-based DBSCAN clustering method does not require the number of clusters to be determined in advance, which better matches the real scenario of unsupervised pedestrian re-identification in an open environment.

Step S1053: after the adaptive clustering, assign a corresponding pseudo-label to each sample of the target-domain training set.
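Steps S1052 and S1053 map naturally onto scikit-learn's DBSCAN with a precomputed distance matrix, with the cluster index of each sample serving as its pseudo-label. This is a hedged sketch rather than the patent's own implementation; the parameter values and toy data are illustrative:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_and_label(D: np.ndarray, eps: float, min_samples: int):
    """Cluster target-domain samples on a precomputed distance matrix;
    the cluster index is used as the pseudo-label. Samples labelled -1
    by DBSCAN are treated as noise and receive no pseudo-label."""
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="precomputed").fit(D).labels_
    n_clusters = int(labels.max()) + 1 if labels.max() >= 0 else 0
    return labels, n_clusters

# two tight groups of 1-D "features", far apart
pts = np.array([[0.0], [0.1], [0.2], [10.0], [10.1], [10.2]])
D = np.abs(pts - pts.T)  # pairwise absolute distances
pseudo, k = cluster_and_label(D, eps=0.5, min_samples=2)
```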
Retraining step S107: designate each cluster as a prototype, compute the distance from each visible sample in the prototype to the prototype center, select the visible samples whose distance is below a set threshold, filter the training features according to the selected visible samples, and retrain the pre-trained deep pedestrian re-identification model with the filtered training features and the assigned pseudo-labels to obtain a deep pedestrian re-identification model with updated parameters.

Specifically, this includes the following sub-steps:

Step S1071: designate each cluster C_k in the cluster division C as a prototype P_k, and each object in cluster C_k as a visible sample of prototype P_k. Compute the corresponding prototype center as follows:

c_k = (1/n_k) Σ_{i=1}^{n_k} x_i^k, where n_k is the number of visible samples of prototype P_k and x_i^k is the feature of its i-th visible sample.
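The prototype center is the mean of the cluster's visible-sample features; a minimal sketch (names and toy data are illustrative):

```python
import numpy as np

def prototype_center(cluster_feats: np.ndarray) -> np.ndarray:
    """Center c_k of one prototype: the mean of its visible samples' features."""
    return cluster_feats.mean(axis=0)

cluster = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # n_k = 3 samples
center = prototype_center(cluster)
```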
Step S1072: compute the distance between each visible sample x_i^k of prototype P_k and the prototype center c_k, forming a distance vector d_k whose i-th element is the distance between x_i^k and c_k: d_k[i] = ‖x_i^k − c_k‖, i = 1, …, n_k, where n_k is the number of visible samples of prototype P_k.

Step S1073: following the principle of self-paced learning, set a threshold and automatically prefer, within each prototype, the target-domain training samples that are close enough to the prototype center to participate in training. The selection is made as follows:

S_k = { x_i^k | 1[d_k[i] < τ] = 1 }, where τ is the set distance threshold and 1[·] is the indicator function, equal to 1 if its argument holds and 0 otherwise.
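Steps S1072 and S1073 — the distance vector and the indicator-function selection — can be sketched together; the threshold value and toy cluster are illustrative:

```python
import numpy as np

def select_reliable(cluster_feats: np.ndarray, tau: float) -> np.ndarray:
    """Keep only the visible samples whose distance to the prototype
    center is below the threshold tau (the indicator-function rule)."""
    center = cluster_feats.mean(axis=0)                 # prototype center
    d = np.linalg.norm(cluster_feats - center, axis=1)  # distance vector d_k
    return cluster_feats[d < tau]

cluster = np.array([[0.0, 0.0], [0.2, 0.0], [4.0, 0.0]])  # last sample is an outlier
kept = select_reliable(cluster, tau=2.0)
```

The outlying sample, far from the center, is dropped before retraining.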
Step S1074: train with the selected samples S_k. The training features obtained in step S103 are filtered accordingly to obtain the selected features, over which the triplet loss is minimized. It is computed as follows:

Let a be one of the selected elements with pseudo-label y_a; take its nearest negative sample x_n (whose pseudo-label differs from y_a) and its farthest positive sample x_p (whose pseudo-label equals y_a), and denote d_an = ‖a − x_n‖ and d_ap = ‖a − x_p‖.

Let m be the margin (boundary value); the metric loss L is then computed as:

L = Σ_a [m + d_ap − d_an]_+, where [·]_+ = max(·, 0) is the hinge function.
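A plain-NumPy sketch of the hardest-positive/nearest-negative triplet loss described above; this batch-hard form is an assumption consistent with the text (the patent minimizes it over deep features), and all names and toy values are illustrative:

```python
import numpy as np

def hard_triplet_loss(feats: np.ndarray, labels: np.ndarray, margin: float) -> float:
    """For each anchor, take its farthest positive and nearest negative,
    apply the hinge [m + d_ap - d_an]_+, and average over anchors."""
    n = len(labels)
    D = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=2)
    losses = []
    for a in range(n):
        pos = (labels == labels[a]) & (np.arange(n) != a)
        neg = labels != labels[a]
        if not pos.any() or not neg.any():
            continue  # anchor has no valid triplet
        d_ap = D[a, pos].max()  # farthest positive
        d_an = D[a, neg].min()  # nearest negative
        losses.append(max(margin + d_ap - d_an, 0.0))  # hinge
    return float(np.mean(losses))

feats = np.array([[0.0], [1.0], [10.0], [11.0]])  # two well-separated identities
labels = np.array([0, 0, 1, 1])
loss_small_margin = hard_triplet_loss(feats, labels, margin=0.5)
loss_large_margin = hard_triplet_loss(feats, labels, margin=12.0)
```

With a small margin the well-separated toy identities incur zero loss; a large margin makes the hinge active.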
Identification step S109: input the query set and the candidate set of the target domain into the updated deep pedestrian re-identification model to obtain test features of the query images and of the candidate images respectively, compute their similarity in the metric space, and identify, according to the similarity, the candidate images that match the query image.

Specifically, this includes the following sub-steps:

Step S1091: take the query set Q and the candidate set G, where m is the number of elements of the query set and n is the number of elements of the candidate set; the elements of Q and G are all RGB images, resized to 256×128×3. Input them respectively into the deep pedestrian re-identification model obtained after step S107 to obtain the corresponding feature sets F_Q and F_G.

Step S1092: compute the Euclidean distances between F_Q and F_G to build a distance matrix; for each query image, sort the candidate images by distance, set the number of retrieved results s, and take the s candidates with the smallest distances as the retrieval candidate list for that query image. Evaluate the accuracy of the results with mAP and Rank@1.
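The ranking in step S1092 can be sketched as follows; `retrieve` and the toy features are illustrative, and the mAP/Rank@1 evaluation is omitted:

```python
import numpy as np

def retrieve(query_feats: np.ndarray, gallery_feats: np.ndarray, s: int) -> np.ndarray:
    """For each query feature, rank the candidate (gallery) features by
    Euclidean distance and return the indices of the s nearest candidates."""
    d2 = (np.sum(query_feats ** 2, axis=1)[:, None]
          + np.sum(gallery_feats ** 2, axis=1)[None, :]
          - 2.0 * query_feats @ gallery_feats.T)
    D = np.sqrt(np.clip(d2, 0.0, None))       # query-to-gallery distance matrix
    return np.argsort(D, axis=1)[:, :s]       # s smallest distances per query

Q = np.array([[0.0, 0.0]])                    # one query feature
G = np.array([[5.0, 0.0], [1.0, 0.0], [3.0, 0.0]])  # three candidates
topk = retrieve(Q, G, s=2)
```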
To improve the robustness and accuracy of the pedestrian re-identification model, the method also includes:

Iterative convergence step S108: repeat the training-feature extraction step S103, the dividing step S105, and the retraining step S107, updating the weights of the model M at each iteration until convergence.
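The overall loop S103 → S105 → S107 → S108 can be sketched with stubs standing in for the deep model: here feature extraction is the identity and "retraining" only records how many reliable samples it receives. Everything (names, parameter values, toy data) is an illustrative assumption, not the patent's implementation:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def train_loop(samples, extract, retrain, eps, min_samples, tau, iters):
    """Hedged sketch of steps S103-S108: repeat feature extraction,
    adaptive clustering, reliable-sample selection, and retraining."""
    for _ in range(iters):
        f = extract(samples)                                      # S103
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit(f).labels_  # S105
        keep = np.zeros(len(f), dtype=bool)
        for k in set(labels) - {-1}:                              # skip noise
            idx = np.where(labels == k)[0]
            c = f[idx].mean(axis=0)                               # prototype center
            d = np.linalg.norm(f[idx] - c, axis=1)                # distance vector
            keep[idx[d < tau]] = True                             # S107: selection
        retrain(f[keep], labels[keep])                            # S107: retrain
    return labels, keep

pts = np.array([[0.0], [0.1], [5.0], [5.1], [50.0]])  # last point is noise
calls = []
labels, keep = train_loop(pts, extract=lambda x: x,
                          retrain=lambda f, y: calls.append(len(f)),
                          eps=0.5, min_samples=2, tau=1.0, iters=2)
```

In the real method, `extract` and `retrain` would be the forward pass and the weight update of the deep residual network M.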
Table 1 below shows the recognition-accuracy results obtained with the method provided by the above embodiment of the present invention. The results of the other baseline methods used for comparison are listed from top to bottom; it can be seen that the recognition performance of the above embodiment is clearly improved.

Table 1: Recognition accuracy results
In summary, the embodiment of the present invention discloses an unsupervised pedestrian re-identification method that exploits the advantages of deep learning by extracting features with a deep residual neural network, and constructs an unsupervised pedestrian re-identification model based on adaptive clustering and prototype-based optimal selection. The adaptive clustering method automatically assigns pseudo-labels to target-domain pedestrians, which not only overcomes the difficulty that the number of identity classes in the target domain is unknown, but also fully mines the visual information within the target domain, providing a richer context for cross-domain transfer. Because the pseudo-labels assigned by adaptive clustering do not represent the true labels, the target-domain training samples contain a certain amount of noise; the prototype-based selection method automatically picks trustworthy samples for the target-domain training process and discards untrustworthy samples that could degrade model performance. The present invention thus effectively alleviates the domain-gap problem, improves the accuracy of cross-domain transfer for pedestrian re-identification, and has good robustness and broad applicability.
Embodiment 2:
As shown in FIG. 2, this embodiment provides an unsupervised pedestrian re-identification device, which is the virtual device corresponding to the unsupervised pedestrian re-identification method provided in Embodiment 1; it possesses the functional modules for executing that method and the corresponding beneficial effects. The device includes:
a pre-training unit 901, configured to pre-train a deep pedestrian re-identification model by supervised learning on a labeled source-domain data set;
a training-feature extraction unit 903, configured to use the pre-trained deep pedestrian re-identification model to extract the training features of the training-set samples in the unlabeled target domain;
a dividing unit 905, configured to divide the target-domain training-set samples into several clusters by adaptive clustering according to the training features, and to assign corresponding pseudo labels;
a retraining unit 907, configured to treat each cluster as a prototype, compute the distance between each visible sample in the prototype and the prototype center, select the visible samples whose distance is less than a set threshold, screen the training features according to the selected visible samples to obtain the screened training features, and retrain the pre-trained deep pedestrian re-identification model with the screened training features and the assigned pseudo labels to obtain a deep pedestrian re-identification model with updated parameters;
an identification unit 909, configured to input the query set and the candidate set of the target domain into the deep pedestrian re-identification model with updated parameters to obtain, respectively, the test features of the query-set images and the test features of the candidate-set images, compute their similarity in the metric space, and identify from the candidate set, according to the similarity, the candidate images that match the query image.
Further, the device also includes:
an iterative convergence unit 908, configured to repeatedly execute the training-feature extraction unit, the dividing unit and the retraining unit, updating the iteration weights until convergence.
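The screening performed by unit 907 and the retrieval performed by unit 909 above can be sketched as follows. This is only an illustrative sketch under stated assumptions: the Euclidean distance to the cluster mean as the "distance to the prototype center", the fixed threshold value, and cosine similarity as the metric-space similarity are stand-ins, since the passage does not fix these concrete choices.

```python
import numpy as np


def screen_by_prototype(features, pseudo_labels, threshold):
    """Keep only samples close to their cluster (prototype) center.

    Each cluster is treated as one prototype; samples whose Euclidean
    distance to the prototype center is not below `threshold` are
    discarded as untrustworthy (unit 907). Returns indices of kept
    samples; label -1 (unclustered noise) is skipped entirely.
    """
    keep = []
    for c in set(pseudo_labels) - {-1}:
        idx = np.where(pseudo_labels == c)[0]
        center = features[idx].mean(axis=0)  # prototype center
        dists = np.linalg.norm(features[idx] - center, axis=1)
        keep.extend(idx[dists < threshold].tolist())
    return sorted(keep)


def rank_gallery(query_feat, gallery_feats):
    """Rank candidate images by cosine similarity to a query (unit 909)."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q
    return np.argsort(-sims)  # indices of candidates, most similar first
```

In use, `screen_by_prototype` would filter the training features before each retraining round, and `rank_gallery` would be applied per query image over the candidate-set test features.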
Embodiment 3:
This embodiment provides an electronic device, including:
one or more processors;
a memory for storing one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the unsupervised pedestrian re-identification method described in Embodiment 1.
Embodiment 4:
This embodiment provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the unsupervised pedestrian re-identification method described in Embodiment 1 is implemented.
The serial numbers of the above embodiments of the present invention are for description only and do not indicate any ranking of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of the units may be a division of logical functions, and other divisions are possible in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units or modules, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The above descriptions are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010842782.9A CN112069929B (en) | 2020-08-20 | 2020-08-20 | Unsupervised pedestrian re-identification method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112069929A true CN112069929A (en) | 2020-12-11 |
CN112069929B CN112069929B (en) | 2024-01-05 |
Family
ID=73662357
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010842782.9A Active CN112069929B (en) | 2020-08-20 | 2020-08-20 | Unsupervised pedestrian re-identification method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112069929B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109948561A (en) * | 2019-03-25 | 2019-06-28 | 广东石油化工学院 | Method and system for unsupervised image and video person re-identification based on transfer network |
CN111126360A (en) * | 2019-11-15 | 2020-05-08 | 西安电子科技大学 | Cross-domain person re-identification method based on unsupervised joint multi-loss model |
CN111242064A (en) * | 2020-01-17 | 2020-06-05 | 山东师范大学 | Pedestrian re-identification method and system based on camera style migration and single marking |
US20200226421A1 (en) * | 2019-01-15 | 2020-07-16 | Naver Corporation | Training and using a convolutional neural network for person re-identification |
Non-Patent Citations (2)
Title |
---|
单纯; 王敏: "Semi-supervised single-example deep pedestrian re-identification method" (半监督单样本深度行人重识别方法), 计算机系统应用 (Computer Systems & Applications), no. 01 *
张晓伟; 吕明强; 李慧: "Cross-domain pedestrian re-identification based on invariance of local semantic features" (基于局部语义特征不变性的跨域行人重识别), 北京航空航天大学学报 (Journal of Beijing University of Aeronautics and Astronautics), no. 09 *
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112507901B (en) * | 2020-12-14 | 2022-05-24 | 华南理工大学 | Unsupervised pedestrian re-identification method based on pseudo tag self-correction |
CN112507901A (en) * | 2020-12-14 | 2021-03-16 | 华南理工大学 | Unsupervised pedestrian re-identification method based on pseudo tag self-correction |
CN112597871A (en) * | 2020-12-18 | 2021-04-02 | 中山大学 | Unsupervised vehicle re-identification method and system based on two-stage clustering and storage medium |
CN112597871B (en) * | 2020-12-18 | 2023-07-18 | 中山大学 | Unsupervised vehicle re-identification method, system and storage medium based on two-stage clustering |
CN112766218A (en) * | 2021-01-30 | 2021-05-07 | 上海工程技术大学 | Cross-domain pedestrian re-identification method and device based on asymmetric joint teaching network |
CN112861825A (en) * | 2021-04-07 | 2021-05-28 | 北京百度网讯科技有限公司 | Model training method, pedestrian re-identification method, device and electronic equipment |
CN112861825B (en) * | 2021-04-07 | 2023-07-04 | 北京百度网讯科技有限公司 | Model training method, pedestrian re-recognition method, device and electronic equipment |
WO2022213717A1 (en) * | 2021-04-07 | 2022-10-13 | 北京百度网讯科技有限公司 | Model training method and apparatus, person re-identification method and apparatus, and electronic device |
CN113536928A (en) * | 2021-06-15 | 2021-10-22 | 清华大学 | An efficient method and device for unsupervised person re-identification |
CN113536928B (en) * | 2021-06-15 | 2024-04-19 | 清华大学 | Efficient unsupervised pedestrian re-identification method and device |
CN113590852A (en) * | 2021-06-30 | 2021-11-02 | 北京百度网讯科技有限公司 | Training method of multi-modal recognition model, multi-modal recognition method and device |
CN113553970A (en) * | 2021-07-29 | 2021-10-26 | 广联达科技股份有限公司 | Pedestrian re-identification method, device, equipment and readable storage medium |
CN113822262B (en) * | 2021-11-25 | 2022-04-15 | 之江实验室 | A Pedestrian Re-identification Method Based on Unsupervised Learning |
CN113822262A (en) * | 2021-11-25 | 2021-12-21 | 之江实验室 | Pedestrian re-identification method based on unsupervised learning |
CN114399724A (en) * | 2021-12-03 | 2022-04-26 | 清华大学 | Pedestrian re-identification method, device, electronic device and storage medium |
CN114399724B (en) * | 2021-12-03 | 2024-06-28 | 清华大学 | Pedestrian re-recognition method and device, electronic equipment and storage medium |
CN114299480A (en) * | 2021-12-22 | 2022-04-08 | 杭州海康威视数字技术股份有限公司 | A target detection model training method, target detection method and device |
WO2023115911A1 (en) * | 2021-12-24 | 2023-06-29 | 上海商汤智能科技有限公司 | Object re-identification method and apparatus, electronic device, storage medium, and computer program product |
CN114550091A (en) * | 2022-02-24 | 2022-05-27 | 以萨技术股份有限公司 | Unsupervised pedestrian re-identification method and unsupervised pedestrian re-identification device based on local features |
CN115273148A (en) * | 2022-08-03 | 2022-11-01 | 北京百度网讯科技有限公司 | Pedestrian re-recognition model training method and device, electronic equipment and storage medium |
CN115273148B (en) * | 2022-08-03 | 2023-09-05 | 北京百度网讯科技有限公司 | Pedestrian re-recognition model training method and device, electronic equipment and storage medium |
CN116030502A (en) * | 2023-03-30 | 2023-04-28 | 之江实验室 | Pedestrian re-recognition method and device based on unsupervised learning |
CN116912535A (en) * | 2023-09-08 | 2023-10-20 | 中国海洋大学 | Unsupervised target re-identification method, device and medium based on similarity screening |
CN116912535B (en) * | 2023-09-08 | 2023-11-28 | 中国海洋大学 | An unsupervised target re-identification method, device and medium based on similarity screening |
Also Published As
Publication number | Publication date |
---|---|
CN112069929B (en) | 2024-01-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112069929A (en) | Unsupervised pedestrian re-identification method and device, electronic equipment and storage medium | |
CN109948561B (en) | Method and system for unsupervised image and video pedestrian re-identification based on transfer network | |
CN108960080B (en) | Face recognition method based on active defense against image adversarial attack | |
CN110263697A | Unsupervised-learning-based pedestrian re-identification method, device and medium | |
CN111611880B (en) | Efficient pedestrian re-recognition method based on neural network unsupervised contrast learning | |
CN104915351B (en) | Picture sort method and terminal | |
CN112819065B (en) | Unsupervised pedestrian sample mining method and unsupervised pedestrian sample mining system based on multi-clustering information | |
CN113642547B (en) | A method and system for unsupervised domain-adaptive person re-identification based on density clustering | |
CN111178208A (en) | Pedestrian detection method, device and medium based on deep learning | |
CN108491766B (en) | End-to-end crowd counting method based on depth decision forest | |
CN109711366A (en) | A Pedestrian Re-identification Method Based on Group Information Loss Function | |
CN112633071B (en) | Data Domain Adaptation Method for Person Re-ID Based on Data Style Decoupling Content Transfer | |
CN113221663A (en) | Real-time sign language intelligent identification method, device and system | |
Wu et al. | Decentralised learning from independent multi-domain labels for person re-identification | |
CN115641613A (en) | An unsupervised cross-domain person re-identification method based on clustering and multi-scale learning | |
CN110929679A (en) | An unsupervised adaptive person re-identification method based on GAN | |
CN106559645A | Camera-based monitoring method, system and device | |
CN110414376A | Face recognition model updating method, face recognition camera and server | |
CN113076963B (en) | Image recognition method and device and computer readable storage medium | |
CN113052150B (en) | Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium | |
Sheeba et al. | Hybrid features-enabled dragon deep belief neural network for activity recognition | |
CN111291780A (en) | Cross-domain network training and image recognition method | |
CN118628813A (en) | Passive domain adaptive image recognition method based on transferable semantic knowledge | |
CN112860936A (en) | Visual pedestrian re-identification method based on sparse graph similarity migration | |
CN111950352A (en) | Hierarchical face clustering method, system, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||