CN112069929A - Unsupervised pedestrian re-identification method and device, electronic equipment and storage medium - Google Patents

Unsupervised pedestrian re-identification method and device, electronic equipment and storage medium

Info

Publication number: CN112069929A
Application number: CN202010842782.9A
Authority: CN (China)
Prior art keywords: training, pedestrian, prototype, samples, query
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN112069929B (granted publication)
Inventors: 陆易, 叶喜勇, 王军, 徐晓刚, 何鹏飞, 张文广
Current and original assignee: Zhejiang Lab (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Zhejiang Lab; priority to CN202010842782.9A
Publication of CN112069929A; application granted; publication of CN112069929B


Classifications

    • G06V40/172 — Recognition of human faces, e.g. facial parts, sketches or expressions: classification, e.g. identification
    • G06F18/22 — Pattern recognition: matching criteria, e.g. proximity measures
    • G06F18/23 — Pattern recognition: clustering techniques
    • G06N3/08 — Neural networks: learning methods
    • G06V40/10 — Recognition of human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands


Abstract

The invention discloses an unsupervised pedestrian re-identification method, a device, electronic equipment and a storage medium. The method comprises the following steps: pre-training a pedestrian re-identification model on a labeled source domain data set; extracting training features of the training set in the unlabeled target domain with the model; dividing the target domain training set into a plurality of clusters with an adaptive clustering method according to the training features, and assigning corresponding pseudo labels; setting each cluster as a prototype, selecting from each prototype the samples whose distance to the prototype center is smaller than a set threshold, and retraining the model with the training features and pseudo labels of the selected samples to obtain a parameter-updated pedestrian re-identification model; and inputting the query set and the candidate set of the target domain into the model, obtaining the test features of the query set and of the candidate set respectively, and selecting the pictures that match the query pictures from the candidate set according to the similarity of the test features. The method effectively alleviates the domain-gap problem and improves the accuracy of cross-domain pedestrian re-identification.

Description

Unsupervised pedestrian re-identification method and device, electronic equipment and storage medium
Technical Field
The invention belongs to the technical field of artificial intelligence and computer vision, and particularly relates to an unsupervised pedestrian re-identification method and device, electronic equipment and a storage medium.
Background
With the acceleration of urbanization, public safety has become a focus of growing attention. Surveillance cameras now widely cover important public areas such as university campuses, theme parks, hospitals and streets, creating good objective conditions for automatic monitoring with computer vision technology.
In recent years, pedestrian re-identification has received increasing attention as an important research direction in the field of video surveillance. Specifically, pedestrian re-identification refers to the technology of judging, with computer vision, whether a specific pedestrian is present in an image or a video sequence under cross-camera and cross-scene conditions. As an important supplement to face recognition, it can recognize pedestrians from their clothing, posture, hairstyle and other cues, so that pedestrians whose faces cannot be captured clearly can still be tracked continuously across cameras in an actual surveillance scene. This strengthens the spatio-temporal continuity of the data, saves a great deal of manpower and material resources, and is therefore of important research significance.
Thanks to the rapid development of deep neural networks, pedestrian re-identification techniques based on supervised deep learning can already achieve very high recognition rates on mainstream public data sets. On the public Market-1501 data set, Rank-1 (the top-1 hit rate) has exceeded 95%, surpassing the recognition accuracy of the human eye. However, pedestrian re-identification, as an important visual task, still faces many challenges. In a real open application scene, differences in season, clothing, illumination and cameras cause large shifts in the distribution of pedestrian data. If a model trained on a labeled source domain data set is migrated directly to a new application scene, a domain gap arises: a recognition model learned from the specific data of a specific scene lacks universality, generalizes poorly, suffers a marked drop in recognition performance, and may even fail to complete the pedestrian re-identification task in an open environment.
Disclosure of Invention
The embodiments of the invention aim to provide an unsupervised pedestrian re-identification method, device, electronic equipment and storage medium, so as to solve the domain-gap problem that arises when a model trained on a labeled source domain data set is migrated directly to a new application scene.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
in a first aspect, an embodiment of the present invention provides an unsupervised pedestrian re-identification method, including:
a pre-training step, namely pre-training a deep pedestrian re-recognition model by using a supervised learning method in a labeled source domain data set;
a training feature extraction step, which is used for extracting training features of training set samples in the label-free target domain by utilizing a pre-trained deep pedestrian re-recognition model;
a dividing step, which is used for dividing the target domain training set samples into a plurality of clusters based on a self-adaptive clustering method according to the training characteristics and distributing corresponding pseudo labels;
a retraining step, namely respectively determining each cluster as a prototype, wherein samples in the clusters are visual samples in the prototypes, calculating the distance between the visual samples and the center of the prototypes, selecting the visual samples with the distance smaller than a set threshold value, screening the training characteristics according to the selected visual samples to obtain screened training characteristics, and retraining the pre-trained deep pedestrian re-recognition model by using the screened training characteristics and the distributed pseudo labels to obtain the parameter-updated deep pedestrian re-recognition model;
and an identification step, namely inputting the query set and the candidate set of the target domain into the parameter-updated deep pedestrian re-identification model, respectively obtaining the test features of the query set pictures and the test features of the candidate set pictures, calculating the similarity of the two sets of test features in the metric space, and identifying the candidate pictures that match the query pictures from the candidate set according to the similarity.
Further, still include:
and an iteration convergence step, which is used for repeating the training feature extraction step, the dividing step and the retraining step, updating the model weights iteratively until convergence.
Further, according to the training characteristics, dividing the pedestrian images in the target domain into a plurality of clusters by using a self-adaptive clustering method, and distributing corresponding pseudo labels, comprising:
calculating the distance between every two pedestrian images in the target domain training set according to the training characteristics to form a distance matrix;
based on the distance matrix, carrying out unsupervised clustering on the pedestrian images in the target domain training set by using a density-based adaptive clustering algorithm to generate a plurality of clusters;
after unsupervised clustering, each sample in the target domain training set is assigned a corresponding pseudo label.
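As an illustrative sketch (not part of the patent text), the pairwise distance matrix described above can be formed as follows; the function names and the toy features are assumptions, and real inputs would be the deep embeddings extracted by the pre-trained model:

```python
import math

def euclidean(a, b):
    # Euclidean distance between two equal-length feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def distance_matrix(features):
    # Pairwise distances between every two training features (symmetric,
    # zero diagonal), as described in the dividing step above.
    n = len(features)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d[i][j] = d[j][i] = euclidean(features[i], features[j])
    return d

# Toy 2-D "features" standing in for deep embeddings.
feats = [[0.0, 0.0], [3.0, 4.0], [0.0, 1.0]]
D = distance_matrix(feats)
print(D[0][1])  # 5.0 (3-4-5 triangle)
```

The matrix D is then the input to the density-based clustering of the next step.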
Further, respectively determining each cluster as a prototype, wherein the samples in the clusters are visual samples in the prototypes, calculating the distance between the visual samples in the prototypes and the center of the prototypes, selecting the visual samples with the distance smaller than a set threshold value, screening the training features according to the selected visual samples to obtain screened training features, retraining the pre-trained deep pedestrian re-recognition model by using the screened training features and the distributed pseudo labels to obtain the parameter-updated deep pedestrian re-recognition model, and the method comprises the following steps:
setting the k-th cluster C_k as a prototype P_k, where k = 1, ..., K and K is the number of clusters, and setting each object in cluster C_k as a visual sample of prototype P_k;
calculating the corresponding prototype center c_k;
calculating the distances between the visual samples of prototype P_k and the prototype center c_k to form a distance vector D_k, the i-th element d_{k,i} of which represents the distance between visual sample x_i and the prototype center c_k;
selecting from P_k the visual samples whose distance is smaller than the threshold, as follows:
S_k = { x_i ∈ P_k | 1(d_{k,i} < τ) = 1 }
where S_k represents the visual samples selected from prototype P_k, τ is the set distance threshold, and 1(·) is the indicator function, which equals 1 if its argument is true and 0 otherwise;
screening the training features according to the selected samples S_k to obtain the selected training features, and then training the pre-trained deep pedestrian re-identification model to obtain the parameter-updated deep pedestrian re-identification model.
Further, the prototype center c_k is calculated as follows:
c_k = (1 / n_k) Σ_{i=1}^{n_k} x_i
where n_k is the number of visual samples of prototype P_k, and x_i is the i-th sample of prototype P_k, i = 1, ..., n_k.
further, inputting the query set and the to-be-selected set of the target domain into the deep pedestrian re-identification model after updating the parameters, respectively obtaining the features of the query set and the to-be-selected set, calculating the similarity of the query set and the to-be-selected set in the measurement space, and identifying the to-be-selected picture meeting the requirements of the query picture from the to-be-selected set according to the similarity, including:
inputting the query set and the to-be-selected set of the target domain into the deep pedestrian re-identification model to respectively obtain the test features of the query set picture and the test features of the to-be-selected set picture;
and calculating Euclidean distances between the test features of the query set and the test features of the to-be-selected set in a measurement space to obtain a similarity matrix of the test features of the query set and the test features of the to-be-selected set, and identifying the to-be-selected picture meeting the requirements of the query picture from the to-be-selected set according to the similarity matrix.
In a second aspect, an embodiment of the present invention provides an unsupervised pedestrian re-identification apparatus, including:
the pre-training unit is used for pre-training a deep pedestrian re-recognition model by a supervised learning method in a labeled source domain data set;
the training feature extraction unit is used for extracting training features of training set samples in the label-free target domain by utilizing the pre-trained deep pedestrian re-recognition model;
the dividing unit is used for dividing the target domain training set samples into a plurality of clusters by using a self-adaptive clustering method according to the training characteristics and distributing corresponding pseudo labels;
the retraining unit is used for respectively defining each cluster as a prototype, calculating the distance between a visual sample in the prototype and the center of the prototype, selecting the visual sample with the distance being smaller than a set threshold value, screening the training features according to the selected visual sample to obtain screened training features, and retraining the pre-trained deep pedestrian re-recognition model by using the screened training features and the distributed pseudo labels to obtain the parameter-updated deep pedestrian re-recognition model;
and the identification unit is used for inputting the query set and the candidate set of the target domain into the parameter-updated deep pedestrian re-identification model, respectively obtaining the test features of the query set pictures and the test features of the candidate set pictures, calculating the similarity of the two in the metric space, and identifying the candidate pictures that match the query pictures from the candidate set according to the similarity.
Further, still include:
and an iteration convergence unit, configured to repeatedly invoke the training feature extraction unit, the dividing unit and the retraining unit, updating the model weights iteratively until convergence.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the unsupervised pedestrian re-identification method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements an unsupervised pedestrian re-identification method according to the first aspect.
In the unsupervised pedestrian re-identification method and device, pseudo labels are assigned to the unlabeled target domain by an adaptive clustering method. Because adaptive clustering does not require the number of clusters to be set in advance, the difficulty that the number of identity categories in the target domain is unknown is overcome, the visual information in the target domain is fully mined, and a richer context is provided for cross-domain migration. Because the pseudo labels assigned by adaptive clustering cannot fully represent the real labels and carry a certain amount of noise, the embodiments of the invention adopt a prototype-based selection scheme that purposefully selects reliable samples to participate in training and eliminates noisy samples that may degrade model performance, thereby improving recognition accuracy.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of an unsupervised pedestrian re-identification method according to an embodiment of the present invention;
fig. 2 is a block diagram of an unsupervised pedestrian re-identification apparatus according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions and specific operation procedures in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, but the scope of the present invention is not limited to the embodiments described below.
Example 1:
as shown in fig. 1, the embodiment of the invention discloses an unsupervised pedestrian re-identification method, which comprises the following steps:
a pre-training step S101, which is used for pre-training a deep pedestrian re-identification model by a supervised learning method on a labeled source domain data set;
in this step, the deep pedestrian re-identification model M adopts a deep residual neural network and is trained with supervision on the labeled source domain data set, so that M obtains relatively robust performance in the source domain;
a training feature extraction step S103, which is used for extracting the training features F of the training set samples in the unlabeled target domain by using the pre-trained deep pedestrian re-identification model M.
A dividing step S105, which is used for dividing the target domain training set samples into a plurality of clusters by using a self-adaptive clustering method according to the training characteristics and distributing corresponding pseudo labels;
specifically, the method comprises the following substeps:
step S1052: based on the existing distance matrix
Figure 712945DEST_PATH_IMAGE023
And carrying out unsupervised clustering on the pedestrian images in the target domain training set by using a density-based adaptive clustering algorithm DBSCAN. The method comprises the following specific steps:
first, a cluster core object sample set is initialized
Figure 576996DEST_PATH_IMAGE024
Clustering clusters
Figure 336574DEST_PATH_IMAGE025
Set of unaccessed samples
Figure 893457DEST_PATH_IMAGE026
Cluster division
Figure 363753DEST_PATH_IMAGE027
Second, for the sample set
Figure 297074DEST_PATH_IMAGE022
Each sample of
Figure 152903DEST_PATH_IMAGE028
According to
Figure 146267DEST_PATH_IMAGE029
Figure 369438DEST_PATH_IMAGE023
Find it
Figure 575291DEST_PATH_IMAGE030
-set of domain subsamples
Figure 318251DEST_PATH_IMAGE031
If, if
Figure 216936DEST_PATH_IMAGE032
Then will be
Figure 396245DEST_PATH_IMAGE028
Joining core object sample sets
Figure 405789DEST_PATH_IMAGE033
Wherein
Figure 767369DEST_PATH_IMAGE030
the radius of the cluster scan is represented as,
Figure 571377DEST_PATH_IMAGE034
representing the minimum number of samples per cluster. If the scanning is finished, then the scanning is finished,
Figure 769141DEST_PATH_IMAGE024
and then the process is ended. Wherein is provided with
Figure 316797DEST_PATH_IMAGE035
Scanning ofRadius of
Figure 31418DEST_PATH_IMAGE030
The calculation method of (c) is as follows: distance matrix
Figure 6328DEST_PATH_IMAGE036
Spreading the upper right corner of the sample to obtain the distance between two non-repeated samples, arranging the samples according to the sequence from small to large, and setting the t-top distance as the
Figure 691387DEST_PATH_IMAGE030
Then, in the core object sample set
Figure 42734DEST_PATH_IMAGE037
In (1), a core object is randomly selected
Figure 113327DEST_PATH_IMAGE038
Initializing current cluster core object queue
Figure 524716DEST_PATH_IMAGE039
Class number
Figure 431493DEST_PATH_IMAGE040
Current cluster sample set
Figure 320951DEST_PATH_IMAGE041
Updating the set of unaccessed samples
Figure 13095DEST_PATH_IMAGE042
Then, from
Figure 329807DEST_PATH_IMAGE043
Fetching a core object
Figure 723879DEST_PATH_IMAGE044
According to
Figure 417028DEST_PATH_IMAGE045
Figure 727793DEST_PATH_IMAGE023
Find it
Figure 480985DEST_PATH_IMAGE046
Let us order
Figure 362354DEST_PATH_IMAGE047
Update
Figure 593615DEST_PATH_IMAGE048
Figure 997701DEST_PATH_IMAGE049
Figure 921795DEST_PATH_IMAGE050
. If the current cluster core object queue
Figure 290459DEST_PATH_IMAGE051
Then the current cluster
Figure 325411DEST_PATH_IMAGE005
After generation, update
Figure 814030DEST_PATH_IMAGE052
Core object set
Figure 174605DEST_PATH_IMAGE053
. If it is not
Figure 30565DEST_PATH_IMAGE024
If yes, ending;
finally, output cluster partitioning
Figure 603629DEST_PATH_IMAGE054
Figure 979378DEST_PATH_IMAGE004
Updated to the resulting total number of clusters.
The density-based DBSCAN clustering method does not need to determine the number of clusters in advance, and is more suitable for real scenes of unsupervised pedestrian re-identification in an open environment.
Step S1053: after adaptive clustering, each sample of the training set of target domains is assigned a corresponding pseudo label.
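The clustering procedure above can be sketched in pure Python as follows. This is a minimal illustration of DBSCAN over a precomputed distance matrix, not the patent's implementation; the names choose_eps and dbscan, the parameter names, and the toy data are assumptions:

```python
def choose_eps(dist, t):
    # Scanning radius: flatten the upper-right triangle of the distance
    # matrix, sort the pairwise distances ascending, take the t-th smallest.
    n = len(dist)
    pairs = sorted(dist[i][j] for i in range(n) for j in range(i + 1, n))
    return pairs[t - 1]

def dbscan(dist, eps, min_pts):
    # Density-based clustering over a precomputed distance matrix; returns
    # one pseudo label per sample (-1 = noise) and the number of clusters.
    n = len(dist)
    neigh = [{j for j in range(n) if dist[i][j] <= eps} for i in range(n)]
    core = {i for i in range(n) if len(neigh[i]) >= min_pts}
    unvisited = set(range(n))
    labels, k = [-1] * n, 0
    while core:
        queue = {next(iter(core))}          # pick an unused core object
        members = set(queue)
        unvisited -= queue
        while queue:
            p = queue.pop()
            delta = neigh[p] & unvisited    # unvisited neighbours join the cluster
            members |= delta
            unvisited -= delta
            queue |= delta & core           # expand through new core objects
        for i in members:
            labels[i] = k                   # pseudo label = cluster index
        core -= members
        k += 1
    return labels, k

# Toy 1-D data: two tight groups should yield two clusters.
pts = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
dist = [[abs(a - b) for b in pts] for a in pts]
labels, k = dbscan(dist, eps=0.3, min_pts=2)
```

With the toy data the two tight groups form two clusters and every point receives a pseudo label; any point left unclustered would keep the label -1 (noise).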
A retraining step S107, configured to respectively define each cluster as a prototype, calculate a distance between a visible sample in the prototype and a center of the prototype, select a visible sample with the distance being smaller than a set threshold, screen the training features according to the selected visible sample, obtain screened training features, retrain the pre-trained deep pedestrian re-recognition model by using the screened training features and the assigned pseudo labels, and obtain a parameter-updated deep pedestrian re-recognition model;
specifically, the method comprises the following substeps:
step S1071: dividing clusters
Figure 979695DEST_PATH_IMAGE054
Each cluster of
Figure 322951DEST_PATH_IMAGE005
Set as a prototype
Figure 965285DEST_PATH_IMAGE055
In a cluster
Figure 694076DEST_PATH_IMAGE005
Each object in the set is set as a prototype
Figure 865294DEST_PATH_IMAGE002
A visual sample of (a). And calculating the corresponding prototype center in the following way:
Figure 430268DEST_PATH_IMAGE018
wherein,
Figure 610713DEST_PATH_IMAGE019
is a prototype
Figure 692545DEST_PATH_IMAGE002
The number of visual samples.
Step S1072: calculate the distances between the visual samples of prototype P_k and the prototype center c_k to form a distance vector D_k. Each element d_{k,i} of D_k represents the distance between visual sample x_i and the prototype center c_k, where n_k is the number of visual samples of prototype P_k. d_{k,i} is calculated as follows:
d_{k,i} = || x_i − c_k ||_2
step S1073: setting a threshold value according to the principle of self-paced learning, automatically and preferably selecting a target domain training sample close enough to the center of the prototype in the prototype to participate in training in the following mode:
Figure 123341DEST_PATH_IMAGE013
wherein,
Figure 243744DEST_PATH_IMAGE015
in order to set the distance threshold value,
Figure 757901DEST_PATH_IMAGE016
the representative function is 1 if true, and 0 if not.
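A minimal sketch of steps S1071–S1073 (prototype center plus self-paced selection), assuming Euclidean distance; the names prototype_center and select_reliable and the toy cluster are assumptions:

```python
def prototype_center(samples):
    # c_k = (1 / n_k) * sum of the visual samples in prototype P_k.
    n, dim = len(samples), len(samples[0])
    return [sum(s[d] for s in samples) / n for d in range(dim)]

def select_reliable(samples, tau):
    # Self-paced selection: keep only the visual samples whose Euclidean
    # distance to the prototype center is below the threshold tau.
    c = prototype_center(samples)
    def dist(s):
        return sum((a - b) ** 2 for a, b in zip(s, c)) ** 0.5
    return [s for s in samples if dist(s) < tau]

# One cluster with a noisy outlier; the outlier should be excluded.
cluster = [[0.0, 0.0], [0.2, 0.0], [4.0, 0.0]]
kept = select_reliable(cluster, tau=2.0)
print(kept)  # the sample at [4.0, 0.0] is dropped
```

Only the kept samples (and their pseudo labels) then participate in retraining.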
Step S1074: train with the selected samples S = ∪_k S_k. The training features F obtained in step S103 are correspondingly pruned to obtain the selected features F_s, and the triplet loss L_tri is minimized. The calculation is as follows:
Let x_a be an element of F_s with pseudo label y_a. Take the nearest negative sample x_n, whose pseudo label y_n satisfies y_n ≠ y_a, and the farthest positive sample x_p, whose pseudo label y_p satisfies y_p = y_a. Record:
d_ap = || x_a − x_p ||_2,  d_an = || x_a − x_n ||_2
Let m be the margin (boundary value); the metric loss L_tri is then calculated as:
L_tri = Σ_a [ d_ap − d_an + m ]_+
where [z]_+ = max(z, 0) is the hinge function.
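The triplet loss above can be sketched as follows. This is a simplified batch-hard variant assuming Euclidean distances, not the patent's exact implementation; all names are illustrative:

```python
def euclid(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def triplet_loss(feats, labels, margin):
    # For each anchor: farthest positive (same pseudo label), nearest
    # negative (different pseudo label), hinge of (d_ap - d_an + margin).
    total = 0.0
    for i, (fa, ya) in enumerate(zip(feats, labels)):
        pos = [euclid(fa, f) for j, (f, y) in enumerate(zip(feats, labels))
               if y == ya and j != i]
        neg = [euclid(fa, f) for f, y in zip(feats, labels) if y != ya]
        if not pos or not neg:
            continue  # anchor has no valid triplet
        total += max(max(pos) - min(neg) + margin, 0.0)  # [.]_+ hinge
    return total
```

Minimizing this loss pulls features with the same pseudo label together and pushes features with different pseudo labels apart by at least the margin.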
And an identification step S109, which is used for inputting the query set and the candidate set of the target domain into the parameter-updated deep pedestrian re-identification model, respectively obtaining the test features of the query set pictures and the test features of the candidate set pictures, calculating their similarity in the metric space, and identifying the candidate pictures that match the query pictures from the candidate set according to the similarity.
Specifically, the method comprises the following substeps:
Step S1091: take the query set Q = {q_1, ..., q_M} and the candidate set G = {g_1, ..., g_N}, where M is the number of query set elements and N is the number of candidate set elements. The elements of Q and G are RGB pictures; the size of each RGB picture is adjusted to 256 × 128 × 3, and the pictures are input into the deep pedestrian re-identification model obtained in step S107 to obtain the corresponding feature sets F_Q and F_G.
Step S1092: compute the Euclidean distances between F_Q and F_G to construct a distance matrix; for each query picture, sort the candidate pictures in ascending order of distance, and take the s candidate pictures with the smallest distances (s being a set number) as the retrieval candidate list of that query picture. The accuracy of the results is evaluated with mAP and Rank@1.
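As an illustrative sketch of the evaluation in step S1092, a simplified Rank@1 computation (without the same-camera filtering used by standard re-identification benchmarks; all names and data are assumptions):

```python
def euclid(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def rank1(query_feats, query_ids, gallery_feats, gallery_ids):
    # Fraction of query pictures whose nearest gallery picture (by
    # Euclidean distance in the metric space) shares the query identity.
    hits = 0
    for qf, qid in zip(query_feats, query_ids):
        dists = [euclid(qf, gf) for gf in gallery_feats]
        best = min(range(len(dists)), key=dists.__getitem__)
        hits += gallery_ids[best] == qid
    return hits / len(query_feats)
```

Sorting the full distance row per query, rather than taking only the minimum, yields the ranked retrieval list from which mAP is computed.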
In order to improve the robustness and accuracy of the pedestrian re-identification model, the method further comprises the following steps:
and an iteration convergence step S108, configured to repeat the training feature extraction step S103, the division step S105, and the retraining step S107, and update the weight of the iterative model M until convergence.
Table 1 below shows the recognition accuracy obtained with the method of the above embodiment of the present invention. Reference methods used for comparison are listed from top to bottom alongside the results of the embodiment; it can be seen that the recognition performance of the embodiment of the invention is greatly improved.
Table 1: identification accuracy results
[Table 1 appears as an image in the original publication and is not reproduced here.]
In summary, the embodiment of the invention discloses an unsupervised pedestrian re-identification method that leverages the advantages of existing deep learning by extracting features with a deep residual neural network. An unsupervised pedestrian re-identification model based on adaptive clustering and prototype-based selection is constructed. The adaptive clustering method automatically assigns pseudo labels to the pedestrians in the target domain, overcomes the difficulty that the number of identity categories in the target domain is unknown, fully mines the visual information in the target domain, and provides a richer context for cross-domain migration. Because the pseudo labels assigned by adaptive clustering cannot fully represent the real labels, the target domain training samples carry a certain amount of noise; the prototype-based selection method therefore automatically chooses reliable samples for the target domain training process and eliminates unreliable samples that may degrade model performance. In conclusion, the method and the device effectively alleviate the domain-gap problem, improve the accuracy of cross-domain pedestrian re-identification, and have good robustness and general applicability.
Example 2:
as shown in fig. 2, this embodiment provides an unsupervised pedestrian re-identification apparatus, a virtual apparatus corresponding to the unsupervised pedestrian re-identification method of embodiment 1; the apparatus carries the functional modules for executing that method, with the corresponding beneficial effects, and includes:
the pre-training unit 901, used for pre-training a deep pedestrian re-recognition model with a supervised learning method on a labeled source domain data set;
a training feature extraction unit 903, configured to extract training features of the training set samples in the label-free target domain by using the pre-trained deep pedestrian re-recognition model;
a dividing unit 905, configured to divide the target domain training set samples into a plurality of clusters with an adaptive clustering method according to the training features, and to allocate corresponding pseudo labels;
a retraining unit 907, configured to define each cluster as a prototype, calculate the distance between each visible sample in the prototype and the prototype center, select the visible samples whose distance is smaller than a set threshold, screen the training features according to the selected visible samples to obtain screened training features, and retrain the pre-trained deep pedestrian re-recognition model with the screened training features and the assigned pseudo labels to obtain a parameter-updated deep pedestrian re-recognition model;
the identifying unit 909, configured to input the query set and the candidate set of the target domain into the parameter-updated deep pedestrian re-recognition model, obtain the test features of the query set pictures and of the candidate set pictures respectively, calculate their similarity in the metric space, and identify from the candidate set, according to the similarity, the candidate pictures meeting the requirement of the query picture.
Further, the apparatus also includes:
an iteration convergence unit 908, used for repeatedly invoking the training feature extraction unit, the dividing unit, and the retraining unit, and iteratively updating the model weights until convergence.
Example 3:
the present embodiment provides an electronic device, including:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the unsupervised pedestrian re-identification method described in embodiment 1.
Example 4:
the present embodiment provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements an unsupervised pedestrian re-identification method as described in embodiment 1.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described device embodiments are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. An unsupervised pedestrian re-identification method is characterized by comprising the following steps:
a pre-training step, namely pre-training a deep pedestrian re-recognition model by using a supervised learning method in a labeled source domain data set;
a training feature extraction step, which is used for extracting training features of training set samples in the label-free target domain by utilizing a pre-trained deep pedestrian re-recognition model;
a dividing step, which is used for dividing the target domain training set samples into a plurality of clusters based on a self-adaptive clustering method according to the training characteristics and distributing corresponding pseudo labels;
a retraining step, determining each cluster as a prototype, the samples in a cluster being the visible samples of the prototype; calculating the distance between the visible samples and the prototype center, selecting the visible samples whose distance is smaller than a set threshold, screening the training features according to the selected visible samples to obtain screened training features, and retraining the pre-trained deep pedestrian re-recognition model with the screened training features and the assigned pseudo labels to obtain the parameter-updated deep pedestrian re-recognition model;
and an identification step, inputting the query set and the candidate set of the target domain into the parameter-updated deep pedestrian re-identification model, obtaining the test features of the query set pictures and the test features of the candidate set pictures respectively, calculating their similarity in the metric space, and identifying from the candidate set, according to the similarity, the candidate pictures meeting the requirement of the query picture.
2. The unsupervised pedestrian re-identification method according to claim 1, further comprising:
and the iteration convergence step is used for repeating the training feature extraction step, the division step and the retraining step and updating the iteration weight until convergence.
3. The unsupervised pedestrian re-identification method according to claim 1, wherein the step of dividing the pedestrian images in the target domain into a plurality of clusters by using an adaptive clustering method according to the training features and allocating corresponding pseudo labels comprises the following steps:
calculating the distance between every two pedestrian images in the target domain training set according to the training characteristics to form a distance matrix;
based on the distance matrix, carrying out unsupervised clustering on the pedestrian images in the target domain training set by using a density-based adaptive clustering algorithm to generate a plurality of clusters;
after unsupervised clustering, each sample in the target domain training set is assigned a corresponding pseudo label.
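The clustering steps of claim 3 can be sketched as follows: a pairwise Euclidean distance matrix is built from the training features, and a minimal density-based clustering pass (in the spirit of DBSCAN; the patent does not fix a specific algorithm, so the `eps` and `min_pts` parameters below are illustrative) assigns a pseudo label to each sample. Points left at -1 are unclustered noise, which a real system would discard or re-assign.

```python
import numpy as np

def pairwise_dist(F):
    # Euclidean distance matrix between all training features.
    sq = (F ** 2).sum(1)
    d2 = sq[:, None] + sq[None, :] - 2 * F @ F.T
    return np.sqrt(np.maximum(d2, 0.0))

def dbscan_labels(D, eps, min_pts):
    """Minimal density-based clustering on a precomputed distance matrix.

    Returns integer pseudo labels; -1 marks noise points left unlabeled.
    """
    n = len(D)
    labels = np.full(n, -1)
    visited = np.zeros(n, bool)
    cluster = 0
    for p in range(n):
        if visited[p]:
            continue
        visited[p] = True
        neigh = list(np.where(D[p] <= eps)[0])
        if len(neigh) < min_pts:
            continue  # p stays noise unless reached from a core point
        labels[p] = cluster
        seeds = neigh
        while seeds:
            q = seeds.pop()
            if not visited[q]:
                visited[q] = True
                q_neigh = np.where(D[q] <= eps)[0]
                if len(q_neigh) >= min_pts:  # q is a core point: expand
                    seeds.extend(q_neigh)
            if labels[q] == -1:
                labels[q] = cluster
        cluster += 1
    return labels
```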
4. The unsupervised pedestrian re-recognition method according to claim 1, wherein each cluster is respectively determined as a prototype, the samples in the clusters are visual samples in the prototypes, the distance between the visual samples in the prototypes and the center of the prototypes is calculated, the visual samples with the distance smaller than a set threshold value are selected, the training features are screened according to the selected visual samples to obtain screened training features, and the pre-trained deep pedestrian re-recognition model is retrained by using the screened training features and the distributed pseudo labels to obtain the parameter-updated deep pedestrian re-recognition model, comprising:
the k-th cluster C_k is set as a prototype P_k, where k = 1, ..., K and K is the number of clusters; each sample in cluster C_k is a visible sample of the prototype P_k, and the corresponding prototype center c_k is calculated;
the distances between the visible samples of prototype P_k and the prototype center c_k are calculated to form a distance vector d_k = (d_{k,1}, ..., d_{k,n_k}), in which the i-th element d_{k,i} represents the distance between the visible sample x_{k,i} and the prototype center c_k;
the visible samples of P_k whose distance is smaller than the threshold are selected as follows:
S_k = { x_{k,i} | 1(d_{k,i} < τ) = 1 }
wherein S_k represents the visible samples selected from prototype P_k, τ is the set distance threshold, and 1(·) is the indicator function, which equals 1 if its argument is true and 0 otherwise;
the selected samples S_k are used to screen the corresponding training features to obtain the selected training features, with which the pre-trained deep pedestrian re-recognition model is trained to obtain the parameter-updated deep pedestrian re-recognition model.
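The prototype-based selection described above can be sketched directly: for each cluster, the prototype center is taken as the mean feature, and only visible samples within a set distance threshold of that center survive the screen. The function name and threshold value below are illustrative, not from the patent.

```python
import numpy as np

def select_visible_samples(features, pseudo_labels, tau):
    """Keep only samples close to their prototype (cluster) center.

    features:      (N, D) training features of the target domain.
    pseudo_labels: (N,) cluster assignments from adaptive clustering.
    tau:           hand-set distance threshold.
    Returns a boolean mask over the N samples.
    """
    keep = np.zeros(len(features), dtype=bool)
    for k in np.unique(pseudo_labels):
        idx = np.where(pseudo_labels == k)[0]
        center = features[idx].mean(axis=0)                # prototype center
        d = np.linalg.norm(features[idx] - center, axis=1)  # distance vector
        keep[idx[d < tau]] = True                          # indicator(d < tau)
    return keep
```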
5. The unsupervised pedestrian re-identification method of claim 4, wherein the prototype center c_k is calculated as follows:
c_k = (1 / n_k) · Σ_{i=1}^{n_k} x_{k,i}
wherein n_k is the number of visible samples of prototype P_k, x_{k,i} is the i-th visible sample of prototype P_k, and i = 1, ..., n_k.
6. The unsupervised pedestrian re-identification method according to claim 1, wherein inputting the query set pictures and the candidate set pictures of the target domain into the parameter-updated deep pedestrian re-identification model, obtaining the features of the query set pictures and of the candidate set pictures respectively, calculating their similarity in the metric space, and selecting from the candidate set the pictures meeting the requirement according to the similarity comprises the following steps:
inputting the query set and the candidate set of the target domain into the deep pedestrian re-identification model to obtain the test features of the query set pictures and the test features of the candidate set pictures respectively;
calculating the Euclidean distances between the test features of the query set and the test features of the candidate set in the metric space to obtain a similarity matrix, and identifying from the candidate set, according to the similarity matrix, the candidate pictures meeting the requirement of the query picture.
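The retrieval step of claim 6 amounts to a query-by-candidate Euclidean distance matrix followed by a per-query ranking of the candidates, closest first. A minimal sketch (the function name is hypothetical):

```python
import numpy as np

def rank_candidates(query_feats, cand_feats):
    # Euclidean distance matrix between query and candidate test features.
    q2 = (query_feats ** 2).sum(1)[:, None]
    c2 = (cand_feats ** 2).sum(1)[None, :]
    dist = np.sqrt(np.maximum(q2 + c2 - 2 * query_feats @ cand_feats.T, 0))
    # Per-query ranking: smaller distance means higher similarity.
    order = np.argsort(dist, axis=1)
    return dist, order
```

For a query at the origin with candidates at (3, 4) and (1, 0), the distances are 5 and 1, so the second candidate is ranked first.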
7. An unsupervised pedestrian re-identification device, comprising:
the pre-training unit, used for pre-training a deep pedestrian re-recognition model with a supervised learning method on a labeled source domain data set;
the training feature extraction unit, used for extracting training features of the training set samples in the label-free target domain by using the pre-trained deep pedestrian re-recognition model;
the dividing unit, used for dividing the target domain training set samples into a plurality of clusters with an adaptive clustering method according to the training features and allocating corresponding pseudo labels;
the retraining unit, used for defining each cluster as a prototype, calculating the distance between each visible sample in the prototype and the prototype center, selecting the visible samples whose distance is smaller than a set threshold, screening the training features according to the selected visible samples to obtain screened training features, and retraining the pre-trained deep pedestrian re-recognition model with the screened training features and the assigned pseudo labels to obtain the parameter-updated deep pedestrian re-recognition model;
and the identification unit, used for inputting the query set and the candidate set of the target domain into the parameter-updated deep pedestrian re-recognition model, obtaining the test features of the query set pictures and of the candidate set pictures respectively, calculating their similarity in the metric space, and identifying from the candidate set, according to the similarity, the candidate pictures meeting the requirement of the query picture.
8. The unsupervised pedestrian re-identification apparatus according to claim 7, further comprising:
an iteration convergence unit, used for repeatedly invoking the training feature extraction unit, the dividing unit, and the retraining unit, and iteratively updating the model weights until convergence.
9. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the unsupervised pedestrian re-identification method as claimed in any one of claims 1-6.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out an unsupervised pedestrian re-identification method as claimed in any one of claims 1 to 6.
CN202010842782.9A 2020-08-20 2020-08-20 Unsupervised pedestrian re-identification method and device, electronic equipment and storage medium Active CN112069929B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010842782.9A CN112069929B (en) 2020-08-20 2020-08-20 Unsupervised pedestrian re-identification method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010842782.9A CN112069929B (en) 2020-08-20 2020-08-20 Unsupervised pedestrian re-identification method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112069929A true CN112069929A (en) 2020-12-11
CN112069929B CN112069929B (en) 2024-01-05

Family

ID=73662357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010842782.9A Active CN112069929B (en) 2020-08-20 2020-08-20 Unsupervised pedestrian re-identification method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112069929B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112507901A (en) * 2020-12-14 2021-03-16 华南理工大学 Unsupervised pedestrian re-identification method based on pseudo tag self-correction
CN112597871A (en) * 2020-12-18 2021-04-02 中山大学 Unsupervised vehicle re-identification method and system based on two-stage clustering and storage medium
CN112766218A (en) * 2021-01-30 2021-05-07 上海工程技术大学 Cross-domain pedestrian re-identification method and device based on asymmetric joint teaching network
CN112861825A (en) * 2021-04-07 2021-05-28 北京百度网讯科技有限公司 Model training method, pedestrian re-identification method, device and electronic equipment
CN113536928A (en) * 2021-06-15 2021-10-22 清华大学 High-efficiency unsupervised pedestrian re-identification method and device
CN113553970A (en) * 2021-07-29 2021-10-26 广联达科技股份有限公司 Pedestrian re-identification method, device, equipment and readable storage medium
CN113590852A (en) * 2021-06-30 2021-11-02 北京百度网讯科技有限公司 Training method of multi-modal recognition model, multi-modal recognition method and device
CN113822262A (en) * 2021-11-25 2021-12-21 之江实验室 Pedestrian re-identification method based on unsupervised learning
CN114399724A (en) * 2021-12-03 2022-04-26 清华大学 Pedestrian re-identification method and device, electronic equipment and storage medium
CN115273148A (en) * 2022-08-03 2022-11-01 北京百度网讯科技有限公司 Pedestrian re-recognition model training method and device, electronic equipment and storage medium
CN116030502A (en) * 2023-03-30 2023-04-28 之江实验室 Pedestrian re-recognition method and device based on unsupervised learning
WO2023115911A1 (en) * 2021-12-24 2023-06-29 上海商汤智能科技有限公司 Object re-identification method and apparatus, electronic device, storage medium, and computer program product
CN116912535A (en) * 2023-09-08 2023-10-20 中国海洋大学 Unsupervised target re-identification method, device and medium based on similarity screening

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109948561A (en) * 2019-03-25 2019-06-28 广东石油化工学院 The method and system that unsupervised image/video pedestrian based on migration network identifies again
CN111126360A (en) * 2019-11-15 2020-05-08 西安电子科技大学 Cross-domain pedestrian re-identification method based on unsupervised combined multi-loss model
CN111242064A (en) * 2020-01-17 2020-06-05 山东师范大学 Pedestrian re-identification method and system based on camera style migration and single marking
US20200226421A1 (en) * 2019-01-15 2020-07-16 Naver Corporation Training and using a convolutional neural network for person re-identification

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200226421A1 (en) * 2019-01-15 2020-07-16 Naver Corporation Training and using a convolutional neural network for person re-identification
CN109948561A (en) * 2019-03-25 2019-06-28 广东石油化工学院 The method and system that unsupervised image/video pedestrian based on migration network identifies again
CN111126360A (en) * 2019-11-15 2020-05-08 西安电子科技大学 Cross-domain pedestrian re-identification method based on unsupervised combined multi-loss model
CN111242064A (en) * 2020-01-17 2020-06-05 山东师范大学 Pedestrian re-identification method and system based on camera style migration and single marking

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Shan Chun; Wang Min: "Semi-supervised one-example deep pedestrian re-identification method", Computer Systems & Applications, no. 01 *
Zhang Xiaowei; Lv Mingqiang; Li Hui: "Cross-domain pedestrian re-identification based on invariance of local semantic features", Journal of Beijing University of Aeronautics and Astronautics, no. 09 *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112507901B (en) * 2020-12-14 2022-05-24 华南理工大学 Unsupervised pedestrian re-identification method based on pseudo tag self-correction
CN112507901A (en) * 2020-12-14 2021-03-16 华南理工大学 Unsupervised pedestrian re-identification method based on pseudo tag self-correction
CN112597871A (en) * 2020-12-18 2021-04-02 中山大学 Unsupervised vehicle re-identification method and system based on two-stage clustering and storage medium
CN112597871B (en) * 2020-12-18 2023-07-18 中山大学 Unsupervised vehicle re-identification method, system and storage medium based on two-stage clustering
CN112766218A (en) * 2021-01-30 2021-05-07 上海工程技术大学 Cross-domain pedestrian re-identification method and device based on asymmetric joint teaching network
CN112861825B (en) * 2021-04-07 2023-07-04 北京百度网讯科技有限公司 Model training method, pedestrian re-recognition method, device and electronic equipment
CN112861825A (en) * 2021-04-07 2021-05-28 北京百度网讯科技有限公司 Model training method, pedestrian re-identification method, device and electronic equipment
WO2022213717A1 (en) * 2021-04-07 2022-10-13 北京百度网讯科技有限公司 Model training method and apparatus, person re-identification method and apparatus, and electronic device
CN113536928B (en) * 2021-06-15 2024-04-19 清华大学 Efficient unsupervised pedestrian re-identification method and device
CN113536928A (en) * 2021-06-15 2021-10-22 清华大学 High-efficiency unsupervised pedestrian re-identification method and device
CN113590852A (en) * 2021-06-30 2021-11-02 北京百度网讯科技有限公司 Training method of multi-modal recognition model, multi-modal recognition method and device
CN113553970A (en) * 2021-07-29 2021-10-26 广联达科技股份有限公司 Pedestrian re-identification method, device, equipment and readable storage medium
CN113822262A (en) * 2021-11-25 2021-12-21 之江实验室 Pedestrian re-identification method based on unsupervised learning
CN113822262B (en) * 2021-11-25 2022-04-15 之江实验室 Pedestrian re-identification method based on unsupervised learning
CN114399724A (en) * 2021-12-03 2022-04-26 清华大学 Pedestrian re-identification method and device, electronic equipment and storage medium
CN114399724B (en) * 2021-12-03 2024-06-28 清华大学 Pedestrian re-recognition method and device, electronic equipment and storage medium
WO2023115911A1 (en) * 2021-12-24 2023-06-29 上海商汤智能科技有限公司 Object re-identification method and apparatus, electronic device, storage medium, and computer program product
CN115273148B (en) * 2022-08-03 2023-09-05 北京百度网讯科技有限公司 Pedestrian re-recognition model training method and device, electronic equipment and storage medium
CN115273148A (en) * 2022-08-03 2022-11-01 北京百度网讯科技有限公司 Pedestrian re-recognition model training method and device, electronic equipment and storage medium
CN116030502A (en) * 2023-03-30 2023-04-28 之江实验室 Pedestrian re-recognition method and device based on unsupervised learning
CN116912535A (en) * 2023-09-08 2023-10-20 中国海洋大学 Unsupervised target re-identification method, device and medium based on similarity screening
CN116912535B (en) * 2023-09-08 2023-11-28 中国海洋大学 Unsupervised target re-identification method, device and medium based on similarity screening

Also Published As

Publication number Publication date
CN112069929B (en) 2024-01-05

Similar Documents

Publication Publication Date Title
CN112069929B (en) Unsupervised pedestrian re-identification method and device, electronic equipment and storage medium
CN111523621B (en) Image recognition method and device, computer equipment and storage medium
CN108140032B (en) Apparatus and method for automatic video summarization
CN111178197B (en) Mass R-CNN and Soft-NMS fusion based group-fed adherent pig example segmentation method
CN110263697A (en) Pedestrian based on unsupervised learning recognition methods, device and medium again
CN112131978B (en) Video classification method and device, electronic equipment and storage medium
CN108780519A (en) Structure learning in convolutional neural networks
CN108491766B (en) End-to-end crowd counting method based on depth decision forest
CN116935447B (en) Self-adaptive teacher-student structure-based unsupervised domain pedestrian re-recognition method and system
CN111814620A (en) Face image quality evaluation model establishing method, optimization method, medium and device
CN113076963B (en) Image recognition method and device and computer readable storage medium
CN106056165B (en) A kind of conspicuousness detection method based on super-pixel relevance enhancing Adaboost classification learning
CN112507912B (en) Method and device for identifying illegal pictures
CN109753884A (en) A kind of video behavior recognition methods based on key-frame extraction
CN112052771B (en) Object re-recognition method and device
WO2021243947A1 (en) Object re-identification method and apparatus, and terminal and storage medium
Song et al. Mesh saliency via weakly supervised classification-for-saliency CNN
CN115098732B (en) Data processing method and related device
CN109191485B (en) Multi-video target collaborative segmentation method based on multilayer hypergraph model
CN111597894A (en) Face database updating method based on face detection technology
CN109063790A (en) Object identifying model optimization method, apparatus and electronic equipment
CN109740527B (en) Image processing method in video frame
CN117892795A (en) Neural network model training method, device, terminal and storage medium
CN113762041A (en) Video classification method and device, computer equipment and storage medium
CN113705310A (en) Feature learning method, target object identification method and corresponding device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant