WO2021027193A1 - Face clustering method and apparatus, device and storage medium - Google Patents

Face clustering method and apparatus, device and storage medium

Info

Publication number
WO2021027193A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
picture
clusters
face feature
cluster
Prior art date
Application number
PCT/CN2019/123193
Other languages
English (en)
Chinese (zh)
Inventor
杨东泉
丁保剑
秦伟
刘伟
李德紘
张少文
Original Assignee
佳都新太科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 佳都新太科技股份有限公司
Publication of WO2021027193A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation

Definitions

  • The embodiments of the present application relate to the field of face recognition technology, and in particular to a face clustering method, apparatus, device, and storage medium.
  • Face clustering refers to grouping faces according to identity. Generally, face clustering is done by comparing all faces in a set pairwise and then, according to the similarity values obtained from the comparison, grouping together the faces that belong to the same identity to achieve clustering.
  • Face clustering usually involves two steps: face feature extraction, and clustering of the extracted features with a clustering algorithm.
  • For face feature extraction, traditional methods usually define some key points of the face manually and then extract the values at these key points from the picture as the features of the face.
  • Common clustering algorithms include K-means and DBSCAN.
  • General-purpose clustering algorithms tend to achieve good results on generic numerical clustering tasks, but their clustering effect in this specific business scenario is poor and their applicability is low.
  • the embodiments of the present invention provide a face clustering method, device, equipment and storage medium, which improve the efficiency and accuracy of face clustering.
  • an embodiment of the present invention provides a face clustering method, which includes:
  • The neighbor face set of each face picture is respectively determined as a cluster, and the clusters meeting a preset condition are merged.
  • an embodiment of the present invention also provides a face clustering device, which includes:
  • the residual network training module is used to train the residual network through the face data set
  • the feature extraction module is used to process the residual network to obtain a face feature extractor
  • the feature vector determining module is configured to input the face picture to be classified into the face feature extractor to obtain the face feature vector corresponding to each face picture;
  • the merging module is used to determine the neighbor face set of each face picture as a cluster respectively, and merge the clusters that meet the preset conditions.
  • an embodiment of the present invention also provides a device, which includes:
  • one or more processors;
  • Storage device for storing one or more programs
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the face clustering method according to the embodiment of the present invention.
  • The embodiments of the present invention also provide a storage medium containing computer-executable instructions, which are used to execute the face clustering method described in the embodiments of the present invention when the computer-executable instructions are executed by a computer processor.
  • A trained residual network is obtained through training on a face data set; the residual network is processed to obtain a face feature extractor; the face pictures to be classified are input to the face feature extractor to obtain the face feature vector corresponding to each face picture; the vector distance between each face feature vector and the other face feature vectors is calculated; the neighbor face set of each face picture is determined according to the vector distances; the neighbor face set of each face picture is determined as a cluster; and the clusters meeting the preset condition are merged.
  • The face features are extracted through the residual network in a data-driven way, without introducing human prior experience, which overcomes the limitations of artificially defined features.
  • The clustering method in this scheme requires little computation, and its iterative process converges quickly without loss of accuracy.
  • FIG. 1 is a flowchart of a face clustering method provided by an embodiment of the present invention
  • Figure 1a is a schematic structural diagram of a residual network provided by an embodiment of the present invention.
  • Figure 1b is a diagram of the internal structure of a residual network provided by an embodiment of the present invention.
  • Figure 1c is a structural diagram of a face feature extractor provided by an embodiment of the present invention.
  • FIG. 2 is a flowchart of another face clustering method provided by an embodiment of the present invention.
  • FIG. 3 is a flowchart of another face clustering method provided by an embodiment of the present invention.
  • FIG. 4 is a flowchart of another face clustering method provided by an embodiment of the present invention.
  • FIG. 5 is a structural block diagram of a face clustering apparatus provided by an embodiment of the present invention.
  • Fig. 6 is a schematic structural diagram of a device provided by an embodiment of the present invention.
  • Fig. 1 is a flowchart of a face clustering method provided by an embodiment of the present invention. This embodiment is applicable to face clustering.
  • the method can be executed by a computing device such as a server or a computer, and specifically includes the following steps:
  • Step S101 Train a residual network on a face data set to obtain a trained residual network.
  • the face data set used for training may be a public data set commonly used in the field of face recognition, such as the LFW data set.
  • The LFW data set was established to study face recognition in unconstrained environments; it contains more than 13,000 face images collected from the Internet, each tagged with a name, and about 1,680 of the people in it appear in two or more images. Other data sets such as IJB-B, CASIA-WebFace, and VGG-Face can also be used to train the residual network; this solution is not limited in this respect.
  • a specific residual network is first constructed.
  • the residual network is shown in Figure 1a.
  • Figure 1a is a schematic structural diagram of a residual network provided by an embodiment of the present invention, using a public face data set
  • the specific residual network is learned and trained to obtain a trained residual network.
  • the trained residual network can be used to perform face classification tasks.
  • The specific residual network consists of an input layer, N ResNet blocks, a fully connected layer, and a softmax (normalization) layer.
  • the internal structure of the ResNet block is shown in Figure 1b.
  • Figure 1b is a diagram of the internal structure of a residual network provided by an embodiment of the present invention.
  • conv(1*1) denotes a convolutional layer with a convolution kernel of size 1*1;
  • the BN layer performs batch normalization;
  • ReLU is a commonly used neural network activation function;
  • the symbol "+" denotes an element-wise vector addition.
  • the fully connected layer uses 1024 neural network nodes.
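As a concrete illustration of the block just described (conv(1*1), BN, ReLU, joined to the input by the "+" skip connection), the following NumPy sketch traces the data flow. It is a sketch only: the number of convolutions per block and the placement of the final ReLU are assumptions, since Figure 1b itself is not reproduced in this extract.

```python
import numpy as np

def conv1x1(x, w):
    # x: (H, W, C_in), w: (C_in, C_out). A 1*1 convolution is simply a
    # per-pixel linear map across channels.
    return x @ w

def batch_norm(x, eps=1e-5):
    # Normalize each channel to zero mean / unit variance. For this sketch
    # the statistics are taken from the input itself (inference-style).
    mu = x.mean(axis=(0, 1), keepdims=True)
    var = x.var(axis=(0, 1), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def relu(x):
    return np.maximum(x, 0.0)

def resnet_block(x, w1, w2):
    # conv(1*1) -> BN -> ReLU -> conv(1*1) -> BN, then the "+" skip
    # connection adds the block input to the branch output. The trailing
    # ReLU after the addition is an assumption of this sketch.
    y = relu(batch_norm(conv1x1(x, w1)))
    y = batch_norm(conv1x1(y, w2))
    return relu(y + x)
```

The channel counts of `w1` and `w2` must keep the branch output shape equal to the input shape so the "+" addition is well defined.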
  • Step S102 Process the residual network to obtain a face feature extractor, and input a face picture to be classified into the face feature extractor to obtain a face feature vector corresponding to each face picture.
  • The normalization layer of the residual network is removed to obtain a face feature extractor, as shown in FIG. 1c, which is a structural diagram of a face feature extractor provided by an embodiment of the present invention.
  • input corresponds to the input face picture
  • the fully connected layer has 1024 nodes, that is, a vector of 1024 values is output for each input picture as the face feature vector corresponding to the face picture.
  • Step S103 Calculate the vector distance between each face feature vector and other face feature vectors, and determine the neighbor face set of each face picture according to the vector distance.
  • The vector distance between each face feature vector and the other face feature vectors is calculated according to the following formula:
  • a and b represent two different face pictures, and a_i and b_i are the i-th components of the face feature vectors corresponding to each picture.
  • The above formula considers not only the directional similarity of the face feature vectors but also the difference between the feature vector values, which makes the vector distance measurement more reasonable. It should be noted that other existing vector distance formulas can also be used in this solution, but their effect is not as good as the above formula.
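The distance formula itself is not reproduced in this extract. As a hedged illustration only, a distance that combines a cosine term (direction similarity) with a normalized magnitude-difference term (value difference), as the paragraph above describes, could look like the following; this is an assumption, not the patent's actual formula.

```python
import numpy as np

def vector_distance(a, b):
    # Hypothetical stand-in for the patent's (elided) formula:
    # the cosine term captures direction similarity, the normalized
    # difference term captures the gap between feature values.
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    diff = np.linalg.norm(a - b) / (np.linalg.norm(a) + np.linalg.norm(b))
    return (1.0 - cos) + diff
```

Two identical vectors give distance 0; two vectors with the same direction but different magnitudes still get a positive distance from the difference term, which is the behavior the paragraph attributes to the patented formula.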
  • The process of determining the neighbor face set of each face picture according to the vector distance may be as follows: according to a formula,
  • the vector distances are normalized, and the face pictures whose normalized distance is smaller than a first preset threshold are determined as the neighbor face set. The first preset threshold may be 0.25 (and can be adjusted according to actual calculation needs), where N represents the number of samples, a positive integer greater than 1.
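The neighbor-set step above can be sketched as follows. The patent's normalization formula is not reproduced in this extract, so this sketch assumes a simple min-max rescaling to [0, 1] before applying the first preset threshold of 0.25.

```python
import numpy as np

def neighbor_sets(distances, threshold=0.25):
    # distances: (N, N) symmetric matrix of pairwise vector distances.
    # Min-max rescaling is an assumption standing in for the patent's
    # (elided) normalization formula.
    d = np.asarray(distances, dtype=float)
    norm = (d - d.min()) / (d.max() - d.min() + 1e-12)
    # Each picture's neighbor face set: all pictures whose normalized
    # distance falls below the threshold (the picture itself is at 0).
    return [set(np.flatnonzero(row < threshold)) for row in norm]
```

Each returned set becomes one initial cluster in the merging step that follows.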
  • Step S104 Determine the neighbor face set of each face picture as a cluster respectively, and merge the clusters that meet the preset condition.
  • The preset condition may be that the similarity between clusters is greater than a second preset threshold. For example, according to a formula,
  • the second preset threshold may be 0.7, where A and B represent the sets corresponding to two different clusters.
  • The clusters are first initialized: the neighbor face set of each face picture is determined to be a separate cluster, and these clusters form a cluster list.
  • The specific merging process may be: take a cluster from the cluster list, calculate the similarity between this cluster and the other clusters in the list, merge whenever the merging condition is met, then calculate the similarity between the merged cluster and the remaining clusters in the list, and so on until all clusters in the list have been traversed. Then take the second cluster from the cluster list.
  • If that cluster has already been merged, take the next cluster in the list until an unmerged cluster is found, then calculate the similarity between it and the other clusters in the list, judge whether the merging condition is satisfied, and merge if it is. These merging steps are repeated until the number of clusters in one round of iteration decreases by less than 5%, at which point clustering is determined to be complete.
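The initialization and iterative merging loop just described can be sketched as below. The Jaccard overlap used as the inter-cluster similarity is an assumption, since the patent's similarity formula over the sets A and B is not reproduced in this extract; the 0.7 threshold and the 5% stopping rule come from the text above.

```python
def jaccard(a, b):
    # Hypothetical overlap measure on the sets A and B, standing in for
    # the patent's (elided) inter-cluster similarity formula.
    return len(a & b) / len(a | b)

def merge_clusters(clusters, threshold=0.7, stop_ratio=0.05):
    # clusters: list of sets (one neighbor face set per picture).
    # Each round sweeps the cluster list, merging any cluster pair whose
    # similarity exceeds the threshold; iteration stops once a full round
    # shrinks the list by less than stop_ratio (5%).
    clusters = [set(c) for c in clusters]
    while True:
        before = len(clusters)
        merged, used = [], [False] * len(clusters)
        for i, c in enumerate(clusters):
            if used[i]:
                continue  # already absorbed into an earlier cluster
            acc = set(c)
            for j in range(i + 1, len(clusters)):
                if not used[j] and jaccard(acc, clusters[j]) > threshold:
                    acc |= clusters[j]
                    used[j] = True
            merged.append(acc)
        clusters = merged
        if before == 0 or (before - len(clusters)) / before < stop_ratio:
            return clusters
```

Because every picture starts as the center of its own neighbor set, the loop begins with N clusters and only ever reduces their number, matching the convergence behavior described later in the text.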
  • The face features extracted by the residual network are data-driven, requiring no human prior experience, and the residual network can easily discover characteristics of the data that manually defined features cannot capture. Artificially defined features are limited, and the more numerous and refined the defined features, the more effort is required; for the residual network, one only needs to increase the number of nodes to obtain more features efficiently.
  • the advantage of the clustering method in this scheme is that the amount of calculation is small, the convergence speed is fast in the iterative process, and the result accuracy is high.
  • The initialization in this scheme takes each sample as a center and selects its neighbor faces.
  • This method therefore initializes N (the number of samples) centers, and the subsequent process gradually reduces the number of clusters. The reason is that at the start the number of identities in the face set cannot be determined, and no prior experience is introduced. Moreover, in this method an element can initially appear in multiple clusters; N overlapping cluster regions are found, and whether clusters can be combined is decided according to the overlapping regions. By comparison, "Clustering Millions of Faces by Identity" loses accuracy during its computation, and its clustering effect is not as good as this solution.
  • FIG. 2 is a flowchart of another face clustering method provided by an embodiment of the present invention, and shows an optimized method for obtaining face feature vectors. As shown in Figure 2, the technical solution is as follows:
  • Step S201 Train a residual network on the face data set to obtain a trained residual network.
  • Step S202 Process the residual network to obtain a face feature extractor, crop each face picture to be classified to obtain multiple first enhanced pictures, and mirror the first enhanced pictures to obtain second enhanced pictures.
  • Specifically, resize each face image to be classified to 300*300 pixels, take a 240*240-pixel crop aligned to each of the 4 corners to obtain 4 crops, and then take another 240*240-pixel crop from the central area of the image to be classified.
  • Step S203 Input the first enhanced picture and the second enhanced picture to the face feature extractor, and average the output results to obtain a face feature vector corresponding to each face picture.
  • Specifically, the multiple first enhanced pictures and second enhanced pictures corresponding to each face picture are input to the face feature extractor, and the output results are averaged to obtain the face feature vector corresponding to that face picture. With the image enhancement method described above, 10 1024-dimensional face feature vectors are obtained for each face image to be classified; the values at corresponding positions of these 10 vectors are summed and averaged, the result is saved as a new 1024-dimensional vector, and this vector is determined as the face feature vector of the face image to be classified.
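The crop-and-mirror averaging of steps S202 and S203 can be sketched as follows; `extract` is a stand-in for the face feature extractor, and the assumption is a 300*300 input with four corner crops plus one center crop of 240*240, each also mirrored, giving 10 views.

```python
import numpy as np

def ten_crop_average(image, extract, crop=240, resize=300):
    # image: (resize, resize, C) array, already resized.
    # Four corner crops plus a center crop, each also mirrored
    # horizontally, give 10 views; the extractor's 10 feature
    # vectors are averaged into one.
    c = crop
    offsets = [(0, 0), (0, resize - c), (resize - c, 0),
               (resize - c, resize - c),
               ((resize - c) // 2, (resize - c) // 2)]
    views = []
    for top, left in offsets:
        patch = image[top:top + c, left:left + c]
        views.append(patch)
        views.append(patch[:, ::-1])  # horizontal mirror
    feats = np.stack([extract(v) for v in views])
    return feats.mean(axis=0)
```

Any callable that maps a 240*240 crop to a fixed-length vector (in the patent, the 1024-node extractor) can be plugged in as `extract`.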
  • Step S204 Calculate the vector distance between each face feature vector and other face feature vectors, and determine the neighbor face set of each face picture according to the vector distance.
  • Step S205 Determine the neighbor face set of each face picture as a cluster respectively, and merge the clusters that meet the preset conditions.
  • Fig. 3 is a flowchart of another face clustering method provided by an embodiment of the present invention, and provides an optimized scheme for cluster merging. As shown in Figure 3, the technical solution is as follows:
  • Step S301 A trained residual network is obtained through training on a face data set.
  • Step S302 Process the residual network to obtain a face feature extractor, and input the face picture to be classified into the face feature extractor to obtain a face feature vector corresponding to each face picture.
  • Step S303 Calculate the vector distance between each face feature vector and other face feature vectors, determine the neighbor face set of each face picture according to the vector distance, and determine the neighbor face set of each face picture as A cluster.
  • Step S304 It is judged whether the two clusters currently compared are in a subset relationship, if so, step S306 is executed, otherwise, step S305 is executed.
  • If the two clusters are not in a subset relationship, the subsequent comparison process is performed.
  • Step S305 It is judged whether the number of elements in the two clusters currently compared meets the preset ratio, if yes, step S307 is executed, and if not, step S308 is executed.
  • The preset ratio refers to the ratio of the numbers of elements in the two clusters; its range may be greater than or equal to 2 or less than or equal to 0.5.
  • Step S306 Combine the currently compared two clusters.
  • Step S307 Do not merge the clusters currently compared.
  • Step S308 Calculate the similarity between the two clusters currently compared, determine whether the calculation result is greater than the second preset threshold, if yes, perform step S306, otherwise, perform step S307.
  • In this embodiment, priority is given to judging whether the currently compared clusters are in a subset relationship and whether their size difference is large. If the subset relationship is satisfied, they are merged directly; if the size difference is large, the subsequent cluster-similarity calculation is skipped and no merging is performed. This further refines the cluster merging mechanism and improves the computational efficiency of face clustering.
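The decision order of steps S304 to S308 can be sketched as a single predicate. The similarity measure is passed in as a parameter because the patent's formula is not reproduced in this extract; the subset shortcut, the size-ratio cutoff (>= 2 or <= 0.5), and the 0.7 threshold come from the steps above.

```python
def should_merge(a, b, similarity, threshold=0.7):
    # a, b: clusters as Python sets of picture ids.
    # S304 -> S306: a subset relation merges directly.
    if a <= b or b <= a:
        return True
    # S305 -> S307: a large size imbalance refuses the merge without
    # computing the similarity at all.
    ratio = len(a) / len(b)
    if ratio >= 2 or ratio <= 0.5:
        return False
    # S308: otherwise fall back to the inter-cluster similarity test.
    return similarity(a, b) > threshold
```

Skipping the similarity computation in the two shortcut branches is exactly where the text above locates the efficiency gain.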
  • FIG. 4 is a flowchart of another face clustering method provided by an embodiment of the present invention, and shows an optimized face clustering merging method. As shown in Figure 4, the technical solution is as follows:
  • Step S401 Train a residual network on the face data set to obtain a trained residual network.
  • Step S402 Process the residual network to obtain a face feature extractor, and input the face picture to be classified into the face feature extractor to obtain a face feature vector corresponding to each face picture.
  • Step S403 Calculate the vector distance between each face feature vector and other face feature vectors, and determine the neighbor face set of each face picture according to the vector distance.
  • Step S404 Determine the neighbor face set of each face picture as a cluster respectively, and merge the clusters that meet the preset condition.
  • Step S405 Determine the duplicate face pictures appearing in the merged clusters, and delete the duplicate face pictures appearing in the non-largest clusters.
  • Specifically, when a duplicate face picture is found, its identifier is obtained, all clusters containing that identifier are determined, and the largest cluster containing it is found; the duplicate face picture is retained in the largest cluster and deleted from the remaining clusters. This operation is repeated until all duplicate face pictures have been removed. Optionally, if a cluster becomes empty after duplicate face pictures are deleted, that cluster is deleted as well.
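Step S405's deduplication can be sketched as follows: every picture id appearing in more than one cluster is kept only in the largest cluster containing it, and clusters left empty are dropped, as described above. The tie-break when two holders have equal size is an assumption (the first one found wins).

```python
def deduplicate(clusters):
    # clusters: list of sets of picture ids (after merging).
    clusters = [set(c) for c in clusters]
    ids = set().union(*clusters) if clusters else set()
    for pid in ids:
        holders = [c for c in clusters if pid in c]
        if len(holders) > 1:
            keep = max(holders, key=len)  # largest cluster keeps the picture
            for c in holders:
                if c is not keep:
                    c.discard(pid)
    # drop clusters that became empty after deduplication
    return [c for c in clusters if c]
```

After this pass each picture id belongs to exactly one cluster, which is the final clustering result.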
  • FIG. 5 is a structural block diagram of a face clustering device provided by an embodiment of the present invention.
  • the device is used to execute the face clustering method provided in the above-mentioned embodiment, and has functional modules and beneficial effects corresponding to the execution method.
  • the device specifically includes: a residual network training module 101, a feature extraction module 102, a feature vector determination module 103, a vector distance calculation module 104, and a merging module 105, where:
  • the residual network training module 101 is used to obtain a trained residual network through training on a face data set
  • the feature extraction module 102 is configured to process the residual network to obtain a face feature extractor
  • the feature vector determining module 103 is configured to input the face picture to be classified into the face feature extractor to obtain the face feature vector corresponding to each face picture;
  • the vector distance calculation module 104 is configured to calculate the vector distance between each face feature vector and other face feature vectors, and determine the neighbor face set of each face picture according to the vector distance;
  • the merging module 105 is configured to determine the neighbor face set of each face picture as a cluster respectively, and merge the clusters that meet the preset conditions.
  • A trained residual network is obtained through training on a face data set; the residual network is processed to obtain a face feature extractor; the face pictures to be classified are input to the face feature extractor to obtain the face feature vector corresponding to each face picture; the vector distance between each face feature vector and the other face feature vectors is calculated; the neighbor face set of each face picture is determined according to the vector distances; the neighbor face set of each face picture is determined as a cluster; and the clusters meeting the preset conditions are merged.
  • The face features are extracted through the residual network in a data-driven way, without introducing human prior experience, which overcomes the limitations of artificially defined features.
  • The clustering method in this scheme requires little computation, and its iterative process converges quickly without loss of accuracy.
  • the feature vector determining module 103 is specifically configured to:
  • the first enhanced picture and the second enhanced picture are input to the face feature extractor, and the output results are averaged to obtain a face feature vector corresponding to each face picture.
  • the face feature vector includes 1024 numerical values
  • the vector distance calculation module 104 is specifically configured to:
  • the vector distance is normalized, and the face pictures that are smaller than the first preset threshold in the processing result are determined as the neighbor face set.
  • The first preset threshold may be 0.25, where N represents the number of samples, a positive integer greater than 1.
  • the merging module 105 is specifically configured to:
  • the inter-cluster similarity between different clusters is calculated, and the two clusters whose inter-cluster similarity is greater than a second preset threshold are merged.
  • The second preset threshold may be 0.7, where A and B represent the sets corresponding to two different clusters.
  • the merging module 105 is further configured to:
  • the duplicate face pictures appearing in the merged clusters are determined; the duplicate face pictures appearing in the non-maximum clusters are deleted.
  • FIG. 6 is a schematic structural diagram of a device provided by an embodiment of the present invention.
  • The device includes a processor 201, a memory 202, an input device 203, and an output device 204; the number of processors 201 in the device may be one or more, and one processor 201 is taken as an example in FIG. 6; the processor 201, the memory 202, the input device 203, and the output device 204 in the device may be connected by a bus or in other ways, and connection by a bus is taken as an example in FIG. 6.
  • the memory 202 can be used to store software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the face clustering method in the embodiment of the present invention.
  • the processor 201 executes various functional applications and data processing of the device by running software programs, instructions, and modules stored in the memory 202, that is, realizes the aforementioned face clustering method.
  • the memory 202 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the terminal, etc.
  • the memory 202 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • the memory 202 may further include a memory remotely provided with respect to the processor 201, and these remote memories may be connected to the device through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • the input device 203 can be used to receive input digital or character information, and generate key signal input related to user settings and function control of the device.
  • the output device 204 may include a display device such as a display screen.
  • An embodiment of the present invention also provides a storage medium containing computer-executable instructions, which are used to execute a face clustering method when executed by a computer processor, the method including:
  • the neighbor face sets of each face picture are respectively determined as a cluster, and the clusters meeting the preset conditions are merged.
  • the input of the face picture to be classified into the face feature extractor to obtain the face feature vector corresponding to each face picture includes:
  • the first enhanced picture and the second enhanced picture are input to the face feature extractor, and the output results are averaged to obtain a face feature vector corresponding to each face picture.
  • the face feature vector includes 1024 values
  • the calculation of the vector distance between each face feature vector and other face feature vectors includes:
  • the determining the neighbor face set of each face picture according to the vector distance includes:
  • the vector distance is normalized, and the face pictures that are smaller than the first preset threshold in the processing result are determined as the neighbor face set.
  • The first preset threshold may be 0.25, where N represents the number of samples, a positive integer greater than 1.
  • the merging clusters that meet a preset condition includes:
  • the inter-cluster similarity between different clusters is calculated, and the two clusters whose inter-cluster similarity is greater than a second preset threshold are merged.
  • The second preset threshold may be 0.7, where A and B represent the sets corresponding to two different clusters.
  • Before calculating the inter-cluster similarity between different clusters, the method further includes:
  • Calculating the similarity between different clusters includes:
  • The storage medium may be a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk, etc., and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute the methods described in the various embodiments of the present invention.
  • The various units and modules included are only divided according to functional logic, but the division is not limited to the above, as long as the corresponding functions can be realized;
  • the specific names of the functional units are only used to facilitate distinction from each other, and are not used to limit the protection scope of the embodiments of the present invention.

Abstract

The invention relates to a face clustering method and apparatus, a device, and a storage medium. The method comprises: training on a face data set to obtain a trained residual network (S101); processing the residual network to obtain a face feature extractor, and inputting face images to be classified into the face feature extractor to obtain the face feature vector corresponding to each face image (S102); calculating vector distances between each face feature vector and the other face feature vectors, and determining a neighbor face set for each face image according to the vector distances (S103); and determining the neighbor face set of each face image as a cluster, and merging clusters meeting a preset condition (S104). The method improves the efficiency and accuracy of face clustering.
PCT/CN2019/123193 2019-08-12 2019-12-05 Face clustering method and apparatus, device and storage medium WO2021027193A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910737332.0A CN110232373B (zh) 2019-08-12 2019-08-12 Face clustering method, apparatus, device and storage medium
CN201910737332.0 2019-08-12

Publications (1)

Publication Number Publication Date
WO2021027193A1 (fr) 2021-02-18

Family

ID=67855263

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/123193 WO2021027193A1 (fr) 2019-08-12 2019-12-05 Face clustering method and apparatus, device and storage medium

Country Status (2)

Country Link
CN (1) CN110232373B (fr)
WO (1) WO2021027193A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112948614A (zh) * 2021-02-26 2021-06-11 北京百度网讯科技有限公司 Image processing method and apparatus, electronic device and storage medium
CN117152543A (zh) * 2023-10-30 2023-12-01 山东浪潮科学研究院有限公司 Image classification method, apparatus, device and storage medium
CN112948614B (zh) * 2021-02-26 2024-05-14 北京百度网讯科技有限公司 Image processing method and apparatus, electronic device and storage medium

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110232373B (zh) * 2019-08-12 2020-01-03 佳都新太科技股份有限公司 Face clustering method, apparatus, device and storage medium
CN110807115B (zh) * 2019-11-04 2022-03-25 浙江大华技术股份有限公司 Face retrieval method, apparatus, and storage device
CN111160468B (zh) * 2019-12-30 2024-01-12 深圳市商汤科技有限公司 Data processing method and apparatus, processor, electronic device, storage medium
CN111209862B (zh) * 2020-01-03 2023-09-29 深圳力维智联技术有限公司 Face image clustering method, apparatus and medium
CN111242040B (zh) * 2020-01-15 2022-08-02 佳都科技集团股份有限公司 Dynamic face clustering method, apparatus, device and storage medium
CN111310834A (zh) * 2020-02-19 2020-06-19 深圳市商汤科技有限公司 Data processing method and apparatus, processor, electronic device, storage medium
CN111428767B (zh) * 2020-03-17 2024-03-08 深圳市商汤科技有限公司 Data processing method and apparatus, processor, electronic device and storage medium
CN111738319B (zh) * 2020-06-11 2021-09-10 佳都科技集团股份有限公司 Clustering result evaluation method and apparatus based on large-scale samples
CN111709473B (zh) * 2020-06-16 2023-09-19 腾讯科技(深圳)有限公司 Object feature clustering method and apparatus
CN111738341B (zh) * 2020-06-24 2022-04-26 广州佳都科技软件开发有限公司 Distributed large-scale face clustering method and apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107274416A (zh) * 2017-06-13 2017-10-20 西北工业大学 Hyperspectral image salient object detection method based on spectral gradient and hierarchical structure
US20180114055A1 (en) * 2016-10-25 2018-04-26 VMAXX. Inc. Point to Set Similarity Comparison and Deep Feature Learning for Visual Recognition
CN109086697A (zh) * 2018-07-20 2018-12-25 腾讯科技(深圳)有限公司 Face data processing method, apparatus and storage medium
CN110008876A (zh) * 2019-03-26 2019-07-12 电子科技大学 Face verification method based on data augmentation and feature fusion
CN110232373A (zh) * 2019-08-12 2019-09-13 佳都新太科技股份有限公司 Face clustering method, apparatus, device and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104252616B (zh) * 2013-06-28 2018-01-23 广州华多网络科技有限公司 Face labeling method, apparatus, and device
CN106228188B (zh) * 2016-07-22 2020-09-08 北京市商汤科技开发有限公司 Clustering method, apparatus, and electronic device
CN107609466A (zh) * 2017-07-26 2018-01-19 百度在线网络技术(北京)有限公司 Face clustering method, apparatus, device, and storage medium
CN109086720B (zh) * 2018-08-03 2021-05-07 腾讯科技(深圳)有限公司 Face clustering method, apparatus, and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
AN, BIN ET AL.: "Application of SAM Algorithm in Multispectral Image Classification", CHINESE JOURNAL OF STEREOLOGY AND IMAGE ANALYSIS, vol. 10, no. 1, 31 March 2005 (2005-03-31), pages 55 - 60, XP055779707, ISSN: 1007-1482 *
ZHAO, YUHAI ET AL.: "A Graph Compression Based Overlapping Communities Detection Algorithm", JOURNAL OF NORTHEASTERN UNIVERSITY (NATURAL SCIENCE), vol. 36, no. 11, 30 November 2015 (2015-11-30), pages 1543 - 1547, XP055779709, ISSN: 1005-3026 *


Also Published As

Publication number Publication date
CN110232373A (zh) 2019-09-13
CN110232373B (zh) 2020-01-03

Similar Documents

Publication Publication Date Title
WO2021027193A1 (fr) Face clustering method and apparatus, device, and storage medium
WO2021143237A1 (fr) Dynamic face clustering method and apparatus, device, and storage medium
WO2020216227A1 (fr) Image classification method and apparatus, and data processing method and apparatus
WO2021139309A1 (fr) Recognition model training method, apparatus, and device, and storage medium
WO2021114625A1 (fr) Network structure construction method and apparatus for use in a multi-task scenario
WO2019228317A1 (fr) Facial recognition method and device, and computer-readable medium
WO2019120110A1 (fr) Image reconstruction method and device
WO2021022521A1 (fr) Data processing method, and neural network model training method and device
JP5282658B2 (ja) Image learning, automatic annotation, and retrieval method and apparatus
WO2021057056A1 (fr) Neural architecture search method, image processing method and device, and storage medium
WO2020233084A1 (fr) Image segmentation method and apparatus, and storage medium and terminal device
WO2021115242A1 (fr) Ultra-high-resolution image processing method and related apparatus
WO2020238515A1 (fr) Image matching method and apparatus, device, medium, and program product
WO2023065759A1 (fr) Video action recognition method based on a spatio-temporally enhanced network
WO2020125229A1 (fr) Feature fusion method and apparatus, electronic device, and storage medium
WO2021164269A1 (fr) Disparity map acquisition method and apparatus based on an attention mechanism
CN111553215A (zh) Person association method and apparatus, and graph convolutional network training method and apparatus
WO2023206944A1 (fr) Semantic segmentation method and apparatus, computer device, and storage medium
WO2024041479A1 (fr) Data processing method and apparatus
CN113704531A (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
WO2021147437A1 (fr) ID card edge detection method, device, and storage medium
Xu et al. A novel dynamic graph evolution network for salient object detection
CN114333062A (zh) Pedestrian re-identification model training method based on heterogeneous dual networks and feature consistency
WO2021217919A1 (fr) Facial action unit recognition method and apparatus, electronic device, and storage medium
CN112348008A (zh) Certificate information recognition method, apparatus, terminal device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19941261

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19941261

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as the address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 26.09.2022)
