CN111783641A - Face clustering method and device - Google Patents

Face clustering method and device

Info

Publication number
CN111783641A
Authority
CN
China
Prior art keywords
feature vector
face
user
target
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010614256.7A
Other languages
Chinese (zh)
Inventor
王森
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kingsoft Cloud Network Technology Co Ltd
Original Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kingsoft Cloud Network Technology Co Ltd filed Critical Beijing Kingsoft Cloud Network Technology Co Ltd
Priority to CN202010614256.7A
Publication of CN111783641A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a face clustering method and device. A server obtains a first feature vector, where the first feature vector is the feature vector of a face image of a first user and is obtained by a first device clustering feature vectors respectively corresponding to a plurality of face images of the first user. If a first set is stored in the server, the first set including feature vectors respectively corresponding to face images of one or more users, the server determines from the first set a first target feature vector having the highest similarity to the first feature vector. If the similarity between the first feature vector and the first target feature vector is higher than a first threshold, the server determines that the face image of the first user and the face image corresponding to the first target feature vector belong to the same user. With this scheme, pre-clustering is performed by the first device and the server performs face clustering directly on the pre-clustering results, so the server no longer has to cluster a large number of raw face images, and face clustering efficiency is improved.

Description

Face clustering method and device
Technical Field
The present application relates to the field of image recognition, and in particular, to a face clustering method and apparatus.
Background
With the development of science and technology, face recognition technology is applied more and more widely. Face clustering is an important application of face recognition: it refers to clustering a plurality of face images so that the face images belonging to the same user are grouped into one class.
Traditional face clustering algorithms have low efficiency, so a solution to this problem is urgently needed.
Disclosure of Invention
The technical problem to be solved by the application is the low efficiency of traditional face clustering algorithms, for which the application provides a face clustering method and a face clustering device.
In a first aspect, an embodiment of the present application provides a face clustering method, which is executed by a server, and the method includes:
acquiring a first feature vector from a first device, wherein the first feature vector is a feature vector of a face image of a first user, and the first feature vector is obtained by the first device clustering feature vectors respectively corresponding to a plurality of face images of the first user;
if a first set is stored in the server, where the first set includes feature vectors respectively corresponding to face images of one or more users, determining, from the first set, a first target feature vector having the highest similarity to the first feature vector;
and if the similarity between the first feature vector and the first target feature vector is higher than a first threshold value, determining that the face image of the first user and the face image corresponding to the first target feature vector belong to the same user.
In some embodiments, the determining a first target feature vector from the first set with the highest similarity to the first feature vector includes:
respectively calculating the distance between the first feature vector and each feature vector in the first set;
and determining the feature vector corresponding to the minimum distance in the first set as the first target feature vector.
In some embodiments, the method further comprises:
calculating a geometric median of the first feature vector and the first target feature vector;
and updating the first target feature vector, wherein the updated value is the calculated geometric median.
In some embodiments, the method further comprises:
acquiring a face image of the first user from the first device;
and saving the facial image of the first user to the first set.
In some embodiments, the method further comprises:
and if the similarity between the first feature vector and the first target feature vector is not higher than the first threshold, storing the first feature vector into the first set.
In some embodiments, the method further comprises:
if the first set does not exist in the server, the first feature vector is stored to obtain the first set.
In some embodiments, the first set includes one or more subsets, one subset corresponds to one user, and the one or more subsets include a first subset used for storing feature vectors corresponding to facial images of the first user, or used for storing facial images and corresponding feature vectors of the first user.
In a second aspect, an embodiment of the present application provides a face clustering method, which is executed by a first device, and includes:
performing the following operations on the feature vector corresponding to each face image among the feature vectors respectively corresponding to a plurality of acquired face images:
acquiring a second feature vector, wherein the second feature vector is a feature vector of a first face image, and the plurality of face images comprise the first face image;
if a second set is stored in the first device, where the second set includes feature vectors respectively corresponding to face images of one or more users, determining, from the second set, a second target feature vector having the highest similarity to the second feature vector;
if the similarity between the second feature vector and the second target feature vector is higher than a second threshold, determining that the face images corresponding to the first face image and the second target feature vector belong to the same user;
updating the second target feature vector according to the second feature vector;
wherein the plurality of face images are acquired by an image acquisition device within a preset time period.
In some embodiments, determining a second target feature vector from the second set with the highest similarity to the second feature vector comprises:
respectively calculating the distance between the second feature vector and each feature vector in the second set;
and determining the feature vector corresponding to the minimum distance in the second set as the second target feature vector.
In some embodiments, updating the second target feature vector based on the second feature vector comprises:
calculating an average of the second feature vector and the second target feature vector;
and updating the second target feature vector, wherein the updated value is the calculated average value.
In some embodiments, the method further comprises:
acquiring the first face image;
saving the first face image to the second set.
In some embodiments, the method further comprises:
and if the similarity between the second feature vector and the second target feature vector is not higher than the second threshold, storing the second feature vector into the second set.
In some embodiments, the method further comprises:
and if the second set does not exist in the first device, storing the second feature vector to obtain the second set.
In some embodiments, the second set includes one or more subsets, one subset corresponds to one user, and the one or more subsets include a second subset used for storing feature vectors corresponding to facial images of the first user, or the second subset is used for storing facial images of the first user and corresponding feature vectors.
In some embodiments, the method further comprises:
and sending the feature vectors in the second set to a server, or correspondingly sending the feature vectors in the second set and the face images corresponding to the feature vectors to the server.
In a third aspect, an embodiment of the present application provides a face clustering device, which is applied to a server, and the device includes:
a first obtaining unit, configured to obtain a first feature vector from a first device, where the first feature vector is a feature vector of a face image of a first user, and the first feature vector is obtained by the first device clustering feature vectors respectively corresponding to a plurality of face images of the first user;
a first determining unit, configured to determine, if a first set is stored in the server, where the first set includes feature vectors corresponding to face images of one or more users, a first target feature vector having a highest similarity to the first feature vector from the first set;
and the second determining unit is used for determining that the face image of the first user and the face image corresponding to the first target feature vector belong to the same user if the similarity between the first feature vector and the first target feature vector is higher than a first threshold.
In some embodiments, the first determining unit is configured to:
respectively calculating the distance between the first feature vector and each feature vector in the first set;
and determining the feature vector corresponding to the minimum distance in the first set as the first target feature vector.
In some embodiments, the apparatus further comprises:
a first calculation unit configured to calculate a geometric median of the first feature vector and the first target feature vector;
and the first updating unit is used for updating the first target feature vector, and the updated value is the calculated geometric median.
In some embodiments, the apparatus further comprises:
a second acquisition unit, configured to acquire a face image of the first user from the first device;
and the first storage unit is used for storing the face image of the first user to the first set.
In some embodiments, the apparatus further comprises:
a second storing unit, configured to store the first feature vector into the first set if the similarity between the first feature vector and the first target feature vector is not higher than the first threshold.
In some embodiments, the apparatus further comprises:
a third storing unit, configured to store the first feature vector to obtain the first set if the first set does not exist in the server.
In some embodiments, the first set includes one or more subsets, one subset corresponds to one user, and the one or more subsets include a first subset used for storing feature vectors corresponding to facial images of the first user, or used for storing facial images and corresponding feature vectors of the first user.
In a fourth aspect, an embodiment of the present application provides a face clustering device, which is applied to a first device, and is configured to perform face clustering on multiple face images acquired by an image acquisition device within a preset time period, where the device includes:
a third obtaining unit, configured to obtain a second feature vector, where the second feature vector is a feature vector of a first face image, and the plurality of face images include the first face image;
a third determining unit, configured to determine, if a second set is stored in the first device, where the second set includes feature vectors corresponding to face images of one or more users, a second target feature vector having a highest similarity to the second feature vector from the second set;
a fourth determining unit, configured to determine that the first facial image and the facial image corresponding to the second target feature vector belong to the same user if the similarity between the second feature vector and the second target feature vector is higher than a second threshold;
and the second updating unit is used for updating the second target feature vector according to the second feature vector.
In some embodiments, the third determining unit is configured to:
respectively calculating the distance between the second feature vector and each feature vector in the second set;
and determining the feature vector corresponding to the minimum distance in the second set as the second target feature vector.
In some embodiments, the second updating unit is configured to:
calculating an average of the second feature vector and the second target feature vector;
and updating the second target feature vector, wherein the updated value is the calculated average value.
In some embodiments, the apparatus further comprises:
a fourth acquisition unit configured to acquire the first face image;
a fourth saving unit configured to save the first face image to the second set.
In some embodiments, the apparatus further comprises:
a fifth storing unit, configured to store the second feature vector into the second set if the similarity between the second feature vector and the second target feature vector is not higher than the second threshold.
In some embodiments, the apparatus further comprises:
a sixth storing unit, configured to store the second feature vector to obtain the second set if the second set does not exist in the first device.
In some embodiments, the second set includes one or more subsets, one subset corresponds to one user, and the one or more subsets include a second subset used for storing feature vectors corresponding to facial images of the first user, or the second subset is used for storing facial images of the first user and corresponding feature vectors.
In some embodiments, the apparatus further comprises:
and the sending unit is used for sending the feature vectors in the second set to a server, or correspondingly sending the feature vectors in the second set and the face images respectively corresponding to the feature vectors to the server.
In a fifth aspect, an embodiment of the present application provides an apparatus, including: a processor and a memory:
the memory is used for storing a computer program and transmitting the computer program to the processor;
the processor is adapted to perform the method of any of the above first aspects or to perform the method of any of the above second aspects in accordance with the computer program.
In a sixth aspect, embodiments of the present application provide a computer-readable storage medium for storing a computer program for executing the method of any one of the above first aspects, or for executing the method of any one of the above second aspects.
Compared with the prior art, the embodiment of the application has the following advantages:
In the embodiment of the application, the first device can perform pre-clustering on the face images acquired by the image acquisition device to obtain a first feature vector, where the first feature vector is the feature vector of a face image of a first user; the server then obtains the first feature vector and performs face clustering again by using the first feature vector. Specifically, after the server acquires the first feature vector, if a first set is stored in the server, where the first set includes feature vectors respectively corresponding to the face images of one or more users, the server determines, from the first set, a first target feature vector having the highest similarity to the first feature vector. If the similarity between the first feature vector and the first target feature vector is higher than a first threshold, it indicates that the face image of the first user and the face image corresponding to the first target feature vector belong to the same user. Therefore, with the scheme of the embodiment of the application, all face images acquired by the image acquisition device do not need to be sent to the server for the server to cluster; instead, the first device performs pre-clustering, and the server performs face clustering directly according to the pre-clustering results, so that face clustering based on a large number of face images by the server is avoided and the face clustering efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings can be obtained by those skilled in the art without creative effort.
Fig. 1 is a schematic diagram of an exemplary application scenario provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a face clustering method provided in the embodiment of the present application;
fig. 3 is a schematic flow chart of a face clustering method according to an embodiment of the present application;
fig. 4 is a signaling interaction diagram of a face clustering method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a face clustering device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a face clustering device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The inventor of the application found that, in traditional face clustering, the collected face images are sent by an image acquisition device to a server, and the server performs face clustering on the received face images. Specifically, the server may analyze the received face images to obtain their image features, and then perform face clustering according to the image features. Because the number of face images collected by an image acquisition device is large, and one server is usually connected to a plurality of image acquisition devices, the server needs to process a large number of face images when performing face clustering, so the face clustering efficiency is low. This can be understood with reference to fig. 1, which is a schematic diagram of an exemplary application scenario provided in the embodiment of the present application.
In the scenario shown in fig. 1, server 101 may interact with image capture device 102, image capture device 103, and image capture device 104 over a network. The image capture devices 102, 103, and 104 may capture face images and transmit the captured face images to the server 101 through the network, and the server 101 performs face clustering based on the received face images. Although fig. 1 shows only 3 image capture devices for ease of understanding, in practice the number of image capture devices may be much larger, for example several tens or even hundreds. This results in a very large number of face images being received by the server 101, and hence a large data processing load on the server 101, which makes face clustering inefficient.
In order to solve the above problem, an embodiment of the present application provides a face clustering method, which is described below with reference to the accompanying drawings.
Various non-limiting embodiments of the present application are described in detail below with reference to the accompanying drawings.
Exemplary method
Referring to fig. 2, the figure is a schematic flow chart of a face clustering method provided in the embodiment of the present application.
The method shown in fig. 2 may be performed by a server, which may be, for example, the server 101 shown in fig. 1. In this embodiment, the method may be implemented, for example, by the following steps S201-S203.
S201: The server obtains a first feature vector from a first device, where the first feature vector is a feature vector of a face image of a first user, and the first feature vector is obtained by the first device clustering feature vectors respectively corresponding to a plurality of face images of the first user.
In the embodiment of the application, the first feature vector is obtained by the first device after face clustering is performed on a plurality of face images acquired by an image acquisition device. The first device mentioned here may be the image acquisition device itself, or may be another device independent of the image acquisition device; this embodiment of the present application is not particularly limited. If the first device is independent of the image acquisition device, the image acquisition device may send the acquired face images to the first device, and the first device performs face clustering on the received face images. If the first device is the image acquisition device itself, the image acquisition device can perform face clustering directly on the face images it acquires, without sending the acquired face images to another device; this saves the time for transmitting the acquired face images, so the face clustering efficiency is higher.
With regard to a specific implementation manner of the first device for clustering the face images acquired by the image acquisition device, reference may be made to the following description part for fig. 3, which is not described in detail here.
S202: If a first set is stored in the server, where the first set includes feature vectors respectively corresponding to the face images of one or more users, the server determines, from the first set, a first target feature vector having the highest similarity to the first feature vector.
S203: if the similarity between the first feature vector and the first target feature vector is higher than a first threshold, the server determines that the face image of the first user and the face image corresponding to the first target feature vector belong to the same user.
Regarding S202 to S203, in this embodiment of the present application, the first set may be considered as a face gallery stored by the server, where the face gallery may include feature vectors corresponding to face images of several users respectively.
After the server acquires the first feature vector, the first feature vector may be matched with feature vectors in the first set to determine whether the first set includes a first target feature vector matched with a facial image of the first user. Specifically, the higher the similarity of two feature vectors is, the higher the possibility that the face images corresponding to the two feature vectors belong to the same user is. Therefore, in this embodiment of the present application, the server may determine, from the first set, the first target feature vector having the highest similarity with the first feature vector. Because the similarity between the first target feature vector and the first feature vector is highest, the probability that the face image corresponding to the first target feature vector belongs to the first user is also highest.
In one implementation of the embodiment of the present application, the similarity between feature vectors may be represented by considering the distance between the feature vectors. The smaller the distance between the two feature vectors is, the higher the similarity degree between the two feature vectors is; the larger the distance between two feature vectors, the lower the degree of similarity between the two feature vectors. Therefore, in a specific implementation, the server may, for example, calculate distances between the first feature vector and each feature vector in the first set, respectively, and determine a feature vector with the smallest corresponding distance as the first target feature vector.
Further, considering that there may also be a certain similarity between the faces of different users, in this embodiment of the application it may also be determined, in combination with the first threshold, whether the face image of the first user and the face image corresponding to the first target feature vector belong to the same user. The first threshold is a value less than 1 but close to 1; for example, the first threshold is equal to 0.9, or the first threshold is equal to 0.8. When the similarity between the first feature vector and the first target feature vector is higher than the first threshold, the two feature vectors can be considered sufficiently similar, and at this time it may be determined that the face image of the first user and the face image corresponding to the first target feature vector belong to the same user.
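As an illustration of S202-S203 only, the following sketch shows one minimal way a server could carry out this matching and threshold check. It assumes cosine similarity as the similarity measure (consistent with a threshold below but close to 1; the embodiment itself only speaks of picking the feature vector at the smallest distance), an in-memory list as the first set, and a threshold of 0.9; none of these choices are fixed by the application.

    import numpy as np

    FIRST_THRESHOLD = 0.9  # example value; the application only requires a value below but close to 1

    def cosine_similarity(a, b):
        # Cosine similarity between two feature vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def cluster_on_server(first_vector, first_set):
        # S202-S203: match the incoming first feature vector against the first set
        # and return the index of the user it is assigned to.
        if not first_set:                          # no first set stored yet: create it
            first_set.append(first_vector)
            return 0
        similarities = [cosine_similarity(first_vector, v) for v in first_set]
        best = int(np.argmax(similarities))        # first target feature vector
        if similarities[best] > FIRST_THRESHOLD:   # same user as the first target feature vector
            return best
        first_set.append(first_vector)             # otherwise store as a new user
        return len(first_set) - 1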
According to the above description, with the scheme of the embodiment of the application, all face images acquired by the image acquisition device do not need to be sent to the server for the server to cluster based on the received face images; instead, pre-clustering is performed by the first device, and the server performs face clustering directly according to the pre-clustering results (namely, the first feature vector), so that face clustering based on a large number of face images by the server is avoided and the face clustering efficiency is improved.
In an implementation manner of the embodiment of the application, after determining that the face image of the first user and the face image corresponding to the first target feature vector belong to the same user, the server may further update the first target feature vector in combination with the first feature vector, so that the updated first target feature vector incorporates the image information embodied by the first feature vector and can more accurately reflect the face features of the first user. Accordingly, when further feature vectors are subsequently received from the first device or other devices, face clustering can be performed based on the updated first target feature vector.
In this embodiment of the application, when the first target feature vector is updated in combination with the first feature vector in a specific implementation, the server may, for example, calculate a geometric median of the first feature vector and the first target feature vector, and determine the calculated geometric median as the updated first target feature vector. The geometric median value can minimize the sum of the first distance and the second distance, so that the geometric median value can accurately represent the facial features of the first user. The first distance is the distance between the geometric median and the first feature vector, and the second distance is the distance between the geometric median and the first target feature vector.
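A minimal sketch of this update step follows. For exactly two points, every point on the segment between them minimises the summed distance to both, so the geometric median is not unique; taking the midpoint is one natural symmetric choice and is an assumption of this sketch rather than something the application specifies.

    import numpy as np

    def update_first_target_vector(first_vector, first_target_vector):
        # Geometric-median update for the two-vector case: the midpoint lies on the
        # segment between the two vectors, so it minimises the sum of the first
        # distance and the second distance described above.
        return (first_vector + first_target_vector) / 2.0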
In an implementation manner of the embodiment of the application, when the first device sends the first feature vector to the server, the first device may also correspondingly send a facial image of the first user to the server, where the facial image of the first user may include one or more images, and the embodiment of the application is not particularly limited. For example, if the first feature vector is obtained by clustering a plurality of face images, the first device may send one or more of the plurality of face images to the server. Correspondingly, the server may correspondingly store the first target feature vector and the received face image of the first user.
Specifically, the first set includes, in addition to feature vectors corresponding to the face images of one or more users, face images of users corresponding to the feature vectors. In a possible implementation manner, the first set may include a plurality of subsets, one subset corresponds to one user, and if the first subset corresponds to the first user, the first subset stores the facial image of the first user and the feature vector of the facial image of the first user. In this embodiment of the application, if the server determines that the face image of the first user and the face image corresponding to the first target feature vector belong to the same user, and the server further receives the face image of the first user from the first device, the server may also store the face image of the first user into the first subset. In other words, the updated first target feature vector and the facial image of the first user are correspondingly stored in the first subset.
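Purely as an illustration of this per-user subset organisation, the first set could be laid out as below; the field names and the integer user identifier are assumptions of this sketch, not part of the application.

    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class UserSubset:
        # One subset of the first set: the representative feature vector for a user
        # plus the face images received for that user.
        feature_vector: np.ndarray
        face_images: list = field(default_factory=list)  # e.g. image file paths or encoded bytes

    # The first set: one subset per user, keyed by a server-assigned user identifier.
    first_set = {}  # dict mapping user id -> UserSubset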
In an implementation manner of the embodiment of the present application, if the similarity between the first feature vector and the first target feature vector is not higher than the first threshold, it indicates that no feature vector corresponding to the first user exists in the first set stored by the server. For this case, the server may save the first feature vector to the first set, so that when further feature vectors are subsequently received from the first device or other devices, face clustering may be performed based on the stored first feature vector. Or, if the server further receives a face image of the first user from the first device, the server may correspondingly store the first feature vector and the face image of the first user in the first set.
In an implementation manner of the embodiment of the present application, if the aforementioned first set does not exist in the server, for example because the first feature vector is the first feature vector the server has received, the server has not performed any face clustering operation before this. For this case, the server may save the first feature vector, thereby obtaining the first set. Or, if the server further receives a face image of the first user from the first device, the server may correspondingly store the face image of the first user and the first feature vector, thereby obtaining the first set. In this way, as further feature vectors are subsequently received from the first device or other devices, face clustering may be performed based on the first set.
Although the server does not need to analyze the face images themselves when performing face clustering, the server may still store the face images and their feature vectors as described above. This is done because, after face clustering, the face images of a user in various scenes may subsequently be output to facilitate further processing. For example, the face images of the user in various scenes can be output, the motion trajectory of the user can be determined by combining the shooting times of the various face images, and so on.
The face clustering method performed by the server is described above; the pre-clustering method performed by the first device is described below with reference to the accompanying drawings. Referring to fig. 3, the figure is a schematic flow chart of a face clustering method provided in the embodiment of the present application. The method shown in fig. 3 may be performed, for example, by the first device. Specifically, the first device may implement face clustering by performing the following S301 to S304 on each of the feature vectors respectively corresponding to a plurality of acquired face images, where the face images are images acquired by the same image acquisition device within a preset time period. The duration corresponding to the preset time period may be, for example, 60 seconds; that is, the first device performs face clustering on the images acquired by the image acquisition device within 60 seconds. In other words, the first device may repeat the following S301 to S304 with a period of 60 seconds.
The image capturing apparatus mentioned here may be, for example, any one of the image capturing apparatus 102, the image capturing apparatus 103, or the image capturing apparatus 104 shown in fig. 1. The first device may be the image capturing device itself or may be a device other than the image capturing device.
Before describing S301-S304, a method for acquiring a face image and acquiring a feature vector corresponding to the face image by a first device is described first.
Considering that in practical applications the frequency at which the image acquisition device captures images is high (for example, tens or even hundreds of images can be captured in one second) while the moving speed of a user is limited, a plurality of images captured by the image acquisition device within a short time (for example, within 1 second) can be regarded as belonging to the same user. Therefore, in the embodiment of the present application, when obtaining face images, a tracking patch (tracklet) may be obtained, and the face image with the best quality is selected from the tracking patch. Image quality can be determined from the detection confidence and the number of pixels: the higher the confidence and the more pixels, the higher the quality of the corresponding face image. The tracking patch refers to a set of face images acquired by an image acquisition device within a certain time period.
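The sketch below illustrates selecting the representative face from one tracking patch. The application only states that higher confidence and more pixels mean higher quality; combining them by multiplication into a single score is an assumption of this sketch.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class TrackedFace:
        image: np.ndarray   # H x W x 3 face crop
        confidence: float   # detector confidence, e.g. in [0, 1]

    def best_face_in_tracklet(tracklet):
        # Select the highest-quality face image from one tracking patch, scoring
        # quality as detection confidence multiplied by the number of pixels.
        def quality(face):
            pixels = face.image.shape[0] * face.image.shape[1]
            return face.confidence * pixels
        return max(tracklet, key=quality)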
After the face image is obtained, a feature vector corresponding to the face image may be determined by using a Convolutional Neural Network (CNN), specifically, the face image may be input into a CNN model, and the CNN model may output the feature vector corresponding to the face image.
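The application only states that a CNN maps a face image to a feature vector and does not name a particular network. The toy network below (its layer sizes, the 128-dimensional embedding, and the L2 normalisation are all assumptions) is included only to show the shape of that step.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FaceEmbeddingCNN(nn.Module):
        # Toy CNN that maps an aligned face crop to a fixed-length feature vector.
        def __init__(self, embedding_dim: int = 128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(64, embedding_dim)

        def forward(self, x):
            x = self.features(x).flatten(1)
            x = self.fc(x)
            return F.normalize(x, dim=1)  # unit-length vectors make cosine similarity a dot product

    # Usage: one 112x112 RGB face crop -> one 128-dimensional feature vector.
    model = FaceEmbeddingCNN().eval()
    with torch.no_grad():
        feature_vector = model(torch.rand(1, 3, 112, 112))[0]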
Next, specific implementations of S301-S304 are described.
S301: The first device obtains a second feature vector, where the second feature vector is the feature vector of a first face image.
In the embodiment of the present application, the second feature vector may be a feature vector obtained by inputting the first face image into the CNN model. The first face image is one of the face images acquired by the first device.
S302: If a second set is stored in the first device, where the second set includes feature vectors respectively corresponding to the face images of one or more users, the first device determines, from the second set, a second target feature vector having the highest similarity to the second feature vector.
S303: If the similarity between the second feature vector and the second target feature vector is higher than a second threshold, the first device determines that the first face image and the face image corresponding to the second target feature vector belong to the same user.
Regarding S302 to S303, it should be noted that, in this embodiment of the application, the second set may be considered as a face gallery stored in the first device, and the face gallery may include feature vectors corresponding to face images of several users respectively.
After the first device acquires the second feature vector, matching may be performed with the feature vectors in the second set to determine whether a second target feature vector matching the second feature vector is included in the second set. Specifically, the higher the similarity of two feature vectors is, the higher the possibility that the face images corresponding to the two feature vectors belong to the same user is. Therefore, in this embodiment of the present application, the first device may determine, from the second set, a second target feature vector having the highest similarity with the second feature vector. Because the similarity between the second target feature vector and the second feature vector is highest, the probability that the face image corresponding to the second target feature vector and the first face image belong to the first user is also highest.
In one implementation of the embodiment of the present application, the similarity between feature vectors may be represented by considering the distance between the feature vectors. The smaller the distance between the two feature vectors is, the higher the similarity degree between the two feature vectors is; the larger the distance between two feature vectors, the lower the degree of similarity between the two feature vectors. Therefore, in a specific implementation, the first device may, for example, calculate distances between the second feature vector and each feature vector in the second set, and determine the feature vector with the smallest corresponding distance as the second target feature vector.
Further, considering that there may exist a certain similarity between the faces of different users, in this embodiment of the present application it may also be determined, in combination with a second threshold, whether the first face image and the face image corresponding to the second target feature vector belong to the same user. The second threshold is a value less than 1 but close to 1; for example, the second threshold is equal to 0.9, or the second threshold is equal to 0.8. When the similarity between the second feature vector and the second target feature vector is higher than the second threshold, the two feature vectors can be considered sufficiently similar, and at this time it may be determined that the first face image and the face image corresponding to the second target feature vector belong to the same user.
S304: The first device updates the second target feature vector according to the second feature vector.
In the embodiment of the application, after the first device determines that the first face image and the face image corresponding to the second target feature vector belong to the same user, the second target feature vector may be updated according to the second feature vector, so that the updated second target feature vector can embody more face features of the first user, where the first face image is a face image of the first user. In this way, in the subsequent face clustering process, the clustering result can be more accurate.
In an implementation manner of the embodiment of the present application, when updating the second target feature vector in combination with the second feature vector is specifically implemented, for example, the first device may calculate an average value of the second feature vector and the second target feature vector, and determine the calculated average value as the updated second target feature vector. Because the first device carries out face clustering on the face images collected by the same image collecting device, the possibility that the face of the same user is collected by the same image collecting device is higher. Therefore, the average value of the second feature vector and the second target feature vector can be directly calculated, and the average value can accurately represent the face feature of the first user.
In an implementation manner of the embodiment of the application, when the first device performs face clustering based on the second feature vector, the first device may further obtain and store the first face image. Specifically, when the first device determines that the first face image and the face image corresponding to the second target feature vector belong to the same user, the first device may correspondingly store the updated second target feature vector and the updated first face image.
Specifically, the second set includes, in addition to feature vectors corresponding to the face images of one or more users, face images of users corresponding to the feature vectors. In one implementation, the second set may include a plurality of subsets, one subset corresponds to one user, and if the second target feature vector is stored in the second subset, for the second subset, the second subset further stores facial images of the user corresponding to the second target feature vector. In this embodiment of the application, if the first device determines that the first face image and the face image corresponding to the second target feature vector belong to the same user, the first device may also store the first face image in the second subset.
In an implementation manner of the embodiment of the present application, if the similarity between the second feature vector and the second target feature vector is not higher than the second threshold, it indicates that there is no feature vector in the second set stored by the first device that belongs to the same user as the first face image. For this case, the first device may store the second feature vector in the second set, so that when face clustering continues on the face images captured by the image acquisition device, face clustering may be performed based on the second feature vector. Alternatively, the first device may correspondingly store the second feature vector and the first face image in the second set.
In an implementation manner of the embodiment of the present application, if the second set does not exist in the first device, for example because the second feature vector is the first feature vector the first device has obtained, the first device has not performed any face clustering operation before this. For this case, the first device may save the second feature vector, thereby obtaining the second set. Alternatively, the first device may save the first face image and the second feature vector, thereby obtaining the second set. In this way, when face clustering continues on the face images captured by the image acquisition device, face clustering can be performed based on the obtained second set.
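Putting S301-S304 together, a minimal device-side pre-clustering loop for one preset time period could look like the sketch below; the list layout of the second set, the cosine similarity measure, and the 0.9 threshold are assumptions of this sketch.

    import numpy as np

    SECOND_THRESHOLD = 0.9  # example value; the application only requires a value below but close to 1

    def pre_cluster_on_device(feature_vectors):
        # S301-S304 for one preset time period: build the second set from the
        # feature vectors of the face images captured in that period.
        second_set = []
        for vector in feature_vectors:                           # S301
            if not second_set:                                   # no second set stored yet
                second_set.append(vector)
                continue
            sims = [float(np.dot(vector, v) / (np.linalg.norm(vector) * np.linalg.norm(v)))
                    for v in second_set]
            best = int(np.argmax(sims))                          # S302: second target feature vector
            if sims[best] > SECOND_THRESHOLD:                    # S303: same user
                second_set[best] = (vector + second_set[best]) / 2.0  # S304: average update
            else:
                second_set.append(vector)                        # new user within this period
        return second_set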
In an implementation manner of the embodiment of the present application, after the first device performs the above S301 to S304 on each face image, a second set may be obtained. It is understood that the second set may include a plurality of feature vectors, and each of these feature vectors may be a second feature vector that was directly stored in the second set, or an updated second target feature vector as described above. After the first device obtains the second set, each feature vector in the second set may be sent to the server; alternatively, the first device may correspondingly send each feature vector in the second set and the face image corresponding to each feature vector to the server, so that the server performs the above S201-S203 based on the received feature vectors.
For convenience of description, the following introduces the face clustering method provided in the embodiment of the present application, taking the first device as the image capture device itself, and combining with the application scenario shown in fig. 1.
Referring to fig. 4, the figure is a signaling interaction diagram of a face clustering method according to an embodiment of the present application. The method shown in fig. 4 can be implemented, for example, by the following S401 to S405.
S401: the image capturing device 102 acquires N face images captured by the image capturing device within a preset time period.
The N face images mentioned here include the first face image in S301 above. The N face images mentioned here may be obtained from N tracking patches, for example. One tracking patch corresponds to one of the N face images.
S402: The image capture device 102 extracts the feature vectors of the N face images respectively.
S403: the image acquisition device 102 performs face clustering on the N face images based on the feature vectors of the N face images to obtain a second set.
Regarding the specific implementation manner of S403, reference may be made to the relevant description portions above for S302-S304, and the description is not repeated here.
S404: the image capturing device 102 sends the feature vectors of the second set to the server 101.
The image capture device 102 may send the feature vectors in the second set to the server 101 via Kafka, RabbitMQ, or the like.
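As one possible realisation of this upload step, the sketch below uses the kafka-python client; the broker address, topic name, and JSON serialisation are assumptions made for illustration (the embodiment only names Kafka and RabbitMQ as possible transports).

    import json
    from kafka import KafkaProducer  # kafka-python client

    producer = KafkaProducer(
        bootstrap_servers="kafka.example.internal:9092",  # assumed broker address
        value_serializer=lambda payload: json.dumps(payload).encode("utf-8"),
    )

    def send_second_set(device_id, second_set):
        # Send the pre-clustered feature vectors of one period to the server side.
        for vector in second_set:
            producer.send("face-feature-vectors", {          # assumed topic name
                "device_id": device_id,
                "feature_vector": [float(x) for x in vector],
            })
        producer.flush()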
The feature vectors in the second set include the first feature vector mentioned in S201.
S405: the server 101 performs face clustering based on the received feature vectors.
In a specific implementation of S405, the server may perform S202-S203 provided in the above embodiment based on each received feature vector; the description is not repeated here.
Fig. 4 is only shown for convenience of understanding, and in practice, the image capturing apparatus 103 and the image capturing apparatus 104 may also perform the above S401 to S404, which will not be described in detail.
Exemplary device
Based on the methods provided by the above embodiments, the embodiments of the present application also provide corresponding apparatuses, which are described below with reference to the accompanying drawings.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a face clustering device according to an embodiment of the present application. The apparatus 500 shown in fig. 5 may be applied to a server, and is configured to perform the face clustering method performed by the server according to the foregoing embodiment, where the apparatus 500 may specifically include: a first acquisition unit 501, a first determination unit 502 and a second determination unit 503.
A first obtaining unit 501, configured to obtain a first feature vector from a first device, where the first feature vector is a feature vector of a face image of a first user, and the first feature vector is obtained by clustering, by the first device, feature vectors corresponding to multiple face images of the first user, respectively;
a first determining unit 502, configured to determine, if a first set is stored in the server, where the first set includes feature vectors corresponding to face images of one or more users, a first target feature vector with a highest similarity to the first feature vector from the first set;
a second determining unit 503, configured to determine that the face image of the first user and the face image corresponding to the first target feature vector belong to the same user if the similarity between the first feature vector and the first target feature vector is higher than a first threshold.
In some embodiments, the first determining unit 502 is configured to:
respectively calculating the distance between the first feature vector and each feature vector in the first set;
and determining the feature vector corresponding to the minimum distance in the first set as the first target feature vector.
In some embodiments, the apparatus 500 further comprises:
a first calculation unit configured to calculate a geometric median of the first feature vector and the first target feature vector;
and the first updating unit is used for updating the first target feature vector, and the updated value is the calculated geometric median.
In some embodiments, the apparatus 500 further comprises:
a second acquisition unit, configured to acquire a face image of the first user from the first device;
and the first storage unit is used for storing the face image of the first user to the first set.
In some embodiments, the apparatus 500 further comprises:
a second storing unit, configured to store the first feature vector into the first set if the similarity between the first feature vector and the first target feature vector is not higher than the first threshold.
In some embodiments, the apparatus 500 further comprises:
a third storing unit, configured to store the first feature vector to obtain the first set if the first set does not exist in the server.
In some embodiments, the first set includes one or more subsets, one subset corresponds to one user, and the one or more subsets include a first subset used for storing feature vectors corresponding to facial images of the first user, or used for storing facial images and corresponding feature vectors of the first user.
Since the apparatus 500 is an apparatus corresponding to the method executed by the server provided in the above method embodiment, and the specific implementation of each unit of the apparatus 500 is the same as that of the above method embodiment, for the specific implementation of each unit of the apparatus 500, reference may be made to the relevant description part of the above method embodiment, and details are not repeated here.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a face clustering device according to an embodiment of the present application. The apparatus 600 shown in fig. 6 may be applied to a first device, and is configured to execute the face clustering method executed by the first device according to the foregoing embodiment, so as to perform face clustering on multiple face images acquired by an image acquisition device within a preset time period, where the apparatus 600 may specifically include: a third acquisition unit 601, a third determination unit 602, a fourth determination unit 603, and a second update unit 604.
A third obtaining unit 601, configured to obtain a second feature vector, where the second feature vector is a feature vector of a first face image, and the plurality of face images include the first face image;
a third determining unit 602, configured to determine, if a second set is stored in the first device, where the second set includes feature vectors corresponding to face images of one or more users, a second target feature vector with a highest similarity to the second feature vector from the second set;
a fourth determining unit 603, configured to determine that the first facial image and the facial image corresponding to the second target feature vector belong to the same user if the similarity between the second feature vector and the second target feature vector is higher than a second threshold;
a second updating unit 604, configured to update the second target feature vector according to the second feature vector.
In some embodiments, the third determining unit 602 is configured to:
respectively calculating the distance between the second feature vector and each feature vector in the second set;
and determining the feature vector corresponding to the minimum distance in the second set as the second target feature vector.
In some embodiments, the second updating unit 604 is configured to:
calculating an average of the second feature vector and the second target feature vector;
and updating the second target feature vector, wherein the updated value is the calculated average value.
In some embodiments, the apparatus 600 further comprises:
a fourth acquisition unit configured to acquire the first face image;
a fourth saving unit configured to save the first face image to the second set.
In some embodiments, the apparatus 600 further comprises:
a fifth storing unit, configured to store the second feature vector into the second set if the similarity between the second feature vector and the second target feature vector is not higher than the second threshold.
In some embodiments, the apparatus 600 further comprises:
a sixth storing unit, configured to store the second feature vector to obtain the second set if the second set does not exist in the first device.
In some embodiments, the second set includes one or more subsets, one subset corresponds to one user, and the one or more subsets include a second subset used for storing feature vectors corresponding to facial images of the first user, or the second subset is used for storing facial images of the first user and corresponding feature vectors.
In some embodiments, the apparatus 600 further comprises:
and the sending unit is used for sending the feature vectors in the second set to a server, or correspondingly sending the feature vectors in the second set and the face images respectively corresponding to the feature vectors to the server.
Since the apparatus 600 is a device corresponding to the method executed by the first device provided in the above method embodiment, and the specific implementation of each unit of the apparatus 600 is the same as that of the above method embodiment, reference may be made to the relevant description part of the above method embodiment for the specific implementation of each unit of the apparatus 600, and details are not repeated here.
The embodiment of the present application further provides a device, which may be configured to execute the above face clustering method performed by the server, and may also be configured to execute the above face clustering method performed by the first device, and the device is briefly described next.
Reference is made to FIG. 7, which is a schematic structural diagram of an apparatus provided in an embodiment of the present application.
As shown in FIG. 7, the apparatus includes a processor 70 and a memory 71. The memory 71 stores machine-executable instructions that can be executed by the processor 70, and the processor 70 executes the machine-executable instructions to implement the face clustering method described above.
The apparatus shown in FIG. 7 further includes a bus 72 and a communication interface 73; the processor 70, the communication interface 73 and the memory 71 are connected by the bus 72.
The memory 71 may include a high-speed random access memory (RAM) and may further include a non-volatile memory, for example at least one disk memory. The communication connection between a network element of the system and at least one other network element is implemented through at least one communication interface 73, which may be wired or wireless, and may use the Internet, a wide area network, a local area network, a metropolitan area network, or the like. The bus 72 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one double-headed arrow is shown in FIG. 7, but this does not indicate that there is only one bus or only one type of bus.
The processor 70 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 70 or by instructions in the form of software. The processor 70 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in connection with the embodiments of the present application may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an erasable programmable read-only memory, or a register. The storage medium is located in the memory 71, and the processor 70 reads the information in the memory 71 and completes the steps of the method of the foregoing embodiments in combination with its hardware.
An embodiment of the present application further provides a computer-readable storage medium for storing a computer program, where the computer program is used to execute the face clustering method performed by the server as provided in the above method embodiment.
An embodiment of the present application further provides a computer-readable storage medium for storing a computer program, where the computer program is used to execute the face clustering method performed by the first device as provided in the above method embodiment.
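For orientation, the server-side face clustering method stored on such a medium can be sketched in Python as follows. The names, the cosine-similarity measure, the 0.8 threshold, and the use of the midpoint as the geometric median of exactly two vectors (the geometric median of two points is any point on the segment joining them) are assumptions for illustration only.

```python
import numpy as np

def server_side_match(first_vector, first_set, first_threshold=0.8):
    """Match a pre-clustered feature vector received from the first device
    against the first set; return the index of the matched entry, or None
    if the vector was stored as a new user."""
    first_vector = np.asarray(first_vector, dtype=float)
    if not first_set:                       # no first set yet: create it
        first_set.append(first_vector)
        return None
    distances = [np.linalg.norm(first_vector - v) for v in first_set]
    idx = int(np.argmin(distances))         # first target feature vector
    target = first_set[idx]
    sim = float(np.dot(first_vector, target) /
                (np.linalg.norm(first_vector) * np.linalg.norm(target) + 1e-12))
    if sim > first_threshold:
        # same user: update the first target feature vector; the midpoint is
        # taken as a representative geometric median of the two vectors
        first_set[idx] = (first_vector + target) / 2.0
        return idx
    first_set.append(first_vector)          # otherwise store as a new user
    return None
```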
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (32)

1. A face clustering method, performed by a server, the method comprising:
acquiring a first feature vector from a first device, wherein the first feature vector is a feature vector of a face image of a first user, and the first feature vector is obtained by the first device by clustering feature vectors respectively corresponding to a plurality of face images of the first user;
if a first set is stored in the server, determining, from the first set, a first target feature vector with the highest similarity to the first feature vector, wherein the first set comprises feature vectors respectively corresponding to face images of one or more users;
and if the similarity between the first feature vector and the first target feature vector is higher than a first threshold, determining that the face image of the first user and the face image corresponding to the first target feature vector belong to the same user.
2. The method of claim 1, wherein the determining a first target feature vector from the first set with the highest similarity to the first feature vector comprises:
respectively calculating the distance between the first feature vector and each feature vector in the first set;
and determining the feature vector corresponding to the minimum distance in the first set as the first target feature vector.
3. The method of claim 1, further comprising:
calculating a geometric median of the first feature vector and the first target feature vector;
and updating the first target feature vector, wherein the updated value is the calculated geometric median.
4. The method of claim 1, further comprising:
acquiring a face image of the first user from the first device;
and saving the face image of the first user to the first set.
5. The method of claim 1, further comprising:
and if the similarity between the first feature vector and the first target feature vector is not higher than the first threshold, storing the first feature vector into the first set.
6. The method of claim 1, further comprising:
and if the first set does not exist in the server, storing the first feature vector to obtain the first set.
7. The method according to any one of claims 1 to 6, wherein the first set comprises one or more subsets, each subset corresponding to one user; the one or more subsets comprise a first subset, and the first subset is used for storing feature vectors corresponding to face images of the first user, or for storing face images of the first user together with their corresponding feature vectors.
8. A face clustering method, performed by a first device, the method comprising:
performing the following operations for the feature vector corresponding to each of a plurality of acquired face images:
acquiring a second feature vector, wherein the second feature vector is a feature vector of a first face image, and the plurality of face images comprise the first face image;
if a second set is stored in the first device, determining, from the second set, a second target feature vector with the highest similarity to the second feature vector, wherein the second set comprises feature vectors respectively corresponding to face images of one or more users;
if the similarity between the second feature vector and the second target feature vector is higher than a second threshold, determining that the face images corresponding to the first face image and the second target feature vector belong to the same user;
updating the second target feature vector according to the second feature vector;
wherein the plurality of face images are acquired by an image acquisition device within a preset time period.
9. The method of claim 8, wherein the determining, from the second set, a second target feature vector with the highest similarity to the second feature vector comprises:
respectively calculating the distance between the second feature vector and each feature vector in the second set;
and determining the feature vector corresponding to the minimum distance in the second set as the second target feature vector.
10. The method of claim 8, wherein the updating the second target feature vector according to the second feature vector comprises:
calculating an average of the second feature vector and the second target feature vector;
and updating the second target feature vector, wherein the updated value is the calculated average value.
11. The method of claim 8, further comprising:
acquiring the first face image;
saving the first face image to the second set.
12. The method of claim 8, further comprising:
and if the similarity between the second feature vector and the second target feature vector is not higher than the second threshold, storing the second feature vector into the second set.
13. The method of claim 8, further comprising:
and if the second set does not exist in the first device, storing the second feature vector to obtain the second set.
14. The method according to any one of claims 8 to 13, wherein the second set comprises one or more subsets, each subset corresponding to one user; the one or more subsets comprise a second subset, and the second subset is used for storing feature vectors corresponding to face images of the first user, or for storing face images of the first user together with their corresponding feature vectors.
15. The method according to any one of claims 8-13, further comprising:
and sending the feature vectors in the second set to a server, or sending the feature vectors in the second set to the server together with the face images respectively corresponding to the feature vectors.
16. A face clustering apparatus, applied to a server, the apparatus comprising:
a first obtaining unit, configured to obtain a first feature vector from a first device, wherein the first feature vector is a feature vector of a face image of a first user, and the first feature vector is obtained by the first device by clustering feature vectors respectively corresponding to a plurality of face images of the first user;
a first determining unit, configured to: if a first set is stored in the server, determine, from the first set, a first target feature vector having the highest similarity to the first feature vector, wherein the first set includes feature vectors respectively corresponding to face images of one or more users;
and a second determining unit, configured to determine that the face image of the first user and the face image corresponding to the first target feature vector belong to the same user if the similarity between the first feature vector and the first target feature vector is higher than a first threshold.
17. The apparatus of claim 16, wherein the first determining unit is configured to:
respectively calculating the distance between the first feature vector and each feature vector in the first set;
and determining the feature vector corresponding to the minimum distance in the first set as the first target feature vector.
18. The apparatus of claim 16, further comprising:
a first calculation unit configured to calculate a geometric median of the first feature vector and the first target feature vector;
and the first updating unit is used for updating the first target feature vector, and the updated value is the calculated geometric median.
19. The apparatus of claim 16, further comprising:
a second acquisition unit, configured to acquire a face image of the first user from the first device;
and the first storage unit is used for storing the face image of the first user to the first set.
20. The apparatus of claim 16, further comprising:
a second storing unit, configured to store the first feature vector into the first set if the similarity between the first feature vector and the first target feature vector is not higher than the first threshold.
21. The apparatus of claim 16, further comprising:
a third storing unit, configured to store the first feature vector to obtain the first set if the first set does not exist in the server.
22. The apparatus according to any one of claims 16 to 21, wherein the first set comprises one or more subsets, each subset corresponding to one user; the one or more subsets comprise a first subset, and the first subset is used for storing feature vectors corresponding to face images of the first user, or for storing face images of the first user together with their corresponding feature vectors.
23. A face clustering apparatus, applied to a first device and configured to perform face clustering on a plurality of face images acquired by an image acquisition device within a preset time period, the apparatus comprising:
a third obtaining unit, configured to obtain a second feature vector, where the second feature vector is a feature vector of a first face image, and the plurality of face images include the first face image;
a third determining unit, configured to: if a second set is stored in the first device, determine, from the second set, a second target feature vector having the highest similarity to the second feature vector, wherein the second set includes feature vectors respectively corresponding to face images of one or more users;
a fourth determining unit, configured to determine that the first facial image and the facial image corresponding to the second target feature vector belong to the same user if the similarity between the second feature vector and the second target feature vector is higher than a second threshold;
and a second updating unit, configured to update the second target feature vector according to the second feature vector.
24. The apparatus of claim 23, wherein the third determining unit is configured to:
respectively calculating the distance between the second feature vector and each feature vector in the second set;
and determining the feature vector corresponding to the minimum distance in the second set as the second target feature vector.
25. The apparatus of claim 23, wherein the second updating unit is configured to:
calculating an average of the second feature vector and the second target feature vector;
and updating the second target feature vector, wherein the updated value is the calculated average value.
26. The apparatus of claim 23, further comprising:
a fourth acquisition unit configured to acquire the first face image;
a fourth saving unit configured to save the first face image to the second set.
27. The apparatus of claim 23, further comprising:
a fifth storing unit, configured to store the second feature vector into the second set if the similarity between the second feature vector and the second target feature vector is not higher than the second threshold.
28. The apparatus of claim 23, further comprising:
a sixth storing unit, configured to store the second feature vector to obtain the second set if the second set does not exist in the first device.
29. The apparatus according to any one of claims 23 to 28, wherein the second set comprises one or more subsets, each subset corresponding to one user; the one or more subsets comprise a second subset, and the second subset is used for storing feature vectors corresponding to face images of the first user, or for storing face images of the first user together with their corresponding feature vectors.
30. The apparatus of any one of claims 23-28, further comprising:
and a sending unit, configured to send the feature vectors in the second set to a server, or to send the feature vectors in the second set to the server together with the face images respectively corresponding to the feature vectors.
31. An apparatus, characterized in that the apparatus comprises a processor and a memory;
the memory is used for storing a computer program and transmitting the computer program to the processor;
the processor is adapted to perform the method of any of claims 1 to 15 in accordance with the computer program.
32. A computer-readable storage medium for storing a computer program for performing the method of any one of claims 1 to 15.
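Claims 3 and 18 above refer to the geometric median of the first feature vector and the first target feature vector. For exactly two vectors the geometric median degenerates to any point on the segment joining them; if an implementation aggregated more than two vectors, the classical Weiszfeld iteration sketched below in Python is one way to compute it. The algorithm choice and all names are assumptions; the application does not prescribe a particular computation.

```python
import numpy as np

def geometric_median(points, n_iter=100, eps=1e-8):
    """Weiszfeld iteration for the geometric median of a set of vectors."""
    pts = np.asarray(points, dtype=float)
    median = pts.mean(axis=0)               # start from the centroid
    for _ in range(n_iter):
        d = np.linalg.norm(pts - median, axis=1)
        if np.any(d < eps):                 # estimate coincides with a point
            return median
        w = 1.0 / d
        new_median = (pts * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(new_median - median) < eps:
            return new_median
        median = new_median
    return median
```

For two input vectors the iteration starts at, and immediately returns, their midpoint, which is consistent with using the midpoint as the updated first target feature vector.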
CN202010614256.7A 2020-06-30 2020-06-30 Face clustering method and device Pending CN111783641A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010614256.7A CN111783641A (en) 2020-06-30 2020-06-30 Face clustering method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010614256.7A CN111783641A (en) 2020-06-30 2020-06-30 Face clustering method and device

Publications (1)

Publication Number Publication Date
CN111783641A true CN111783641A (en) 2020-10-16

Family

ID=72760855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010614256.7A Pending CN111783641A (en) 2020-06-30 2020-06-30 Face clustering method and device

Country Status (1)

Country Link
CN (1) CN111783641A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071860A (en) * 2023-03-07 2023-05-05 雷图志悦(北京)科技发展有限公司 Access control data management method and system


Similar Documents

Publication Publication Date Title
CN110147717B (en) Human body action recognition method and device
CN108985162B (en) Target real-time tracking method and device, computer equipment and storage medium
CN108388879B (en) Target detection method, device and storage medium
CN114049681A (en) Monitoring method, identification method, related device and system
CN111626371B (en) Image classification method, device, equipment and readable storage medium
JP2016507834A (en) System and method for tracking and detecting a target object
CN109635693B (en) Front face image detection method and device
JP7089045B2 (en) Media processing methods, related equipment and computer programs
CN109195011B (en) Video processing method, device, equipment and storage medium
CN113657163B (en) Behavior recognition method, electronic device and storage medium
CN109960969B (en) Method, device and system for generating moving route
CN111241928B (en) Face recognition base optimization method, system, equipment and readable storage medium
CN112001948A (en) Target tracking processing method and device
CN111429476A (en) Method and device for determining action track of target person
CN113723157A (en) Crop disease identification method and device, electronic equipment and storage medium
CN114092515B (en) Target tracking detection method, device, equipment and medium for obstacle shielding
CN113627334A (en) Object behavior identification method and device
CN111783641A (en) Face clustering method and device
CN113688804A (en) Multi-angle video-based action identification method and related equipment
CN113505720A (en) Image processing method and device, storage medium and electronic device
CN111046831B (en) Poultry identification method, device and server
CN110992426B (en) Gesture recognition method and device, electronic equipment and storage medium
CN111866468B (en) Object tracking distribution method, device, storage medium and electronic device
CN115830342A (en) Method and device for determining detection frame, storage medium and electronic device
CN114422776A (en) Detection method and device for camera equipment, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination