WO2023125839A1 - Facial data archiving method and related device - Google Patents

Facial data archiving method and related device

Info

Publication number
WO2023125839A1
WO2023125839A1 · PCT/CN2022/143530 · CN2022143530W
Authority
WO
WIPO (PCT)
Prior art keywords
face
face data
archived
similarity
archiving
Prior art date
Application number
PCT/CN2022/143530
Other languages
French (fr)
Chinese (zh)
Inventor
章跃
谢友平
刘国伟
Original Assignee
深圳云天励飞技术股份有限公司
成都云天励飞技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳云天励飞技术股份有限公司, 成都云天励飞技术有限公司 filed Critical 深圳云天励飞技术股份有限公司
Publication of WO2023125839A1 publication Critical patent/WO2023125839A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/71Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/75Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • G06F16/784Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content the detected or recognised objects being people
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7847Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques

Definitions

  • the invention relates to the field of artificial intelligence, in particular to a face data archiving method and related equipment.
  • the archiving of face data is to attribute the face data of the same person to the same face file for face data query and processing.
  • the usual archiving method is to perform clustering and archiving of face data.
  • factors such as face posture, brightness, and clarity have a great impact on clustering and archiving, resulting in a low archiving rate.
  • When the face pose changes only slightly, the similarity changes little; but once the pose change exceeds a certain level, the similarity changes sharply, which in turn lowers the archiving rate. Therefore, existing face data archiving suffers from a low archiving rate caused by changes in face pose.
  • An embodiment of the present invention provides a method for archiving face data. After obtaining batches of face data from video streams, the corresponding face features are extracted for archiving. During the archiving process, a preset threshold matrix is used for similarity judgment. Since the preset threshold matrix is obtained according to the face pose features of the archived face data, it can provide a similarity threshold suited to each face pose, which solves the low archiving rate caused by evaluating all faces against a single unified threshold.
  • an embodiment of the present invention provides a method for archiving face data, the method comprising:
  • Obtain batches of face data to be archived, the face data including face features and face pose features;
  • Threshold screening is performed on the global similarity of each face data according to the face pose feature and a preset similarity threshold matrix to obtain a first face data set, where the preset similarity threshold matrix is obtained according to the face features and face pose features contained in the archived face data in the preset archive library;
  • the first face data set is archived through the archived face data in the preset archive library.
  • the method also includes:
  • the preset threshold value matrix is constructed and obtained according to the facial features and facial pose features of the archived facial data.
  • the multiple archived face datasets include a frontal face dataset and multiple deflected face datasets
  • the facial pose features include face angle feature values
  • constructing the preset threshold matrix from the face features and face pose features of the archived face data includes:
  • For each archived face file in the preset archive library, the face data contained in the archived face file is divided according to the face angle feature value to obtain multiple archived face data sets under different face angle intervals.
  • an average similarity matrix is constructed for each archived face file, where each archived face file corresponds to one average similarity matrix and the dimension of the average similarity matrix is related to the number of divided intervals;
  • the preset threshold matrix is obtained by calculating according to the average similarity matrix.
  • one average similarity matrix unit in the average similarity matrix corresponds to an average similarity value
  • the calculation according to the average similarity matrix to obtain the preset threshold matrix includes:
  • the final similarity matrix is used as the preset threshold matrix.
  • archiving the first face data set through the archived face data in the preset archive library includes:
  • the second face data set is archived through the archived face data in the preset archive library.
  • the second face relationship graph includes a first face relationship cluster
  • archiving the second face data set through the archived face data in the preset archive library includes:
  • the batch of face data is archived based on the second face relationship graph.
  • the second face relation graph includes a first face relation cluster, and archiving the batch of face data based on the second face relation graph includes:
  • the batch of face data is archived based on the second face relation cluster.
  • an embodiment of the present invention provides a face data archiving device, the device comprising:
  • Obtaining module, used for obtaining batches of face data to be archived, the face data including face features and face pose features;
  • Calculation module, used to calculate the pairwise similarity between all face features and obtain the global similarity of each face data;
  • the first screening module is configured to perform threshold screening on the global similarity of each face data according to the face pose feature and a preset similarity threshold matrix to obtain a first face data set, where the preset similarity threshold matrix is obtained according to the face features and face pose features contained in the archived face data in the preset archive library;
  • the archiving module is configured to archive the first face data set through the archived face data in the preset archive library.
  • an embodiment of the present invention provides an electronic device, including: a memory, a processor, and a computer program stored on the memory and operable on the processor.
  • when the processor executes the computer program, the steps in the face data archiving method provided by the embodiment of the present invention are realized.
  • an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps in the face data archiving method provided by the embodiment of the invention are realized.
  • batches of face data to be archived are obtained, the face data including face features and face pose features; the pairwise similarity between all face features is calculated to obtain the global similarity of each face data; threshold screening is performed on the global similarity of each face data according to the face pose feature and the preset similarity threshold matrix to obtain the first face data set, where the preset similarity threshold matrix is obtained according to the face features and face pose features contained in the archived face data in the preset archive library; and the first face data set is archived through the archived face data in the preset archive library.
  • Because the preset threshold matrix is obtained from the face pose features of the archived face data, it can provide a similarity threshold compatible with each face pose, solving the low archiving rate caused by the unified-threshold evaluation method in the prior art.
  • Fig. 1 is a flowchart of a face data archiving method provided by an embodiment of the present invention;
  • Fig. 2 is a flowchart of a method for constructing a threshold matrix provided by an embodiment of the present invention
  • Fig. 3 is a schematic structural diagram of a face data archiving device provided by an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of another archiving device for face data provided by an embodiment of the present invention.
  • Fig. 5 is a schematic structural diagram of a building block provided by an embodiment of the present invention.
  • Fig. 6 is a schematic structural diagram of a second calculation sub-module provided by an embodiment of the present invention.
  • Fig. 7 is a schematic structural diagram of an archiving module provided by an embodiment of the present invention.
  • Fig. 8 is a schematic structural diagram of an archiving sub-module provided by an embodiment of the present invention.
  • Fig. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
  • Fig. 1 is a flowchart of a face data archiving method provided by an embodiment of the present invention; as shown in Fig. 1, the face data archiving method comprises the following steps:
  • the above-mentioned video stream may be a video stream obtained in real time by a camera or a video stream uploaded by a user, and the camera may be installed in any scene, such as indoors or outdoors.
  • the foregoing video stream may be a video stream collected by one camera, or may be a video stream collected by multiple cameras.
  • batches of face data can be obtained from the video stream through a face detection algorithm.
  • the face data includes face features and face pose features.
  • Each face data also includes face image information, face confidence information, and face detection frame ID.
  • the face detection algorithm assigns the same face detection frame ID to the same target it identifies across image frames.
  • Each face data also includes time information and position information.
  • the image frame in the video stream includes time information
  • the video stream includes position information.
  • the above time information can be understood as the time point when the image frame is collected.
  • the location information can be understood as the location where the video stream is collected, and the above location information can also be understood as the installation location of the camera, and each camera will be provided with corresponding latitude and longitude information as the location information when it is installed.
  • the time information corresponding to the face data can be obtained through the time information corresponding to the image frame, and the position information corresponding to the face data can be obtained through the position information of the video stream.
  • the face features corresponding to the face data can be extracted from the face image information through a face feature extraction algorithm, the face pose features can be extracted from the face image information through a face pose estimation algorithm, and the face spatiotemporal features corresponding to the face data can be derived from the above time information and position information.
  • the above-mentioned face feature can be a face feature vector, obtained by mapping the high-dimensional image data of a face image to a low-dimensional feature vector of preset dimension (such as a 512-dimensional feature vector) that represents the image itself. Whether different face images are similar can be judged by comparing the distance between their face feature vectors: the closer the vectors, the greater the probability that the face images are faces of the same person.
  • the degree of discretization of the face feature vector depends on the positions of the selected face key points and the distinctiveness of each feature.
  • the facial posture feature mentioned above may be an angle value of a face orientation deflection
  • the face orientation deflection may include a pitch angle deflection, a yaw angle deflection, a roll angle deflection, and the like.
  • one face data corresponds to one face feature
  • a batch of face data corresponds to a batch of face features
  • pairwise similarity is calculated; that is, each face feature is compared against every other face feature except itself to obtain the global similarity of that face feature.
  • the batch of face data is M face data
  • the corresponding M face features are extracted, and each face feature is compared with the M-1 face features other than itself to obtain its global similarity; at this point, the global similarity corresponding to each face feature consists of M-1 similarities.
  • the highest TopN similarities among the M-1 similarities can be selected as the global similarity of the face data.
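The pairwise-similarity and TopN selection steps above can be sketched as follows. This is a hypothetical illustration: the function name, the use of cosine similarity on L2-normalised feature vectors, and the default `top_n` value are assumptions, not details given in the patent.

```python
import numpy as np

def global_similarity(features: np.ndarray, top_n: int = 5) -> list:
    """For M L2-normalised face feature vectors (an M x D float array),
    compare each feature with the other M-1 features and keep the top_n
    highest similarities as that face data's global similarity."""
    sims = features @ features.T           # M x M cosine similarities
    np.fill_diagonal(sims, -np.inf)        # exclude self-similarity
    result = []
    for row in sims:
        others = row[np.isfinite(row)]     # the M-1 similarities
        k = min(top_n, others.size)
        result.append(np.sort(others)[-k:][::-1])  # highest k, descending
    return result
```

With M face data this yields, for each face, the TopN of its M-1 pairwise similarities, matching the selection described above.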
  • the preset threshold matrix is obtained according to the face pose features of the archived face data. Specifically, thresholds corresponding to different face pose features are set in the preset threshold matrix; the corresponding threshold can be matched in the preset threshold matrix according to the face pose features, and the global similarity is compared with that threshold. If a similarity in the global similarity is greater than the corresponding threshold, the corresponding face data may be added to the first face data set, where one face data corresponds to one first face data set.
  • for example, suppose the face pose feature of face data A is A1
  • the face pose feature of face data B is B1
  • the corresponding threshold C1 can be found in the preset threshold matrix according to (A1, B1), and the similarity S1 is compared with the threshold C1.
  • if the similarity S1 is greater than or equal to the threshold C1,
  • the face data B is added to the first face data set of the face data A; if the similarity S1 is smaller than the threshold C1, no processing is performed.
  • the first face data set of the face data A includes the face data A.
  • the first face data set retains the face data with a higher similarity to the master face data, and the first face data sets corresponding to master face data with the same face detection frame ID can be grouped into the same file.
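A minimal sketch of the (A1, B1) → C1 lookup described above, assuming the pose feature is a yaw angle binned every 10° up to 60°; the binning step, the function names, and the use of a plain NumPy matrix are illustrative assumptions rather than the patent's specification.

```python
import numpy as np

def pose_bin(yaw: float, step: float = 10.0, max_angle: float = 60.0) -> int:
    """Map a yaw angle to its interval index: bin 0 is exactly 0,
    bin 1 is (0, 10], bin 2 is (10, 20], and so on."""
    yaw = min(abs(yaw), max_angle)
    return 0 if yaw == 0 else int(np.ceil(yaw / step))

def passes_threshold(sim: float, yaw_a: float, yaw_b: float,
                     thresholds: np.ndarray) -> bool:
    """Keep the pair only if the similarity reaches the pose-dependent
    threshold C1 looked up from the preset threshold matrix."""
    return bool(sim >= thresholds[pose_bin(yaw_a), pose_bin(yaw_b)])
```

For the example in the text, yaw angles of 35° and 20° index the (30°, 40°] and (10°, 20°] cells, and S1 is compared against the threshold stored there.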
  • the first face data set is screened according to the face spatiotemporal features to obtain a second face data set; the second face data set is archived through the archived face data in the preset archive library.
  • the above-mentioned spatio-temporal feature of the face includes time information and position information, and the first face data set is screened according to the spatio-temporal feature of the face.
  • taking the face data as the master face data of the first face data set: if two or more face data with the same time information but different position information appear in the first face data set, the face data with the highest similarity to the master face data among them is kept, and the rest of the face data are deleted from the first face data set.
  • if the position-time relationship between the master face data and a certain face data does not conform to the first preset relationship, that face data is deleted from the first face data set. For every two face data other than the master face data, it is judged whether their position-time relationship conforms to the second preset relationship; if a certain pair does not conform, it is judged whether the two face data match the trajectory data corresponding to the master face data, and the face data that does not match that trajectory data is deleted.
  • the trajectory data corresponding to the master face data can be determined according to the face detection frame ID corresponding to the master face data, specifically, the face data having the same face detection frame ID as the master face data and the master face data can form corresponding trajectory data.
  • the location information of two face data is LA and LB
  • the time information is TA and TB respectively
  • assuming the above first preset relationship is a threshold η1: if the position-time relationship η between the master face data and a certain face data is greater than η1, the position-time relationship between them does not conform to the first preset relationship.
  • assuming the above second preset relationship is a threshold η2: if the position-time relationship η between a certain two face data is greater than η2, the position-time relationship between them does not conform to the second preset relationship.
  • the first face data set is screened by the above method to obtain the second face data set, where one second face data set corresponds to one face data.
  • the second face data set retains the face data with higher similarity to the master face data, and the face spatiotemporal features are used to filter out face data that does not conform to the spatiotemporal relationship, which increases the probability that the face data in the second face data set belong to the same person.
  • the second face data sets corresponding to master face data with the same face detection frame ID can be grouped into the same file.
  • the second face data set is clustered and archived through a part of the archived face data; archiving here means attributing the unarchived face data to the files corresponding to this part of the archived face data.
  • the archived face data also includes file tags, and the archived face data is used as the clustering center to cluster the main face data of all the second face data sets.
  • the second face dataset is deduplicated and archived.
  • the second face data set is formed into a face relation graph, and archived according to the face relation clusters in the face relation graph. It should be noted that, in the face relationship graph, each node represents a face data, and the edge between two nodes represents the similarity between two face data.
  • the above-mentioned second face data sets can be formed into a first face relation graph; strong connectivity calculation is performed on the first face relation graph to obtain a second face relation graph; and the above-mentioned batches of face data are archived based on the second face relation graph.
  • both the first face relation graph and the second face relation graph are directed graphs.
  • strongly connected means that for any two vertices v1 and v2 in a directed graph G, there is a path from v1 to v2 and a path from v2 to v1 (a path being a walk in which vertices and edges are not repeated).
  • a bidirectional-traversal intersection method can be used to find the strongly connected components of the first face relation graph, with time complexity O(N² + M).
  • the Kosaraju algorithm or the Tarjan algorithm can also be used to find the strongly connected components of the first face relation graph, both of which have a time complexity of O(N+M).
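As a concrete illustration, here is a compact Kosaraju-style computation of strongly connected components (two depth-first passes, O(N + M)); the edge-list graph representation and function names are illustrative, not from the patent.

```python
from collections import defaultdict

def strongly_connected_components(edges, nodes):
    """Kosaraju's algorithm. edges: list of directed (u, v) pairs;
    nodes: iterable of all vertices. Returns a list of vertex sets."""
    graph, rev = defaultdict(list), defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
        rev[v].append(u)          # transposed graph for the second pass
    seen, order = set(), []

    def dfs(start, g, out):
        # iterative DFS; appends vertices to `out` in finish order
        stack = [(start, iter(g[start]))]
        seen.add(start)
        while stack:
            node, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                stack.pop()
                out.append(node)
            elif nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, iter(g[nxt])))

    for n in nodes:               # first pass: record finish order
        if n not in seen:
            dfs(n, graph, order)
    seen.clear()
    comps = []
    for n in reversed(order):     # second pass on the transposed graph
        if n not in seen:
            comp = []
            dfs(n, rev, comp)
            comps.append(set(comp))
    return comps
```

Each returned component would correspond to a mutually connected cluster of face nodes in the second face relation graph.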
  • the second face relation graph includes first face relation clusters; the first face relation clusters can be optimized according to a preset community discovery algorithm to obtain second face relation clusters, and the aforementioned batches of face data are archived based on the second face relation clusters.
  • the second face relationship graph represents the face relationship network.
  • in the face relationship network, some faces are closely connected while others are only sparsely connected.
  • a closely connected part can be regarded as a community: connections between its internal nodes are relatively close, while connections between two communities are relatively sparse.
  • the community discovery algorithm can divide a large-scale network into communities at different granularities in a short time, and there is no need to specify the number of communities in advance; iteration stops automatically when the modularity no longer increases.
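The stopping criterion mentioned above is modularity. The patent names neither a specific algorithm nor a formula, so the following is a standard-definition sketch of Newman modularity Q for a candidate community assignment:

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman modularity: Q = sum over communities c of
    e_c / m - (d_c / (2 m))^2, where e_c is the number of
    intra-community edges, d_c the total degree of community c,
    and m the total number of edges (undirected graph)."""
    m = len(edges)
    if m == 0:
        return 0.0
    intra = defaultdict(int)   # intra-community edge counts
    deg = defaultdict(int)     # total degree per community
    for u, v in edges:
        deg[community[u]] += 1
        deg[community[v]] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    return sum(intra[c] / m - (deg[c] / (2 * m)) ** 2 for c in deg)
```

A community-discovery pass would keep merging or moving nodes while this Q keeps increasing, and stop once it no longer does.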
  • FIG. 2 is a flowchart of a threshold matrix construction method provided by an embodiment of the present invention. As shown in FIG. 2, before step 103 of the embodiment in FIG. 1, the method also includes constructing the preset threshold matrix, which specifically includes the following steps:
  • each archived face file corresponds to the archived face data of a person
  • the archived face data in an archived face file can include cover face data and archived face data, where the cover face data can be frontal face data whose face orientation deflection is 0° in pitch angle, 0° in yaw angle, and 0° in roll angle.
  • each archived face data includes face image information, face confidence information, face detection frame ID, time information and location information
  • a face feature extraction algorithm can be used to extract the face features from the face image information of the archived face data, and a face pose estimation algorithm can be used to estimate the face state from the face image information to obtain the face pose features of the archived face data.
  • each of the archived face data therein thus has a corresponding face feature and face pose feature. The face pose feature difference between two archived face data can be calculated, the similarity between the two archived face data can be calculated, and that similarity can be associated with the face pose feature difference of the two archived face data.
  • the above-mentioned face pose feature includes a face angle feature value. For each archived face file in the preset archive library, the face data contained in the archived face file is divided according to the face angle feature value to obtain multiple archived face data sets under different face angle intervals; the average similarity between the archived face data sets is calculated according to the face features of the archived face data, or the average similarity between the frontal face data set and the deflected face data sets is calculated according to the face features of the archived face data; an average similarity matrix is constructed for each archived face file according to the average similarity between the archived face data sets, where each archived face file corresponds to one average similarity matrix and the dimension of the average similarity matrix is related to the number of divided intervals; and the preset threshold matrix is obtained by calculating according to the average similarity matrix.
  • the face pose feature is the angle feature value of the face yaw angle
  • the angle feature value of the face yaw angle can be between 0° and 60°
  • the angle feature values of the face yaw angle can be divided into intervals of a preset width. For example, dividing the face yaw angle every 10° yields the intervals 0°, (0°, 10°], (10°, 20°], (20°, 30°], (30°, 40°], (40°, 50°], and (50°, 60°]; if these intervals are formed into a matrix, Table 1 is obtained:
  • suppose the two archived face data are archived face data A and archived face data B: the face yaw angle feature value of archived face data A is 35°, falling into the (30°, 40°] interval, and that of archived face data B is 20°, falling into the (10°, 20°] interval. The similarity SAB between archived face data A and archived face data B is calculated, and the obtained similarity SAB corresponds to the matrix unit for (30°, 40°] and (10°, 20°].
  • Table 2 is obtained as follows:
  • the average similarity between the front face data set and the deflected face data set may be calculated according to the face features of the archived face data.
  • for an archived face file, the average similarity table shown in Table 4 can be obtained; Table 4 is as follows:
  • a0, a1, a2, a3, a4, a5, a6 are the average similarity between the frontal face data set and the deflected face data set calculated according to the face features of the archived face data.
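The a0…a6 values described above could be computed as sketched below, assuming L2-normalised feature vectors with dot-product similarity; the set layout and function name are illustrative assumptions.

```python
import numpy as np

def average_similarity_row(frontal: np.ndarray, deflected_sets: list) -> np.ndarray:
    """Mean similarity between the frontal face set and each deflected set.
    frontal: (n, d) L2-normalised features; deflected_sets: one (k, d)
    array per yaw-angle interval. The first entry (a0) compares the
    frontal set with itself, excluding self-pairs."""
    def mean_sim(a, b, same=False):
        s = a @ b.T
        if same:
            n = s.shape[0]
            # drop the diagonal of self-similarities before averaging
            return float((s.sum() - np.trace(s)) / max(n * n - n, 1))
        return float(s.mean())
    row = [mean_sim(frontal, frontal, same=True)]
    row += [mean_sim(frontal, d) for d in deflected_sets]
    return np.array(row)
```

Running this for every archived face file would fill one row (a0…a6) of that file's average similarity table.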
  • for all the average similarity matrices, the average similarity values in the corresponding average similarity matrix units can be extracted and averaged to obtain the final similarity matrix; the final similarity matrix is used as the preset threshold matrix.
  • using a similarity threshold matrix built from smoothed moving-average similarities fits the similarity under various poses, which can further improve the clustering effect and further improve the archiving rate of face data.
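Element-wise averaging of the per-file matrices, plus the moving-average smoothing mentioned above, could look like this; the 3-tap kernel width and edge handling are assumptions, since the patent only names "smooth moving average".

```python
import numpy as np

def build_threshold_matrix(per_file_matrices, smooth=True):
    """Average the per-archive average-similarity matrices element-wise,
    then optionally smooth each row with a 3-tap moving average so the
    thresholds vary gradually across adjacent pose intervals."""
    m = np.stack(per_file_matrices).mean(axis=0)
    if not smooth:
        return m
    padded = np.pad(m, ((0, 0), (1, 1)), mode="edge")  # repeat edge bins
    return (padded[:, :-2] + padded[:, 1:-1] + padded[:, 2:]) / 3.0
```

The result plays the role of the preset threshold matrix: cell (i, j) is the similarity threshold applied to a face pair whose pose features fall in intervals i and j.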
  • the face data archiving method provided by the embodiment of the present invention can be applied to smart phones, computers, servers and other devices that can perform face data archiving.
  • FIG. 3 is a schematic structural diagram of an archiving device for face data provided by an embodiment of the present invention. As shown in FIG. 3, the device includes:
  • Acquisition module 301, used for obtaining batches of face data to be archived, the face data including face features and face pose features;
  • Calculation module 302, used for calculating the pairwise similarity between all face features to obtain the global similarity of each face data;
  • the first screening module 303 is configured to perform threshold screening on the global similarity of each face data according to the face pose feature and a preset similarity threshold matrix to obtain a first face data set, where the preset similarity threshold matrix is obtained according to the face features and face pose features contained in the archived face data in the preset archive library;
  • the archiving module 304 is configured to archive the first face data set through the archived face data in the preset archiving library.
  • the device further includes:
  • the extraction module 306 is used to obtain the archived face data in the preset archive library and extract the face features and face pose features of the archived face data;
  • the construction module 307 is configured to construct the preset threshold matrix according to the facial features and facial pose features of the archived facial data.
  • the multiple archived face datasets include one frontal face dataset and multiple deflected face datasets;
  • the face pose features include face angle feature values;
  • The construction module 307 includes:
  • The division sub-module 3071 is configured to, for each archived face file in the preset archive library, divide the face data contained in the archived face file according to the face angle feature values to obtain archived face datasets under multiple different face angle intervals;
  • The first calculation sub-module 3072 is configured to calculate the average similarity between the archived face datasets according to the face features of the archived face data, or to calculate the average similarity between the frontal face dataset and the deflected face datasets according to the face features of the archived face data;
  • The construction sub-module 3073 is configured to construct the average similarity matrix of each archived face file according to the average similarity between the archived face datasets, where each archived face file corresponds to one average similarity matrix whose dimension is related to the number of the divided intervals;
  • The second calculation sub-module 3074 is configured to calculate the preset threshold matrix according to the average similarity matrices.
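The binning and averaging performed by sub-modules 3071–3073 can be sketched as follows. This is a minimal illustration only: it assumes cosine similarity over L2-normalised feature vectors, and the angle intervals in `ANGLE_BINS` are hypothetical — the embodiment does not fix either the interval boundaries or the similarity measure.

```python
import numpy as np

# Hypothetical face-angle intervals (degrees): one frontal bin plus deflected bins.
ANGLE_BINS = [(0, 15), (15, 30), (30, 45), (45, 90)]

def bin_index(angle):
    """Map a face angle feature value to the index of its angle interval."""
    for i, (lo, hi) in enumerate(ANGLE_BINS):
        if lo <= abs(angle) < hi:
            return i
    return len(ANGLE_BINS) - 1

def average_similarity_matrix(features, angles):
    """Build one archived face file's average similarity matrix.

    features: (N, D) array of L2-normalised face feature vectors.
    angles:   length-N sequence of face angle feature values.
    Cell (i, j) holds the mean cosine similarity between the archived
    dataset in angle interval i and the dataset in angle interval j,
    so the matrix dimension equals the number of divided intervals.
    """
    features = np.asarray(features, dtype=float)
    k = len(ANGLE_BINS)
    sims = features @ features.T                 # pairwise cosine similarities
    bins = np.array([bin_index(a) for a in angles])
    mat = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            rows = np.where(bins == i)[0]
            cols = np.where(bins == j)[0]
            if rows.size and cols.size:
                mat[i, j] = sims[np.ix_(rows, cols)].mean()
    return mat
```

Cells whose pair of intervals has no data are left at zero here; a real build would more likely mark them empty and skip them when averaging across archives.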
  • one average similarity matrix unit in an average similarity matrix corresponds to one average similarity value;
  • The second calculation sub-module 3074 includes:
  • The extracting unit 30741 is configured to, for all the average similarity matrices, extract the average similarity values in the corresponding average similarity matrix units and sum and average them to obtain a final similarity matrix;
  • The acquiring unit 30742 is configured to use the final similarity matrix as the preset threshold matrix.
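The element-wise averaging performed by units 30741 and 30742 amounts to stacking every archive's average similarity matrix and taking the per-cell mean; a brief sketch, assuming all matrices are NumPy arrays of equal shape:

```python
import numpy as np

def final_threshold_matrix(avg_matrices):
    """Sum and average the corresponding units of all archives' average
    similarity matrices; the result serves as the preset threshold matrix.

    avg_matrices: list of equally shaped (k, k) arrays, one per archived
    face file.
    """
    return np.stack(avg_matrices).mean(axis=0)
```

For example, averaging the 2×2 matrices [[0.8, 0.4], [0.4, 0.8]] and [[0.6, 0.2], [0.2, 0.6]] yields the threshold matrix [[0.7, 0.3], [0.3, 0.7]].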
  • The archiving module 305 is further configured to screen the first face data set according to the spatiotemporal features of the faces to obtain a second face data set, and to archive the second face data set through the archived face data in the preset archive library.
  • The archiving module 305 includes:
  • The integration sub-module 3051 is configured to form the second face data set into a first face relation graph;
  • The third calculation sub-module 3052 is configured to perform a strong connectivity calculation on the first face relation graph to obtain a second face relation graph;
  • The archiving sub-module 3053 is configured to archive the batch of face data based on the second face relation graph.
  • the second face relation graph includes a first face relation cluster;
  • The archiving sub-module 3053 includes:
  • The optimization unit 30531 is configured to optimize the first face relation cluster according to a preset community discovery algorithm to obtain a second face relation cluster;
  • The archiving unit 30532 is configured to archive the batch of face data based on the second face relation cluster.
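One simplified way to realise the strong connectivity step that sub-module 3052 performs, and the clusters that sub-module 3053 then archives, is to keep only the mutual (bidirectional) edges of the screened similarity graph and group the remaining faces with union-find. This is a stand-in sketch, not the embodiment's exact graph algorithm, and the community-discovery refinement (e.g. a Louvain-style optimisation) is only named by the embodiment, not specified here:

```python
def strongly_connect(edges, n):
    """Keep only mutual (bidirectional) edges, then group the n faces into
    clusters with union-find — a simplified stand-in for the strong
    connectivity calculation on the face relation graph.

    edges: set of directed pairs (i, j) meaning "face j passed screening
           in face i's first face data set".
    Returns a list of clusters (lists of face indices).
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for i, j in edges:
        if (j, i) in edges:                 # strong (mutual) connection
            parent[find(i)] = find(j)

    clusters = {}
    for node in range(n):
        clusters.setdefault(find(node), []).append(node)
    return list(clusters.values())
```

Faces with no mutual edge remain singleton clusters, which matches the intuition that a one-sided similarity match is not strong enough to merge two identities.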
  • the face data archiving device provided by the embodiments of the present invention can be applied to smartphones, computers, servers, and other devices capable of archiving face data.
  • the face data archiving device provided by the embodiments of the present invention can implement each process implemented by the face data archiving method in the foregoing method embodiments and achieve the same beneficial effects. To avoid repetition, details are not repeated here.
  • FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention. As shown in FIG. 9, the electronic device includes a memory 902, a processor 901, and a computer program for the face data archiving method that is stored in the memory 902 and runnable on the processor 901, wherein:
  • The processor 901 is configured to call the computer program stored in the memory 902 and perform the following steps:
  • Obtain a batch of face data to be archived, the face data including face features and face pose features;
  • Calculate the pairwise similarity between all face features to obtain the global similarity of each piece of face data;
  • Perform threshold screening on the global similarity of each piece of face data according to the face pose features and a preset similarity threshold matrix to obtain a first face data set, where the preset similarity threshold matrix is obtained according to the face features and face pose features contained in the archived face data in a preset archive library;
  • Archive the first face data set through the archived face data in the preset archive library.
  • The method executed by the processor 901 further includes:
  • The preset threshold matrix is constructed according to the face features and face pose features of the archived face data.
  • the multiple archived face datasets include one frontal face dataset and multiple deflected face datasets;
  • the face pose features include face angle feature values;
  • The construction, executed by the processor 901, of the preset threshold matrix according to the face features and face pose features of the archived face data includes:
  • For each archived face file in the preset archive library, divide the face data contained in the archived face file according to the face angle feature values to obtain archived face datasets under multiple different face angle intervals;
  • Calculate the average similarity between the archived face datasets according to the face features of the archived face data;
  • Construct the average similarity matrix of each archived face file according to the average similarity between the archived face datasets, where each archived face file corresponds to one average similarity matrix whose dimension is related to the number of the divided intervals;
  • Calculate the preset threshold matrix according to the average similarity matrices.
  • In the method executed by the processor 901, one average similarity matrix unit in an average similarity matrix corresponds to one average similarity value;
  • The calculation of the preset threshold matrix according to the average similarity matrices includes:
  • For all the average similarity matrices, extract the average similarity values in the corresponding average similarity matrix units and sum and average them to obtain a final similarity matrix;
  • Use the final similarity matrix as the preset threshold matrix.
  • The archiving, executed by the processor 901, of the first face data set through the archived face data in the preset archive library includes:
  • Screen the first face data set according to the spatiotemporal features of the faces to obtain a second face data set;
  • Archive the second face data set through the archived face data in the preset archive library.
  • The archiving, executed by the processor 901, of the second face data set through the archived face data in the preset archive library includes:
  • Form the second face data set into a first face relation graph;
  • Perform a strong connectivity calculation on the first face relation graph to obtain a second face relation graph;
  • Archive the batch of face data based on the second face relation graph.
  • the second face relation graph includes a first face relation cluster;
  • The archiving of the batch of face data based on the second face relation graph, executed by the processor 901, includes:
  • Optimize the first face relation cluster according to a preset community discovery algorithm to obtain a second face relation cluster;
  • Archive the batch of face data based on the second face relation cluster.
  • The electronic device provided by the embodiments of the present invention can implement each process implemented by the face data archiving method in the foregoing method embodiments and achieve the same beneficial effects. To avoid repetition, details are not repeated here.
  • An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the various processes of the face data archiving method provided by the embodiments of the present invention are implemented, achieving the same technical effects. To avoid repetition, details are not repeated here.
  • The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).

Abstract

The embodiments of the present invention provide a facial data archiving method. The method comprises: acquiring batch facial data to be archived, wherein each piece of facial data comprises a facial feature and a facial pose feature; calculating the similarity between every two facial features among all facial features, so as to obtain the global similarity of each piece of facial data; performing threshold screening on the global similarity of each piece of facial data according to the facial pose feature and a preset similarity threshold matrix, so as to obtain a first facial data set, wherein the preset similarity threshold matrix is obtained according to facial features and facial pose features included in archived facial data in a preset archive library; and archiving the first facial data set by means of the archived facial data in the preset archive library. A similarity threshold adaptive to a facial pose feature can be provided by means of a preset threshold matrix, such that the problem in the prior art of the archiving rate being low due to the usage of a unified threshold evaluation mode is solved.

Description

Method for archiving face data and related device
This application claims priority to the Chinese patent application No. 202111678101.0, entitled "Method for archiving face data and related device" and filed with the China Patent Office on December 31, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of artificial intelligence, and in particular to a method for archiving face data and a related device.
Background
With the continuous development of artificial intelligence and the steady progress of image recognition technology, more and more face recognition applications have been deployed. Archiving face data means attributing the face data of the same person to the same face file so that the face data can be queried and processed. Face data is usually archived by clustering; however, factors such as face pose, brightness, and sharpness strongly affect the clustering, resulting in a low archiving rate. In particular, when the face pose changes only slightly, the similarity changes little, but once the pose change exceeds a certain degree, the similarity changes greatly, which in turn lowers the archiving rate. Therefore, owing to face pose variation, existing face data archiving suffers from a low archiving rate.
Summary of the Invention
An embodiment of the present invention provides a method for archiving face data. A batch of face data is obtained from a video stream, and the corresponding face features are extracted for archiving. During archiving, a preset threshold matrix is used for similarity judgment. Since the preset threshold matrix is obtained according to the face pose features of the archived face data, it can supply a similarity threshold adapted to the face pose features, thereby solving the prior-art problem that a unified threshold evaluation mode leads to a low archiving rate.
In a first aspect, an embodiment of the present invention provides a method for archiving face data, the method including:
obtaining a batch of face data to be archived, the face data including face features and face pose features;
calculating the pairwise similarity between all face features to obtain the global similarity of each piece of face data;
performing threshold screening on the global similarity of each piece of face data according to the face pose features and a preset similarity threshold matrix to obtain a first face data set, where the preset similarity threshold matrix is obtained according to the face features and face pose features contained in the archived face data in a preset archive library; and
archiving the first face data set through the archived face data in the preset archive library.
Optionally, the method further includes:
obtaining the archived face data in the preset archive library and extracting the face features and face pose features of the archived face data; and
constructing the preset threshold matrix according to the face features and face pose features of the archived face data.
Optionally, the multiple archived face datasets include one frontal face dataset and multiple deflected face datasets, the face pose features include face angle feature values, and the constructing the preset threshold matrix according to the face features and face pose features of the archived face data includes:
for each archived face file in the preset archive library, dividing the face data contained in the archived face file according to the face angle feature values to obtain archived face datasets under multiple different face angle intervals;
calculating the average similarity between the archived face datasets according to the face features of the archived face data, or calculating the average similarity between the frontal face dataset and the deflected face datasets according to the face features of the archived face data;
constructing the average similarity matrix of each archived face file according to the average similarity between the archived face datasets, where each archived face file corresponds to one average similarity matrix whose dimension is related to the number of the divided intervals; and
calculating the preset threshold matrix according to the average similarity matrices.
Optionally, one average similarity matrix unit in an average similarity matrix corresponds to one average similarity value, and the calculating the preset threshold matrix according to the average similarity matrices includes:
for all the average similarity matrices, extracting the average similarity values in the corresponding average similarity matrix units and summing and averaging them to obtain a final similarity matrix; and
using the final similarity matrix as the preset threshold matrix.
Optionally, the archiving the first face data set through the archived face data in the preset archive library includes:
screening the first face data set according to the spatiotemporal features of the faces to obtain a second face data set; and
archiving the second face data set through the archived face data in the preset archive library.
Optionally, the archiving the second face data set through the archived face data in the preset archive library includes:
forming the second face data set into a first face relation graph;
performing a strong connectivity calculation on the first face relation graph to obtain a second face relation graph; and
archiving the batch of face data based on the second face relation graph.
Optionally, the second face relation graph includes a first face relation cluster, and the archiving the batch of face data based on the second face relation graph includes:
optimizing the first face relation cluster according to a preset community discovery algorithm to obtain a second face relation cluster; and
archiving the batch of face data based on the second face relation cluster.
In a second aspect, an embodiment of the present invention provides a device for archiving face data, the device including:
an acquisition module configured to obtain a batch of face data to be archived, the face data including face features and face pose features;
a calculation module configured to calculate the pairwise similarity between all face features to obtain the global similarity of each piece of face data;
a first screening module configured to perform threshold screening on the global similarity of each piece of face data according to the face pose features and a preset similarity threshold matrix to obtain a first face data set, where the preset similarity threshold matrix is obtained according to the face features and face pose features contained in the archived face data in a preset archive library; and
an archiving module configured to archive the first face data set through the archived face data in the preset archive library.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor, when executing the computer program, implements the steps in the method for archiving face data provided by the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the method for archiving face data provided by the embodiments of the present invention.
In the embodiments of the present invention, a batch of face data to be archived is obtained, the face data including face features and face pose features; the pairwise similarity between all face features is calculated to obtain the global similarity of each piece of face data; threshold screening is performed on the global similarity of each piece of face data according to the face pose features and a preset similarity threshold matrix to obtain a first face data set, the preset similarity threshold matrix being obtained according to the face features and face pose features contained in the archived face data in a preset archive library; and the first face data set is archived through the archived face data in the preset archive library. After a batch of face data is obtained from a video stream, the corresponding face features are extracted for archiving, and during archiving a preset threshold matrix is used for similarity judgment. Since the preset threshold matrix is obtained according to the face pose features of the archived face data, it can supply a similarity threshold adapted to the face pose features, thereby solving the prior-art problem that a unified threshold evaluation mode leads to a low archiving rate.
Brief Description of the Drawings
The drawings required in the embodiments of the present application are introduced below.
FIG. 1 is a flowchart of a method for archiving face data provided by an embodiment of the present invention;
FIG. 2 is a flowchart of a method for constructing a threshold matrix provided by an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a device for archiving face data provided by an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of another device for archiving face data provided by an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a construction module provided by an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a second calculation sub-module provided by an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an archiving module provided by an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an archiving sub-module provided by an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
Detailed Description of the Embodiments
The embodiments of the present application are described below with reference to the drawings.
Referring to FIG. 1, FIG. 1 is a flowchart of a method for archiving face data provided by an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
101. Obtain a batch of face data to be archived.
In this embodiment of the present invention, the video stream may be a video stream captured in real time by a camera or a video stream uploaded by a user. The camera may be one installed in any scene, for example a camera installed indoors or outdoors.
Further, the video stream may be captured by a single camera or by multiple cameras.
A batch of face data can be obtained from the video stream by a face detection algorithm. The face data includes face features and face pose features. Each piece of face data further includes face image information, face confidence information, and a face detection frame ID; in the face detection algorithm, the same face detection frame ID is assigned to what the algorithm deems the same target.
Each piece of face data further includes time information and location information. When a camera captures a video stream, each image frame carries time information and the video stream carries location information. The time information can be understood as the moment at which the image frame was captured; the location information can be understood as the place where the video stream was captured, or as the installation position of the camera, since each camera is assigned corresponding latitude and longitude information as its location information when installed.
From the time information of an image frame, the time information of the corresponding face data can be obtained; from the location information of the video stream, the location information of the corresponding face data can be obtained.
The face features of a piece of face data can be extracted from the face image information by a face feature extraction algorithm; the face pose features can be extracted from the face image information by a face pose estimation algorithm; and the spatiotemporal features of the face can be extracted from the time information and location information.
The face features may be face feature vectors. A face feature vector maps the high-dimensional image data of a face image to a low-dimensional space, forming a feature vector of a preset dimension (for example, a 512-dimensional vector) that represents the characteristics of the image itself. Whether different face images are similar can be judged by comparing the distance between their face feature vectors: the closer the vectors, the higher the probability that the images show the same person's face. The dispersion of the face feature vectors depends on the selected face key-point positions and on how distinguishable the individual features are.
The face pose features may be the angle values of the face orientation deflection, which may include pitch, yaw, and roll deflection.
102. Calculate the pairwise similarity between all face features to obtain the global similarity of each piece of face data.
In this embodiment of the present invention, each piece of face data corresponds to one face feature, so the batch of face data corresponds to a batch of face features, for which the pairwise similarities are calculated. That is, each face feature is compared, by similarity calculation, with every face feature other than itself to obtain the global similarity of that face feature.
For example, if the batch contains M pieces of face data, M face features are extracted accordingly. Each face feature is compared with the M-1 face features other than itself, so the global similarity of each face feature consists of M-1 similarities.
In a possible embodiment, for each piece of face data, the TopN highest of the M-1 similarities may be selected as the global similarity of that piece of face data.
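The pairwise comparison of step 102, together with the optional TopN selection, can be sketched as follows. This is a minimal illustration assuming cosine similarity over L2-normalised feature vectors; the embodiment does not fix the similarity measure.

```python
import numpy as np

def global_similarity(features, top_n=5):
    """For each of the M face features, compute its similarity to the other
    M-1 features and keep at most the TopN highest as its global similarity.

    features: (M, D) array of L2-normalised face feature vectors.
    Returns one list of (other_index, similarity) pairs per face.
    """
    features = np.asarray(features, dtype=float)
    sims = features @ features.T                 # pairwise cosine similarities
    np.fill_diagonal(sims, -np.inf)              # exclude self-comparison
    result = []
    for row in sims:
        order = [j for j in np.argsort(row)[::-1] if np.isfinite(row[j])]
        result.append([(int(j), float(row[j])) for j in order[:top_n]])
    return result
```

Keeping only the TopN entries bounds the per-face candidate list, which keeps the later threshold screening and graph construction tractable for large batches.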
103. Perform threshold screening on the global similarity of each piece of face data according to the face pose features and a preset similarity threshold matrix.
In this embodiment of the present invention, the preset threshold matrix is obtained according to the face pose features of the archived face data. Specifically, the preset threshold matrix contains thresholds corresponding to different face pose features. According to the face pose features, the corresponding threshold can be looked up in the preset threshold matrix and compared with the global similarity; if the global similarity contains a similarity greater than the corresponding threshold, the corresponding face data can be added to the first face data set, where each piece of face data corresponds to one first face data set.
For example, consider a piece of face data A whose face pose feature is A1. Among the remaining M-1 pieces of face data, suppose a piece of face data B has similarity S1 to A and face pose feature B1. The corresponding threshold C1 can then be looked up in the preset threshold matrix according to (A1, B1), and S1 is compared with C1: if S1 is greater than or equal to C1, B is added to the first face data set of A; if S1 is less than C1, no processing is performed. The first face data set of A includes A itself.
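The lookup in step 103 can be sketched as follows. The angle intervals, the `bin_index` helper, and the threshold values are all hypothetical; the embodiment only requires that the threshold be selected by the pose pair (A1, B1).

```python
import numpy as np

# Hypothetical angle intervals used to index the preset threshold matrix.
ANGLE_BINS = [(0, 15), (15, 30), (30, 45), (45, 90)]

def bin_index(angle):
    """Map a face angle feature value to the index of its angle interval."""
    for i, (lo, hi) in enumerate(ANGLE_BINS):
        if lo <= abs(angle) < hi:
            return i
    return len(ANGLE_BINS) - 1

def first_face_data_set(master, candidates, threshold_matrix):
    """Threshold-screen the global similarity of one master face A.

    master:     (index, angle) of the master face data A.
    candidates: list of (index, similarity_to_master, angle) for the other
                M-1 faces.
    Candidate B is kept when S1 >= threshold_matrix[bin(A1), bin(B1)];
    the resulting set always contains the master face itself.
    """
    a1 = bin_index(master[1])
    kept = [master[0]]
    for idx, s1, angle_b in candidates:
        if s1 >= threshold_matrix[a1, bin_index(angle_b)]:
            kept.append(idx)
    return kept
```

Because the threshold depends on both poses, a frontal-versus-deflected pair can be judged against a different (typically looser or stricter) bound than a frontal-versus-frontal pair, which is the point of the pose-adaptive matrix.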
104. Archive the first face data set through the archived face data in a preset archive library.
In this embodiment of the present invention, the first face data set retains the face data that is highly similar to its master face data; the first face data sets whose master face data share the same face detection frame can be merged into the same file.
Optionally, the first face data set may be screened according to the spatiotemporal features of the faces to obtain a second face data set, and the second face data set is then archived through the archived face data in the preset archive library.
The spatiotemporal features of a face include time information and location information, according to which the first face data set is screened.
Further, for the first face data set associated with one piece of face data, that piece serves as the master face data of the set. If two or more pieces of face data in the set share the same time information, only the piece most similar to the master face data is kept and the rest are deleted from the first face data set; the same rule applies when two or more pieces share the same time information but differ in position information. Next, it is judged whether the position-time relation between the master face data and each remaining piece satisfies a first preset relation; any piece whose position-time relation with the master face data fails the first preset relation is deleted from the first face data set.
Then, for every pair of pieces other than the master face data, it is judged whether their position-time relation satisfies a second preset relation; if a pair fails, each piece of the pair is checked against the trajectory data corresponding to the master face data, and any piece that does not match that trajectory data is deleted. The trajectory data corresponding to the master face data can be determined from the face-detection-box ID of the master face data; specifically, pieces of face data that share the master face data's face-detection-box ID form the corresponding trajectory data together with the master face data.
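As a minimal illustration of the same-timestamp rule above (the function name and data layout are assumptions for illustration, not taken from the filing), the sketch below keeps, for each timestamp, only the detection most similar to the master face:

```python
def dedup_same_timestamp(master_sim, detections):
    """Among detections that share a timestamp, keep only the one whose
    similarity to the master face data is highest.

    detections: list of (face_id, timestamp) pairs (hypothetical layout)
    master_sim: face_id -> similarity to the master face data
    """
    best = {}  # timestamp -> face_id of the most similar detection so far
    for face_id, ts in detections:
        if ts not in best or master_sim[face_id] > master_sim[best[ts]]:
            best[ts] = face_id
    return sorted(best.values())
```

For example, two detections at timestamp 1 with similarities 0.9 and 0.95 collapse to the 0.95 one, while the lone detection at timestamp 2 survives.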
For example, if the position information of two pieces of face data is LA and LB and their time information is TA and TB respectively, the position-time relation is expressed as ε = |LA − LB| / |TA − TB|. The first preset relation is a threshold ε1: if the position-time relation ε between the master face data and some piece of face data exceeds ε1, that pair does not satisfy the first preset relation. The second preset relation is a threshold ε2: if the position-time relation ε between two pieces of face data exceeds ε2, that pair does not satisfy the second preset relation.
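The ε check above can be sketched as follows. Treating positions as scalar distances is an assumption for illustration; a real deployment would use a geographic distance between camera locations:

```python
def position_time_relation(loc_a, loc_b, t_a, t_b):
    """epsilon = |LA - LB| / |TA - TB|: distance covered per unit time."""
    dt = abs(t_a - t_b)
    if dt == 0:
        # Same timestamp at (possibly) different places: implausible pair.
        return float("inf") if loc_a != loc_b else 0.0
    return abs(loc_a - loc_b) / dt

def satisfies_relation(loc_a, loc_b, t_a, t_b, epsilon_limit):
    """A pair fails the preset relation when epsilon exceeds the limit."""
    return position_time_relation(loc_a, loc_b, t_a, t_b) <= epsilon_limit
```

With an ε limit of 10, a pair 50 units apart over 10 seconds passes (ε = 5), while a pair 500 units apart over the same interval fails (ε = 50).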
Screening the first face data set by the above method yields the second face data set, where each second face data set corresponds to one piece of face data.
In this embodiment of the present invention, the second face data set retains the face data most similar to the master face data, and the spatio-temporal screening removes face data that violate the spatio-temporal relations, which increases the probability that all face data in a second face data set belong to the same person's face.
Further, second face data sets whose master face data share the same face detection box can be grouped into the same file.
In a possible embodiment, the second face data sets are clustered and archived against a subset of the archived face data; archiving here means placing unarchived face data into the files corresponding to that subset. Specifically, each piece of archived face data also carries a file label. With the archived face data as cluster centers, the master face data of all second face data sets are clustered; after clustering succeeds, the second face data sets assigned to the same file label are deduplicated and archived.
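A minimal sketch of clustering unarchived master faces around archived cluster centres. The similarity function, the threshold, and the data layout here are illustrative assumptions, not details from the filing:

```python
def assign_to_archives(unarchived, centres, similarity, threshold):
    """Assign each unarchived master face to the archived centre it is most
    similar to, provided that similarity clears the threshold; faces that
    clear no centre stay unassigned (None) and would seed a new file."""
    assignments = {}
    for face_id, feature in unarchived.items():
        best_label, best_sim = None, threshold
        for label, centre_feature in centres.items():
            s = similarity(feature, centre_feature)
            if s >= best_sim:
                best_label, best_sim = label, s
        assignments[face_id] = best_label
    return assignments
```

With a simple dot-product similarity on 2-d toy features, a face close to centre A lands in file A, one close to centre B in file B, and an ambiguous face below the threshold stays unassigned.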
In a possible embodiment, the second face data sets are assembled into a face relationship graph, and archiving is performed according to the face relationship clusters in the graph. It should be noted that in the face relationship graph, each node represents a piece of face data and the edge between two nodes represents the similarity between the two pieces.
Optionally, the second face data sets may be assembled into a first face relationship graph; a strong-connectivity computation is performed on the first face relationship graph to obtain a second face relationship graph; the batch of face data is then archived based on the second face relationship graph. Both the first and the second face relationship graphs are directed graphs.
It should be noted that in graph theory, a directed graph G is strongly connected if, for any two vertices v1 and v2, there is a path from v1 to v2 (a walk in which no vertex or edge repeats is called a path) and a path from v2 to v1. The strongly connected components of the first face relationship graph can be found by traversing in both directions and intersecting the reachable sets, with time complexity O(N² + M); alternatively, Kosaraju's algorithm or Tarjan's algorithm can be used, both with time complexity O(N + M).
Performing the strong-connectivity computation on the first face relationship graph removes some erroneous face data, making the face relationship clusters more accurate.
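A compact sketch of Kosaraju's two-pass algorithm mentioned above. The edge-list representation is an assumption; recursion is fine for illustration, though an iterative version would be needed at the scale of a real face relationship graph:

```python
from collections import defaultdict

def strongly_connected_components(edges, nodes):
    """Kosaraju's algorithm, O(N + M): a post-order pass on the graph,
    then a pass on the reversed graph in reverse post-order."""
    graph, rev = defaultdict(list), defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
        rev[v].append(u)

    visited, order = set(), []
    def dfs1(u):
        visited.add(u)
        for v in graph[u]:
            if v not in visited:
                dfs1(v)
        order.append(u)  # post-order

    for u in nodes:
        if u not in visited:
            dfs1(u)

    seen, components = set(), []
    def dfs2(u, comp):
        seen.add(u)
        comp.append(u)
        for v in rev[u]:
            if v not in seen:
                dfs2(v, comp)

    for u in reversed(order):
        if u not in seen:
            comp = []
            dfs2(u, comp)
            components.append(comp)
    return components
```

Two mutually similar faces (edges in both directions) end up in one component; a face with only a one-way edge into the pair is split off, which is exactly the pruning effect described above.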
Optionally, the second face relationship graph includes first face relationship clusters; the first face relationship clusters can be optimized with a preset community discovery algorithm to obtain second face relationship clusters, and the batch of face data is archived based on the second face relationship clusters.
It should be noted that the second face relationship graph represents a face relationship network in which some faces are tightly connected and others only sparsely connected. In such a network, a tightly connected part can be regarded as a community: its internal nodes are densely linked, while links between two communities are relatively sparse. A community discovery algorithm can partition a large network into communities at different granularities in a short time without specifying the number of communities in advance; the iteration stops automatically once the modularity no longer improves.
In this embodiment of the present invention, when outdoor cameras are sparsely deployed and differ greatly in placement and image clarity, the facial features captured in different scenes vary widely, so the cluster partition carries some erroneous data. The community discovery algorithm can therefore be used to re-optimize the first face relationship clusters, improving the accuracy of face clustering.
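The filing does not name a specific community discovery algorithm. As a stand-in, the sketch below uses simple label propagation, which likewise groups densely connected nodes without a preset community count; modularity-based methods such as Louvain, which stop once modularity no longer improves, would be the closer match to the description above:

```python
import random

def label_propagation(neighbors, seed=0, max_iter=20):
    """Each node repeatedly adopts the most frequent label among its
    neighbors; densely linked sub-graphs converge to a shared label,
    i.e. a community. neighbors: node -> list of adjacent nodes."""
    rng = random.Random(seed)
    labels = {n: n for n in neighbors}  # every node starts in its own community
    nodes = list(neighbors)
    for _ in range(max_iter):
        changed = False
        rng.shuffle(nodes)
        for n in nodes:
            if not neighbors[n]:
                continue
            counts = {}
            for m in neighbors[n]:
                counts[labels[m]] = counts.get(labels[m], 0) + 1
            # Deterministic tie-break so the sketch is reproducible.
            best = max(counts, key=lambda lab: (counts[lab], str(lab)))
            if labels[n] != best:
                labels[n] = best
                changed = True
        if not changed:  # no label moved: the partition is stable
            break
    return labels
```

Two disconnected triangles converge to two distinct labels, one per community.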
In this embodiment of the present invention, a batch of face data to be archived is obtained, the face data including face features and face pose features; pairwise similarities between all face features are computed to obtain the global similarity of each piece of face data; the global similarity of each piece is threshold-screened according to the face pose features and a preset similarity threshold matrix to obtain a first face data set, where the preset similarity threshold matrix is derived from the face features and face pose features of the archived face data in a preset archive library; and the first face data set is archived using the archived face data in the preset archive library. The batch of face data is obtained from a video stream and the corresponding face features are extracted for archiving; during archiving, similarity is judged against the preset threshold matrix. Because the preset threshold matrix is derived from the face pose features of already-archived face data, it supplies similarity thresholds adapted to each face pose, solving the low archiving rate caused by the uniform-threshold evaluation used in the prior art.
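The pose-dependent threshold screening summarized above could look as follows, under the assumption that a face pose maps to a bin index into the threshold matrix; all names here are illustrative:

```python
def threshold_screen(poses, similarity, threshold_matrix, bin_of):
    """For each face i, keep the faces j whose similarity to i clears the
    threshold looked up from the two faces' pose bins.

    poses: list of pose values, one per face
    similarity: function (i, j) -> global similarity between faces i and j
    bin_of: maps a pose value to its row/column index in threshold_matrix
    """
    kept = {i: [] for i in range(len(poses))}
    for i in range(len(poses)):
        for j in range(len(poses)):
            if i == j:
                continue
            threshold = threshold_matrix[bin_of(poses[i])][bin_of(poses[j])]
            if threshold is not None and similarity(i, j) >= threshold:
                kept[i].append(j)
    return kept
```

The point of the lookup is that a cross-pose pair (e.g. frontal vs. strongly deflected) is judged against a lower threshold than a same-pose pair, instead of one uniform value.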
Optionally, referring to FIG. 2, which is a flowchart of a threshold-matrix construction method provided by an embodiment of the present invention, before step 103 of the embodiment of FIG. 1 the method further includes constructing the preset threshold matrix, specifically comprising the following steps:
201. Obtain the archived face data in the preset archive library, and extract the face features and face pose features of the archived face data.
In this embodiment of the present invention, multiple archived face files may be obtained, each corresponding to the archived face data of one person. The archived face data in a file may include cover face data and enrolled face data, where the cover face data may be frontal face data whose face orientation has a pitch deflection of 0°, a yaw deflection of 0°, and a roll deflection of 0°.
A single archived face file contains multiple pieces of archived face data, each including face image information, a face confidence score, a face-detection-box ID, time information, and position information. A face feature extraction algorithm can be applied to the face image information of the archived face data to extract its face features, and a face pose estimation algorithm can be applied to the same image information to estimate its face pose features.
202. Construct the preset threshold matrix from the face features and face pose features of the archived face data.
In this embodiment of the present invention, for each archived face file, a face feature and a face pose feature are extracted from every piece of archived face data in it. The pose-feature difference between two pieces of archived face data can be computed along with the similarity between the two pieces, and that similarity is then associated with the pose-feature difference.
Optionally, the face pose features include a face angle feature value. For each archived face file in the preset archive library, the face data contained in the file is partitioned according to the face angle feature value, yielding archived face data sets for multiple face angle intervals. The average similarity between the archived face data sets is computed from the face features of the archived face data, or the average similarity between the frontal face data set and each deflected face data set is computed from those features. From the average similarities between the archived face data sets, an average similarity matrix is constructed for each archived face file, with each archived face file corresponding to one average similarity matrix whose dimension is determined by the number of intervals; the preset threshold matrix is then computed from the average similarity matrices.
Partitioning the face angle feature values into intervals and using the average similarity of each interval pair as the similarity threshold avoids the problem that a single similarity threshold cannot judge similarity well once the face angle deflects.
For example, the face pose feature may be the angle feature value of the face yaw, which can range from 0° to 60°. The yaw values can be partitioned by a preset angle step: partitioning every 10° yields the intervals 0°, (0°, 10°], (10°, 20°], (20°, 30°], (30°, 40°], (40°, 50°], and (50°, 60°]. Arranging these intervals as a matrix gives Table 1:
Table 1
[Table 1: a 7 × 7 grid, rendered as an image in the original filing, whose row and column headers are the yaw-angle intervals 0°, (0°, 10°], (10°, 20°], (20°, 30°], (30°, 40°], (40°, 50°], (50°, 60°]; the cells are initially empty.]
Suppose the two pieces of archived face data are archived face data A and archived face data B. The yaw angle feature value of A is 35°, falling in the (30°, 40°] interval, and that of B is 20°, falling in the (10°, 20°] interval. The similarity SAB between archived face data A and archived face data B is computed, and SAB is entered at the position corresponding to (30°, 40°] and (10°, 20°], giving Table 2:
Table 2
[Table 2: the same grid, rendered as an image in the original filing, with the similarity SAB entered in the cell at row (30°, 40°], column (10°, 20°].]
It should be noted that in Table 1 and Table 2, removing the first row and the first column leaves the corresponding matrix form. When multiple similarities correspond to (30°, 40°] and (10°, 20°], their average is entered instead. For example, suppose two further pieces of archived face data, C and D: the yaw angle feature value of archived face data C is 35°, falling in (30°, 40°], and that of archived face data D is 20°, falling in (10°, 20°]. The similarity SCD between C and D is computed and also corresponds to (30°, 40°] and (10°, 20°]; the average S of SAB and SCD is then taken as the average similarity for (30°, 40°] and (10°, 20°], giving Table 3:
Table 3
[Table 3: the same grid, rendered as an image in the original filing, with the average S = (SAB + SCD)/2 entered in the cell at row (30°, 40°], column (10°, 20°].]
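The interval binning and cell-wise averaging walked through in Tables 1 to 3 can be sketched in pure Python, using the 10° yaw intervals from the example; the function names are illustrative:

```python
YAW_BINS = [0, 10, 20, 30, 40, 50, 60]  # degrees; bin 0 is 0°, bin i is (BINS[i-1], BINS[i]]

def yaw_bin(yaw):
    """Map a yaw angle to its interval index: 0°, (0°,10°], (10°,20°], ..."""
    for i, upper in enumerate(YAW_BINS):
        if yaw <= upper:
            return i
    raise ValueError("yaw outside the 0-60 degree range")

def average_similarity_matrix(pairs):
    """pairs: iterable of (yaw_a, yaw_b, similarity) for one archived file.
    Returns the symmetric per-file matrix of bin-to-bin mean similarities,
    with None in cells no pair fell into."""
    n = len(YAW_BINS)
    total = [[0.0] * n for _ in range(n)]
    count = [[0] * n for _ in range(n)]
    for yaw_a, yaw_b, sim in pairs:
        i, j = yaw_bin(yaw_a), yaw_bin(yaw_b)
        for r, c in {(i, j), (j, i)}:  # keep the matrix symmetric
            total[r][c] += sim
            count[r][c] += 1
    return [[total[r][c] / count[r][c] if count[r][c] else None
             for c in range(n)] for r in range(n)]
```

Feeding in the A/B and C/D pairs from the example (both 35° vs. 20°) fills the (30°, 40°] × (10°, 20°] cell with the average of SAB and SCD, exactly as in Table 3.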
Optionally, the average similarity between the frontal face data set and each deflected face data set may be computed from the face features of the archived face data.
For one archived face file, an average similarity table such as Table 4 can be obtained:
Table 4
[Table 4: the full interval-by-interval average similarity table for one archived face file, rendered as an image in the original filing; rows and columns are the yaw-angle intervals and each cell holds the average similarity between the corresponding archived face data sets.]
Alternatively, only a single average similarity per interval may be retained, as in Table 5:
Table 5
[Table 5: a single row, rendered as an image in the original filing, giving for each yaw-angle interval 0°, (0°, 10°], (10°, 20°], (20°, 30°], (30°, 40°], (40°, 50°], (50°, 60°] the average similarity a0, a1, a2, a3, a4, a5, a6.]
Here a0, a1, a2, a3, a4, a5, and a6 are the average similarities between the frontal face data set and each deflected face data set, computed from the face features of the archived face data.
It should be noted that removing the first row and the first column of Table 4 and Table 5 leaves the corresponding average similarity matrices.
For multiple archived face files there are correspondingly multiple average similarity matrices. Across all of these average similarity matrices, the average similarity values in corresponding matrix cells can be summed and averaged to obtain a final similarity matrix, which is taken as the preset threshold matrix.
When the face deflection angle is small the similarity changes little, but once the deflection passes a certain point the similarity changes greatly; a uniform-threshold evaluation cannot handle such large deflections well and is also complex to evaluate. In this embodiment of the present invention, a similarity threshold matrix of smoothed moving-average similarities fits the similarity under multiple poses, which can further improve the clustering effect and in turn the face data archiving rate.
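The cell-wise averaging across per-file matrices can be sketched as follows; None marks a cell a given file left empty, and the list-of-lists representation is an assumption for illustration:

```python
def final_threshold_matrix(per_file_matrices):
    """Element-wise average over all per-file average-similarity matrices,
    skipping cells a given file left empty (None). The result is used as
    the preset threshold matrix."""
    n = len(per_file_matrices[0])
    result = [[None] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            vals = [m[r][c] for m in per_file_matrices if m[r][c] is not None]
            if vals:
                result[r][c] = sum(vals) / len(vals)
    return result
```

A cell populated by only some files averages over just those files, so sparsely observed pose combinations still get a threshold.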
It should be noted that the face data archiving method provided by the embodiments of the present invention can be applied to smartphones, computers, servers, and other devices capable of archiving face data.
Optionally, referring to FIG. 3, which is a schematic structural diagram of a face data archiving apparatus provided by an embodiment of the present invention, the apparatus includes:
an acquisition module 301, configured to obtain a batch of face data to be archived, the face data including face features and face pose features;
a computation module 302, configured to compute pairwise similarities between all face features to obtain the global similarity of each piece of face data;
a first screening module 303, configured to perform threshold screening on the global similarity of each piece of face data according to the face pose features and a preset similarity threshold matrix to obtain a first face data set, the preset similarity threshold matrix being derived from the face features and face pose features of the archived face data in a preset archive library; and
an archiving module 304, configured to archive the first face data set using the archived face data in the preset archive library.
Optionally, as shown in FIG. 4, the apparatus further includes:
an extraction module 306, configured to obtain the archived face data in the preset archive library and extract the face features and face pose features of the archived face data; and
a construction module 307, configured to construct the preset threshold matrix from the face features and face pose features of the archived face data.
Optionally, as shown in FIG. 5, the multiple archived face data sets include one frontal face data set and multiple deflected face data sets, the face pose features include a face angle feature value, and the construction module 307 includes:
a partition submodule 3071, configured to partition, for each archived face file in the preset archive library, the face data contained in the file according to the face angle feature value, obtaining archived face data sets for multiple face angle intervals;
a first computation submodule 3072, configured to compute the average similarity between the archived face data sets from the face features of the archived face data, or to compute the average similarity between the frontal face data set and the deflected face data sets from those features;
a construction submodule 3073, configured to construct, from the average similarities between the archived face data sets, an average similarity matrix for each archived face file, each archived face file corresponding to one average similarity matrix whose dimension is determined by the number of intervals; and
a second computation submodule 3074, configured to compute the preset threshold matrix from the average similarity matrices.
Optionally, as shown in FIG. 6, each cell of an average similarity matrix corresponds to one average similarity value, and the archiving module 304 includes:
an extraction unit 30741, configured to extract, across all the average similarity matrices, the average similarity values in corresponding matrix cells and sum and average them to obtain a final similarity matrix; and
an obtaining unit 30742, configured to use the final similarity matrix as the preset threshold matrix.
Optionally, the archiving module 305 is further configured to screen the first face data set according to the face spatio-temporal features to obtain a second face data set, and to archive the second face data set using the archived face data in the preset archive library.
Optionally, as shown in FIG. 7, the archiving module 305 includes:
an integration submodule 3051, configured to assemble the second face data set into a first face relationship graph;
a third computation submodule 3052, configured to perform a strong-connectivity computation on the first face relationship graph to obtain a second face relationship graph; and
an archiving submodule 3053, configured to archive the batch of face data based on the second face relationship graph.
Optionally, as shown in FIG. 8, the second face relationship graph includes first face relationship clusters, and the archiving submodule 3053 includes:
an optimization unit 50531, configured to optimize the first face relationship clusters according to a preset community discovery algorithm to obtain second face relationship clusters; and
an archiving unit 30532, configured to archive the batch of face data based on the second face relationship clusters.
It should be noted that the face data archiving apparatus provided by the embodiments of the present invention can be applied to smartphones, computers, servers, and other devices capable of archiving face data.
The face data archiving apparatus provided by the embodiments of the present invention can implement every process implemented by the face data archiving method in the above method embodiments and achieve the same beneficial effects. To avoid repetition, details are not repeated here.
Referring to FIG. 9, which is a schematic structural diagram of an electronic device provided by an embodiment of the present invention, the device includes: a memory 902, a processor 901, and a computer program for the face data archiving method stored in the memory 902 and executable on the processor 901, where:
the processor 901 is configured to call the computer program stored in the memory 902 and perform the following steps:
obtaining a batch of face data to be archived, the face data including face features and face pose features;
computing pairwise similarities between all face features to obtain the global similarity of each piece of face data;
performing threshold screening on the global similarity of each piece of face data according to the face pose features and a preset similarity threshold matrix to obtain a first face data set, the preset similarity threshold matrix being derived from the face features and face pose features of the archived face data in a preset archive library; and
archiving the first face data set using the archived face data in the preset archive library.
Optionally, the method performed by the processor 901 further includes:
obtaining the archived face data in the preset archive library, and extracting the face features and face pose features of the archived face data; and
constructing the preset threshold matrix from the face features and face pose features of the archived face data.
Optionally, the multiple archived face data sets include one frontal face data set and multiple deflected face data sets, the face pose features include a face angle feature value, and the constructing, performed by the processor 901, of the preset threshold matrix from the face features and face pose features of the archived face data includes:
partitioning, for each archived face file in the preset archive library, the face data contained in the file according to the face angle feature value, obtaining archived face data sets for multiple face angle intervals;
computing the average similarity between the archived face data sets from the face features of the archived face data, or computing the average similarity between the frontal face data set and the deflected face data sets from those features;
constructing, from the average similarities between the archived face data sets, an average similarity matrix for each archived face file, each archived face file corresponding to one average similarity matrix whose dimension is determined by the number of intervals; and
computing the preset threshold matrix from the average similarity matrices.
Optionally, each cell of the average similarity matrix corresponds to one average similarity value, and the computing, performed by the processor 901, of the preset threshold matrix from the average similarity matrices includes:
extracting, across all the average similarity matrices, the average similarity values in corresponding matrix cells and summing and averaging them to obtain a final similarity matrix; and
using the final similarity matrix as the preset threshold matrix.
Optionally, the archiving, performed by the processor 901, of the first face data set using the archived face data in the preset archive library includes:
screening the first face data set according to the face spatio-temporal features to obtain a second face data set; and
archiving the second face data set using the archived face data in the preset archive library.
Optionally, the second face relationship graph includes first face relationship clusters, and the archiving, performed by the processor 901, of the second face data set using the archived face data in the preset archive library includes:
generating a first face relationship graph based on the second face data set;
performing a strong-connectivity computation on the first face relationship graph to obtain a second face relationship graph; and
archiving the batch of face data based on the second face relationship graph.
Optionally, the second face relationship graph includes a first face relationship cluster, and the archiving of the batch of face data performed by the processor 901 based on the second face relationship graph includes:
optimizing the first face relationship cluster according to a preset community discovery algorithm to obtain a second face relationship cluster;
archiving the batch of face data based on the second face relationship cluster.
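The "preset community discovery algorithm" is not named in the patent. Label propagation is one common choice and is used below purely as a stand-in; the weighted adjacency-dict representation (node to {neighbor: edge weight}) is likewise an assumption:

```python
import random


def label_propagation(adj, rounds=10, seed=0):
    """Refine clusters with label propagation (a stand-in for the
    unspecified preset community discovery algorithm).

    `adj` maps node -> {neighbor: edge_weight}; returns node -> label.
    """
    rng = random.Random(seed)
    labels = {node: node for node in adj}
    nodes = list(adj)
    for _ in range(rounds):
        rng.shuffle(nodes)
        changed = False
        for node in nodes:
            if not adj[node]:
                continue
            # Adopt the neighbor label with the largest total edge weight.
            weight_by_label = {}
            for neighbor, weight in adj[node].items():
                label = labels[neighbor]
                weight_by_label[label] = weight_by_label.get(label, 0.0) + weight
            best = max(weight_by_label, key=weight_by_label.get)
            if labels[node] != best:
                labels[node] = best
                changed = True
        if not changed:
            break
    return labels
```

Nodes sharing a final label form one refined face relationship cluster.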
The electronic device provided by the embodiments of the present invention can implement each process of the face data archiving method in the foregoing method embodiments and achieve the same beneficial effects. To avoid repetition, details are not repeated here.
Embodiments of the present invention further provide a computer-readable storage medium storing a computer program that, when executed by a processor, implements each process of the face data archiving method (or the application-side face data archiving method) provided by the embodiments of the present invention and achieves the same technical effects. To avoid repetition, details are not repeated here.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the foregoing embodiments can be implemented by a computer program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, may include the processes of the foregoing method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.

Claims (10)

  1. A face data archiving method, comprising the following steps:
    acquiring a batch of face data to be archived, the face data including face features and face pose features;
    computing pairwise similarities between all face features to obtain a global similarity for each face data item;
    performing threshold screening on the global similarity of each face data item according to the face pose features and a preset similarity threshold matrix to obtain a first face data set, the preset similarity threshold matrix being obtained from the face features and face pose features contained in the archived face data in a preset archive library;
    archiving the first face data set using the archived face data in the preset archive library.
  2. The face data archiving method according to claim 1, wherein the method further comprises:
    acquiring the archived face data in the preset archive library, and extracting face features and face pose features of the archived face data;
    constructing the preset threshold matrix according to the face features and face pose features of the archived face data.
  3. The face data archiving method according to claim 2, wherein the face pose features include face angle feature values, and constructing the preset threshold matrix according to the face features and face pose features of the archived face data comprises:
    for each archived face file in the preset archive library, dividing the face data contained in the archived face file according to the face angle feature values to obtain multiple archived face data sets under different face angle intervals, the multiple archived face data sets including one frontal face data set and multiple deflected face data sets;
    computing the average similarity between the archived face data sets according to the face features of the archived face data, or computing the average similarity between the frontal face data set and the deflected face data sets according to the face features of the archived face data;
    constructing an average similarity matrix for each archived face file according to the average similarity between the archived face data sets, wherein each archived face file corresponds to one average similarity matrix, and the dimension of the average similarity matrix is related to the number of division intervals;
    computing the preset threshold matrix from the average similarity matrices.
  4. The face data archiving method according to claim 3, wherein each average similarity matrix unit in the average similarity matrix corresponds to one average similarity value, and computing the preset threshold matrix from the average similarity matrices comprises:
    for all of the average similarity matrices, extracting the average similarity values in the corresponding average similarity matrix units, summing them, and averaging to obtain a final similarity matrix;
    using the final similarity matrix as the preset threshold matrix.
  5. The face data archiving method according to claim 1, wherein archiving the first face data set using the archived face data in the preset archive library comprises:
    filtering the first face data set according to the face spatiotemporal features to obtain a second face data set;
    archiving the second face data set using the archived face data in the preset archive library.
  6. The face data archiving method according to claim 5, wherein the second face relationship graph includes a first face relationship cluster, and archiving the second face data set using the archived face data in the preset archive library comprises:
    generating a first face relationship graph based on the second face data set;
    performing a strong-connectivity computation on the first face relationship graph to obtain a second face relationship graph;
    archiving the batch of face data based on the second face relationship graph.
  7. The face data archiving method according to claim 6, wherein the second face relationship graph includes a first face relationship cluster, and archiving the batch of face data based on the second face relationship graph comprises:
    optimizing the first face relationship cluster according to a preset community discovery algorithm to obtain a second face relationship cluster;
    archiving the batch of face data based on the second face relationship cluster.
  8. A face data archiving apparatus, comprising:
    an acquisition module, configured to acquire a batch of face data to be archived, the face data including face features and face pose features;
    a computation module, configured to compute pairwise similarities between all face features to obtain a global similarity for each face data item;
    a first screening module, configured to perform threshold screening on the global similarity of each face data item according to the face pose features and a preset similarity threshold matrix to obtain a first face data set, the preset similarity threshold matrix being obtained from the face features and face pose features contained in the archived face data in a preset archive library;
    an archiving module, configured to archive the first face data set using the archived face data in the preset archive library.
  9. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the face data archiving method according to any one of claims 1 to 7.
  10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the face data archiving method according to any one of claims 1 to 7.
PCT/CN2022/143530 2021-12-31 2022-12-29 Facial data archiving method and related device WO2023125839A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111678101.0 2021-12-31
CN202111678101.0A CN114443893A (en) 2021-12-31 2021-12-31 Face data filing method and related equipment

Publications (1)

Publication Number Publication Date
WO2023125839A1 true WO2023125839A1 (en) 2023-07-06

Family

ID=81365203

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/143530 WO2023125839A1 (en) 2021-12-31 2022-12-29 Facial data archiving method and related device

Country Status (2)

Country Link
CN (1) CN114443893A (en)
WO (1) WO2023125839A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114443893A (en) * 2021-12-31 2022-05-06 深圳云天励飞技术股份有限公司 Face data filing method and related equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150205997A1 (en) * 2012-06-25 2015-07-23 Nokia Corporation Method, apparatus and computer program product for human-face features extraction
CN105488527A (en) * 2015-11-27 2016-04-13 小米科技有限责任公司 Image classification method and apparatus
CN109684913A (en) * 2018-11-09 2019-04-26 长沙小钴科技有限公司 A kind of video human face mask method and system based on community discovery cluster
CN109800668A (en) * 2018-12-28 2019-05-24 上海依图网络科技有限公司 A kind of archiving method and device
CN111738221A (en) * 2020-07-28 2020-10-02 腾讯科技(深圳)有限公司 Face clustering method, face clustering device and computer-readable storage medium
CN114443893A (en) * 2021-12-31 2022-05-06 深圳云天励飞技术股份有限公司 Face data filing method and related equipment


Also Published As

Publication number Publication date
CN114443893A (en) 2022-05-06


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22915088

Country of ref document: EP

Kind code of ref document: A1