CN111177436A - Face feature retrieval method, device and equipment

Face feature retrieval method, device and equipment

Info

Publication number
CN111177436A
Authority
CN
China
Prior art keywords
face
similarity
model
regression
feature data
Prior art date
Legal status
Granted
Application number
CN201811331149.2A
Other languages
Chinese (zh)
Other versions
CN111177436B (en)
Inventor
金琦峰
Current Assignee
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN201811331149.2A
Publication of CN111177436A
Application granted
Publication of CN111177436B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a face feature retrieval method, device, and equipment. The method includes: acquiring a face image to be retrieved; acquiring N groups of face feature data of the face image to be retrieved, where the N groups of face feature data are extracted from the face image to be retrieved by N types of face models respectively; retrieving the N groups of face feature data in a face feature library respectively to obtain N groups of retrieval results; converting the similarities of the N-1 groups of retrieval results of the N-1 types of face models into similarities corresponding to a target face model according to a similarity conversion relation acquired in advance; and outputting the converted retrieval results corresponding to the target face model. Embodiments of the invention can improve the performance of the face retrieval system as well as algorithm accuracy and data compatibility.

Description

Face feature retrieval method, device and equipment
Technical Field
The invention relates to the technical field of image recognition, in particular to a face feature retrieval method, a face feature retrieval device and face feature retrieval equipment.
Background
Face recognition is a biometric technology that identifies a person based on facial feature information. It offers high efficiency and strong reliability, and is therefore widely applied in fields such as security and surveillance.
Face retrieval generally includes face image acquisition and detection, face image preprocessing, face image feature extraction, and matching and recognition. In matching and recognition, the feature information extracted from a face image is searched and matched against face feature data stored in a database to obtain a similarity between the two, and a similarity threshold is set to judge whether the two images show the same face. For example, when the similarity exceeds the threshold, the extracted face image and the face image stored in the database can be judged to belong to the same face.
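For illustration only (not part of the claimed method), a minimal Python sketch of this threshold decision, assuming cosine similarity and an arbitrary threshold of 0.8:

```python
import numpy as np

def is_same_face(query_feat, stored_feat, threshold=0.8):
    """Decide whether two feature vectors belong to the same face by
    comparing their cosine similarity against a threshold."""
    q = np.asarray(query_feat, dtype=float)
    s = np.asarray(stored_feat, dtype=float)
    similarity = q @ s / (np.linalg.norm(q) * np.linalg.norm(s))
    return similarity >= threshold, similarity

# Toy feature vectors; real features would come from a face model
match, sim = is_same_face([0.2, 0.9, 0.4], [0.21, 0.88, 0.41])
print(match, round(float(sim), 4))
```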
In the related art, a face retrieval system usually adopts a single-model configuration for face recognition; that is, only one face analysis server is configured, and only one face model is stored in the server, for example a system comprising an NVR (Network Video Recorder) and one intelligent analysis system.
However, a single-model configuration supports neither iterative upgrades of the face model nor capacity expansion, so the performance of the face retrieval system is poor.
Disclosure of Invention
The embodiment of the invention provides a face feature retrieval method, a face feature retrieval device and face feature retrieval equipment, which are used for solving the problem of poor performance of a face retrieval system in the related technology.
In order to solve the technical problems, the invention adopts the following technical scheme:
in a first aspect, an embodiment of the present invention provides a face feature retrieval method, including:
acquiring a face image to be retrieved;
acquiring N groups of face feature data of the face image to be retrieved, wherein the N groups of face feature data are extracted from the face image to be retrieved by N types of face models respectively;
respectively retrieving the N groups of face feature data in a face feature library to obtain N groups of retrieval results;
respectively converting the similarity of N-1 groups of retrieval results of the N-1 types of face models into the similarity corresponding to the target face model according to a similarity conversion relation acquired in advance;
and outputting the converted retrieval results corresponding to the target face model.
In a second aspect, an embodiment of the present invention provides a face feature retrieval apparatus, including:
the first acquisition module is used for acquiring a face image to be retrieved;
the second acquisition module is used for acquiring N groups of face feature data of the face image to be retrieved, wherein the N groups of face feature data are extracted from the face image to be retrieved by N types of face models respectively;
the retrieval module is used for retrieving the N groups of face feature data in a face feature library respectively to obtain N groups of retrieval results;
the conversion module is used for converting the similarity of the N-1 group retrieval results of the N-1 types of face models into the similarity corresponding to the target face model respectively according to the similarity conversion relation acquired in advance;
and the output module is used for outputting the converted retrieval result corresponding to the target face model.
In a third aspect, an embodiment of the present invention further provides a facial feature retrieval device, which includes a processor, a memory, and a computer program stored on the memory and operable on the processor, where the computer program, when executed by the processor, implements the steps of the facial feature retrieval method described above.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements the steps of the face feature retrieval method as described above.
In the embodiment of the invention, a face image to be retrieved is obtained; N groups of face feature data of the face image to be retrieved are acquired, the N groups being extracted from the face image to be retrieved by N types of face models respectively; the N groups of face feature data are retrieved in a face feature library respectively to obtain N groups of retrieval results; the similarities of the N-1 groups of retrieval results of the N-1 types of face models are converted into similarities corresponding to the target face model according to a similarity conversion relation acquired in advance; and the converted retrieval results corresponding to the target face model are output. In this way, multiple groups of retrieval results corresponding to multiple face models are obtained, their similarities are converted into similarities corresponding to the same face model, and the converted results are output, so that the retrieval results become comparable and the performance of the face retrieval system is improved. In addition, since N types of face models are supported, different face models are compatible with one another, which improves the data compatibility of the face retrieval system; adopting multiple face models can also improve algorithm accuracy.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a block diagram of a heterogeneous cluster system;
fig. 2 is a flowchart of a face feature retrieval method according to an embodiment of the present invention;
fig. 3 is a structural diagram of a face feature retrieval apparatus according to an embodiment of the present invention;
fig. 4 is a block diagram of another face feature retrieval apparatus according to an embodiment of the present invention;
fig. 5 is a structural diagram of a face feature retrieval device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the face feature retrieval method provided by the embodiment of the present invention may be applied to a heterogeneous monitoring system as shown in fig. 1. The system comprises N intelligent terminals (also called backend intelligence), and each intelligent terminal can be configured with a face model.
Specifically, as shown in fig. 1, the system includes: a heterogeneous intelligent cluster 101, and a face checkpoint camera 102, a monitoring camera 103, a service scheduler 104, a picture repository 105, and a big database 106, each connected to the heterogeneous intelligent cluster 101.
In this embodiment, the heterogeneous intelligent cluster 101 includes 4 intelligent terminals 1011. The face checkpoint camera 102 and the monitoring camera 103 respectively provide image information such as face pictures, monitoring pictures, and monitoring video to the 4 intelligent terminals 1011. An intelligent terminal 1011 can automatically obtain a large face image from the image information, crop a small face image out of the large face image, and store both the small and large face images in the picture repository 105. The face feature data of the small face image are extracted by the face model configured in each intelligent terminal 1011 and stored in the big database 106 in the form of semi-structured data.
After monitoring personnel input a face image to be retrieved through the service scheduler 104, the face feature data of the image are extracted by each of the 4 intelligent terminals 1011, and similarity calculation is performed between these feature data and the face feature data with the same identification information stored in the big database 106, to obtain the retrieval results.
The retrieval results may include the similarity between the face image to be retrieved and each small face image in the image information provided by the face checkpoint camera 102 and the monitoring camera 103.
In addition, the retrieval results can also be output through the service scheduler 104 or another interface and displayed on a display device.
It should be noted that the face feature data of monitored face images may be extracted by different face models, and different face models may differ in accuracy, dimension, algorithm, and so on. If face feature data produced by different face models were retrieved against each other directly, incorrect similarities would be obtained.
For example, suppose one face model has dimension 128 and another has dimension 256. During similarity calculation, the 128-dimensional model extracts a 128-dimensional feature vector from one face image, and the 256-dimensional model extracts a 256-dimensional feature vector from another; obviously, a similarity cannot be obtained by directly comparing 128-dimensional face feature data with 256-dimensional face feature data. The invention therefore distinguishes face feature data obtained by different face models, and performs similarity calculation only between face feature data obtained by the same face model to produce retrieval results.
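A minimal illustrative sketch of this constraint, assuming cosine similarity and hypothetical model identifiers:

```python
import numpy as np

def similarity_same_model(feat_a, feat_b, model_id_a, model_id_b):
    """Compute similarity only for features produced by the same face model;
    features from different models (e.g. 128-d vs 256-d) are not comparable."""
    if model_id_a != model_id_b:
        raise ValueError("features from different face models are not comparable")
    a, b = np.asarray(feat_a, float), np.asarray(feat_b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

feat_128 = np.random.rand(128)
feat_256 = np.random.rand(256)
# similarity_same_model(feat_128, feat_256, "m128", "m256")  # would raise ValueError
print(similarity_same_model(feat_128, feat_128, "m128", "m128"))  # 1.0 (same vector)
```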
Of course, the heterogeneous monitoring system shown in fig. 1 only illustrates one application scenario of the face feature retrieval method provided in the embodiment of the present invention, and does not limit other application scenarios or the specific steps of the method.
Fig. 2 is a flowchart of a face feature retrieval method according to an embodiment of the present invention. As shown in fig. 2, the method comprises the steps of:
Step 201: obtaining a face image to be retrieved.
The face image to be retrieved may be input by a user or sent by another device, for example input through the service scheduler 104 shown in fig. 1.
Of course, the face image to be retrieved is not limited to one input by a user; it may also be obtained by the retrieval system itself through capturing and analysis. For example, if the face retrieval system determines through capture and analysis that a person's behavior is dangerous and the person needs to be monitored, it automatically acquires that person's face image.
In addition, the face image to be retrieved may contain only the face region, or may also contain other content besides the face, such as background or body.
As an optional implementation, before step 201, the method further comprises the following steps:
acquiring a monitoring face image through an image acquisition unit;
acquiring N groups of face feature data of the monitored face image, wherein the N groups of face feature data are extracted from the monitored face image by N types of face models respectively;
and storing the N groups of face feature data of the monitored face image into the face feature library.
In a specific application scenario, face images in a monitored scene can be acquired through image acquisition units such as network cameras and monitoring cameras installed in a residential community. An acquired face image is matched against the face image to be retrieved to obtain their similarity, and when the similarity is greater than a certain threshold the two are judged to be the same person, enabling security measures such as monitoring or tracking the person being searched for.
Of course, the image directly acquired by the image acquisition unit may be a large face image, and the monitored face image is obtained by cropping the face region out of the large face image as a small face image. The small face image may be a picture containing only the face region cropped from the large face image, while the large face image may contain other content besides the face.
In addition, the obtained large face image and the small face image can be stored in a picture library.
In practice, the number of monitored face images is often large. The face feature data of one monitored face image can be extracted by the N types of face models respectively, and this is repeated in turn until the face feature data of all monitored face images have been extracted. This speeds up the extraction of face feature data from monitored face images and improves the overall efficiency of the face feature retrieval method.
In this embodiment, the N types of face models extract the face feature data of multiple monitored face images respectively, which avoids the large amount of time consumed when a single face model extracts the face feature data of many monitored face images sequentially, thereby improving the efficiency of the face feature retrieval method.
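As a non-limiting sketch of this idea, the following Python code extracts features from several monitored images with all configured models. The model functions, model identifiers, and thread-based parallelism are illustrative assumptions; the embodiment does not prescribe a concurrency mechanism:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the N configured face models: each callable
# turns an image into a feature vector of that model's dimension.
def make_model(dim):
    def extract(image):
        return [hash((image, i)) % 100 / 100.0 for i in range(dim)]
    return extract

models = {"model_A": make_model(128), "model_B": make_model(512)}
images = ["face_001.jpg", "face_002.jpg", "face_003.jpg"]

def extract_all(image):
    # every configured model extracts features from the same monitored image
    return {mid: fn(image) for mid, fn in models.items()}

with ThreadPoolExecutor() as pool:
    feature_library = dict(zip(images, pool.map(extract_all, images)))
print(len(feature_library["face_001.jpg"]["model_A"]))  # 128
```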
As an optional implementation, the face feature retrieval method further includes: extracting the face feature data of monitored face images in the most important monitored scenes using the face model with the highest accuracy among the N types of face models.
The most important monitored scenes may be key locations such as community entrances, the surroundings of schools, or the surroundings of government buildings, as well as places with heavy foot traffic, and can be set according to the actual situation.
In addition, the face model with the highest accuracy is generally the face model with the latest version.
In this embodiment, a high-accuracy face model can be configured for image acquisition units with stricter monitoring accuracy requirements, improving the targeting of face feature extraction.
As an alternative implementation, the identification information of the face model may include a version number.
For example, as shown in Table 1 below, the version number contains 11 digits, where the first 4 digits distinguish the face model, the middle 4 digits represent the dimension of the face model, and the last 3 digits represent the version level of the face model:

TABLE 1

| Face model (4 digits) | Dimension (4 digits) | Version (3 digits) | Version number (11 digits) |
|---|---|---|---|
| 231 | 128 | V1.0 | 02310128010 |
| 231 | 128 | V1.1 | 02310128011 |
| 245 | 512 | V1.0 | 02450512010 |
| 247 | 2024 | V1.1 | 02472024011 |
| 280 | 128 | V1.1 | 02800128011 |
During retrieval, the version number of each group of face feature data can be obtained first, and only groups of face feature data with the same version number are compared and analyzed; the version level and dimension information of the face model that extracted the data can be read simply and conveniently from the version number.
In this embodiment, using the version number as the identification information makes it simple to read information such as which face model was used, its dimension, and its version level.
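A minimal sketch of packing and parsing such an 11-digit version number, assuming (from Table 1) that version V1.0 is encoded as level 010:

```python
def build_version_number(model_code: int, dim: int, version_level: int) -> str:
    """Pack model code (4 digits), dimension (4 digits) and version level
    (3 digits) into the 11-digit version number described above."""
    return f"{model_code:04d}{dim:04d}{version_level:03d}"

def parse_version_number(vn: str):
    assert len(vn) == 11, "version number is expected to be 11 digits"
    return {"model": vn[:4], "dimension": int(vn[4:8]), "version_level": vn[8:]}

vn = build_version_number(231, 128, 10)   # V1.0 assumed to map to level 010
print(vn)                                  # 02310128010
print(parse_version_number(vn))
```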
Step 202: acquiring N groups of face feature data of the face image to be retrieved, wherein the N groups of face feature data are extracted from the face image to be retrieved by the N types of face models respectively.
Wherein N may be an integer greater than 1.
It should be noted that, to retrieve the face image to be retrieved among the face images in the picture library, the face images must be converted into face feature data; the similarity between the face image to be retrieved and a face image in the picture library can then be determined by comparing the face feature data.
The face feature data may include a plurality of face feature values.
In addition, "N types of face models" means that each of the N face models uses a different algorithm, for example: different algorithm families, different old and new algorithm versions, different algorithm accuracy or dimension, and so on.
For example, a first face model and a second face model are used to extract face feature data of the face image to be retrieved respectively: the first face model produces first face feature data and the second face model produces second face feature data. Since the first face model differs from the second face model, the first face feature data differ from the second face feature data.
Specifically, the N types of face models may be installed in N intelligent terminals in a heterogeneous system, or the N types of face models may be installed in one system.
It should be noted that a face model may be, for example, a neural network model or a deep machine learning model. As technology advances, face model versions need to be updated, so the versions before and after an update differ.
Moreover, a face model with lower accuracy or dimension can be configured when the accuracy requirement on the face feature data is low, and a face model with high accuracy or high dimension when the requirement is high. In this way, a low-accuracy or low-dimension face model can be used wherever it meets the accuracy requirement, avoiding the overly complex face model computation, and thus reduced efficiency, caused by excessive accuracy or dimension.
Certainly, the face image to be retrieved acquired in step 201 may be a large face image from which a small face image is cropped; step 201 may further include storing the acquired large and small face images in the picture library, and step 202 may mean acquiring N groups of face feature data of the small face image through the N types of face models respectively.
In this step, at least two face models extract the face feature data of the face image to be retrieved respectively. While some face models are being upgraded, feature extraction can conveniently continue with the remaining models, avoiding the problem of a single-model system, which must re-analyze a large amount of data during an upgrade and cannot extract face feature data in the meantime.
Step 203: retrieving the N groups of face feature data in a face feature library respectively to obtain N groups of retrieval results.
Retrieval here means searching among face feature data extracted by the same face model.
It should be noted that the face retrieval system may store a large number of face images and the face feature data of each. Each group of retrieval results may include the similarities between the query's face feature data and multiple groups of face feature data in the face feature library, may also include the corresponding face images, and may of course include the identification information of the face model corresponding to that group of results.
In addition, the face feature data of the face images in the face feature library can be extracted respectively by adopting the N kinds of face models.
It should be noted that the face feature data of monitored face images may be extracted by different face models, and the similarity algorithm may also differ depending on which face model extracted the data. The similarity of face feature data with the same identification information is computed with the similarity algorithm matched to that face model.
For example, if one face model has dimension 128 and another has dimension 256, the similarity calculation methods used for the 128-dimensional and the 256-dimensional face feature data obviously differ.
In this step, since face feature data extracted by a given face model are retrieved only against face feature data in the library extracted by the same model, non-comparable data produced by different face models are never compared; this avoids mixing face feature data and improves the accuracy and reliability of face feature retrieval.
As an optional implementation, in order to retrieve face feature data produced by the same face model, each of the N groups of face feature data includes the identification information of the face model that produced it, and each group of face feature data in the face feature library likewise includes the identification information of its face model. Step 203 then specifically includes:
and respectively retrieving the N groups of face feature data in a face feature library according to the identification information of the face model used by each group of face feature data to obtain N groups of retrieval results.
For example: there are a first face model and a second face model, one face image to be retrieved, and 10 monitored face images in the face feature library. The two face models were used to extract the face feature data of the 10 monitored face images, yielding 10 groups of face feature data in one-to-one correspondence with the monitored face images, of which 4 groups were extracted by the first face model and carry first identification information, and the other 6 groups were extracted by the second face model and carry second identification information. In addition, the two face models are used to extract face feature data of the face image to be retrieved, yielding two groups of data, one carrying the first identification information and the other the second.
During retrieval, similarity calculation is performed between the 4 groups of monitored face feature data carrying the first identification information and the query's face feature data carrying the first identification information, and between the 6 groups carrying the second identification information and the query's data carrying the second identification information, obtaining the similarity between the face image to be retrieved and each monitored face image.
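A small illustrative sketch of this grouped retrieval, with hypothetical identifiers "first"/"second" and toy feature vectors mirroring the 4/6 split above:

```python
from collections import defaultdict

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den

# Feature library: each entry carries the id of the model that produced it
library = [
    {"image": f"cam_{i}.jpg", "model_id": "first" if i < 4 else "second",
     "feature": [0.1 * i, 0.5, 0.3]} for i in range(10)
]
# Query features of the image to be retrieved, one group per model
query = {"first": [0.2, 0.5, 0.3], "second": [0.4, 0.5, 0.3]}

by_model = defaultdict(list)
for entry in library:
    by_model[entry["model_id"]].append(entry)

# Compare each query group only against library entries of the same model
results = {mid: [(e["image"], cosine(qf, e["feature"])) for e in by_model[mid]]
           for mid, qf in query.items()}
print(len(results["first"]), len(results["second"]))  # 4 6
```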
In addition, the identification information may be numbers, codes, characters, or any other form. It is used to distinguish face feature data produced by different face models, avoiding incorrect similarities (or comparisons that cannot be performed at all) between two non-comparable groups of face feature data extracted by different models. Distinguishing face feature data by identification information improves the accuracy of the face retrieval results.
In this embodiment, identification information of the face model is attached to the face feature data, so that similarity calculation is only performed between face feature data with the same identification information. This avoids wrong retrieval results caused by comparing face feature data that are not comparable or whose similarity calculation methods differ, improving the targeting and reliability of the retrieval results.
Of course, the embodiment of the present invention does not require the face feature data to carry identification information of the face model. For example, face feature data obtained by different models can be stored in different storage directories, and the N groups of face feature data can be read in a preset order, so that the face model used for each group is determined by its position in the order and data produced by the same face model can be matched during retrieval.
Step 204: converting the similarities of the N-1 groups of retrieval results of the N-1 types of face models into similarities corresponding to the target face model, according to the similarity conversion relation acquired in advance.
The similarity conversion relationship may be a conversion relationship that converts the similarities corresponding to the N-1 types of face models into the similarity corresponding to the target face model, where the target face model is another face model of the N types of face models except the N-1 types of face models.
The N groups of search results obtained in step 203 correspond to the N types of face models, and since the algorithms, precisions, and so on of the N types of face models differ, the similarities across the N groups of search results are not directly comparable.
For example: two face models with different algorithms extract the face feature data of 2 face images respectively, and the 2 groups of face feature data extracted by the same face model are compared, giving 2 similarities, one per face model. Because the two models' algorithms differ, the 2 similarity values are not equal, yet both represent the similarity between the same 2 face images. Hence similarities obtained with different face models are not comparable. After step 204 converts the similarities of the N groups of search results onto the same scale, they become comparable.
The target face model may be one of the N face models with the latest version or the highest accuracy.
In addition, the similarity conversion relation acquired in advance may include a numerical conversion relation between the similarities of the N-1 groups of search results and the similarity corresponding to the target face model. After each of the N-1 groups' similarities is substituted in, the numerical relation outputs the similarity corresponding to the target face model, so that the target model's own similarities and the N-1 converted similarities are comparable.
For example: a first face model and a second face model, which differ, are each used to calculate the similarity between a first face image and a second face image, and between the first face image and a third face image.
The similarities between the first and second face images calculated by the first and second face models are 90% and 89% respectively; the similarities between the first and third face images are 65% and 67% respectively. The second face model, which computed the 89% similarity, has higher accuracy, so it is selected as the target face model.
A conversion relation between the similarities of the first face model and those of the second face model is then obtained from the correspondences between 90% and 89% and between 65% and 67%.
In this way, after the first face model calculates the similarity between two other face images, that similarity can be directly converted into the similarity corresponding to the second face model according to the conversion relation between the two models; likewise, the similarity calculated by any other face model can be converted into the similarity corresponding to the target face model according to its own conversion relation.
Therefore, the similarity obtained by different face models has comparability.
Of course, to ensure the accuracy of the conversion relation, two groups of similarities containing many similarity values can be used in a regression calculation to obtain a more accurate relation.
As an optional implementation, the conversion relation includes N-1 regression models, which respectively represent the conversion relations between the similarities corresponding to the N-1 types of face models and the similarity corresponding to the target face model.
The number of regression models is the number of face models minus 1; that is, the regression models correspond one-to-one to the N-1 types of face models. During similarity conversion, the similarity obtained by each of the N-1 face models is substituted into its corresponding regression model to obtain the similarity corresponding to the target face model.
This is repeated until the similarities obtained by all N-1 face models have been converted into similarities corresponding to the target face model, making the similarities obtained by all N face models comparable.
In this embodiment, the N-1 regression models convert the similarities of the N-1 types of face models into similarities corresponding to the target face model, achieving comparability among the similarities obtained by the N face models.
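A minimal sketch of applying such per-model conversion relations, with hypothetical model identifiers and coefficients (the model_A pair reuses the equation assumed for Table 2 below):

```python
# Hypothetical per-model regression coefficients (beta0, beta1), one per
# non-target model; the target model's similarities need no conversion.
conversion = {"model_A": (0.00484, 1.0936), "model_B": (-0.0120, 1.0410)}

def to_target_similarity(model_id, similarity):
    if model_id == "target":
        return similarity          # already on the target model's scale
    b0, b1 = conversion[model_id]
    return b0 + b1 * similarity    # y = beta0 + beta1 * x

print(round(to_target_similarity("model_A", 0.2210), 4))  # ~0.2465
print(to_target_similarity("target", 0.90))               # 0.9 (unchanged)
```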
Of course, the conversion relation may also take other forms such as codes or parameters. For example, a correction code derived from the differences in algorithm, dimension, and so on between face models can correct a similarity into the similarity corresponding to a face model with the same algorithm and dimension.
As an optional implementation, the N-1 regression models include a first regression model; the independent variable of the first regression model is the similarity corresponding to a first face model, the dependent variable is the similarity corresponding to the target face model that the first regression model outputs for that independent variable, and the first face model is one of the N-1 types of face models;
the step of converting the similarity of the N-1 groups of retrieval results of the N-1 types of face models into the similarity corresponding to the target face model according to the similarity conversion relation acquired in advance comprises the following steps:
and substituting the similarity corresponding to the first face model in the N-1 groups of retrieval results into the independent variable of the first regression model, and calculating the dependent variable of the first regression model to obtain the similarity corresponding to the target face model after the similarity corresponding to the first face model is converted.
The similarity corresponding to the first face model means that the first face model extracts two groups of face feature data from two face images, and a similarity calculation method computes the similarity between the two groups.
Of course, the N-1 regression models also include a second regression model, a third regression model, and so on up to an (N-1)-th regression model, which are likewise used to convert the similarities corresponding to the N-1 types of face models into similarities corresponding to the target face model.
In this embodiment, the value of the first regression model's dependent variable is obtained simply by inputting the first face model's similarity as the independent variable; that value is the similarity corresponding to the target face model. This simplifies the calculation of the target model's similarity.
As an optional implementation, the first regression model is determined by:
respectively extracting first face feature data of a first face image set and a second face image set through the target face model, wherein the first face image set comprises n face images, the second face image set comprises m face images, and n and m are both integers greater than 1;
acquiring n pairs of regression sample images, wherein each pair consists of one face image from the first face image set and one from the second face image set, and the first similarities of the first face feature data of the n pairs of regression sample images are linearly distributed;
respectively extracting second face feature data of the n pairs of regression sample images through the first face model, and calculating second similarity of the second face feature data of each pair of regression sample images;
and determining the first regression model according to the first similarity and the second similarity of the n pairs of regression sample images, wherein the independent variable of the first regression model is the second similarity, and the dependent variable is the first similarity.
Of course, the second regression model, the third regression model, and so on up to the (N-1)-th regression model may also be determined in the manner described above.
The regression model may be expressed by the formula y_i = β0 + β1·x_i + μ_i, where x_i is the independent variable, y_i is the dependent variable, i denotes the i-th regression model (i taking any value from 1 to N-1), β0 and β1 are parameters to be estimated, and μ_i is a random error term. The specific values of the parameters β0, β1, and μ_i can be determined by a regression algorithm.
In addition, β0 and μ_i may each take any value including 0, so the regression model's formula is not limited to y_i = β0 + β1·x_i + μ_i; for example, when β0 = 0, the formula becomes y_i = β1·x_i + μ_i.
For example: suppose the first face image set includes 10 face images and the second face image set includes 100 face images. The new (target) model calculates the similarity between each of the 10 face images and each of the 100, giving 10 × 100 = 1000 first similarities. After these 1000 first similarities are arranged linearly, n similarity values are taken from a region of higher similarity, for example 30 values in the range 61% to 90% (61%, 62%, 63%, …, 90%), and the 30 pairs of regression sample images corresponding to those values are obtained. The old model then extracts face feature data of the 30 pairs of regression sample images, and 30 second similarities are computed from the extracted data.
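A non-limiting sketch of selecting sample pairs whose target-model similarities are roughly linearly distributed over such a range, using randomly generated toy similarities:

```python
import random

random.seed(0)
# First similarities computed by the target model for all 10 x 100 image pairs
pairs = [((i, j), random.uniform(0.3, 0.95)) for i in range(10) for j in range(100)]

def pick_linear_samples(pairs, low=0.61, high=0.90, n=30):
    """Pick n sample pairs whose target-model similarities are spread
    roughly evenly over [low, high], as the regression samples."""
    step = (high - low) / (n - 1)
    targets = [low + k * step for k in range(n)]
    # nearest pair per target value (duplicates possible in this toy version)
    return [min(pairs, key=lambda p: abs(p[1] - t)) for t in targets]

samples = pick_linear_samples(pairs)
print([round(s, 3) for _, s in samples[:5]])
```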
Taking the 30 second similarities as the independent variable x_i and the corresponding 30 first similarities as the dependent variable y_i, and substituting them into the formula y_i = β0 + β1·x_i + μ_i, the specific values of the parameters β0, β1, and μ_i can be found, yielding the regression model.
The formula may be solved by a least squares method or a maximum likelihood method.
With least squares, the estimates are obtained by minimizing the sum of squared residuals Q(β0, β1) = Σ_i (y_i - β0 - β1·x_i)².
From this, the standard estimates β1 = Σ(x_i - x̄)(y_i - ȳ) / Σ(x_i - x̄)² and β0 = ȳ - β1·x̄ can be obtained.
When the random error μ_i is assumed to be 0, the corresponding simple linear equation is obtained.
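A minimal least-squares sketch of this fit, using the first five (old model, new model) similarity pairs from Table 2 below as sample data:

```python
# Minimal least-squares fit of y = beta0 + beta1 * x, where x are the second
# similarities (old model) and y the first similarities (target/new model).
def fit_simple_regression(xs, ys):
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    beta1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) \
            / sum((x - x_bar) ** 2 for x in xs)
    beta0 = y_bar - beta1 * x_bar
    return beta0, beta1

xs = [0.2210, 0.2673, 0.3137, 0.3560, 0.4042]  # old-model similarities
ys = [0.2730, 0.3043, 0.3847, 0.4180, 0.4752]  # new-model similarities
b0, b1 = fit_simple_regression(xs, ys)
print(round(b0, 4), round(b1, 4))
```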
Of course, a multiple regression equation may also be established to build the regression model.
The similarities obtained by two or more face models can then be converted through the regression equations into similarities corresponding to the latest model, and the converted similarity values can be uniformly sorted, ranked, and fed back, achieving fused retrieval.
Table 2 below lists, for the same pairs of regression sample images, the similarities under the different models, where the regression linear equation obtained by the above solution method is assumed to be: y_i = 0.00484 + 1.0936·x_i + μ_i.
TABLE 2

| Serial number | Old model | New model | Regression equation calculation | Deviation μ_i |
|---|---|---|---|---|
| 1 | 0.2210 | 0.2730 | 0.246512 | 0.026475 |
| 2 | 0.2673 | 0.3043 | 0.297182 | 0.007137 |
| 3 | 0.3137 | 0.3847 | 0.347852 | 0.036800 |
| 4 | 0.3560 | 0.4180 | 0.394153 | 0.023837 |
| 5 | 0.4042 | 0.4752 | 0.446897 | 0.028323 |
| 6 | 0.4525 | 0.5045 | 0.499642 | 0.004808 |
| 7 | 0.5020 | 0.5860 | 0.553830 | 0.032170 |
| 8 | 0.5231 | 0.6041 | 0.576905 | 0.027195 |
| 9 | 0.5587 | 0.5797 | 0.615837 | -0.036137 |
| 10 | 0.5643 | 0.6313 | 0.621962 | 0.009338 |
| 11 | 0.5794 | 0.6584 | 0.638475 | 0.019925 |
| 12 | 0.6084 | 0.6724 | 0.670190 | 0.002210 |
| 13 | 0.6343 | 0.6653 | 0.698514 | -0.033214 |
In addition, a single threshold can be set for the converted similarities, and face images whose similarity exceeds the threshold can be judged to show the same person.
It should be noted that the more regression sample images there are, the more accurate the resulting regression model. In addition, after the regression equation is solved, the estimated deviation μ_i of each sample can be read from the sample table shown in Table 2.
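A small sketch recomputing a few Table 2 rows with the assumed equation; tiny differences from the printed values stem from rounding of the coefficients:

```python
# Recompute the "regression equation calculation" column and the deviations
# of Table 2 with the assumed fitted equation y = 0.00484 + 1.0936 * x.
rows = [(0.2210, 0.2730), (0.2673, 0.3043), (0.5587, 0.5797)]  # (old, new)
for old, new in rows:
    predicted = 0.00484 + 1.0936 * old
    deviation = new - predicted   # negative for Table 2 rows 9 and 13
    print(f"{old:.4f} {predicted:.6f} {deviation:+.6f}")
```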
In this step, the similarities of the N-1 groups of retrieval results of the N-1 types of face models are converted into similarities corresponding to the target face model, so that similarities obtained with different face models become comparable, and monitoring personnel can accurately judge whether face images show the same person directly from the numerical relationships among the converted similarities.
Step 205: outputting the converted retrieval results corresponding to the target face model.
Step 205 can be understood as outputting the N-1 converted groups of retrieval results together with the remaining unconverted group among the N groups of retrieval results.
The converted N-1 groups of retrieval results and the unconverted retrieval results corresponding to the target face model may be output in descending order of similarity.
Therefore, monitoring personnel can more clearly acquire the detection result with high similarity.
In addition, the output may be limited to retrieval results whose similarity exceeds a certain value. For example, if the similarity is below 50%, the possibility that the two images show the same person may be excluded, and only results with similarity greater than or equal to 50% output.
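A minimal sketch of this merge, sort, and filter step, with toy results and the illustrative 50% cutoff:

```python
# Merge converted results with the target model's own results, then output
# in descending similarity, keeping only entries at or above 50%.
converted = [("cam_03.jpg", 0.62), ("cam_07.jpg", 0.47)]   # from non-target models
target_results = [("cam_01.jpg", 0.91), ("cam_05.jpg", 0.55)]

merged = sorted(converted + target_results, key=lambda r: r[1], reverse=True)
for image, sim in merged:
    if sim >= 0.50:
        print(f"{image}: {sim:.0%}")
# cam_01.jpg: 91%, cam_03.jpg: 62%, cam_05.jpg: 55% (cam_07.jpg filtered out)
```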
In the step, the converted N-1 groups of retrieval results and the retrieval results corresponding to the target face model are output, so that monitoring personnel can conveniently identify the retrieval results.
In the embodiment of the invention, a face image to be retrieved is obtained; N groups of face feature data of the face image to be retrieved are acquired, the N groups being extracted from the face image to be retrieved by N types of face models respectively; the N groups of face feature data are retrieved in a face feature library respectively to obtain N groups of retrieval results; the similarities of the N-1 groups of retrieval results of the N-1 types of face models are converted into similarities corresponding to the target face model according to a similarity conversion relation acquired in advance; and the converted retrieval results corresponding to the target face model are output. In this way, multiple groups of retrieval results corresponding to multiple face models are obtained, their similarities are converted into similarities corresponding to the same face model, and the converted results are output, so that the retrieval results become comparable and the performance of the face retrieval system is improved.
In addition, in the embodiment of the invention, because multiple face models are included and the similarities in multiple groups of retrieval results can be converted into similarities corresponding to the same face model, the problem that, after a model upgrade, old data absent from the new model yield old-model similarities incompatible with new-model similarities is solved. This further improves the data compatibility of the face retrieval system, and adopting multiple face models can improve algorithm accuracy.
Fig. 3 is a structural diagram of a facial feature retrieval device according to an embodiment of the present invention. The face feature retrieval device 300 includes:
the first acquisition module 301 is configured to acquire a face image to be retrieved;
a second obtaining module 302, configured to obtain N groups of face feature data of the face image to be retrieved, where the N groups of face feature data are extracted from the face image to be retrieved by N types of face models respectively;
the retrieval module 303 is configured to retrieve the N groups of face feature data in a face feature library respectively to obtain N groups of retrieval results;
the conversion module 304 is configured to convert the similarity of the N-1 group search results of the N-1 types of face models into the similarity corresponding to the target face model according to a similarity conversion relationship obtained in advance;
and the output module 305 is configured to output the converted retrieval results corresponding to the target face model.
Wherein N may be an integer greater than 1.
In addition, retrieval here means searching among face feature data extracted by the same face model.
In addition, the similarity conversion relationship may be a conversion relationship that converts the similarities corresponding to the N-1 types of face models into the similarity corresponding to the target face model, where the target face model is another face model of the N types of face models except for the N-1 types of face models.
The content output by the output module 305 can also be described as: the N-1 converted groups among the N groups of retrieval results, together with the remaining unconverted group.
Optionally, each of the N groups of face feature data includes identification information of the face model used to produce it, each group of face feature data in the face feature library likewise includes identification information of its face model, and the retrieval module 303 is specifically configured to:
and respectively retrieving the N groups of face feature data in a face feature library according to the identification information of the face model used by each group of face feature data to obtain N groups of retrieval results.
Optionally, the pre-obtained similarity transformation relationship includes:
and N-1 regression models, wherein the N-1 regression models respectively represent the conversion relationship between the similarity corresponding to the N-1 types of face models and the similarity corresponding to the target face model.
Optionally, the N-1 regression models include a first regression model; the independent variable of the first regression model is the similarity corresponding to a first face model, the dependent variable is the similarity corresponding to the target face model that the first regression model outputs for that independent variable, and the first face model is one of the N-1 types of face models;
the conversion module 304 is specifically configured to:
and substituting the similarity corresponding to the first face model in the N-1 groups of retrieval results into the independent variable of the first regression model, and calculating the dependent variable of the first regression model to obtain the similarity corresponding to the target face model after the similarity corresponding to the first face model is converted.
Optionally, the first regression model is determined by:
respectively extracting first face feature data of a first face image set and a second face image set through the target face model, wherein the first face image set comprises n face images, the second face image set comprises m face images, and n and m are both integers greater than 1;
acquiring n pairs of regression sample images, wherein each pair consists of one face image from the first face image set and one from the second face image set, and the first similarities of the first face feature data of the n pairs of regression sample images are linearly distributed;
respectively extracting second face feature data of the n pairs of regression sample images through the first face model, and calculating second similarity of the second face feature data of each pair of regression sample images;
and determining the first regression model according to the first similarity and the second similarity of the n pairs of regression sample images, wherein the independent variable of the first regression model is the second similarity, and the dependent variable is the first similarity.
Optionally, as shown in fig. 4, the apparatus 300 further includes:
a third obtaining module 306, configured to obtain a monitored face image through the image collecting unit;
a fourth obtaining module 307, configured to obtain N groups of face feature data of the monitored face image, where the N groups of face feature data of the monitored face image are extracted from the monitored face image through N types of face models respectively.
A storage module 308, configured to store the N groups of face feature data of the monitored face image in the face feature library.
Optionally, the identification information of the face model includes a version number.
It should be noted that the face feature retrieval device provided in the embodiment of the present invention can implement the steps in the method embodiments shown in fig. 1 or fig. 2, and obtain the same beneficial effects, and is not described herein again to avoid repetition.
Fig. 5 is a diagram of a face feature retrieval device according to an embodiment of the present invention. As shown in fig. 5, the face feature retrieval apparatus includes: the transceiver 501, the processor 502, the memory 503 and the computer program 5031 stored on the memory 503 and operable on the processor 502, the computer program 5031 when executed by the processor 502 implements the following process:
the transceiver 501 is configured to: acquiring a face image to be retrieved;
the processor 502 is configured to obtain N groups of face feature data of the face image to be retrieved, where the N groups of face feature data are extracted from the face image to be retrieved by N types of face models respectively.
The processor 502 is further configured to retrieve the N groups of face feature data in a face feature library respectively to obtain N groups of retrieval results.
The processor 502 is further configured to convert the similarity of the N-1 group search results of the N-1 types of face models into the similarity corresponding to the target face model according to a similarity conversion relationship obtained in advance.
The transceiver 501 is further configured to output the converted retrieval results corresponding to the target face model.
Optionally, each of the N groups of face feature data includes identification information of a face model used by the face feature data, each of the face feature data in the face feature library includes identification information of a face model used by the face feature data, and the processor 502 executes: the step of retrieving the N groups of face feature data in a face feature library respectively to obtain N groups of retrieval results, specifically comprising:
and respectively retrieving the N groups of face feature data in a face feature library according to the identification information of the face model used by each group of face feature data to obtain N groups of retrieval results.
Optionally, the pre-obtained similarity transformation relationship includes:
and N-1 regression models, wherein the N-1 regression models respectively represent the conversion relationship between the similarity corresponding to the N-1 types of face models and the similarity corresponding to the target face model.
Optionally, the N-1 regression models include a first regression model; the independent variable of the first regression model is the similarity corresponding to a first face model, the dependent variable is the similarity corresponding to the target face model that the first regression model outputs for that independent variable, and the first face model is one of the N-1 types of face models;
the processor 502 performs: converting the similarity of N-1 groups of retrieval results of the N-1 types of face models into the similarity corresponding to the target face model according to a similarity conversion relation acquired in advance, which specifically comprises the following steps:
and substituting the similarity corresponding to the first face model in the N-1 groups of retrieval results into the independent variable of the first regression model, and calculating the dependent variable of the first regression model to obtain the similarity corresponding to the target face model after the similarity corresponding to the first face model is converted.
Optionally, the first regression model is determined in the following manner:
extracting, through the target face model, first face feature data of a first face image set and of a second face image set respectively, where the first face image set includes n face images, the second face image set includes m face images, and both n and m are integers greater than 1;
acquiring n pairs of regression sample images, where each pair of regression sample images includes one face image from the first face image set and one face image from the second face image set, and the first similarities of the first face feature data of the n pairs of regression sample images are linearly (approximately uniformly) distributed, so that the fit is constrained across the whole similarity range rather than at a single operating point;
extracting, through the first face model, second face feature data of the n pairs of regression sample images respectively, and calculating a second similarity of the second face feature data of each pair of regression sample images;
determining the first regression model according to the first similarities and the second similarities of the n pairs of regression sample images, where the independent variable of the first regression model is the second similarity and the dependent variable is the first similarity.
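Under the same linearity assumption, the following sketch fits the first regression model by least squares from the n sample pairs; numpy.polyfit returns the slope and intercept of the best-fit line, and the variable names are ours.

    import numpy as np

    def fit_first_regression(second_sims, first_sims):
        # second_sims: the n second similarities under the first face model
        #              (independent variable)
        # first_sims:  the n first similarities under the target face model
        #              (dependent variable)
        a, b = np.polyfit(np.asarray(second_sims), np.asarray(first_sims), deg=1)
        return a, b  # least-squares line: first_sim = a * second_sim + b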
Optionally, before the face image to be retrieved is acquired:
the transceiver 501 is further configured to acquire a monitored face image through an image acquisition unit;
the processor 502 is further configured to obtain N groups of face feature data of the monitored face image, where the N groups of face feature data of the monitored face image are extracted from the monitored face image by the N types of face models, respectively;
the processor 502 is further configured to store the N groups of face feature data of the monitored face image in the face feature library.
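A sketch of this ingestion step, reusing the hypothetical FeatureLibrary above: every monitored face image is passed through all N face models, and each resulting feature group is stored under the identifier of the model that produced it, so later queries can be matched per model.

    def ingest(monitored_image, models, library, record_id):
        # Extract one group of face feature data per face model and store each
        # group under the identification information of that model.
        for model in models:
            feature = model.extract(monitored_image)
            library.add(model.id, feature, record_id)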
Optionally, the identification information of the face model includes a version number.
This embodiment of the present invention can implement any step of the method embodiments corresponding to fig. 1 or fig. 2 and achieve the same beneficial effects; to avoid repetition, details are not repeated here.
Those skilled in the art will appreciate that all or part of the steps of the above method can be implemented by hardware executing program instructions, and the program can be stored in a computer-readable medium. An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the face feature retrieval method shown in fig. 1 or fig. 2 are implemented and the same beneficial effects are achieved. To avoid repetition, details are not described here again.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules is merely a logical division, and in actual implementation there may be other divisions; for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through certain interfaces, devices, or modules, and may be electrical, mechanical, or in other forms.
An integrated module implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions that enable a computer device (which may be a personal computer, a server, or a network device) to execute some of the steps of the face feature retrieval method according to the various embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A face feature retrieval method is characterized by comprising the following steps:
acquiring a face image to be retrieved;
acquiring N groups of face feature data of the face image to be retrieved, wherein the N groups of face feature data are extracted from the face image to be retrieved by N types of face models, respectively;
respectively retrieving the N groups of face feature data in a face feature library to obtain N groups of retrieval results;
respectively converting the similarities of the N-1 groups of retrieval results of the N-1 types of face models other than a target face model into similarities corresponding to the target face model according to a similarity conversion relationship acquired in advance;
and outputting the converted retrieval result corresponding to the target face model.
2. The method according to claim 1, wherein the similarity conversion relationship acquired in advance comprises:
N-1 regression models, which respectively represent the conversion relationships between the similarities corresponding to the N-1 types of face models and the similarity corresponding to the target face model.
3. The method according to claim 2, wherein the N-1 regression models comprise a first regression model, an independent variable of the first regression model is the similarity corresponding to a first face model, a dependent variable of the first regression model is the similarity corresponding to the target face model that the first regression model outputs for the independent variable, and the first face model is one of the N-1 types of face models;
the step of converting the similarities of the N-1 groups of retrieval results of the N-1 types of face models into similarities corresponding to the target face model according to the similarity conversion relationship acquired in advance comprises:
substituting the similarity corresponding to the first face model in the N-1 groups of retrieval results into the independent variable of the first regression model, and calculating the dependent variable of the first regression model to obtain the similarity corresponding to the target face model into which the similarity corresponding to the first face model is converted.
4. The method of claim 3, wherein the first regression model is determined by:
extracting, through the target face model, first face feature data of a first face image set and of a second face image set respectively, wherein the first face image set comprises n face images, the second face image set comprises m face images, and both n and m are integers greater than 1;
acquiring n pairs of regression sample images, wherein each pair of regression sample images comprises one face image from the first face image set and one face image from the second face image set, and the first similarities of the first face feature data of the n pairs of regression sample images are linearly distributed;
extracting, through the first face model, second face feature data of the n pairs of regression sample images respectively, and calculating a second similarity of the second face feature data of each pair of regression sample images;
and determining the first regression model according to the first similarities and the second similarities of the n pairs of regression sample images, wherein the independent variable of the first regression model is the second similarity and the dependent variable is the first similarity.
5. The method according to any one of claims 1 to 4, wherein before the obtaining of the face image to be retrieved, the method further comprises:
acquiring a monitored face image through an image acquisition unit;
acquiring N groups of face feature data of the monitored face image, wherein the N groups of face feature data of the monitored face image are extracted from the monitored face image by the N types of face models, respectively;
and storing the N groups of face feature data of the monitored face image into the face feature library.
6. A face feature retrieval apparatus, comprising:
the first acquisition module is used for acquiring a face image to be retrieved;
the second acquisition module is used for acquiring N groups of face feature data of the face image to be retrieved, wherein the N groups of face feature data are extracted from the face image to be retrieved by N types of face models, respectively;
the retrieval module is used for retrieving the N groups of face feature data in a face feature library respectively to obtain N groups of retrieval results;
the conversion module is used for respectively converting the similarities of the N-1 groups of retrieval results of the N-1 types of face models other than a target face model into similarities corresponding to the target face model according to a similarity conversion relationship acquired in advance;
and the output module is used for outputting the converted retrieval result corresponding to the target face model.
7. The apparatus of claim 6, wherein the similarity conversion relationship acquired in advance comprises:
N-1 regression models, which respectively represent the conversion relationships between the similarities corresponding to the N-1 types of face models and the similarity corresponding to the target face model.
8. The apparatus of claim 7, wherein the N-1 regression models comprise a first regression model, an independent variable of the first regression model is the similarity corresponding to a first face model, a dependent variable of the first regression model is the similarity corresponding to the target face model that the first regression model outputs for the independent variable, and the first face model is one of the N-1 types of face models;
the conversion module is used for substituting the similarity corresponding to the first face model in the N-1 groups of retrieval results into the independent variable of the first regression model, and calculating the dependent variable of the first regression model to obtain the similarity corresponding to the target face model into which the similarity corresponding to the first face model is converted.
9. The apparatus of claim 8, wherein the first regression model is determined by:
extracting, through the target face model, first face feature data of a first face image set and of a second face image set respectively, wherein the first face image set comprises n face images, the second face image set comprises m face images, and both n and m are integers greater than 1;
acquiring n pairs of regression sample images, wherein each pair of regression sample images comprises one face image from the first face image set and one face image from the second face image set, and the first similarities of the first face feature data of the n pairs of regression sample images are linearly distributed;
extracting, through the first face model, second face feature data of the n pairs of regression sample images respectively, and calculating a second similarity of the second face feature data of each pair of regression sample images;
and determining the first regression model according to the first similarities and the second similarities of the n pairs of regression sample images, wherein the independent variable of the first regression model is the second similarity and the dependent variable is the first similarity.
10. The apparatus of any one of claims 7 to 9, further comprising:
the third acquisition module is used for acquiring a monitored face image through an image acquisition unit;
the fourth acquisition module is used for acquiring N groups of face feature data of the monitored face image, wherein the N groups of face feature data of the monitored face image are extracted from the monitored face image by the N types of face models, respectively;
and the storage module is used for storing the N groups of face feature data of the monitored face image into the face feature library.
11. A face feature retrieval device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the face feature retrieval method as claimed in any one of claims 1 to 5.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the face feature retrieval method according to any one of claims 1 to 5.
CN201811331149.2A 2018-11-09 2018-11-09 Face feature retrieval method, device and equipment Active CN111177436B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811331149.2A CN111177436B (en) 2018-11-09 2018-11-09 Face feature retrieval method, device and equipment

Publications (2)

Publication Number Publication Date
CN111177436A true CN111177436A (en) 2020-05-19
CN111177436B CN111177436B (en) 2023-08-22

Family

ID=70655241

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811331149.2A Active CN111177436B (en) 2018-11-09 2018-11-09 Face feature retrieval method, device and equipment

Country Status (1)

Country Link
CN (1) CN111177436B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956518A (en) * 2016-04-21 2016-09-21 Tencent Technology (Shenzhen) Co., Ltd. Face identification method, device and system
WO2017181769A1 (en) * 2016-04-21 2017-10-26 腾讯科技(深圳)有限公司 Facial recognition method, apparatus and system, device, and storage medium
CN106250858A (en) * 2016-08-05 2016-12-21 Chongqing Zhongke Yuncong Technology Co., Ltd. Face recognition method and system fusing multiple face recognition algorithms

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111782855A (en) * 2020-07-15 2020-10-16 上海依图网络科技有限公司 Face image processing method, device, equipment and medium
CN112529008A (en) * 2020-11-03 2021-03-19 浙江大华技术股份有限公司 Image recognition method, image feature processing method, electronic device and storage medium
CN112329797A (en) * 2020-11-13 2021-02-05 杭州海康威视数字技术股份有限公司 Target object retrieval method, device, server and storage medium
CN112699846A (en) * 2021-01-12 2021-04-23 武汉大学 Specific character and specific behavior combined retrieval method and device with identity consistency check function
CN112699846B (en) * 2021-01-12 2022-06-07 武汉大学 Specific character and specific behavior combined retrieval method and device with identity consistency check function

Also Published As

Publication number Publication date
CN111177436B (en) 2023-08-22

Similar Documents

Publication Publication Date Title
CN112199375B (en) Cross-modal data processing method and device, storage medium and electronic device
CN110175549B (en) Face image processing method, device, equipment and storage medium
CN110889433B (en) Face clustering method, device, computer equipment and storage medium
CN111177436B (en) Face feature retrieval method, device and equipment
CN109284729B (en) Method, device and medium for acquiring face recognition model training data based on video
KR102660052B1 (en) Optimization of media fingerprint retention to improve system resource utilization
CN110019876B (en) Data query method, electronic device and storage medium
CN107169106B (en) Video retrieval method, device, storage medium and processor
CN105184238A (en) Human face recognition method and system
CN110348362B (en) Label generation method, video processing method, device, electronic equipment and storage medium
CN110765882B (en) Video tag determination method, device, server and storage medium
CN111368867B (en) File classifying method and system and computer readable storage medium
CN110941978A (en) Face clustering method and device for unidentified personnel and storage medium
CN112036362A (en) Image processing method, image processing device, computer equipment and readable storage medium
CN114139015A (en) Video storage method, device, equipment and medium based on key event identification
CN110083731B (en) Image retrieval method, device, computer equipment and storage medium
CN113987243A (en) Image file gathering method, image file gathering device and computer readable storage medium
CN112434049A (en) Table data storage method and device, storage medium and electronic device
CN111626313B (en) Feature extraction model training method, image processing method and device
CN109376581B (en) Object relation recognition method and device, storage medium and electronic device
CN115391596A (en) Video archive generation method and device and storage medium
CN115019360A (en) Matching method and device, nonvolatile storage medium and computer equipment
CN114048344A (en) Similar face searching method, device, equipment and readable storage medium
CN113065025A (en) Video duplicate checking method, device, equipment and storage medium
US20150120693A1 (en) Image search system and image search method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant