WO2021027555A1 - Face retrieval method and device - Google Patents

Face retrieval method and device

Info

Publication number
WO2021027555A1
WO2021027555A1 (PCT/CN2020/105160)
Authority
WO
WIPO (PCT)
Prior art keywords
feature
face
unstructured
features
structured
Prior art date
Application number
PCT/CN2020/105160
Other languages
English (en)
French (fr)
Inventor
陈凯
申皓全
王铭学
赖昌材
胡翔宇
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 华为技术有限公司
Publication of WO2021027555A1 publication Critical patent/WO2021027555A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 Retrieval characterised by using metadata automatically derived from the content
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/178 Estimating age from face image; using age information for improving recognition

Definitions

  • This application relates to the field of computer vision, and in particular to a face retrieval method and device.
  • Face retrieval is an emerging biometric technology that combines computer image processing with biostatistics.
  • Face retrieval is widely used in identity recognition, identity verification, and other related scenarios (such as security monitoring and access-control gates).
  • Given a face image to be retrieved, the face retrieval device compares it with multiple face images in a designated face library to find the most similar face image or images.
  • In general, the face retrieval device does not directly calculate the similarity between the face image to be retrieved and the face images in the face library; instead, it represents all images as features and uses these features to calculate similarity with one another.
  • When performing feature extraction on face images, if a single feature extraction model is used for all images, the limited capability of a single model makes it difficult to handle face retrieval in all scenes; if multiple feature extraction models are used for all images, every image must pass through all models, so the computational complexity is high.
  • This application provides a face retrieval method and device that are suited to face retrieval in complex scenes while reducing computational complexity.
  • In a first aspect, this application provides a face retrieval method, which can be applied to related scenarios such as identity recognition and identity verification.
  • The above face retrieval method may include: acquiring a face image to be retrieved; acquiring structured features of the face image, where the structured features correspond to multiple preset feature dimensions; according to the structured features, acquiring unstructured features in one-to-one correspondence with the multiple preset feature dimensions; obtaining, at least according to the unstructured features, standard features corresponding to the multiple preset feature dimensions, where the standard features include unstructured features transformed by a neural network; and performing face retrieval on the face image according to the standard features.
  • The face features in a face image can be divided into structured features and unstructured features.
  • Structured features can include features that characterize face attributes, where a face attribute refers to something in the face image with a specific physical meaning, such as age, gender, and/or angle; they are extracted from the face image through a structured feature extraction model.
  • Unstructured features can include vectors used to represent the face; such features have no specific physical meaning in the face image, consist of a string of numbers, and may also be called feature vectors. They are extracted from the face image through an unstructured feature extraction model, and the similarity between such vectors can be used to represent the similarity between the face image to be retrieved and a face template image.
  • With multiple unstructured feature extraction models, the feature extraction capability is stronger than that of a single model, making the method better suited to face retrieval in complex scenes.
  • Moreover, because the structured model is first used to assign the face image to different feature dimensions, the face image can be processed in a more targeted way, and it does not need to pass through all unstructured feature extraction models, which reduces the number of models the face image must pass through and reduces the computational complexity.
  • In a possible implementation, obtaining the structured features of the face image includes: obtaining a structured feature extraction model, where the structured model is trained according to the multiple preset feature dimensions; and inputting the face image into the structured feature extraction model to obtain the output structured features.
  • In a possible implementation, obtaining, according to the structured features, the unstructured features corresponding one-to-one to the multiple preset feature dimensions includes: determining, based on the structured features, whether a target feature dimension is included in the multiple preset feature dimensions; if it is, obtaining the unstructured feature extraction model corresponding to the target feature dimension, where the unstructured feature extraction model is trained on the data corresponding to the target feature dimension; and inputting the face image into the unstructured feature extraction model to obtain the output unstructured features.
  • Both the structured feature extraction model and the unstructured feature extraction model are machine learning models (for example, convolutional neural networks).
  • A convolutional neural network is essentially an input-to-output mapping: it can learn a large number of mapping relationships between inputs and outputs without requiring any precise mathematical expression relating them, and after being trained on collected training samples, it has the ability to map between input-output pairs.
  • The structured feature extraction model and the unstructured feature extraction model may also be other machine learning models, which is not specifically limited in the embodiments of the present application.
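  • For illustration only (not part of the patent), the following is a minimal sketch of such a convolutional feature extractor in PyTorch; the layer sizes, the 112×112 input, and the 256-dimensional output are assumptions, not values from this application:

```python
import torch
import torch.nn as nn

class FaceFeatureExtractor(nn.Module):
    """Minimal CNN mapping a face image to a fixed-length feature vector."""

    def __init__(self, embedding_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # collapse spatial dimensions
        )
        self.head = nn.Linear(64, embedding_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x).flatten(1)     # (batch, 64)
        return self.head(feats)                 # (batch, embedding_dim)

# Usage: one 112x112 RGB face crop in, one unstructured feature vector out.
model = FaceFeatureExtractor()
feature = model(torch.randn(1, 3, 112, 112))    # shape: (1, 256)
```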
  • In a possible implementation, the above method may further include: if the target feature dimension is not included in the multiple preset feature dimensions, obtaining a general feature extraction model, where the general feature extraction model is trained on data outside the target feature dimension; inputting the face image into the general feature extraction model to obtain the output general features; and determining the general features as the standard features.
  • In a possible implementation, obtaining, at least according to the unstructured features, the standard features corresponding to the unstructured features includes: obtaining a feature mapping model, where the feature mapping model corresponds one-to-one to the unstructured feature model; and inputting the unstructured features into the feature mapping model corresponding to the unstructured features to obtain the output standard features.
  • In another possible implementation, obtaining the standard features corresponding to the unstructured features includes: obtaining a feature mapping model, where the feature mapping model corresponds one-to-one to the unstructured feature model; and inputting the structured features together with the unstructured features into the feature mapping model corresponding to the unstructured features to obtain the output standard features.
  • Here, the structured features and unstructured features are used together as the input of the feature mapping model, so that the mapping of the unstructured features can utilize the structured features, thereby improving the accuracy of the feature mapping.
  • In a possible implementation, the above method may further include: obtaining a face sample image that has corresponding identity information; obtaining the structured features and the unstructured features of the face sample image; and training the feature mapping model based on the structured features, the unstructured features, and the identity information of the face sample image to obtain a feature mapping model that satisfies the objective function.
  • In a possible implementation, obtaining, according to the structured features, the unstructured features corresponding one-to-one to the multiple preset feature dimensions includes: determining, based on the structured features, whether the target feature dimension is included in the multiple preset feature dimensions; if it is, obtaining multiple unstructured feature extraction models corresponding to the target feature dimension; and inputting the face image into the multiple unstructured feature extraction models to obtain the output unstructured features.
  • In a possible implementation, performing face retrieval on the face image based on the standard features includes: determining the average value of the standard features as the output feature of the face image, and using the output feature to perform face retrieval on the face image.
  • In a second aspect, the present application provides a face retrieval device, including: an interface module for obtaining a face image to be retrieved; a feature extraction module for obtaining structured features of the face image, where the structured features include features that characterize face attributes and correspond to multiple preset feature dimensions, for obtaining, according to the structured features, unstructured features in the face image corresponding one-to-one to the multiple preset feature dimensions, where the unstructured features include vectors used to represent facial features, and for obtaining, at least according to the unstructured features, standard features corresponding to the multiple preset feature dimensions, where the standard features include unstructured features transformed by a neural network; and a face retrieval module for performing face retrieval on the face image according to the standard features.
  • In a possible implementation, the feature extraction module is used to obtain a structured feature extraction model, where the structured model is trained according to the multiple preset feature dimensions, and to input the face image into the structured feature extraction model to obtain the output structured features.
  • In a possible implementation, the feature extraction module is used to determine, according to the structured features, whether the target feature dimension is included in the multiple preset feature dimensions; if it is, to obtain the unstructured feature extraction model corresponding to the target feature dimension, where the unstructured feature extraction model is trained on the data corresponding to the target feature dimension; and to input the face image into the unstructured feature extraction model to obtain the output unstructured features.
  • In a possible implementation, the feature extraction module is also used to obtain a general feature extraction model if the target feature dimension is not included in the multiple preset feature dimensions, where the general feature extraction model is trained on data outside the target feature dimension; to input the face image into the general feature extraction model to obtain the output general features; and to determine the general features as the standard features.
  • In a possible implementation, the feature extraction module is used to obtain a feature mapping model, where the feature mapping model corresponds one-to-one to the unstructured feature model, and to input the unstructured features into the feature mapping model corresponding to the unstructured features to obtain the output standard features.
  • In another possible implementation, the feature extraction module is used to obtain a feature mapping model, where the feature mapping model corresponds one-to-one to the unstructured feature model, and to input the structured features and the unstructured features into the feature mapping model corresponding to the unstructured features to obtain the output standard features.
  • In a possible implementation, the feature extraction module is also used to obtain a face sample image that has corresponding identity information; to obtain the structured features and the unstructured features of the face sample image; and to train the feature mapping model based on the structured features, the unstructured features, and the identity information of the face sample image to obtain a feature mapping model that satisfies the objective function.
  • In a possible implementation, the feature extraction module is used to determine, according to the structured features, whether the target feature dimension is included in the multiple preset feature dimensions; if it is, to obtain the multiple unstructured feature extraction models corresponding to the target feature dimension; and to input the face image into the multiple unstructured feature extraction models to obtain the output unstructured features.
  • In a possible implementation, the face retrieval module is used to determine the average value of the standard features as the output feature of the face image, and to use the output feature to perform face retrieval on the face image.
  • The interface module mentioned in the above second aspect may be a receiving interface, a receiving circuit, a receiver, or the like; the feature extraction module and the face retrieval module may be one or more processors.
  • In a third aspect, this application provides a face retrieval device, which may include a processor and a communication interface; the processor may be used to support the face retrieval device in implementing the first aspect or any possible implementation of the first aspect.
  • For example, the processor can obtain the face image to be retrieved through the communication interface.
  • In a possible implementation, the face retrieval device may further include a memory, and the memory is used to store the computer-executable instructions and data necessary for the face retrieval device.
  • When the face retrieval device is running, the processor executes the computer-executable instructions stored in the memory, so that the face retrieval device executes the face retrieval method of the first aspect or any one of its possible implementations.
  • In a further aspect, the present application provides a computer-readable storage medium storing instructions; when the instructions are run on a computer, they are used to execute any of the face retrieval methods of the first aspect.
  • In a further aspect, this application provides a computer program or computer program product; when the computer program or computer program product is executed on a computer, the computer can implement the face retrieval method of any one of the above first aspects.
  • FIG. 1 is a schematic diagram of facial features in an embodiment of this application.
  • FIG. 2 is a schematic flowchart of a face retrieval method in an embodiment of this application.
  • FIG. 3 is a schematic diagram of structured features extracted by the structured feature extraction model in an embodiment of this application.
  • FIG. 4 is a schematic diagram of training an unstructured feature extraction model in an embodiment of this application.
  • FIG. 5 is a schematic diagram of mapping unstructured features to a standard feature space in an embodiment of this application.
  • FIG. 6 is a schematic diagram of the process of facial feature extraction in an embodiment of this application.
  • FIG. 7 is a schematic structural diagram of a face retrieval apparatus in an embodiment of this application.
  • FIG. 8 is a schematic structural diagram of a face retrieval device in an embodiment of this application.
  • For example, if one or more specific method steps are described, the corresponding device may include one or more units, such as functional units, to perform the described one or more method steps (for example, one unit performing one or more steps, or multiple units each performing one or more of the steps), even if such one or more units are not explicitly described or illustrated in the drawings.
  • Conversely, if a specific device is described based on one or more units, the corresponding method may include one step performing the functionality of the one or more units (for example, one step performing the functionality of one or more units, or multiple steps each performing the functionality of one or more of the units), even if such one or more steps are not explicitly described or illustrated in the drawings.
  • The face retrieval method can be widely used in related scenarios such as identity recognition and identity verification.
  • The face retrieval device performs feature extraction on the face image to be retrieved, compares the extracted features with those of the face template images, and retrieves the face template image or images with the highest degree of matching with the face image to be retrieved, thereby completing face retrieval.
  • In the related art, face retrieval devices often train only a single feature extraction model to perform feature extraction on all face images; more complex scenes, such as side faces, cross-age, occlusion, makeup, and dark lighting, then cannot be handled because of the limited capability of a single feature extraction model.
  • If, on the other hand, the face retrieval device uses multiple feature extraction models to perform feature extraction on every face image, each image must pass through all models, which makes the computation expensive.
  • In view of this, the embodiments of the present application provide a face retrieval method, which can be applied to the above-mentioned face retrieval device; the face retrieval device can be installed in equipment such as security monitoring and access control systems.
  • FIG. 1 is a schematic diagram of the facial features in the embodiments of the present application.
  • As shown in FIG. 1, the face features in a face image can be divided into structured features and unstructured features.
  • Structured features can include features used to characterize face attributes, where a face attribute refers to something in the face image with a specific physical meaning, such as age, gender, and angle; they are extracted from the face image through a structured feature extraction model.
  • Unstructured features can include vectors used to represent facial features; such features have no specific physical meaning in the face image, consist of a string of numbers, and can be called feature vectors. They are extracted from the face image through an unstructured feature extraction model, and their similarity can be used to represent the similarity between the face image to be retrieved and a face template image.
  • The aforementioned structured feature extraction model and unstructured feature extraction model are both machine learning models (for example, convolutional neural networks (CNNs)).
  • A CNN is essentially an input-to-output mapping: it can learn a large number of mapping relationships between inputs and outputs without requiring any precise mathematical expression relating them, and once trained on collected training samples, it has the ability to map between input-output pairs.
  • Of course, the structured feature extraction model and the unstructured feature extraction model may also be other machine learning models, which is not specifically limited in the embodiments of the present application.
  • FIG. 2 is a schematic flowchart of a face retrieval method in an embodiment of this application. As shown in FIG. 2, the method may include:
  • S201: The face retrieval device receives the input face image to be retrieved.
  • Optionally, the face retrieval device can also receive an input base image (that is, a face template image).
  • The face template image can be compared with the face image to complete face retrieval of the face image.
  • S202: Obtain the structured features of the face image, where the structured features correspond to multiple preset feature dimensions.
  • The face retrieval device may predefine multiple feature dimensions according to the actual needs of different application scenarios, such as angle, age, gender, race, makeup, and brightness; these are the dimensions along which the device extracts structured features of the face image. A structured feature extraction model is then trained with a large number of training sample images, so that when the face retrieval device inputs the face image to be retrieved into the structured feature extraction model, the structured features of the face image under these feature dimensions can be identified; these feature dimensions can be regarded as the multiple preset feature dimensions corresponding to the structured features.
  • For example, the predefined feature dimensions can include the two feature dimensions "angle" and "age", and the structured feature extraction model can identify the structured features under these two feature dimensions.
  • The structured features corresponding to "angle" can be yaw angle values such as +10°, +30°, and +45°, and the structured features corresponding to "age" can be 3 years old, 15 years old, 70 years old, and so on.
  • In this case, the two feature dimensions "angle" and "age" are the preset feature dimensions corresponding to the structured features.
  • FIG. 3 is a schematic diagram of structured features extracted by the structured feature extraction model in an embodiment of this application.
  • The aforementioned predefined feature dimensions can be further subdivided: "angle" can be divided into feature dimensions such as "side face" and/or "other"; "age" into "child", "elderly", and/or "other"; "makeup" into "made-up" and/or "other"; and "brightness" into "highlight", "dark light", and/or "other". Among these, "side face", "child", "elderly", "makeup", "highlight", and/or "dark light" belong to preset scenes, which can be set according to the actual needs of face retrieval and are not specifically limited in the embodiments of this application.
  • The face retrieval device may extract the structured features of the face image through the structured feature extraction model and, according to the obtained structured features, determine which of the subdivided feature dimensions the face image corresponds to; a code sketch of this routing logic follows this list.
  • For example, suppose the preset feature dimension is "age". If the structured feature extracted under "age" is 3 years old, the face retrieval system can consider that the face image falls into the "child" feature dimension (for example, 0 to 10 years old), so the structured feature of the face image corresponds to the preset feature dimension "child". If the extracted structured feature is 70 years old, the face image falls into the "elderly" feature dimension (for example, greater than 60 years old) and corresponds to the preset feature dimension "elderly". If the extracted structured feature is 35 years old, the face image falls into the "other" feature dimension (for example, greater than 10 and less than 60 years old) and corresponds to the preset feature dimension "other".
  • Similarly, suppose the preset feature dimension is "angle". If the structured feature extracted under "angle" is +60°, the face retrieval system can consider that the face image falls into the "side face" feature dimension (for example, in the interval -90° to -45° or +45° to +90°), so the structured feature corresponds to the preset feature dimension "side face"; otherwise the face image falls into the "other" feature dimension (for example, in the interval -45° to +45°) and corresponds to the preset feature dimension "other".
  • If the structured features of the input face image fall into the "other" feature dimension under all feature dimensions, the structured features of the face image correspond to the preset feature dimension "other".
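  • A minimal sketch of the routing logic described above, assuming the example thresholds from the text (0 to 10 years maps to "child", over 60 to "elderly", a yaw magnitude of 45° or more to "side face"); the function and key names are illustrative, not from this application:

```python
def route_to_dimensions(structured: dict) -> list:
    """Map structured features (age, yaw angle, makeup) to preset dimensions.

    Thresholds follow the examples in the text: 0-10 years -> "child",
    over 60 years -> "elderly", yaw outside [-45, +45] degrees ->
    "side face"; anything matching no preset scene falls into "other".
    """
    dims = []
    age = structured.get("age")
    if age is not None:
        if age <= 10:
            dims.append("child")
        elif age > 60:
            dims.append("elderly")
    yaw = structured.get("yaw")
    if yaw is not None and abs(yaw) >= 45:
        dims.append("side face")
    if structured.get("makeup"):
        dims.append("makeup")
    return dims or ["other"]

print(route_to_dimensions({"age": 3, "yaw": 10}))   # ['child']
print(route_to_dimensions({"age": 35, "yaw": 0}))   # ['other']
```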
  • S203: According to the structured features, obtain unstructured features in the face image corresponding one-to-one to the multiple preset feature dimensions.
  • Specifically, the face retrieval device can determine, according to the preset feature dimensions corresponding to the structured features, whether a target feature dimension is included among the multiple preset feature dimensions.
  • A target feature dimension here refers to one or more of the feature dimensions corresponding to the preset scenes, such as "side face", "child", "elderly", "makeup", "highlight", and/or "dark light"; the target feature dimensions may be predefined according to the actual requirements of different application scenarios.
  • For example, if the structured features correspond to the preset feature dimension "child", the target feature dimension can be "child", which means the multiple preset feature dimensions include the target feature dimension; likewise, if the structured features correspond to the preset feature dimension "makeup", the target feature dimension can be "makeup", which also means the target feature dimension is included.
  • If the structured features do not correspond to any target feature dimension but only to the "other" feature dimension, the multiple preset feature dimensions do not include the target feature dimension.
  • If the target feature dimension is included, the face retrieval device obtains the unstructured feature extraction model corresponding to the target feature dimension, where each target feature dimension corresponds to one unstructured feature extraction model, and inputs the face image into that model to obtain the output unstructured features. If the target feature dimension is not included among the multiple preset feature dimensions, the face retrieval device can obtain the unstructured feature extraction model corresponding to the "other" feature dimension to extract unstructured features from the face image; the extracted features can be called general features, and the model corresponding to "other" can be called the general feature extraction model, which is trained on data outside the target feature dimensions. A sketch of this model selection follows.
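  • The model-selection step in S203 might look like the following sketch; the registry, the stand-in extractor callables, and all names are hypothetical, with a trained CNN taking the place of each stand-in in practice:

```python
import torch

def _stub_extractor(name: str):
    """Stand-in for a trained unstructured feature extraction model."""
    return lambda image: torch.randn(256)  # placeholder feature vector

# Hypothetical registry: one extractor per target feature dimension.
MODELS = {dim: _stub_extractor(dim)
          for dim in ("child", "elderly", "side face", "makeup")}
general_model = _stub_extractor("general")

def extract_unstructured(face_image, dims):
    """Run only the extractors whose target dimension was detected.

    If no target dimension matched (dims == ["other"]), fall back to the
    general feature extraction model, whose output is treated directly
    as a standard feature and needs no further mapping.
    """
    if dims == ["other"]:
        return {"other": general_model(face_image)}
    return {d: MODELS[d](face_image) for d in dims}
```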
  • FIG. 4 is a schematic diagram of training an unstructured feature extraction model in an embodiment of this application.
  • As shown in FIG. 4, the face retrieval device can divide the samples in the training sample set, according to the multiple preset feature dimensions, into categories such as child samples, elderly samples, black-skinned samples, white-skinned samples, makeup samples, and dark-light samples, and then train an unstructured feature extraction model with the samples of each category to obtain the model corresponding to that category.
  • Because samples of the "other" feature dimension do not have the attributes of the above preset scenes, training the corresponding unstructured extraction model yields a model that extracts general features, namely the general feature extraction model mentioned above.
  • S204: At least according to the unstructured features, obtain the standard features corresponding to the multiple preset feature dimensions, where a standard feature is an unstructured feature after transformation by a neural network.
  • Each unstructured feature obtained through S203 lies in the feature space of its own extraction model; one feature space is selected as the reference, and this feature space can be called the standard feature space.
  • After the unstructured features are mapped into the standard feature space, the standard features corresponding to the unstructured features are obtained.
  • The face retrieval system can determine the feature space corresponding to any of the aforementioned preset feature dimensions as the standard feature space; for example, the feature space corresponding to "child" may be determined as the standard feature space, or the feature space corresponding to "side face" may be determined as the standard feature space.
  • Unstructured features already in the standard feature space do not need to be mapped and can be used directly as standard features in face retrieval of the face image. For example, if the feature space corresponding to "child" is determined as the standard feature space, the unstructured features corresponding to "child" can be used directly as standard features.
  • Optionally, the general feature space can be selected as the standard feature space; that is, when the feature mapping models are trained, general features are treated directly as standard features without passing through a mapping model. After the feature mapping models are trained, other unstructured features are mapped into the general feature space through their corresponding feature mapping models, while general features are output directly as standard features. In the general feature space, the mapped unstructured features have been converted by the neural network into the form of general features. This effectively reduces the number of feature mappings: statistically, the "other" feature dimension has the most samples, so using general features as standard features minimizes the number of feature mappings.
  • FIG. 5 is a schematic diagram of mapping unstructured features to a standard feature space in an embodiment of this application.
  • As shown in FIG. 5, suppose the feature space corresponding to "other" (i.e., the general feature space) is selected as the standard feature space.
  • For face template image A, the corresponding standard feature can be [0.24, 0.32, ..., 0.35]; the unstructured feature of face image B under the "side face" feature dimension is [0.13, 0.45, ..., 0.26].
  • The face retrieval device maps the unstructured feature [0.13, 0.45, ..., 0.26] of face image B into the standard feature space (i.e., the feature space corresponding to "other") to obtain the standard feature [0.23, 0.33, ..., 0.36] of face image B. The standard feature of face template image A and the standard feature of face image B can then be compared directly, for example by calculating their cosine similarity, which here is 0.9.
  • For example, formula (1) can be used to calculate the cosine similarity between the standard feature A of face template image A and the standard feature B of face image B:

$$\cos(A,B)=\frac{\sum_{i=1}^{n}A_{i}B_{i}}{\sqrt{\sum_{i=1}^{n}A_{i}^{2}}\;\sqrt{\sum_{i=1}^{n}B_{i}^{2}}} \tag{1}$$

  • where $A_i$ and $B_i$ represent the components of the feature vectors A and B, and n, a positive integer, is the number of components of A and B.
  • In addition, the face retrieval device may also calculate the similarity between standard features through similarity algorithms such as Euclidean distance and Manhattan distance, which is not specifically limited in the embodiments of the present application.
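  • As a worked illustration (not part of the patent), the following Python sketch computes formula (1) for the example features above, truncated to three components; on this toy slice the printed value is close to 1.0 rather than the 0.9 obtained on the full vectors:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity of two standard feature vectors, per formula (1)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

feat_a = np.array([0.24, 0.32, 0.35])   # standard feature of template A
feat_b = np.array([0.23, 0.33, 0.36])   # mapped standard feature of image B
print(round(cosine_similarity(feat_a, feat_b), 4))  # ~0.9996 -> likely match
```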
  • In a possible implementation, the above feature mapping may be implemented by a feature mapping model, and the face retrieval device may train one feature mapping model for each preset feature dimension.
  • The feature mapping model can be trained as follows. First, the face retrieval device obtains a face sample image, where the face sample image has corresponding identity information; then S203 can be performed to obtain the unstructured features of the face sample image; finally, the feature mapping model is trained based on the unstructured features of the face sample image and the identity information until a feature mapping model satisfying the objective function is obtained.
  • In another possible implementation, when performing feature mapping, the face retrieval device can also use the structured features together with the unstructured features as the input of the feature mapping model, so that the mapping of unstructured features can make use of the structured feature information.
  • In this case, the face retrieval device may perform S202 to obtain the structured features of the face sample image and S203 to obtain its unstructured features, and then train the feature mapping model based on the structured features and the unstructured features of the face sample image.
  • Specifically, the structured features can be converted into discrete values, and the face retrieval device can use the unstructured feature values together with the discretized structured feature values as the input of the neural network (the unstructured feature values are already specific numbers, so they can be used directly as input), training the feature mapping model according to the objective function.
  • For example, the structured feature corresponding to the "age" feature dimension can be converted into a specific age value, and the structured feature of the "makeup" feature dimension can discretize "plain face" and "makeup" into the two values 0 and 1.
  • Structured features can also be converted into other discrete values according to the specific feature dimension; the conversion is not limited to the above examples, and the embodiments of the present application do not specifically limit it.
  • The face retrieval device splices the discretized structured features and the unstructured features together as the input of the neural network and trains the feature mapping model according to the objective function until it converges, so that the similarity of unstructured features corresponding to the same identity information is as large as possible and the similarity of unstructured features corresponding to different identity information is as small as possible.
  • For example, if the discretized structured feature value is "1" (i.e., the "makeup" feature dimension) and the unstructured feature is [0.04, ..., 0.08], the input feature of the neural network can be [1, 0.04, ..., 0.08].
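  • A minimal sketch of this splicing step, using the example values from the text; the function name is illustrative:

```python
import torch

def mapping_input(structured_value: float,
                  unstructured: torch.Tensor) -> torch.Tensor:
    """Splice a discretized structured value onto an unstructured vector.

    E.g. "makeup" discretized to 1 plus the vector [0.04, ..., 0.08]
    yields the network input [1, 0.04, ..., 0.08], as in the text.
    """
    return torch.cat([torch.tensor([structured_value]), unstructured])

x = mapping_input(1.0, torch.tensor([0.04, 0.08]))
print(x)  # tensor([1.0000, 0.0400, 0.0800])
```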
  • Accordingly, S204 may include: obtaining the standard features corresponding to the multiple preset feature dimensions according to both the structured features and the unstructured features.
  • That is, the face retrieval device trains the feature mapping model based on the structured and unstructured features of the face sample images, and then inputs the structured and unstructured features of the face image to be retrieved into the trained feature mapping model together to obtain the standard features corresponding to each preset feature dimension.
  • In a possible implementation, the above objective function may be a triplet loss objective function, as shown in formula (2):

$$L=\sum_{i=1}^{N}\max\left(0,\;\left\|f(x_{i}^{a})-f(x_{i}^{p})\right\|_{2}^{2}-\left\|f(x_{i}^{a})-f(x_{i}^{n})\right\|_{2}^{2}+\alpha\right) \tag{2}$$

  • where N is the number of training samples; $x_i^a$ and $f(x_i^a)$ are a face sample image and its feature; $x_i^p$ and $f(x_i^p)$ are a face sample image with the same identity information as $x_i^a$ and its feature; $x_i^n$ and $f(x_i^n)$ are a face sample image with different identity information and its feature; and $\alpha$ is the expected margin between the distance of the negative sample pair and the distance of the positive sample pair.
  • When the distance between the negative sample pair exceeds the distance between the positive sample pair by at least $\alpha$, the objective value of the triplet is 0; otherwise it is greater than 0.
  • In the embodiments of the present application, by minimizing this objective function, the similarity of unstructured features corresponding to the same identity information is made as large as possible, and the similarity of unstructured features corresponding to different identity information is made as small as possible.
  • The embodiments of the present application do not limit the form of the objective function; any objective function that can be used to train a single face recognition model can be used in the technical solutions described in the embodiments of the present application.
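  • A sketch of how formula (2) might be implemented, assuming batched feature tensors; the margin value is an assumption, since the patent does not fix α:

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor: torch.Tensor, positive: torch.Tensor,
                 negative: torch.Tensor, alpha: float = 0.2) -> torch.Tensor:
    """Triplet objective of formula (2), summed over N training triplets.

    Each input is an (N, D) batch of features. The loss pulls features of
    the same identity together and pushes features of different identities
    at least `alpha` apart; a triplet contributes 0 once the negative pair
    is farther than the positive pair by the margin.
    """
    pos_dist = (anchor - positive).pow(2).sum(dim=1)   # ||f_a - f_p||^2
    neg_dist = (anchor - negative).pow(2).sum(dim=1)   # ||f_a - f_n||^2
    return F.relu(pos_dist - neg_dist + alpha).sum()
```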
  • S205: Perform face retrieval on the face image according to the standard features.
  • After obtaining the standard features of the face image to be retrieved, the face retrieval device can directly compare these standard features with the features of the face template images, find the most similar features, and obtain one or more face template images to complete face retrieval.
  • The above face template images can be input into the face retrieval device together with the face image to be retrieved, with S201 to S204 executed in sequence to extract their facial features and map them into the standard feature space, after which they are compared with the standard features of the face image. Alternatively, the face template images can be input into the face retrieval device in advance to extract their facial features and map them into the standard feature space, and the resulting standard features of each face template image are stored; after the standard features of a face image to be retrieved are subsequently obtained, the stored standard features of each face template image are read and compared to complete face retrieval.
  • Of course, feature extraction and feature comparison for the face image to be retrieved and the face template images can also be performed in other ways, as long as face retrieval can be completed; this is not specifically limited in the embodiments of the present application.
  • It should be noted that the number of unstructured feature extraction models a face image passes through can reflect the difficulty of extracting features from that image (for example, a face image passing through the unstructured feature extraction models corresponding to the three feature dimensions "side face", "makeup", and "elderly" means that the image has the attributes of these three feature dimensions).
  • Using the average of the resulting feature values for face retrieval is equivalent to an ensemble of models: the more difficult and complex the face image, the more models are integrated, and the more the robustness of face retrieval is improved.
  • Therefore, the above S205 may further include: using the average value of the standard features as the output feature of the face image, and performing face retrieval on the face image using the output feature.
  • Specifically, the face retrieval device maps the extracted unstructured features of the face image into the standard feature space through S204 to convert them into standard features, calculates the average of these standard features, and uses the average as the output feature of the face image; finally, the output feature is compared with the features of the face template images to complete face retrieval. It should be noted that, to improve the robustness of face retrieval, after the standard features of a face template image are obtained, their average is also taken, and this calculated average is compared with the output feature of the face image (that is, the average of its standard features) to complete face retrieval.
  • FIG. 6 is a schematic diagram of the process of facial feature extraction in an embodiment of this application.
  • In conjunction with FIG. 6, the foregoing S201 to S204 may include the following steps:
  • Step 1: The face retrieval device obtains the face image to be retrieved.
  • Step 2: The face retrieval device inputs the face image into the structured feature extraction model and extracts the corresponding structured features; based on these, the face image is judged to involve the two feature dimensions "side face" and "makeup".
  • Step 3: The face retrieval device inputs the face image into the unstructured feature extraction models corresponding to the "side face" and "makeup" feature dimensions, namely the side face model and the makeup model.
  • Step 4: The face retrieval device obtains the unstructured features [0.04, ..., 0.08] output by the side face model and the unstructured features [0.06, ..., 0.03] output by the makeup model.
  • Step 5: The face retrieval device inputs the unstructured features [0.04, ..., 0.08] into the feature mapping model corresponding to the side face model to obtain the standard features [0.02, ..., 0.06], and inputs the unstructured features [0.06, ..., 0.03] into the feature mapping model corresponding to the makeup model to obtain the standard features [0.021, ..., 0.059].
  • Step 6: The face retrieval device calculates the average of the standard features [0.02, ..., 0.06] and [0.021, ..., 0.059] to obtain the output feature [0.0205, ..., 0.0595] of the face image.
  • Subsequently, the face retrieval device can use the output feature [0.0205, ..., 0.0595] to perform face retrieval on the face image.
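  • The averaging in Step 6 can be reproduced with a few lines of Python; the feature vectors are truncated here to two components from the example above:

```python
import numpy as np

# Standard features from the two feature mapping models in the walk-through
# (vectors truncated to two components for brevity).
standard_side_face = np.array([0.02, 0.06])
standard_makeup = np.array([0.021, 0.059])

# The output feature is the mean of all standard features; harder images
# pass through more models, so the average acts like a model ensemble.
output_feature = np.mean([standard_side_face, standard_makeup], axis=0)
print(output_feature)  # [0.0205 0.0595], matching the example
```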
  • In a possible implementation, the same feature dimension may allow multiple unstructured feature extraction models with the same function.
  • For example, the "side face" feature dimension may have two models, side face model 1 and side face model 2.
  • In this case, the face retrieval device can input the face image into side face model 1 and side face model 2 respectively to obtain the corresponding unstructured features, map these unstructured features into the standard feature space to obtain the corresponding standard features, and then average the standard features to perform face retrieval.
  • Optionally, the aforementioned multiple unstructured feature extraction models with the same function may be different versions of an unstructured feature extraction model, and the unstructured features may also carry the version number of the model.
  • For example, "001" in the unstructured feature [001, 0.06, ..., 0.03] represents the version number of the unstructured feature extraction model, and the following [0.06, ..., 0.03] is the feature vector.
  • Accordingly, the above S203 may include: determining, according to the structured features, whether the target feature dimension is included in the multiple preset feature dimensions; if it is, obtaining the multiple unstructured feature extraction models corresponding to the target feature dimension; and inputting the face image into the multiple unstructured feature extraction models to obtain the output unstructured features.
  • In other words, the face retrieval device selects the corresponding multiple unstructured feature extraction models according to the preset feature dimensions corresponding to the structured features, where the same feature dimension may correspond to multiple unstructured models with the same function; the face retrieval device then inputs the face image to be retrieved into each unstructured feature extraction model to extract the multiple unstructured features corresponding to that feature dimension of the face image.
  • In a possible implementation, the different versions of unstructured feature extraction models may be the models before and after an update in the same face retrieval device.
  • After the model is updated, the face image can use the new model to extract unstructured features; the unstructured features extracted by the new model are determined as the standard features, the features of the face template images are mapped into the new standard feature space, and the above S205 is then executed to realize face retrieval.
  • Afterwards, the new model can be used to perform feature extraction on new face images.
  • In another possible implementation, different versions of unstructured feature extraction models can also be models on different devices.
  • Different devices can include unstructured feature extraction models of various feature dimensions as well as general feature extraction models. After the standard feature is selected (for example, selecting the general features extracted by the general feature extraction model on a certain device as the standard features), the unstructured features extracted by the models on other devices are mapped into the standard feature space, and the above S205 is then executed to realize face retrieval.
  • In yet another possible implementation, different versions of unstructured feature extraction models can also be models provided by different vendors, and different vendors can provide unstructured feature extraction models of various feature dimensions as well as general feature extraction models.
  • After the standard feature is selected (for example, selecting the general feature extraction model provided by a certain vendor), the unstructured features extracted by the models of other vendors are mapped into the standard feature space, and the above S205 is then executed to realize face retrieval.
  • The face images described in the above embodiments all have structured features extracted first and unstructured features extracted afterwards.
  • Optionally, the unstructured features of a face image can also be extracted directly: the face image does not pass through the structured feature extraction model but is input directly into an unstructured feature extraction model for feature extraction.
  • That unstructured feature extraction model can be the general feature extraction model described in the above embodiments, or an unstructured feature extraction model designed according to the features to be extracted; this is not specifically limited in the embodiments of this application.
  • With multiple unstructured feature extraction models, the feature extraction capability is stronger than that of a single model, making the method better suited to face retrieval in complex scenes.
  • Moreover, because the structured model is first used to assign the face image to different feature dimensions, the face image can be processed in a more targeted way, and it does not need to pass through all unstructured feature extraction models, which reduces the number of models the face image must pass through and reduces the computational complexity.
  • Based on the same technical concept, an embodiment of the present application provides a face retrieval apparatus.
  • The face retrieval apparatus may be the face retrieval device described in the above embodiments, a chip or system-on-chip in that device, or a functional module in the device for implementing the methods described in the foregoing embodiments.
  • The face retrieval apparatus can implement the functions performed by the face retrieval device in the foregoing embodiments, and these functions can be implemented by hardware executing corresponding software.
  • The hardware or software includes one or more modules corresponding to the aforementioned functions.
  • FIG. 7 is a schematic structural diagram of a face retrieval apparatus in an embodiment of this application.
  • As shown in FIG. 7, the face retrieval apparatus 700 includes: an interface module 701, configured to obtain a face image to be retrieved; a feature extraction module 702, configured to obtain structured features of the face image, where the structured features are features of the face image with a specific physical meaning and correspond to multiple preset feature dimensions, to obtain, according to the structured features, unstructured features in the face image corresponding one-to-one to the multiple preset feature dimensions, where the unstructured features include feature vectors used to represent the face image, and to obtain, at least according to the unstructured features, standard features corresponding to the multiple preset feature dimensions, where the standard features include unstructured features transformed by a neural network; and a face retrieval module 703, configured to perform face retrieval on the face image according to the standard features.
  • In a possible implementation, the feature extraction module 702 is configured to obtain a structured feature extraction model trained according to the multiple preset feature dimensions, and to input the face image into the structured feature extraction model to obtain the output structured features.
  • In a possible implementation, the feature extraction module 702 is configured to determine, according to the structured features, whether the target feature dimension is included in the multiple preset feature dimensions; if it is, to obtain the unstructured feature extraction model corresponding to the target feature dimension, trained on the data corresponding to that dimension; and to input the face image into the unstructured feature extraction model to obtain the output unstructured features.
  • In a possible implementation, the feature extraction module 702 is further configured to obtain a general feature extraction model, trained on data outside the target feature dimension, if the target feature dimension is not included in the multiple preset feature dimensions; to input the face image into the general feature extraction model to obtain the output general features; and to determine the general features as the standard features.
  • In a possible implementation, the feature extraction module 702 is configured to obtain a feature mapping model corresponding one-to-one to the unstructured feature model, and to input the unstructured features into the feature mapping model corresponding to the unstructured features to obtain the output standard features.
  • In another possible implementation, the feature extraction module 702 is configured to obtain a feature mapping model corresponding one-to-one to the unstructured feature model, and to input the structured features and the unstructured features into the feature mapping model corresponding to the unstructured features to obtain the output standard features.
  • In a possible implementation, the feature extraction module 702 is further configured to obtain a face sample image that has corresponding identity information; to obtain the structured features and the unstructured features of the face sample image; and to train the feature mapping model based on the structured features, the unstructured features, and the identity information of the face sample image to obtain a feature mapping model that satisfies the objective function.
  • In a possible implementation, the feature extraction module 702 is configured to determine, according to the structured features, whether the target feature dimension is included in the multiple preset feature dimensions; if it is, to obtain the multiple unstructured feature extraction models corresponding to the target feature dimension; and to input the face image into the multiple unstructured feature extraction models to obtain the output unstructured features.
  • In a possible implementation, the face retrieval module 703 is configured to determine the average value of the standard features as the output feature of the face image, and to use the output feature to perform face retrieval on the face image.
  • The interface module 701 can be used to perform S201 in the above embodiments, the feature extraction module 702 can be used to perform S202 to S204, and the face retrieval module 703 can be used to perform S205.
  • The interface module mentioned in the embodiments of the present application may be a receiving interface, a receiving circuit, a receiver, or the like; the feature extraction module and the face retrieval module may be one or more processors.
  • FIG. 8 is a schematic structural diagram of the face retrieval device in an embodiment of the application.
  • As shown by the solid lines in FIG. 8, the face retrieval device 800 may include a processor 801 and a communication interface 802.
  • The processor 801 may be used to support the face retrieval device 800 in implementing the functions involved in each of the foregoing embodiments; for example, the processor 801 may obtain the face image to be retrieved through the communication interface 802.
  • Optionally, the face retrieval device 800 may further include a memory 803, which is used to store the computer-executable instructions and data necessary for the face retrieval device 800.
  • When the face retrieval device 800 is running, the processor 801 executes the computer-executable instructions stored in the memory 803, so that the face retrieval device 800 executes the face retrieval method described in each of the foregoing embodiments.
  • An embodiment of the present application provides a computer-readable storage medium that stores instructions; when the instructions run on a computer, they are used to execute the face retrieval method described in each of the above embodiments.
  • An embodiment of the present application provides a computer program or computer program product; when the computer program or computer program product is executed on a computer, the computer can implement the face retrieval method described in each of the above embodiments.
  • The computer-readable medium may include a computer-readable storage medium, which corresponds to a tangible medium such as a data storage medium, or a communication medium including any medium that facilitates transfer of a computer program from one place to another (for example, according to a communication protocol).
  • In this manner, computer-readable media may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium such as a signal or carrier wave.
  • A data storage medium can be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementing the techniques described in this application.
  • A computer program product may include a computer-readable medium.
  • such computer-readable storage media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage devices, magnetic disk storage devices or other magnetic storage devices, flash memory, or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium.
  • for example, if instructions are transmitted from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
  • it should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media.
  • as used herein, disks and discs include compact discs (CD), laser discs, optical discs, digital versatile discs (DVD), and Blu-ray discs, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuits.
  • accordingly, the term "processor" as used herein may refer to any of the foregoing structures or any other structure suitable for implementing the techniques described herein.
  • in addition, in some aspects, the functions described with reference to the various illustrative logical blocks, modules, and steps described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Moreover, the techniques may be fully implemented in one or more circuits or logic elements.
  • the techniques of this application may be implemented in a wide variety of devices or apparatuses, including wireless handsets, integrated circuits (ICs), or a set of ICs (for example, a chipset).
  • various components, modules, or units are described in this application to emphasize functional aspects of devices configured to perform the disclosed techniques, but they do not necessarily need to be realized by different hardware units. Rather, the various units may be combined in a codec hardware unit in conjunction with suitable software and/or firmware, or provided by a collection of interoperating hardware units (including one or more processors as described above).


Abstract

This application provides a face retrieval method and apparatus. The method may include: acquiring a face image to be retrieved; acquiring structured features of the face image, where the structured features include features used to characterize face attributes and correspond to multiple preset feature dimensions; acquiring, according to the structured features, unstructured features of the face image in one-to-one correspondence with the multiple preset feature dimensions, where the unstructured features include vectors used to represent face features; and acquiring, at least according to the unstructured features, standard features corresponding to the multiple preset feature dimensions, where the standard features include features obtained by converting the unstructured features through a neural network. In this application, since a structured model is first used to assign the face image to different feature dimensions, and multiple unstructured feature extraction models are then used for feature extraction, the method is suitable for face retrieval in complex scenarios and reduces computational complexity.

Description

Face retrieval method and apparatus
This application claims priority to Chinese patent application No. 201910755742.8, entitled "Face retrieval method and apparatus", filed with the Chinese Patent Office on August 15, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer vision, and in particular to a face retrieval method and apparatus.
Background
With the development of technology, face retrieval has emerged as a biometric technology that combines computer image processing knowledge with biostatistics knowledge. At present, face retrieval is widely used in identity recognition, identity verification and other related scenarios (such as security monitoring and access control gates).
In face retrieval technology, given a face image to be retrieved, a face retrieval device typically compares it with multiple face images in a specified face database to find the most similar face image or images. However, the face retrieval device does not directly compute the similarity between the face image to be retrieved and the face images in the database; instead, all images are represented as features, and these features are used to compute the similarity between images. When extracting features from face images, if a single feature extraction model is used for all images, the limited extraction capability of a single model makes it difficult to handle face retrieval in all scenarios; whereas if multiple feature extraction models are used for all images, every image must pass through all of the models, resulting in high computational complexity.
Summary
This application provides a face retrieval method and apparatus, so as to be suitable for face retrieval in complex scenarios while reducing computational complexity.
In a first aspect, this application provides a face retrieval method, which can be applied in related scenarios such as identity recognition and identity verification. The face retrieval method may include: acquiring a face image to be retrieved; acquiring structured features of the face image, where the structured features correspond to multiple preset feature dimensions; acquiring, according to the structured features, unstructured features of the face image in one-to-one correspondence with the multiple preset feature dimensions; acquiring, at least according to the unstructured features, standard features corresponding to the multiple preset feature dimensions, where the standard features include features obtained by converting the unstructured features through a neural network; and performing face retrieval on the face image according to the standard features.
In this application, the face features in a face image can be divided into structured features and unstructured features. The structured features may include features used to characterize face attributes; a face attribute may refer to some specific physical meaning of the face image, such as age, gender and/or angle, and is extracted from the face image by a structured feature extraction model. The unstructured features may include vectors used to represent face features; such a face feature may refer to a feature of the face image that has no specific physical meaning, consists of a string of numbers, may also be called a feature vector, and is extracted from the face image by an unstructured feature extraction model. The similarity between feature vectors can be used to represent the similarity between the face image to be retrieved and a face template image.
In this application, since multiple feature extraction models are used for feature extraction, the feature extraction capability is stronger than that of a single model, which is more suitable for face retrieval in complex scenarios. Further, since a structured model is first used to assign the face image to different feature dimensions, on the one hand the face image can be processed in a more targeted manner, and on the other hand the face image does not need to pass through all of the unstructured feature extraction models, which reduces the number of models the face image must pass through and thus reduces computational complexity.
Based on the first aspect, in some possible implementations, acquiring the structured features of the face image includes: acquiring a structured feature extraction model, where the structured model is obtained by training according to the multiple preset feature dimensions; and inputting the face image into the structured feature extraction model to obtain the output structured features.
Based on the first aspect, in some possible implementations, acquiring, according to the structured features, the unstructured features of the face image corresponding to the multiple preset feature dimensions includes: determining, according to the structured features, whether the multiple preset feature dimensions include a target feature dimension; if they do, acquiring the unstructured feature extraction model corresponding to the target feature dimension, where the unstructured feature extraction model is obtained by training on data corresponding to the target feature dimension; and inputting the face image into the unstructured feature extraction model to obtain the output unstructured features.
In this application, both the structured feature extraction model and the unstructured feature extraction model are machine learning models (for example, convolutional neural networks). A convolutional neural network is essentially an input-to-output mapping: it can learn a large number of mapping relationships between inputs and outputs without requiring any precise mathematical expression between them; after training samples are collected and the convolutional neural network is trained, the network acquires the ability to map between input-output pairs. Of course, the structured feature extraction model and the unstructured feature extraction model may also be other machine learning models, which is not specifically limited in the embodiments of this application.
Based on the first aspect, in some possible implementations, before performing face retrieval on the face image, the method may further include: if the multiple preset feature dimensions do not include the target feature dimension, acquiring a generic feature extraction model, where the generic feature extraction model is obtained by training on data outside the target feature dimension; inputting the face image into the generic feature extraction model to obtain the output generic features; and determining the generic features as the standard features.
Based on the first aspect, in some possible implementations, at least acquiring the standard features corresponding to the unstructured features includes: acquiring feature mapping models, where the feature mapping models are in one-to-one correspondence with the unstructured feature models; and inputting the unstructured features into the feature mapping models corresponding to the unstructured features to obtain the output standard features.
Based on the first aspect, in some possible implementations, at least acquiring the standard features corresponding to the unstructured features includes: acquiring feature mapping models, where the feature mapping models are in one-to-one correspondence with the unstructured feature models; and inputting the structured features and the unstructured features into the feature mapping models corresponding to the unstructured features to obtain the output standard features.
In this application, the structured features and the unstructured features are jointly used as the input of the feature mapping model, so that the mapping of the unstructured features can make use of the structured features, thereby improving the accuracy of the feature mapping.
Based on the first aspect, in some possible implementations, the method may further include: acquiring face sample images, where the face sample images have corresponding identity information; acquiring the structured features and the unstructured features of the face sample images; and training the feature mapping model based on the structured features of the face sample images, the unstructured features of the face sample images, and the identity information, to obtain a feature mapping model that satisfies an objective function.
Based on the first aspect, in some possible implementations, acquiring, according to the structured features, the unstructured features of the face image corresponding to the multiple preset feature dimensions includes: determining, according to the structured features, whether the multiple preset feature dimensions include a target feature dimension; if they do, acquiring multiple unstructured feature extraction models corresponding to the target feature dimension; and inputting the face image into the multiple unstructured feature extraction models to obtain the output unstructured features.
Based on the first aspect, in some possible implementations, performing face retrieval on the face image according to the standard features includes: determining the average value of the standard features as the output feature of the face image; and using the output feature to perform face retrieval on the face image.
In a second aspect, this application provides a face retrieval apparatus, including: an interface module, configured to acquire a face image to be retrieved; a feature extraction module, configured to acquire structured features of the face image, where the structured features include features used to characterize face attributes and correspond to multiple preset feature dimensions; acquire, according to the structured features, unstructured features of the face image in one-to-one correspondence with the multiple preset feature dimensions, where the unstructured features include vectors used to represent face features; and acquire, at least according to the unstructured features, standard features corresponding to the multiple preset feature dimensions, where the standard features include features obtained by converting the unstructured features through a neural network; and a face retrieval module, configured to perform face retrieval on the face image according to the standard features.
Based on the second aspect, in some possible implementations, the feature extraction module is configured to acquire a structured feature extraction model, where the structured model is obtained by training according to the multiple preset feature dimensions; and input the face image into the structured feature extraction model to obtain the output structured features.
Based on the second aspect, in some possible implementations, the feature extraction module is configured to determine, according to the structured features, whether the multiple preset feature dimensions include a target feature dimension; if they do, acquire the unstructured feature extraction model corresponding to the target feature dimension, where the unstructured feature extraction model is obtained by training on data corresponding to the target feature dimension; and input the face image into the unstructured feature extraction model to obtain the output unstructured features.
Based on the second aspect, in some possible implementations, the feature extraction module is further configured to: if the multiple preset feature dimensions do not include the target feature dimension, acquire a generic feature extraction model, where the generic feature extraction model is obtained by training on data outside the target feature dimension; input the face image into the generic feature extraction model to obtain the output generic features; and determine the generic features as the standard features.
Based on the second aspect, in some possible implementations, the feature extraction module is configured to acquire feature mapping models, where the feature mapping models are in one-to-one correspondence with the unstructured feature models; and input the unstructured features into the feature mapping models corresponding to the unstructured features to obtain the output standard features.
Based on the second aspect, in some possible implementations, the feature extraction module is configured to acquire feature mapping models, where the feature mapping models are in one-to-one correspondence with the unstructured feature models; and input the structured features and the unstructured features into the feature mapping models corresponding to the unstructured features to obtain the output standard features.
Based on the second aspect, in some possible implementations, the feature extraction module is further configured to acquire face sample images, where the face sample images have corresponding identity information; acquire the structured features and the unstructured features of the face sample images; and train the feature mapping model based on the structured features of the face sample images, the unstructured features of the face sample images, and the identity information, to obtain a feature mapping model that satisfies an objective function.
Based on the second aspect, in some possible implementations, the feature extraction module is configured to determine, according to the structured features, whether the multiple preset feature dimensions include a target feature dimension; if they do, acquire multiple unstructured feature extraction models corresponding to the target feature dimension; and input the face image into the multiple unstructured feature extraction models to obtain the output unstructured features.
Based on the second aspect, in some possible implementations, the face retrieval module is configured to determine the average value of the standard features as the output feature of the face image, and use the output feature to perform face retrieval on the face image.
The interface module mentioned in the second aspect may be a receiving interface, a receiving circuit, a receiver, or the like; the feature extraction module and the face retrieval module may be one or more processors.
In a third aspect, this application provides a face retrieval device, which may include a processor and a communication interface. The processor may be configured to support the face retrieval device in implementing the functions involved in the first aspect or any possible implementation of the first aspect; for example, the processor may acquire the face image to be retrieved through the communication interface.
Based on the third aspect, in some possible implementations, the face retrieval device may further include a memory, which is used to store the computer-executable instructions and data necessary for the face retrieval device. When the face retrieval device runs, the processor executes the computer-executable instructions stored in the memory, so that the face retrieval device performs the face retrieval method described in the first aspect or any possible implementation of the first aspect.
In a fourth aspect, this application provides a computer-readable storage medium storing instructions which, when run on a computer, are used to execute the face retrieval method of any one of the first aspect.
In a fifth aspect, this application provides a computer program or computer program product which, when executed on a computer, causes the computer to implement the face retrieval method of any one of the first aspect.
It should be understood that the second to fifth aspects of this application are consistent with the technical solution of the first aspect, and the beneficial effects obtained by the aspects and the corresponding feasible implementations are similar, so details are not repeated here.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the following describes the accompanying drawings required in the embodiments or the background of this application.
FIG. 1 is a schematic diagram of face features in an embodiment of this application;
FIG. 2 is a schematic flowchart of the face retrieval method in an embodiment of this application;
FIG. 3 is a schematic diagram of a structured feature extraction model extracting structured features in an embodiment of this application;
FIG. 4 is a schematic diagram of training an unstructured feature extraction model in an embodiment of this application;
FIG. 5 is a schematic diagram of mapping unstructured features to the standard feature space in an embodiment of this application;
FIG. 6 is a schematic diagram of the face feature extraction process in an embodiment of this application;
FIG. 7 is a schematic structural diagram of the face retrieval apparatus in an embodiment of this application;
FIG. 8 is a schematic structural diagram of the face retrieval device in an embodiment of this application.
Detailed Description
The following describes the embodiments of this application with reference to the accompanying drawings. In the following description, reference is made to the accompanying drawings, which form a part of this application and show, by way of illustration, specific aspects of the embodiments of this application or specific aspects in which the embodiments of this application may be used. It should be understood that the embodiments of this application may be used in other aspects and may include structural or logical changes not depicted in the drawings. Therefore, the following detailed description should not be understood in a limiting sense. For example, it should be understood that disclosure in connection with a described method may equally apply to a corresponding device or system for performing the method, and vice versa. For example, if one or more specific method steps are described, the corresponding device may include one or more units, such as functional units, to perform the described one or more method steps (for example, one unit performing one or more steps, or multiple units each performing one or more of the multiple steps), even if such one or more units are not explicitly described or illustrated in the drawings. On the other hand, if a specific apparatus is described based on one or more units, such as functional units, the corresponding method may include one step to perform the functionality of the one or more units (for example, one step performing the functionality of the one or more units, or multiple steps each performing the functionality of one or more of the multiple units), even if such one or more steps are not explicitly described or illustrated in the drawings. Further, it should be understood that, unless expressly stated otherwise, the features of the exemplary embodiments and/or aspects described herein may be combined with one another.
The face retrieval method can be widely applied in related scenarios such as identity recognition and identity verification. A face retrieval device extracts features from the face image to be retrieved, compares the extracted features with the features of face template images, and retrieves one or more face template images with a high degree of matching to the face image to be retrieved, thereby completing the face retrieval. However, a face retrieval device often trains only a single feature extraction model to extract features from all face images; for more complex scenarios, such as profile faces, cross-age, occlusion, makeup, or dim lighting, a single feature extraction model cannot cope due to its limited capability. To address this, a face retrieval device may instead use multiple feature extraction models to extract features from face images. Since features extracted by different feature extraction models cannot be compared directly, and only features extracted by the same feature extraction model can be compared directly, all face images must then pass through all of the feature extraction models, which greatly increases the computational complexity of the face retrieval device, especially when the number of feature extraction models is large.
To solve the above problems, an embodiment of this application provides a face retrieval method, which can be applied in the above face retrieval device; the face retrieval device can be deployed in equipment such as security monitoring systems and access control gates.
It should be noted that, in the embodiments of this application, FIG. 1 is a schematic diagram of face features in an embodiment of this application. As shown in FIG. 1, the face features in a face image can be divided into structured features and unstructured features. The structured features may include features used to characterize face attributes; a face attribute may refer to some specific physical meaning of the face image, such as age, gender or angle, and is extracted from the face image by a structured feature extraction model. The unstructured features may include vectors used to represent face features; these face features may refer to features of the face image that have no specific physical meaning, consist of a string of numbers, may also be called feature vectors, and are extracted from the face image by an unstructured feature extraction model. The similarity between feature vectors can be used to represent the similarity between the face image to be retrieved and a face template image.
Both the structured feature extraction model and the unstructured feature extraction model are machine learning models (for example, convolutional neural networks (CNNs)). A CNN is essentially an input-to-output mapping: it can learn a large number of mapping relationships between inputs and outputs without requiring any precise mathematical expression between them; after training samples are collected and the CNN is trained, the CNN acquires the ability to map between input-output pairs. Of course, the structured feature extraction model and the unstructured feature extraction model may also be other machine learning models, which is not specifically limited in the embodiments of this application.
FIG. 2 is a schematic flowchart of the face retrieval method in an embodiment of this application. As shown in FIG. 2, the method may include:
S201: Acquire a face image to be retrieved.
In this embodiment of this application, the face retrieval device receives an input face image to be retrieved. Of course, the face retrieval device may also receive input gallery images (that is, face template images). The face template images can be used for comparison with the face image to complete face retrieval of the face image.
S202: Acquire structured features of the face image.
The structured features correspond to multiple preset feature dimensions.
In this embodiment of this application, the face retrieval device may predefine multiple feature dimensions according to the actual needs of different application scenarios. These feature dimensions may be the feature dimensions along which the face retrieval device performs structured feature extraction on the face image, such as angle, age, gender, ethnicity, makeup, and brightness. A large number of training sample images are then used to train the structured feature extraction model, so that after the face retrieval device inputs the face image to be retrieved into the structured feature extraction model, the structured features of the face image under these feature dimensions can be recognized; these feature dimensions can be regarded as the multiple preset feature dimensions corresponding to the structured features. For example, the predefined feature dimensions may include the two feature dimensions "angle" and "age". After the face image is input into the structured feature extraction model, the model can recognize the structured features under these two dimensions: the structured feature corresponding to "angle" may be a yaw angle value such as +10°, +30° or +45°, and the structured feature corresponding to "age" may be an age value such as 3, 15 or 70. The two feature dimensions "angle" and "age" may be the preset feature dimensions corresponding to the structured features.
In some possible implementations, FIG. 3 is a schematic diagram of a structured feature extraction model extracting structured features in an embodiment of this application. As shown in FIG. 3, the predefined feature dimensions can be further divided into multiple dimensions. For example, "angle" can be further divided into feature dimensions such as "profile face" and/or "other"; "age" can be further divided into "child", "elderly" and/or "other"; "makeup" can be divided into "makeup" and/or "other"; and "brightness" can be divided into "highlight", "dim light" and/or "other". Among these, feature dimensions such as "profile face", "child", "elderly", "makeup", "highlight" and/or "dim light" belong to preset scenarios, which can be set according to the actual needs of face retrieval and are not specifically limited in the embodiments of this application. The face retrieval device can extract the structured features of the face image through the structured feature extraction model and, according to the obtained structured features, determine which of the divided feature dimensions the face image corresponds to. For example, suppose the preset feature dimension is "age" and the structured feature extracted under the "age" dimension is 3 years old; the face retrieval system may consider the face image to fall into the "child" feature dimension (for example, 0 to 10 years old), so the structured feature of the face image corresponds to the preset feature dimension "child". Alternatively, if the structured feature extracted under the "age" dimension is 70 years old, the face retrieval system may consider the face image to fall into the "elderly" feature dimension (for example, over 60 years old), so the structured feature corresponds to the preset feature dimension "elderly". Furthermore, if the structured feature extracted under the "age" dimension is 35 years old, the face retrieval system may consider the face image to fall into the "other" feature dimension (for example, older than 10 and younger than 60), so the structured feature corresponds to the preset feature dimension "other". As another example, suppose the preset feature dimension is "angle" and the structured feature extracted under the "angle" dimension is +60°; the face retrieval system may consider the face image to fall into the "profile face" feature dimension (for example, in the interval -90° to -45° or the interval +45° to +90°), so the structured feature corresponds to the preset feature dimension "profile face". Alternatively, if the structured feature extracted under the "angle" dimension is +30°, the face retrieval system may consider the face image to fall into the "other" feature dimension (for example, in the interval -45° to +45°), so the structured feature corresponds to the preset feature dimension "other". Optionally, if the structured features of the input face image under all feature dimensions fall into the "other" feature dimension, the structured features of the face image correspond to the preset feature dimension "other".
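The patent provides no source code; purely as an illustrative sketch, the bucketing just described might look as follows. The function name, thresholds, and dimension labels are hypothetical and merely chosen to match the examples above.

```python
# Minimal illustrative sketch (not from the patent): bucketing raw
# structured attributes into the preset feature dimensions described above.
def bucket_structured_features(age: float, yaw_degrees: float) -> list:
    dimensions = []
    if age <= 10:                     # e.g. 0-10 years -> "child"
        dimensions.append("child")
    elif age > 60:                    # e.g. over 60 years -> "elderly"
        dimensions.append("elderly")
    if abs(yaw_degrees) >= 45:        # e.g. -90..-45 or +45..+90 -> "profile face"
        dimensions.append("profile face")
    if not dimensions:                # falls into no preset scenario
        dimensions.append("other")
    return dimensions

print(bucket_structured_features(3, 10))    # ['child']
print(bucket_structured_features(35, 60))   # ['profile face']
print(bucket_structured_features(35, 30))   # ['other']
```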
S203: Acquire, according to the structured features, unstructured features of the face image in one-to-one correspondence with the multiple preset feature dimensions.
In this embodiment of this application, after acquiring the structured features of the face image through S202, the face retrieval device can determine, according to the multiple preset feature dimensions corresponding to the structured features, whether the multiple preset feature dimensions include a target feature dimension. The target feature dimension here may refer to one or more of the feature dimensions corresponding to the multiple preset scenarios, such as "profile face", "child", "elderly", "makeup", "highlight" and/or "dim light"; the target feature dimension may be predefined according to the actual needs of different application scenarios. For example, if the structured features correspond to the preset feature dimension "child", the target feature dimension may be "child", meaning the multiple preset feature dimensions include the target feature dimension; if the structured features correspond to the preset feature dimension "makeup", the target feature dimension may be "makeup", likewise meaning the target feature dimension is included; if the structured features do not correspond to any target feature dimension, the structured features correspond to the "other" feature dimension, meaning the multiple preset feature dimensions do not include the target feature dimension. Then, if the multiple preset feature dimensions include the target feature dimension, the face retrieval device acquires the unstructured feature extraction model corresponding to the target feature dimension, where one target feature dimension corresponds to one unstructured feature extraction model. Finally, the face retrieval device inputs the face image into each unstructured feature extraction model to obtain the output unstructured features. If, on the other hand, the multiple preset feature dimensions do not include the target feature dimension, the face retrieval device can acquire the unstructured feature extraction model corresponding to the "other" feature dimension to perform unstructured feature extraction on the face image. The extracted unstructured features can be called generic features, and the unstructured feature extraction model corresponding to the "other" feature dimension can be called the generic feature extraction model, which is obtained by training on data outside the target feature dimension.
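Again as an illustration only, the routing in S203 can be sketched as a lookup from preset feature dimensions to per-dimension extraction models. The registry, the stand-in stub models, and the 128-dimensional feature size below are assumptions, not part of the patent.

```python
import numpy as np

# Hypothetical registry: one unstructured feature extraction model per
# target feature dimension, plus a generic model for "other". Each
# stand-in "model" just maps an image to a fixed-length feature vector.
def _stub_model(seed: int):
    rng = np.random.default_rng(seed)
    weights = rng.standard_normal(128)            # frozen random projection
    return lambda image: weights * float(np.mean(image))

MODEL_REGISTRY = {
    "child": _stub_model(0),
    "elderly": _stub_model(1),
    "profile face": _stub_model(2),
    "makeup": _stub_model(3),
    "other": _stub_model(4),                      # generic feature extraction model
}

def extract_unstructured(image, dimensions):
    """S203 sketch: run the image only through the models selected by its
    preset feature dimensions; fall back to the generic model otherwise."""
    targets = [d for d in dimensions if d in MODEL_REGISTRY and d != "other"]
    if not targets:                               # no target dimension included
        targets = ["other"]
    return {d: MODEL_REGISTRY[d](image) for d in targets}

features = extract_unstructured(np.ones((112, 112)), ["profile face", "makeup"])
print(sorted(features))                           # ['makeup', 'profile face']
```

The point of the routing is that the image passes through two models here instead of all five, which is exactly where the reduction in computational complexity comes from.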
In some possible implementations, FIG. 4 is a schematic diagram of training an unstructured feature extraction model in an embodiment of this application. As shown in FIG. 4, the face retrieval device can divide the samples in the training sample set into multiple categories according to the multiple preset feature dimensions, such as child samples, elderly samples, Black samples, White samples, makeup samples, and dim-light samples, and then use the samples of each category to train an unstructured feature extraction model, obtaining the corresponding unstructured feature extraction model for that category. It should be noted that, since the samples of the "other" feature dimension do not have the features of the above preset scenarios, training the corresponding unstructured extraction model yields an unstructured feature extraction model capable of extracting generic features, namely the generic feature extraction model described above.
S204: Acquire, at least according to the unstructured features, standard features corresponding to the multiple preset feature dimensions.
The standard features are features obtained by converting the unstructured features through a neural network.
In this embodiment of this application, since unstructured features extracted by the unstructured feature extraction models of different feature dimensions cannot be compared directly, in order to compare these unstructured features, after the unstructured features corresponding to each preset feature dimension are obtained through S203, they need to be mapped into the same feature space, which can be called the standard feature space. Once the unstructured features are mapped into the standard feature space, the standard features corresponding to the unstructured features can be obtained. The face retrieval system can determine the feature space corresponding to any one of the above preset feature dimensions as the standard feature space, for example determining the feature space corresponding to "child" as the standard feature space, or determining the feature space corresponding to "profile face" as the standard feature space. Unstructured features already in the standard feature space need no mapping and can directly participate, as standard features, in the face retrieval of the face image. For example, if the feature space corresponding to "child" is determined as the standard feature space, the unstructured features corresponding to "child" can directly serve as standard features.
Optionally, the generic feature space can be chosen as the standard feature space; that is, when training the feature mapping models, the generic features do not pass through a mapping model and are directly regarded as standard features. Then, after the feature mapping models are trained, the other unstructured features can be mapped into the generic feature space through their corresponding feature mapping models, while the generic features are output directly as standard features without passing through a mapping model. After the unstructured features are mapped into the generic feature space, the unstructured features are converted by the neural network into generic features within that space; in this way, the number of feature mappings can be effectively reduced. Probabilistically, the "other" feature dimension has the largest number of samples, so using the generic features as the standard features minimizes the number of feature mappings.
For example, FIG. 5 is a schematic diagram of mapping unstructured features to the standard feature space in an embodiment of this application. As shown in FIG. 5, take selecting the feature space corresponding to "other" (that is, the generic feature space) as the standard feature space as an example. Suppose the unstructured feature [0.24, 0.32, …, 0.35] of face template image A corresponds to the "other" feature dimension; then the unstructured feature [0.24, 0.32, …, 0.35] of face template image A is a generic feature, and the corresponding standard feature may be [0.24, 0.32, …, 0.35]. The unstructured feature [0.13, 0.45, …, 0.26] of face image B corresponds to the "profile face" feature dimension; the face retrieval device maps the unstructured feature [0.13, 0.45, …, 0.26] of face image B into the standard feature space (that is, the feature space corresponding to "other") to obtain the standard feature [0.23, 0.33, …, 0.36] of face image B. The standard feature of face template image A and the standard feature of face image B can then be compared directly; for example, the cosine similarity between them may be computed as 0.9.
Optionally, formula (1) can be used to compute the cosine similarity between the standard feature A of face template image A and the standard feature B of face image B:

$$\text{similarity}(A, B) = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2} \cdot \sqrt{\sum_{i=1}^{n} B_i^2}} \quad (1)$$

where $A_i$ and $B_i$ denote the components of feature vectors A and B respectively, and n is the number of components of A and B, n being a positive integer.
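For reference, a minimal sketch of formula (1). The three-component vectors stand in for the truncated "…" vectors of the example above and are illustrative only.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two standard feature vectors, formula (1)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = [0.24, 0.32, 0.35]   # standard feature of face template image A
b = [0.23, 0.33, 0.36]   # standard feature of face image B after mapping
print(round(cosine_similarity(a, b), 4))  # close to 1.0 for near-identical vectors
```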
In some possible implementations, the face retrieval device may also compute the similarity between standard features through similarity algorithms such as the Euclidean distance or the Manhattan distance, which is not specifically limited in the embodiments of this application.
In some possible implementations, the above feature mapping can be implemented through feature mapping models, and the face retrieval device can train one feature mapping model for each preset feature dimension. When training the feature mapping models, after the face retrieval device maps the unstructured features corresponding to each preset feature dimension into the standard feature space, within the standard feature space, regardless of the source of the mapped standard features, the following objective function must be satisfied: the similarity between unstructured features corresponding to the same identity information should be as large as possible, and the similarity between unstructured features corresponding to different identity information should be as small as possible. From this, the training method of the feature mapping model can be derived: first, the face retrieval device acquires face sample images, where the face sample images have corresponding identity information; then, the above S203 can be performed to obtain the unstructured features of the face sample images, and the feature mapping model is trained based on the unstructured features of the face sample images, obtaining a feature mapping model that satisfies the above objective function.
Further, to improve the accuracy of the feature mapping, when performing feature mapping, the face retrieval device can also use the structured features and the unstructured features jointly as the input of the feature mapping model, so that the mapping of the unstructured features can make use of the structured feature information. Optionally, after acquiring the face sample images, the face retrieval device can perform S202 to obtain the structured features of the face sample images and perform S203 to obtain their unstructured features, and then train on the structured features and the unstructured features of the face sample images. In some possible implementations, the structured features can be converted into discrete values, and the face retrieval device can use the unstructured feature values and the discretized structured feature values jointly as the input of the neural network (since the unstructured feature values are already concrete numbers, they can be used directly as input), training the feature mapping model according to the objective function. For example, the structured feature corresponding to the "age" feature dimension can be converted into a concrete age value, and the structured feature of the "makeup" feature dimension can discretize "no makeup" and "makeup" into the two values 0 and 1. Of course, the structured features can also be converted into other discrete values according to the specific feature dimensions, not limited to the above examples, which is not specifically limited in the embodiments of this application. Finally, the face retrieval device concatenates the discretized structured features and the unstructured features together as the input of the neural network, and trains the feature mapping model according to the objective function until the objective function converges, so that the similarity between unstructured features corresponding to the same identity information is as large as possible and the similarity between unstructured features corresponding to different identity information is as small as possible. Suppose the discretized structured feature value is "1" (that is, the "makeup" feature dimension) and the unstructured feature values are [0.04, …, 0.08]; correspondingly, the input feature of the neural network may be [1, 0.04, …, 0.08].
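A minimal sketch of this concatenation, assuming the discretized "makeup" value 1 and a truncated unstructured feature; the helper name is hypothetical.

```python
import numpy as np

def mapping_model_input(structured_discrete, unstructured):
    """Concatenate discretized structured values with the unstructured
    feature vector to form the feature mapping network's input."""
    return np.concatenate([np.atleast_1d(np.asarray(structured_discrete, dtype=float)),
                           np.asarray(unstructured, dtype=float)])

# "makeup" discretized to 1, joined with a (truncated) unstructured feature:
x = mapping_model_input(1, [0.04, 0.08])
print(x)  # [1.   0.04 0.08]
```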
Correspondingly, S204 may include: acquiring, according to the structured features and the unstructured features, the standard features corresponding to the multiple preset feature dimensions. As a possible implementation, after training the feature mapping models with the structured features and unstructured features of the face sample images, the face retrieval device inputs the structured features and the unstructured features of the face image to be retrieved jointly into the trained feature mapping models to obtain the standard features corresponding to each preset feature dimension.
In some possible implementations, the above objective function may be a triplet loss objective function; see the following formula (2):

$$L = \sum_{i=1}^{N} \max\left( \left\| f(x_i^a) - f(x_i^p) \right\|_2^2 - \left\| f(x_i^a) - f(x_i^n) \right\|_2^2 + \alpha,\ 0 \right) \quad (2)$$

where N is the number of training samples; $x_i^a$ and $f(x_i^a)$ are a face sample image and its feature; $x_i^p$ and $f(x_i^p)$ are a face sample image having the same identity information as the face sample image, and its feature; $x_i^n$ and $f(x_i^n)$ are a face sample image having identity information different from that of the face sample, and its feature; and α is the desired margin between the distance of a negative sample pair and the distance of a positive sample pair. When the distance between the negative sample pair exceeds the distance between the positive sample pair by more than α, the objective function value of the triplet is 0; otherwise it is greater than 0.
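A sketch of formula (2) over a batch of mapped features; the margin value 0.2 and the batch shapes below are assumptions for illustration, not values given by the patent.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Formula (2) over a batch: each row is a mapped feature f(x).
    A triplet contributes 0 once the negative pair is at least `alpha`
    farther apart (in squared L2 distance) than the positive pair."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return float(np.sum(np.maximum(d_pos - d_neg + alpha, 0.0)))

rng = np.random.default_rng(0)
a, p, n = (rng.standard_normal((4, 128)) for _ in range(3))
print(triplet_loss(a, p, n))  # non-negative scalar over N = 4 triplets
```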
In this embodiment of this application, by minimizing the objective function, the goal can be achieved that the similarity between unstructured features corresponding to the same identity information is as large as possible and the similarity between unstructured features corresponding to different identity information is as small as possible. It should be noted that the embodiments of this application place no restriction on the form of the objective function: any objective function that can be used to train a single face recognition model can be used in the technical solutions described in the embodiments of this application.
S205: Perform face retrieval on the face image according to the standard features.
In this embodiment of this application, after the unstructured features are mapped into the standard feature space, the face retrieval device can directly compare these standard features with the features of the face template images to find the most similar features, thereby obtaining one or more face template images and completing the face retrieval.
It should be noted that the above face template images can be input into the face retrieval device together with the face image to be retrieved, with S201 to S204 performed in sequence to complete face feature extraction and mapping into the standard feature space, after which they are compared with the standard features of the face image. Alternatively, the face template images can be input into the face retrieval device in advance to complete face feature extraction and mapping into the standard feature space, obtaining and storing the standard features corresponding to each face template image; after the standard features of the face image to be retrieved are subsequently obtained, the stored standard features of each face template image are read and compared, completing the face retrieval. Of course, the face image to be retrieved and the face template images can also undergo feature extraction and feature comparison in other ways, as long as the face retrieval can be completed; this is not specifically limited in the embodiments of this application.
In some possible implementations, the number of unstructured feature extraction models a face image passes through can reflect the difficulty of extracting features from that image (for example, if a face image passes through the unstructured feature extraction models corresponding to the three feature dimensions "profile face", "makeup" and "elderly", this indicates that the face image has the attributes of these three feature dimensions). Passing through multiple unstructured feature extraction models and the corresponding feature mapping models, and using the averaged feature values for face retrieval, amounts to performing model ensembling; and the more difficult and complex the face image, the more models are ensembled, which improves the robustness of the face retrieval. The above S205 may also include: using the average value of the standard features as the output feature of the face image; and using the output feature to perform face retrieval on the face image.
In this embodiment of this application, after mapping the extracted unstructured features of the face image into the standard feature space and converting them into standard features through S204, the face retrieval device can compute the average value of these standard features and use the average value as the output feature of the face image; finally, the output feature is compared with the features of the face template images to complete the face retrieval. It should be noted that, to improve the robustness of face retrieval, after the face template images obtain their corresponding standard features, the average value of those standard features also needs to be computed, and the computed average value is compared with the output feature of the face image, that is, the average value of its standard features, to complete the face retrieval.
For example, FIG. 6 is a schematic diagram of the face feature extraction process in an embodiment of this application. As shown in FIG. 6, the above S201 to S204 may include:
Step 1: The face retrieval device acquires the face image to be retrieved.
Step 2: The face retrieval device inputs the face image into the structured feature extraction model and extracts the corresponding structured features; for example, the face image is judged to contain the two feature dimensions "profile face" and "makeup".
Step 3: The face retrieval device inputs the face image into the unstructured feature extraction models corresponding to the two feature dimensions "profile face" and "makeup" respectively, namely the profile-face model and the makeup model.
Step 4: The face retrieval device obtains the unstructured feature [0.04, …, 0.08] output by the profile-face model and the unstructured feature [0.06, …, 0.03] output by the makeup model.
Step 5: The face retrieval device inputs the unstructured feature [0.04, …, 0.08] into the feature mapping model corresponding to the profile-face model to obtain the corresponding standard feature [0.02, …, 0.06], and inputs the unstructured feature [0.06, …, 0.03] into the feature mapping model corresponding to the makeup model to obtain the corresponding standard feature [0.021, …, 0.059].
Step 6: The face retrieval device averages the standard features [0.02, …, 0.06] and [0.021, …, 0.059] to obtain the output feature [0.0205, …, 0.0595] of the face image.
At this point, the feature extraction process of the face image is complete; next, the face retrieval device can use the output feature [0.0205, …, 0.0595] to perform face retrieval on the face image.
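The averaging in step 6 can be reproduced in a few lines; the two-component vectors below stand in for the truncated vectors of the walkthrough and are illustrative only.

```python
import numpy as np

# Step 6 of the walkthrough: averaging the mapped standard features
# amounts to ensembling the per-dimension models.
standard_features = [
    np.array([0.020, 0.060]),    # from the profile-face branch
    np.array([0.021, 0.059]),    # from the makeup branch
]
output_feature = np.mean(standard_features, axis=0)
print(output_feature)            # [0.0205 0.0595] -> used for retrieval
```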
In this embodiment of this application, the same feature dimension may be allowed to have multiple unstructured feature extraction models with the same function. For example, for the "profile face" feature dimension, there may be two models, profile-face model 1 and profile-face model 2. Then, if the structured features of the face image indicate that the face image falls into the "profile face" feature dimension, the face retrieval device can input the face image into profile-face model 1 and profile-face model 2 respectively to obtain the corresponding unstructured features; then, consistent with S204 in the above embodiment, the unstructured features are each mapped into the standard feature space to obtain the corresponding standard features, the standard features are averaged, and face retrieval is then performed.
In some possible implementations, the above multiple unstructured feature extraction models with the same function may be different versions of the unstructured feature extraction model, and the unstructured features may also carry the version number of the model. For example, in the unstructured feature [001, 0.06, …, 0.03], "001" indicates the version number of the unstructured feature extraction model, and the following [0.06, …, 0.03] is the feature vector.
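As a hedged sketch of this version tagging (the encoding and helper names are illustrative, not specified by the patent):

```python
def tag_feature(version, vector):
    """Prefix a feature vector with its extraction model's version tag,
    mirroring the '001' example above (the encoding is illustrative)."""
    return [version, *vector]

def directly_comparable(f1, f2):
    # Features can be compared directly only when produced by the same
    # model version; otherwise both must first be mapped into the
    # standard feature space.
    return f1[0] == f2[0]

f = tag_feature("001", [0.06, 0.03])
print(f, directly_comparable(f, tag_feature("001", [0.05, 0.02])))
# ['001', 0.06, 0.03] True
```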
In that case, the above S203 may include: determining, according to the structured features, whether the multiple preset feature dimensions include a target feature dimension; if they do, acquiring multiple unstructured feature extraction models corresponding to the target feature dimension; and inputting the face image into the multiple unstructured feature extraction models to obtain the output unstructured features.
In this embodiment of this application, after acquiring the structured features of the face image through S202, the face retrieval device selects multiple corresponding unstructured feature extraction models according to the preset feature dimensions corresponding to the structured features. These unstructured feature extraction models may be multiple models with the same function corresponding to the same feature dimension. The face retrieval device then inputs the face image to be retrieved into each unstructured feature extraction model and extracts, through these models, the multiple unstructured features of the face image corresponding to that feature dimension.
In some possible implementations, the different versions of the unstructured feature extraction model may be the pre-update model and the post-update model in the same face retrieval device. When a model is updated, the new model can be used to extract the unstructured features of face images, the unstructured features extracted by the new model are determined as the standard features, and the features of the face template images are then mapped into the new standard feature space, after which the above S205 is performed to achieve face retrieval. Optionally, when a new face image serves as a face template image, the above new model can be used to extract features from the new face image.
Optionally, the different versions of the unstructured feature extraction model may also be models on different devices; each device may include the unstructured feature extraction models of each feature dimension as well as the generic feature extraction model. After the standard features are selected (for example, the generic features extracted by the generic feature extraction model on a certain device are selected as the standard features), the unstructured features extracted by the models on the other devices are all mapped into the standard feature space, and the above S205 is then performed to achieve face retrieval.
Optionally, the different versions of the unstructured feature extraction model may also be models provided by different vendors; different vendors may provide the unstructured feature extraction models of each feature dimension as well as the generic feature extraction model. After the standard features are selected (for example, the generic feature extraction model provided by a certain vendor is selected), the unstructured features extracted by the models on the other devices are all mapped into the standard feature space, and the above S205 is then performed to achieve face retrieval.
Of course, the different versions of the unstructured feature extraction model are not limited to the above cases; the above are merely some examples of different versions of the unstructured feature extraction model, and the embodiments of this application do not specifically limit this.
The face images described in the above embodiments all require structured features to be extracted first and then unstructured features. In the embodiments of this application, the unstructured features of the face image can also be extracted directly; in this way, the face image does not need to pass through the structured feature extraction model, but is directly input into an unstructured feature extraction model for feature extraction. The unstructured feature extraction model in this case may be the generic feature extraction model described in the above embodiments, or an unstructured feature extraction model designed according to the requirements of the features to be extracted, which is not specifically limited in the embodiments of this application.
In the embodiments of this application, since multiple feature extraction models are used for feature extraction, the feature extraction capability is stronger than that of a single model, which is more suitable for face retrieval in complex scenarios. Further, since a structured model is first used to assign the face image to different feature dimensions, on the one hand the face image can be processed in a more targeted manner, and on the other hand the face image does not need to pass through all of the unstructured feature extraction models, which reduces the number of models the face image must pass through and thus reduces computational complexity.
Based on the same inventive concept as the above method, an embodiment of this application provides a face retrieval apparatus. The face retrieval apparatus may be the face retrieval apparatus in the face retrieval device described in the above embodiments, a chip or a system-on-chip in the face retrieval apparatus, or a functional module in the face retrieval device for implementing the methods described in the above embodiments. The face retrieval apparatus can implement the functions performed by the face retrieval device in each of the above embodiments, and these functions can be implemented by hardware executing the corresponding software. The hardware or software includes one or more modules corresponding to the above functions. For example, in one possible implementation, FIG. 7 is a schematic structural diagram of the face retrieval apparatus in an embodiment of this application. As shown in FIG. 7, the face retrieval apparatus 700 includes: an interface module 701, configured to acquire a face image to be retrieved; a feature extraction module 702, configured to acquire structured features of the face image, where the structured features are features of the face image with specific physical meanings and correspond to multiple preset feature dimensions; acquire, according to the structured features, unstructured features of the face image in one-to-one correspondence with the multiple preset feature dimensions, where the unstructured features include feature vectors used to represent the face image; and acquire, at least according to the unstructured features, standard features corresponding to the multiple preset feature dimensions, where the standard features include features obtained by converting the unstructured features through a neural network; and a face retrieval module 703, configured to perform face retrieval on the face image according to the standard features.
In some possible implementations, the feature extraction module 702 is configured to acquire a structured feature extraction model, where the structured model is obtained by training according to the multiple preset feature dimensions; and input the face image into the structured feature extraction model to obtain the output structured features.
In some possible implementations, the feature extraction module 702 is configured to determine, according to the structured features, whether the multiple preset feature dimensions include a target feature dimension; if they do, acquire the unstructured feature extraction model corresponding to the target feature dimension, where the unstructured feature extraction model is obtained by training on data corresponding to the target feature dimension; and input the face image into the unstructured feature extraction model to obtain the output unstructured features.
In some possible implementations, the feature extraction module 702 is further configured to: if the multiple preset feature dimensions do not include the target feature dimension, acquire a generic feature extraction model, where the generic feature extraction model is obtained by training on data outside the target feature dimension; input the face image into the generic feature extraction model to obtain the output generic features; and determine the generic features as the standard features.
In some possible implementations, the feature extraction module 702 is configured to acquire feature mapping models, where the feature mapping models are in one-to-one correspondence with the unstructured feature models; and input the unstructured features into the feature mapping models corresponding to the unstructured features to obtain the output standard features.
In some possible implementations, the feature extraction module 702 is configured to acquire feature mapping models, where the feature mapping models are in one-to-one correspondence with the unstructured feature models; and input the structured features and the unstructured features into the feature mapping models corresponding to the unstructured features to obtain the output standard features.
In some possible implementations, the feature extraction module 702 is further configured to acquire face sample images, where the face sample images have corresponding identity information; acquire the structured features and the unstructured features of the face sample images; and train the feature mapping model based on the structured features of the face sample images, the unstructured features of the face sample images, and the identity information, to obtain a feature mapping model that satisfies the objective function.
In some possible implementations, the feature extraction module 702 is configured to determine, according to the structured features, whether the multiple preset feature dimensions include a target feature dimension; if they do, acquire multiple unstructured feature extraction models corresponding to the target feature dimension; and input the face image into the multiple unstructured feature extraction models to obtain the output unstructured features.
In some possible implementations, the face retrieval module 703 is configured to determine the average value of the standard features as the output feature of the face image, and use the output feature to perform face retrieval on the face image.
It should also be noted that the specific implementation processes of the interface module 701, the feature extraction module 702 and the face retrieval module 703 can be found in the detailed descriptions of the embodiments of FIG. 2 to FIG. 6; for brevity of the specification, they are not repeated here. In the embodiments of this application, the interface module 701 can be used to perform S201 in the above embodiments, the feature extraction module 702 can be used to perform S202 to S204 in the above embodiments, and the face retrieval module 703 can be used to perform S205 in the above embodiments.
The interface module mentioned in the embodiments of this application may be a receiving interface, a receiving circuit, a receiver, or the like; the feature extraction module and the face retrieval module may be one or more processors.
Based on the same inventive concept as the above method, an embodiment of this application provides a face retrieval device. FIG. 8 is a schematic structural diagram of the face retrieval device in an embodiment of this application. As shown by the solid lines in FIG. 8, the face retrieval device 800 may include: a processor 801 and a communication interface 802. The processor 801 may be configured to support the face retrieval device 800 in implementing the functions involved in each of the above embodiments; for example, the processor 801 may acquire the face image to be retrieved through the communication interface 802.
In some possible implementations, as shown by the dashed lines in FIG. 8, the face retrieval device 800 may further include a memory 803, which is used to store the computer-executable instructions and data necessary for the face retrieval device 800. When the face retrieval device 800 runs, the processor 801 executes the computer-executable instructions stored in the memory 803, so that the face retrieval device 800 performs the face retrieval method described in each of the above embodiments.
Based on the same inventive concept as the above method, an embodiment of this application provides a computer-readable storage medium storing instructions which, when run on a computer, are used to execute the face retrieval method described in each of the above embodiments.
Based on the same inventive concept as the above method, an embodiment of this application provides a computer program or computer program product which, when executed on a computer, causes the computer to implement the face retrieval method described in each of the above embodiments.
Those skilled in the art will appreciate that the functions described with reference to the various illustrative logical blocks, modules, and algorithm steps disclosed herein can be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions described by the various illustrative logical blocks, modules, and steps may be stored on or transmitted over a computer-readable medium as one or more instructions or code and executed by a hardware-based processing unit. A computer-readable medium may include a computer-readable storage medium, which corresponds to a tangible medium such as a data storage medium, or a communication medium including any medium that facilitates transfer of a computer program from one place to another (for example, according to a communication protocol). In this manner, a computer-readable medium may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium such as a signal or carrier wave. A data storage medium may be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementing the techniques described in this application. A computer program product may include a computer-readable medium.
By way of example and not limitation, such computer-readable storage media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. As used herein, disks and discs include compact discs (CD), laser discs, optical discs, digital versatile discs (DVD), and Blu-ray discs, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuits. Accordingly, the term "processor" as used herein may refer to any of the foregoing structures or any other structure suitable for implementing the techniques described herein. In addition, in some aspects, the functions described with reference to the various illustrative logical blocks, modules, and steps described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Moreover, the techniques may be fully implemented in one or more circuits or logic elements.
The techniques of this application may be implemented in a wide variety of devices or apparatuses, including wireless handsets, integrated circuits (ICs), or a set of ICs (for example, a chipset). Various components, modules, or units are described in this application to emphasize functional aspects of devices configured to perform the disclosed techniques, but they do not necessarily need to be realized by different hardware units. Rather, as described above, the various units may be combined in a codec hardware unit in conjunction with suitable software and/or firmware, or provided by a collection of interoperating hardware units (including one or more processors as described above).
In the above embodiments, the description of each embodiment has its own focus; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
The foregoing are merely exemplary specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (20)

  1. A face retrieval method, characterized by comprising:
    acquiring a face image to be retrieved;
    acquiring structured features of the face image, wherein the structured features comprise features used to characterize face attributes, and the structured features correspond to multiple preset feature dimensions;
    acquiring, according to the structured features, unstructured features of the face image in one-to-one correspondence with the multiple preset feature dimensions, wherein the unstructured features comprise vectors used to represent face features;
    acquiring, at least according to the unstructured features, standard features corresponding to the multiple preset feature dimensions, wherein the standard features comprise features obtained by converting the unstructured features through a neural network;
    performing face retrieval on the face image according to the standard features.
  2. The method according to claim 1, characterized in that acquiring the structured features of the face image comprises:
    acquiring a structured feature extraction model, wherein the structured model is obtained by training according to the multiple preset feature dimensions;
    inputting the face image into the structured feature extraction model to obtain the output structured features.
  3. The method according to claim 1 or 2, characterized in that acquiring, according to the structured features, the unstructured features of the face image corresponding to the multiple preset feature dimensions comprises:
    determining, according to the structured features, whether the multiple preset feature dimensions include a target feature dimension;
    if the multiple preset feature dimensions include the target feature dimension, acquiring the unstructured feature extraction model corresponding to the target feature dimension, wherein the unstructured feature extraction model is obtained by training on data corresponding to the target feature dimension;
    inputting the face image into the unstructured feature extraction model to obtain the output unstructured features.
  4. The method according to claim 3, characterized in that, before performing face retrieval on the face image according to the standard features, the method further comprises:
    if the multiple preset feature dimensions do not include the target feature dimension, acquiring a generic feature extraction model, wherein the generic feature extraction model is obtained by training on data outside the target feature dimension;
    inputting the face image into the generic feature extraction model to obtain the output generic features;
    determining the generic features as the standard features.
  5. The method according to claim 3 or 4, characterized in that at least acquiring the standard features corresponding to the unstructured features comprises:
    acquiring feature mapping models, wherein the feature mapping models are in one-to-one correspondence with the unstructured feature models;
    inputting the unstructured features into the feature mapping models corresponding to the unstructured features to obtain the output standard features.
  6. The method according to claim 3 or 4, characterized in that at least acquiring the standard features corresponding to the unstructured features comprises:
    acquiring feature mapping models, wherein the feature mapping models are in one-to-one correspondence with the unstructured feature models;
    inputting the structured features and the unstructured features into the feature mapping models corresponding to the unstructured features to obtain the output standard features.
  7. The method according to claim 6, characterized in that the method further comprises:
    acquiring face sample images, wherein the face sample images have corresponding identity information;
    acquiring the structured features of the face sample images and the unstructured features of the face sample images;
    training the feature mapping model based on the structured features of the face sample images, the unstructured features of the face sample images, and the identity information, to obtain a feature mapping model that satisfies an objective function.
  8. The method according to any one of claims 3 to 7, characterized in that acquiring, according to the structured features, the unstructured features of the face image corresponding to the multiple preset feature dimensions comprises:
    determining, according to the structured features, whether the multiple preset feature dimensions include a target feature dimension;
    if the multiple preset feature dimensions include the target feature dimension, acquiring multiple unstructured feature extraction models corresponding to the target feature dimension;
    inputting the face image into the multiple unstructured feature extraction models to obtain the output unstructured features.
  9. The method according to any one of claims 1 to 8, characterized in that performing face retrieval on the face image according to the standard features comprises:
    using the average value of the standard features as the output feature of the face image;
    using the output feature to perform face retrieval on the face image.
  10. A face retrieval apparatus, characterized by comprising:
    an interface module, configured to acquire a face image to be retrieved;
    a feature extraction module, configured to acquire structured features of the face image, wherein the structured features comprise features used to characterize face attributes, and the structured features correspond to multiple preset feature dimensions; acquire, according to the structured features, unstructured features of the face image in one-to-one correspondence with the multiple preset feature dimensions, wherein the unstructured features comprise vectors used to represent face features; and acquire, at least according to the unstructured features, standard features corresponding to the multiple preset feature dimensions, wherein the standard features comprise features obtained by converting the unstructured features through a neural network;
    a face retrieval module, configured to perform face retrieval on the face image according to the standard features.
  11. The apparatus according to claim 10, characterized in that the feature extraction module is configured to acquire a structured feature extraction model, wherein the structured model is obtained by training according to the multiple preset feature dimensions; and input the face image into the structured feature extraction model to obtain the output structured features.
  12. The apparatus according to claim 10 or 11, characterized in that the feature extraction module is configured to determine, according to the structured features, whether the multiple preset feature dimensions include a target feature dimension; if the multiple preset feature dimensions include the target feature dimension, acquire the unstructured feature extraction model corresponding to the target feature dimension, wherein the unstructured feature extraction model is obtained by training on data corresponding to the target feature dimension; and input the face image into the unstructured feature extraction model to obtain the output unstructured features.
  13. The apparatus according to claim 12, characterized in that the feature extraction module is further configured to: if the multiple preset feature dimensions do not include the target feature dimension, acquire a generic feature extraction model, wherein the generic feature extraction model is obtained by training on data outside the target feature dimension; input the face image into the generic feature extraction model to obtain the output generic features; and determine the generic features as the standard features.
  14. The apparatus according to claim 12 or 13, characterized in that the feature extraction module is configured to acquire feature mapping models, wherein the feature mapping models are in one-to-one correspondence with the unstructured feature models; and input the unstructured features into the feature mapping models corresponding to the unstructured features to obtain the output standard features.
  15. The apparatus according to claim 12 or 13, characterized in that the feature extraction module is configured to acquire feature mapping models, wherein the feature mapping models are in one-to-one correspondence with the unstructured feature models; and input the structured features and the unstructured features into the feature mapping models corresponding to the unstructured features to obtain the output standard features.
  16. The apparatus according to claim 15, characterized in that the feature extraction module is further configured to acquire face sample images, wherein the face sample images have corresponding identity information; acquire the structured features of the face sample images and the unstructured features of the face sample images; and train the feature mapping model based on the structured features of the face sample images, the unstructured features of the face sample images, and the identity information, to obtain a feature mapping model that satisfies an objective function.
  17. The apparatus according to any one of claims 12 to 16, characterized in that the feature extraction module is configured to determine, according to the structured features, whether the multiple preset feature dimensions include a target feature dimension; if the multiple preset feature dimensions include the target feature dimension, acquire multiple unstructured feature extraction models corresponding to the target feature dimension; and input the face image into the multiple unstructured feature extraction models to obtain the output unstructured features.
  18. The apparatus according to any one of claims 10 to 17, characterized in that the face retrieval module is configured to use the average value of the standard features as the output feature of the face image, and use the output feature to perform face retrieval on the face image.
  19. A face retrieval device, characterized by comprising: a processor and a communication interface;
    the communication interface is coupled to the processor, and the processor acquires the face image to be retrieved through the communication interface;
    the processor is configured to support the face retrieval device in implementing the face retrieval method according to any one of claims 1 to 9.
  20. The device according to claim 19, characterized in that the face retrieval device further comprises: a memory, used to store the computer-executable instructions and data necessary for the face retrieval device; when the face retrieval device runs, the processor executes the computer-executable instructions stored in the memory, so that the face retrieval device performs the face retrieval method according to any one of claims 1 to 9.
PCT/CN2020/105160 2019-08-15 2020-07-28 Face retrieval method and apparatus WO2021027555A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910755742.8A CN112395448A (zh) 2019-08-15 2019-08-15 Face retrieval method and apparatus
CN201910755742.8 2019-08-15

Publications (1)

Publication Number Publication Date
WO2021027555A1 (zh)

Family

ID=74570498

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/105160 WO2021027555A1 (zh) 2019-08-15 2020-07-28 Face retrieval method and apparatus

Country Status (2)

Country Link
CN (1) CN112395448A (zh)
WO (1) WO2021027555A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792168A (zh) * 2021-08-11 2021-12-14 同盾科技有限公司 Method, system, electronic apparatus and storage medium for self-maintenance of a face gallery
CN115661911A (zh) * 2022-12-23 2023-01-31 四川轻化工大学 Face feature extraction method, device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108664914A (zh) * 2018-05-04 2018-10-16 腾讯科技(深圳)有限公司 Face retrieval method, apparatus and server
CN109241325A (zh) * 2018-09-11 2019-01-18 武汉魅瞳科技有限公司 Large-scale face retrieval method and device based on deep features
CN109710792A (zh) * 2018-12-24 2019-05-03 西安烽火软件科技有限公司 Index-based fast face retrieval system application

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108197532B (zh) * 2017-12-18 2019-08-16 深圳励飞科技有限公司 Face recognition method, apparatus and computer apparatus

Also Published As

Publication number Publication date
CN112395448A (zh) 2021-02-23

Similar Documents

Publication Publication Date Title
US10997443B2 (en) User identity verification method, apparatus and system
US11816078B2 (en) Automatic entity resolution with rules detection and generation system
WO2020155418A1 (zh) Cross-modal information retrieval method and apparatus, and storage medium
CN108664526B (zh) Retrieval method and device
WO2020253127A1 (zh) Facial feature extraction model training method, facial feature extraction method, apparatus, device and storage medium
CN111739539B (zh) Method and apparatus for determining the number of speakers, and storage medium
WO2020192112A1 (zh) Face recognition method and apparatus
US20120314962A1 (en) Auto-recognition for noteworthy objects
US10528844B2 (en) Method and apparatus for distance measurement
WO2021027555A1 (zh) Face retrieval method and apparatus
US10474872B2 (en) Fingerprint matching using virtual minutiae
CN111931548B (zh) Face recognition system, method for establishing face recognition data, and face recognition method
WO2023020214A1 (zh) Retrieval model training method and retrieval method, apparatus, device, and medium
TWI803243B (zh) Image augmentation method, computer device and storage medium
EP3874404A1 (en) Video recognition using multiple modalities
US11881052B2 (en) Face search method and apparatus
CN112200772A (zh) Acne detection device
CN112395449A (zh) Face retrieval method and apparatus
WO2024183465A1 (zh) Model determination method and related apparatus
WO2024183465A9 (zh) Model determination method and related apparatus
CN111062199B (zh) Method and apparatus for identifying undesirable information
CN113128278A (zh) Image recognition method and apparatus
CN116071804A (zh) Face recognition method and apparatus, and electronic device
CN116467463A (zh) Multi-modal knowledge graph representation learning system and product based on subgraph learning
JP2017162230A (ja) Information processing apparatus, similar data search method, and similar data search program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20851694

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20851694

Country of ref document: EP

Kind code of ref document: A1