CN110147710B - Method and device for processing human face features and storage medium

Info

Publication number
CN110147710B
CN110147710B (application CN201811506344.4A)
Authority
CN
China
Prior art keywords
data
feature vector
feature
face
target
Prior art date
Legal status
Active
Application number
CN201811506344.4A
Other languages
Chinese (zh)
Other versions
CN110147710A (en)
Inventor
陈超
吴佳祥
沈鹏程
王文全
李安平
梁亦聪
张睿欣
徐兴坤
李绍欣
汪铖杰
李季檩
黄飞跃
吴永坚
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201811506344.4A
Publication of CN110147710A
Application granted
Publication of CN110147710B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a method and a device for processing human face features and a storage medium. The method comprises the following steps: extracting features of a face object to be recognized in a target image to obtain an original feature vector of a first data type; normalizing the original feature vector to obtain a first feature vector; converting the first feature vector according to a first type conversion relation to obtain a second feature vector of a second data type, wherein the storage space occupied by the second feature vector is smaller than that occupied by the original feature vector; acquiring a pre-stored target feature vector of a target face object, and comparing the second feature vector with the target feature vector to obtain the similarity between the face object to be recognized and the target face object; and determining the face object to be recognized as the target face object when the similarity is greater than a first target threshold. The invention solves the technical problem of the high cost of processing face features in the related art.

Description

Method and device for processing human face features and storage medium
Technical Field
The invention relates to the field of image processing, in particular to a method and a device for processing human face features and a storage medium.
Background
At present, processing face features requires training a deep network model, and the face features are compressed by modifying the feature output dimension of that model, for example reducing 1024-dimensional face features to 256-dimensional face features.
Although this method can compress the face features, switching feature dimensions requires retraining the deep network model, which greatly increases the cost of processing the face features.
In view of the above problem of the high cost of processing face features, no effective solution has been proposed at present.
Disclosure of Invention
The embodiment of the invention provides a method and a device for processing human face features and a storage medium, which are used for at least solving the technical problem of high cost of processing the human face features in the related technology.
According to one aspect of the embodiment of the invention, a method for processing human face features is provided. The method comprises the following steps: extracting features of a face object to be recognized in a target image to obtain an original feature vector of a first data type; normalizing the original feature vector to obtain a first feature vector; converting the first feature vector according to the first type conversion relation to obtain a second feature vector of a second data type, wherein the storage space occupied by the second feature vector is smaller than that occupied by the original feature vector; acquiring a target feature vector of a pre-stored target face object, and comparing the second feature vector with the target feature vector to acquire the similarity between the face object to be recognized and the target face object; and under the condition that the similarity is greater than a first target threshold value, determining the face object to be recognized as a target face object.
According to another aspect of the embodiment of the invention, a device for processing human face features is also provided. The device comprises: the first extraction unit is used for extracting the features of a face object to be recognized in a target image to obtain an original feature vector of a first data type; the first processing unit is used for carrying out normalization processing on the original characteristic vector to obtain a first characteristic vector; the conversion unit is used for carrying out conversion processing on the first characteristic vector according to the first type conversion relation to obtain a second characteristic vector of a second data type, wherein the storage space occupied by the second characteristic vector is smaller than that occupied by the original characteristic vector; the first acquisition unit is used for acquiring a target feature vector of a target face object stored in advance, and comparing the second feature vector with the target feature vector to acquire the similarity between the face object to be recognized and the target face object; and the first determining unit is used for determining the face object to be recognized as the target face object under the condition that the similarity is greater than the first target threshold value.
In the embodiment of the invention, feature extraction is performed on the face object to be recognized in the target image to obtain an original feature vector of a first data type. After normalization, the original feature vector is converted according to a first type conversion relation to obtain a second feature vector of a second data type, where the storage space occupied by the second feature vector is smaller than the storage space occupied by the original feature vector. The second feature vector is compared with a pre-stored target feature vector of a target face object, and the face object to be recognized is determined to be the target face object when the similarity between the two is greater than a first target threshold. That is, converting the original feature vector of the first data type into the second feature vector of the second data type according to the first type conversion relation achieves the purpose of compressing the face feature data and reduces the pressure of storing face features. This avoids the problem of high cost caused by having to retrain the model whenever feature dimensions are switched, achieves the technical effect of reducing the cost of processing face features, and thus solves the technical problem of the high cost of processing face features in the related art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention and do not constitute a limitation of the invention. In the drawings:
fig. 1 is a schematic diagram of a hardware environment of a processing method of a face feature according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for processing human face features according to an embodiment of the invention;
FIG. 3 is a flow chart of a quantization-based face feature compression method according to an embodiment of the present invention;
fig. 4 is a flowchart of a method of extracting Float32 feature data from a face image according to an embodiment of the present invention;
fig. 5 is a flowchart of a method for quantizing face feature data according to an embodiment of the present invention;
FIG. 6 is a flowchart of a method for comparing face feature data according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating quantization-based face feature compression according to an embodiment of the present invention;
FIG. 8 is a schematic view of a scene of face verification (face core) according to an embodiment of the present invention;
fig. 9 is a schematic view of a scene of face retrieval according to an embodiment of the present invention;
fig. 10 is a schematic diagram of a processing device for human face features according to an embodiment of the present invention; and
fig. 11 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to one aspect of the embodiment of the invention, a method for processing human face features is provided. As an optional implementation, the above processing method of the face features may be applied, but is not limited, to the environment shown in fig. 1. Fig. 1 is a schematic diagram of a hardware environment of the face feature processing method according to an embodiment of the present invention. As shown in FIG. 1, a user 102 may be in data communication with a user device 104, which may include, but is not limited to, a memory 106 and a processor 108.
In this embodiment, the user device 104 may input a target image, and may execute step S102 via the processor 108 to send data of the target image to the server 112 via the network 110. The server 112 includes a database 114 and a processor 116.
After the server 112 acquires the data of the target image, the processor 116 performs feature extraction on the face object to be recognized in the target image to obtain an original feature vector of a first data type, performs normalization processing on the original feature vector to obtain a first feature vector, and performs conversion processing on the first feature vector according to a first type conversion relation to obtain a second feature vector of a second data type, where the storage space occupied by the second feature vector is smaller than the storage space occupied by the original feature vector. The processor 116 then acquires a target feature vector of a pre-stored target face object from the database 114, compares the second feature vector with the target feature vector to obtain the similarity between the face object to be recognized and the target face object, and determines the face object to be recognized as the target face object when the similarity is greater than a first target threshold. Step S104 is then executed: the result that the face object to be recognized is the target face object is returned to the user device 104 through the network 110.
The user device 104 may store the result that the face object to be recognized is the target face object in the memory 106.
In the related art, when the face features are processed, the deep network model needs to be retrained due to the switching of feature dimensions, so that the cost for processing the face features is greatly increased. The embodiment of the invention converts the original feature vector of the first data type according to the first type conversion relationship to obtain the second feature vector of the second data type, achieves the purpose of compressing the face feature data, reduces the pressure of storing the face features, and further determines the face object to be recognized as the target face object under the condition that the similarity between the face object to be recognized and the target face object is greater than the first target threshold value, thereby avoiding the problem of high cost caused by the need of retraining the model due to the switching of feature dimensions when the face features are processed, realizing the technical effect of reducing the cost of processing the face features, and further solving the technical problem of high cost of processing the face features in the related technology.
Fig. 2 is a flowchart of a processing method of human face features according to an embodiment of the present invention. As shown in fig. 2, the method may include the steps of:
step S202, extracting the features of the face object to be recognized in the target image to obtain an original feature vector of the first data type.
In the technical solution provided in step S202, an original feature vector of a first data type of a face object to be recognized is extracted from a target image, and the original feature vector is used to represent a face feature of the face object to be recognized according to the first data type.
In this embodiment, the target image may be a currently input image including a face, and the face detection, the face registration, and the face feature recognition may be performed on the target image, so as to obtain an original feature vector of a first data type of a face object to be recognized, where the face object to be recognized is a face to be recognized.
Optionally, in the embodiment, the face detection is performed on the input target image through the face detection network model, and the position of the face is accurately located in the target image, so that the face is found in the target image, and a face detection result of the face object to be recognized is obtained.
After the face detection result of the face object to be recognized is obtained, face key point registration is carried out according to the face detection result, namely, detection and positioning of the face characteristic points are carried out according to the face detection result, face key point registration can be carried out according to the face detection result through a face registration network model, positions of features such as eyebrows, eyes, a nose, a mouth and the like are found, and a registration result of the face object to be recognized is obtained.
After the registration result of the face object to be recognized is obtained, face alignment is performed according to the registration result, that is, face correction is performed so that the face becomes frontal. After face alignment, the aligned image is cropped, for example into a 248 × 248 face image, and the cropped face image is input to a face recognition model. The face recognition model may be a deep neural network, for example a convolutional neural network, and the original feature vector of the first data type is extracted from the cropped face image through the face recognition model. The original feature vector represents the face features of the face object to be recognized according to the first data type: for example, the first data type is a single-precision floating point type, and the original feature vector includes a set of single-precision floating point feature data, that is, multi-dimensional single-precision floating point feature data, where the dimension may be 1024 and the floating point type may be Float32, Float64, and the like.
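As an illustration of the extraction pipeline just described, the following Python sketch chains the four stages; the helper callables (detect, register, align_crop, recognize) and their signatures are illustrative assumptions, not names taken from the patent:

```python
import numpy as np

def extract_raw_feature(image, detect, register, align_crop, recognize):
    """Sketch: detect -> register key points -> align/crop -> extract features.

    detect, register, align_crop and recognize stand in for the face
    detection network model, the face registration network model, the
    alignment/cropping step and the face recognition model; all names
    and signatures here are assumptions for illustration.
    """
    box = detect(image)                              # face detection result
    landmarks = register(image, box)                 # key-point registration result
    face = align_crop(image, landmarks, (248, 248))  # aligned 248 x 248 crop
    feature = recognize(face)                        # e.g. 1024-dim feature
    return np.asarray(feature, dtype=np.float32)     # original Float32 feature vector
```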
Step S204, the original characteristic vector is normalized to obtain a first characteristic vector.
In the technical scheme provided in step S204, after feature extraction is performed on the face object to be recognized in the target image to obtain the original feature vector of the first data type, normalization processing is performed on the original feature vector to obtain a first feature vector of the first data type. That is, the feature data of the multiple dimensions included in the original feature vector are normalized, so that each feature data in the original feature vector is treated to the same degree and quantized to a uniform interval. This ensures that, after the first feature vector is converted according to the first type conversion relation into the second feature vector of the second data type, the second feature vector can be directly compared with the target feature vector to obtain the similarity between the face object to be recognized and the target face object, and it enhances the comparability between feature data. The target feature vector is also a normalized feature vector.
Optionally, in this embodiment, when the original feature vector is normalized to obtain the first feature vector, the modular length of the original feature vector is obtained from the feature data of the multiple dimensions of the original feature vector, and the quotient of the feature data of each dimension and the modular length is determined as the feature data of the first feature vector.
Optionally, the sum of squares of the feature data of the multiple dimensions of the original feature vector is obtained, the square root of this sum is taken to obtain the modular length of the original feature vector, and the quotient of the feature data of each dimension and the modular length is determined as the feature data of the first feature vector. The original feature vector is thus normalized through the feature data of its multiple dimensions, so that the modular length of the normalized feature vector is 1.
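A minimal numpy sketch of this normalization step (sum the squares, take the square root to get the modular length, divide each dimension by it):

```python
import numpy as np

def l2_normalize(feature):
    """Normalize a feature vector so that its modular (L2) length is 1."""
    feature = np.asarray(feature, dtype=np.float32)
    modular_length = np.sqrt(np.sum(feature ** 2))  # root of the sum of squares
    return feature / modular_length                 # quotient of each dimension and the length
```

After this step, the dot product of two normalized vectors directly equals their cosine similarity, which is what the later comparison step relies on.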
Step S206, the first feature vector is converted according to the first type conversion relation, and a second feature vector of a second data type is obtained.
In the technical solution provided in step S206, after the original feature vector is normalized to obtain a first feature vector, the first feature vector is compressed, and the first feature vector is converted according to the first type conversion relationship to obtain a second feature vector of the second data type, where the second feature vector is used to represent the face features of the face object to be recognized according to the second data type, and the storage space occupied by the second feature vector is smaller than the storage space occupied by the original feature vector, and the second feature vector is a normalized feature vector.
In this embodiment, the original feature vector of the first data type is continuous relative to the second feature vector of the second data type, and the second feature vector of the second data type is discrete relative to the original feature vector of the first data type. For example, the second data type is an integer type, which may be Int8, Int16, and the like, and the first data type of the original feature vector is a single-precision floating point type, which may be Float32, Float64, and the like. The single-precision floating point original feature vector is continuous relative to the integer second feature vector, and the integer second feature vector is discrete relative to the single-precision floating point original feature vector. Converting the original feature vector of the first data type into the relatively discrete second feature vector of the second data type thus realizes quantization of the original feature vector and reduces the pressure of storing the feature data.
The first type conversion relation in this embodiment is a mapping relation for converting the first feature vector of the first data type into the second feature vector of the second data type, for example a mapping between the first feature vector in Float32 space and the second feature vector in Int8 space. The storage space occupied by the second feature vector of the second data type is therefore smaller than the storage space occupied by the original feature vector: the hard disk and memory volume occupied by the feature data can be reduced by a factor of 4, the pressure of storing the feature data is reduced, no deep network needs to be retrained, the cost of processing the face features is reduced, and the speed of face comparison is increased, while converting the first feature vector according to the first type conversion relation ensures that the resulting second feature vector of the second data type is essentially lossless in effect.
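To make the factor-of-4 figure concrete: a 1024-dimensional Float32 vector occupies 4 bytes per dimension, while its Int8 counterpart occupies 1 byte per dimension. A quick check:

```python
import numpy as np

float_feature = np.zeros(1024, dtype=np.float32)
int8_feature = np.zeros(1024, dtype=np.int8)
print(float_feature.nbytes, int8_feature.nbytes)  # 4096 1024, i.e. a 4x reduction
```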
Step S208, a target feature vector of a pre-stored target face object is obtained, and the second feature vector is compared with the target feature vector to obtain the similarity between the face object to be recognized and the target face object.
In the technical scheme provided in step S208, after the first feature vector is converted according to the first type conversion relationship to obtain the second feature vector of the second data type, the face recognition comparison is performed through the second feature vector, so as to obtain the target feature vector of the pre-stored target face object, and the second feature vector is compared with the target feature vector to obtain the similarity between the face object to be recognized and the target face object.
The target face object in this embodiment may be a face whose identity information has been previously entered into the database, and the target feature vector represents the face features of the target face object according to the second data type and is a normalized feature vector. After the target feature vector of the pre-stored target face object is obtained, the second feature vector and the target feature vector are used as the two feature vectors participating in the comparison to obtain the similarity between the face object to be recognized and the target face object, where the similarity indicates the degree of similarity between the face object to be recognized and the target face object.
Step S210, determining the face object to be recognized as the target face object when the similarity is greater than the first target threshold.
In the technical solution provided in step S210, after the similarity between the face object to be recognized and the target face object is obtained, it is determined whether the similarity is greater than a first target threshold, where the first target threshold is also a determination threshold, and may be a preset critical value for measuring the similarity, for example, the first target threshold is 75%. Determining the face object to be recognized as a target face object under the condition that the similarity is greater than a first target threshold value, namely under the condition that the similarity between the face object to be recognized and the target face object is high, wherein the face object to be recognized and the target face object can be considered to be from the same person; optionally, when the similarity is not greater than the first target threshold, that is, when the similarity between the face object to be recognized and the target face object is low, it is determined that the difference between the face object to be recognized and the target face object is large, and it can be considered that the face object to be recognized and the target face object are not from the same person, so that the face is recognized and compared, the storage pressure on the face feature data is reduced, the computation amount of face comparison is reduced, and the speed of face comparison is increased.
Through the steps S202 to S210, the original feature vector of the first data type is converted according to the first type conversion relationship to obtain the second feature vector of the second data type, so as to achieve the purpose of compressing the face feature data, reduce the pressure for storing the face features, and further determine the face object to be recognized as the target face object when the similarity between the face object to be recognized and the target face object is greater than the first target threshold value, thereby avoiding the problem of high cost caused by the need of retraining the model due to feature dimension switching when processing the face features, achieving the technical effect of reducing the cost for processing the face features, and further solving the technical problem of high cost for processing the face features in the related art.
As an optional implementation manner, before performing conversion processing on the first feature vector according to the first-type conversion relationship to obtain a second feature vector of the second data type in step S206, the method further includes: respectively extracting the features of the face object in the image samples to obtain a plurality of feature vector samples of a first data type; normalizing the plurality of feature vector samples; acquiring a first data interval of a plurality of feature vector samples after normalization processing; filtering the first data interval to obtain a key interval; a first type of conversion relationship is determined based on the key interval.
In this embodiment, before the first feature vector is subjected to the conversion processing according to the first type conversion relationship to obtain the second feature vector of the second data type, a key interval may be estimated based on the plurality of image samples, where the key interval is a quantization interval for processing the first feature vector of the first data type, and is used to determine the first type conversion relationship when the original feature vector is subjected to the quantization processing, and the second feature vector of the second data type of this embodiment needs to be in the key interval.
In this embodiment, a plurality of image samples are input; the plurality of image samples may be million-level face images. Feature extraction is performed on the face object in the plurality of image samples respectively to obtain a plurality of feature vector samples of the first data type: face detection, face registration and face feature recognition may be performed on the plurality of image samples respectively to obtain the plurality of feature vector samples, which may be million-level face features.
Optionally, in this embodiment, the face detection network model performs face detection on each input image sample and accurately locates the position of the face in each image sample to obtain a face detection result for each image sample. Face key point registration is then performed according to the face detection result in each image sample, that is, detection and positioning of the face feature points are performed according to the face detection result; the face key point registration can be performed according to the face detection result by the face registration network model to obtain the registration result of the face object in each image sample. After the registration result of the face object in each image sample is obtained, face alignment is performed according to the registration result, that is, face correction is performed so that the face becomes frontal. After face alignment is carried out according to the registration result of the face object in each image sample, the aligned image is cropped, the cropped face image is input into the face recognition model, and a feature vector sample is extracted from the cropped face image through the face recognition model; the feature vector sample can comprise 1024-dimensional single-precision floating point feature data.
After obtaining a plurality of feature vector samples of the first data type, performing normalization processing on the plurality of feature vector samples, that is, performing normalization processing on feature data of a plurality of dimensions included in each feature vector sample, so as to consider each feature data in each feature vector sample to the same extent, and quantizing each feature data in each feature vector sample to a uniform interval.
After the plurality of feature vector samples are normalized, a first data interval of the plurality of feature vector samples after normalization is obtained, the upper bound data of the first data interval may be the maximum feature data in the plurality of feature vector samples after normalization, and the lower bound data of the first data interval may be the minimum feature data in the plurality of feature vector samples after normalization. And filtering the first data interval to obtain a key interval, namely, the key interval is contained in the first data interval, and then determining a first type conversion relation based on the key interval, wherein the feature vector in the first data interval is in normal distribution, and the key interval may not include a small amount of large feature data or a small amount of small feature data in the first data interval, so that the precision of converting the original feature vector of the first data type into the second feature vector of the second data type is improved.
The method for determining the first type of transformation relation based on the key interval is described below.
As an optional implementation, the determining the first type conversion relation based on the key interval includes: acquiring first upper bound data of a key interval, first lower bound data of the key interval, second upper bound data of a second data interval and second lower bound data of the second data interval, wherein the second data interval is associated with a second data type; and determining a target model for indicating the first type of conversion relation through the first upper-bound data, the first lower-bound data, the second upper-bound data and the second lower-bound data.
In this embodiment, the first upper bound data and the first lower bound data of the key interval are symmetric, and the second upper bound data and the second lower bound data of the second data interval associated with the second data type are symmetric. For example, if the first upper bound data of the key interval is max and the first lower bound data is min, then min = -max; if the second data type is the Int8 type, the second upper bound data may be 127 and the second lower bound data may be -127. The first upper bound data of the key interval, the first lower bound data of the key interval, the second upper bound data of the second data interval and the second lower bound data of the second data interval may be acquired, and a target model for indicating the first type conversion relation may be determined from them; the target model may be a target function applied to the first feature vector of the first data type.
Optionally, when determining the target model indicating the first type conversion relation through the first upper bound data, the first lower bound data, the second upper bound data and the second lower bound data, the embodiment determines the difference between the first upper bound data and the first lower bound data as the first difference of the target model; determines the difference between the second upper bound data and the second lower bound data as the second difference of the target model; determines the quotient between the second difference and the first difference as the intermediate value of the target model; determines the difference between an input variable of the target model and the first lower bound data as the third difference of the target model, and determines the product of the third difference and the intermediate value as the first product of the target model, where the input variable represents feature data in the first feature vector; determines the result obtained by rounding the difference between the first product and the second upper bound data as the output result of the target model, where the output result represents feature data in the second feature vector; and determines the second feature vector through the output result.
In this embodiment, a difference between the first upper bound data and the first lower bound data may be determined as a first difference of the target model, for example, a difference between the first upper bound data max and the first lower bound data min may be determined as a first difference (max-min) of the target model.
The difference between the second upper bound data and the second lower bound data is determined as the second difference of the target model, for example, the difference between the second upper bound data 127 and the second lower bound data -127 is determined as the second difference of the target model: 127 - (-127) = 254.
The quotient between the second difference and the first difference is determined as an intermediate value of the target model, for example, the quotient between the second difference 254 and the first difference (max-min) is determined as an intermediate value scale = 254/(max-min) of the target model, which is the quantization accuracy of the first feature vector of the first data type converted into the second feature vector of the second data type.
The difference between the input variable of the target model and the first lower bound data is determined as the third difference of the target model, for example, the difference between the input variable feature_Float32_value and the first lower bound value min is determined as the third difference (feature_Float32_value - min). The product of the third difference and the intermediate value is then determined as the first product of the target model, for example, the product of the third difference (feature_Float32_value - min) and the intermediate value scale is determined as the first product scale * (feature_Float32_value - min).
The result obtained by rounding the difference between the first product and the second upper bound data is determined as the output result of the target model, that is, round(scale * (feature_Float32_value - min) - 127), where round denotes rounding to the nearest integer. The output result represents the feature data in the second feature vector, so the target model can be expressed as feature_Int8_value = round(scale * (feature_Float32_value - min) - 127). The second feature vector of the second data type is then determined from the output result, so that the target model indicating the first type conversion relation is determined through the first upper bound data, the first lower bound data, the second upper bound data and the second lower bound data.
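Collecting the quantities above, a sketch of the target model for a single feature value, with max and min the (symmetric) key interval bounds; saturation to the Int8 bounds is handled in the next subsection:

```python
def quantize_value(feature_float32_value, min_val, max_val):
    """Target model: feature_Int8_value = round(scale * (x - min) - 127)."""
    scale = 254.0 / (max_val - min_val)  # intermediate value, i.e. quantization accuracy
    return round(scale * (feature_float32_value - min_val) - 127)
```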
It should be noted that the method for determining the target model indicating the first type of conversion relationship through the first upper-bound data, the first lower-bound data, the second upper-bound data and the second lower-bound data is only an example of the embodiment of the present invention, and does not represent that the method for determining the target model according to the embodiment of the present invention is only the above method, and any method for determining the target model through the first upper-bound data, the first lower-bound data, the second upper-bound data and the second lower-bound data is within the scope of the embodiment of the present invention, and is not illustrated here.
The process of obtaining the second feature vector of the second data type of this embodiment is described below.
As an alternative implementation, the process of obtaining the second feature vector of the second data type includes: processing the feature data in the first feature vector through the target model to obtain an output result; determining the second upper bound data as the feature data of the second feature vector under the condition that the output result is larger than the second upper bound data; determining the second lower bound data as the feature data of a second feature vector under the condition that the output result is smaller than the second lower bound data; determining the output result as the feature data of the second feature vector under the condition that the output result is greater than or equal to the second lower bound data and less than or equal to the second upper bound data; a second feature vector is determined from the feature data of the second feature vector.
After the target model used for indicating the first type conversion relation is determined through the first upper bound data, the first lower bound data, the second upper bound data and the second lower bound data, the first feature vector is converted through the target model to obtain the second feature vector of the second data type, and it must be ensured that the second feature vector lies between the second lower bound data and the second upper bound data. In this embodiment, a data saturation policy is set. When the output result is greater than the second upper bound data, the second upper bound data is directly determined as the feature data of the second feature vector; for example, if the second upper bound data is 127, then feature_Int8_value = min(127, feature_Int8_value). When the output result is smaller than the second lower bound data, the second lower bound data is determined as the feature data of the second feature vector, that is, feature_Int8_value = max(-127, feature_Int8_value). When the output result is greater than or equal to the second lower bound data and less than or equal to the second upper bound data, the output result is directly determined as the feature data of the second feature vector. This ensures that the second feature vector lies between the second lower bound data and the second upper bound data, and the second feature vector is then determined from its feature data.
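A vectorized sketch that combines the target model with the data saturation policy; np.clip implements feature_Int8_value = max(-127, min(127, output)):

```python
import numpy as np

def quantize_feature(feature_f32, min_val, max_val):
    """Quantize a normalized Float32 feature vector to Int8 with saturation."""
    scale = 254.0 / (max_val - min_val)
    output = np.round(scale * (feature_f32 - min_val) - 127.0)
    output = np.clip(output, -127, 127)  # saturation: clamp to the second data interval
    return output.astype(np.int8)
```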
According to the embodiment, after the target model for indicating the first type conversion relationship is determined through the first upper bound data, the first lower bound data, the second upper bound data and the second lower bound data, the first feature vector is converted through the target model, and then the second feature vector of the second data type is obtained, so that the original feature vector of the first data type is converted into the relatively discrete second feature vector of the second data type, the pressure for storing the face features is reduced, the cost for processing the face features is reduced, the first feature vector is converted according to the first type conversion relationship, and the effect of the obtained second feature vector of the second data type is basically lossless.
As an optional implementation manner, the obtaining the first data interval of the plurality of feature vector samples after the normalization processing includes: acquiring feature data of each feature vector sample on multiple dimensions after normalization processing to obtain multiple feature data; a first data interval corresponding to the plurality of feature data is determined.
In this embodiment, when a first data interval of a plurality of feature vector samples after normalization processing is obtained, feature data of each feature vector sample after normalization processing on a plurality of dimensions is obtained, a plurality of feature data are obtained, maximum feature data and minimum feature data are determined from the plurality of feature data, the maximum feature data may be determined as first upper bound data of the first data interval, and the minimum feature data may be determined as first lower bound data of the first data interval, so as to determine the first data interval.
The method for filtering the first data interval to obtain the key interval is described below.
As an optional implementation manner, the filtering the first data interval, and obtaining the key interval includes: and filtering feature data which are larger than the first feature data and smaller than the second feature data from the first data interval to obtain a key interval, wherein the proportion of the feature data in the key interval to the feature data in the first data interval is larger than a second target threshold.
In this embodiment, when the first data interval is filtered to obtain the key interval, first feature data and second feature data may be determined first, where the first feature data may be critical feature data used to distinguish a small amount of larger feature data in the first data interval, and the second feature data may be critical feature data used to distinguish a small amount of smaller feature data in the first data interval. Filtering feature data larger than the first feature data and smaller than the second feature data to obtain a key interval, wherein the feature data in the key interval accounts for a proportion of the feature data in the first data interval and is larger than a second target threshold, that is, while improving the precision of converting the original feature vector of the first data type into the second feature vector of the second data type, it is also determined that as much feature data as possible in the first data interval falls in the key interval, for example, 99.8% of the feature data in the first data interval falls in the key interval.
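One plausible way to realize this filtering is percentile trimming over the pooled, normalized sample data; the percentile mechanism and the symmetric-bound step below are assumptions about how the described filtering could be implemented, with the 99.8% coverage figure taken from the text:

```python
import numpy as np

def estimate_key_interval(normalized_samples, coverage=0.998):
    """Trim the tails of the first data interval so that `coverage` of the
    feature data falls inside the key interval; return a symmetric (min, max)."""
    data = np.asarray(normalized_samples, dtype=np.float32).ravel()
    tail = (1.0 - coverage) / 2.0
    low = np.percentile(data, 100.0 * tail)           # drops a few small values
    high = np.percentile(data, 100.0 * (1.0 - tail))  # drops a few large values
    bound = max(abs(float(low)), abs(float(high)))    # enforce min = -max symmetry
    return -bound, bound
```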
As an optional implementation, the obtaining a target feature vector of a pre-stored target face object includes: acquiring pre-stored feature vectors of a plurality of preset face objects, wherein the feature vector of each preset face object is a normalized feature vector of a second data type; and determining the traversed feature vector of one preset face object as a target feature vector of the target face object.
In this embodiment, feature vectors of a plurality of predetermined face objects may be stored in the database in advance, where the plurality of predetermined face objects may be a plurality of faces whose identity information is entered into the database in advance, each predetermined face object corresponds to one feature vector, and the feature vector of each predetermined face object is a normalized feature vector of the second data type and includes a face feature used for representing one face object according to the second data type, so that each feature data in the feature vector of each predetermined face object is treated to the same extent, and each feature data is quantized to a uniform interval. Traversing the feature vectors of the plurality of predetermined face objects, determining the traversed feature vector of one predetermined face object as a target feature vector of the target face object, and further comparing the second feature vector with the target feature vector determined by each traversal to obtain the similarity between the face object to be recognized and the plurality of face objects, so as to obtain a plurality of similarities.
Optionally, the embodiment determines whether the maximum similarity among the multiple similarities is greater than the first target threshold. When the maximum similarity is greater than the first target threshold, it is determined that the face object to be recognized and the face object corresponding to the maximum similarity are from the same person, and the face object corresponding to the maximum similarity may be determined as the retrieval result for the face object to be recognized. Optionally, when the maximum similarity is not greater than the first target threshold, it is determined that the face object to be recognized is not any one of the plurality of predetermined face objects, and that no face object close to the face object to be recognized has been retrieved, thereby realizing face retrieval.
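A sketch of this retrieval loop over a gallery of pre-stored Int8 feature vectors; scale is the quantization accuracy 254/(max - min) introduced earlier, and all names are illustrative:

```python
import numpy as np

def retrieve(query_int8, gallery_int8, scale, threshold):
    """query_int8: (dim,) Int8 vector; gallery_int8: (num_faces, dim) Int8 matrix."""
    # Int8 dot products, accumulated in int32 to avoid overflow
    dots = gallery_int8.astype(np.int32) @ query_int8.astype(np.int32)
    similarities = dots / (scale * scale)  # map back to Float-space cosine similarity
    best = int(np.argmax(similarities))
    if similarities[best] > threshold:
        return best, float(similarities[best])  # matched predetermined face object
    return None, float(similarities[best])      # no stored face is close enough
```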
Optionally, the embodiment may also be used to verify the identity of the person from whom the face object to be recognized comes: if the similarity between the face object to be recognized and the target face object is greater than the first target threshold, the face object to be recognized is determined to be the target face object, the identity verification is passed, and the identity of the person from whom the target face object comes may be determined as the identity of the person from whom the face object to be recognized comes.
As an optional implementation manner, in step S208, comparing the second feature vector with the target feature vector to obtain a similarity between the face object to be recognized and the target face object includes: acquiring a first cosine distance between the second feature vector and a target feature vector of the second data type; performing conversion processing on the first cosine distance according to a second type conversion relation to obtain a second cosine distance of the first data type; and determining the second cosine distance as the similarity.
In this embodiment, when the second feature vector and the target feature vector are compared to obtain the similarity between the face object to be recognized and the target face object, the first cosine distance between the second feature vector and the target feature vector may be calculated directly in the second data type; that is, the data type of the first cosine distance is also the second data type.
For example, let the second feature vector be [x_1, x_2, ..., x_n] and the target feature vector be [y_1, y_2, ..., y_n], where x_1, x_2, ..., x_n denote the feature data of the second feature vector, y_1, y_2, ..., y_n denote the feature data of the target feature vector, and n denotes the dimension. The first cosine distance may be formulated as:

cos_distance = (x_1*y_1 + x_2*y_2 + ... + x_n*y_n) / (sqrt(x_1^2 + x_2^2 + ... + x_n^2) * sqrt(y_1^2 + y_2^2 + ... + y_n^2))

Since the second feature vector [x_1, x_2, ..., x_n] and the target feature vector [y_1, y_2, ..., y_n] are normalized feature vectors,

sqrt(x_1^2 + x_2^2 + ... + x_n^2) = sqrt(y_1^2 + y_2^2 + ... + y_n^2) = 1

and thus the first cosine distance is

cos_distance = x_1*y_1 + x_2*y_2 + ... + x_n*y_n

That is, the first cosine distance is the inner product (dot product) of the second feature vector and the target feature vector.
After the first cosine distance between the second feature vector and the target feature vector of the second data type is obtained, the first cosine distance is converted according to a second type conversion relation to obtain a second cosine distance of the first data type; that is, the first cosine distance is mapped back to the data space of the first data type, for example back to Float space. Optionally, in this embodiment, the quotient of the dot product of the second feature vector and the target feature vector and the square of scale = 254/(max - min) is determined as the second cosine distance, and the second cosine distance is determined as the similarity between the face object to be recognized and the target face object. When the similarity is greater than the first target threshold, the face object to be recognized is determined as the target face object, so that face recognition and comparison are realized and the speed of face comparison is increased; for example, the computation can be accelerated by 1 to 4 times (depending on the specific hardware platform).
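For a single pair, the whole comparison thus reduces to an integer dot product followed by one division; a sketch under the same assumptions (both vectors quantized from normalized Float32 features with the same scale):

```python
import numpy as np

def compare(feature_a_int8, feature_b_int8, scale):
    """First cosine distance computed in Int8, mapped back to Float space."""
    dot = int(np.dot(feature_a_int8.astype(np.int32), feature_b_int8.astype(np.int32)))
    return dot / (scale * scale)  # second cosine distance, used as the similarity
```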
It should be noted that the method for obtaining the similarity between the face object to be recognized and the target face object by obtaining the first cosine distance between the second feature vector and the target feature vector of the second data type to compare the second feature vector with the target feature vector is only an example of the embodiment of the present invention, and does not represent that the method for obtaining the similarity between the face object to be recognized and the target face object of the embodiment of the present invention is only the above method.
As an alternative implementation, the first data type is a single-precision floating-point type, and the second data type is an integer type.
In this embodiment, the first data type is a single-precision floating point type, for example a Float type, which may be Float32, Float64, or the like, for indicating the data type of the original feature vector. The second data type is an integer type, e.g., Int8, Int16, etc., for indicating the data type of the second feature vector.
It should be noted that taking Float32 or Float64 as the first data type and Int8 or Int16 as the second data type is only an illustration of the embodiment of the present invention and does not mean that the embodiment is limited to these types; any method for converting the original feature vector into a relatively discrete second feature vector to reduce the pressure of storing the feature data is within the scope of the embodiment of the present invention, and is not illustrated here one by one.
The embodiment provides a simple, rapid and effective face feature processing method. The original feature vector of the first data type is converted according to the first type conversion relation to obtain the second feature vector of the second data type, which achieves the purpose of compressing the face feature data, reduces the hard disk and memory volume occupied by the features by a factor of 4, and reduces the pressure of storing face features. The face object to be recognized is then determined as the target face object when the similarity between it and the target face object is greater than the first target threshold, realizing accelerated face retrieval with a computation speed-up of 1 to 4 times. This avoids the problem of high cost caused by retraining the model when feature dimensions are switched, achieves the technical effect of reducing the cost of processing face features, and keeps the effect of the face features essentially lossless.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The technical solution of the present invention will be described below with reference to preferred embodiments.
The embodiment provides a simple, quick and effective algorithm for compressing face features and accelerating face retrieval. The algorithm mainly quantizes the face features from Float32 to Int8, which reduces the pressure of face feature storage and speeds up face comparison.
The quantization-based face feature compression method of the embodiment of the present invention is described below.
Fig. 3 is a flowchart of a quantization-based face feature compression method according to an embodiment of the present invention. As shown in fig. 3, the method comprises the steps of:
Step S301, extracting Float32 feature data from million-level face images and estimating a quantization interval.
Step S302, quantizing, based on the quantization interval, the Float32 feature data that needs to be compressed, to obtain compressed face feature data.
Step S303, comparing the compressed face feature data with the stored target face feature data to determine whether the faces corresponding to the two feature data involved in the comparison come from the same person.
The following describes how, in an embodiment of the present invention, Float32 feature data is extracted from million-level face images and the quantization interval is estimated.
Fig. 4 is a flowchart of a method for extracting Float32 feature data from a face image according to an embodiment of the present invention. As shown in fig. 4, the method comprises the steps of:
Step S401, acquiring million-level face images.
Step S402, performing face detection on the million-level face images to obtain face detection results.
In this embodiment, face detection is performed on the input face image through a face detection network model, and the position of the face is accurately located in the face image to obtain the face detection result.
Step S403, registering key points of the face according to the face detection result to obtain a registration result.
In this embodiment, after face detection is performed on the million-level face images, facial feature points are detected and located according to the face detection results. The registration of the face key points can be performed through a face registration network model, which finds the positions of features such as the eyes, nose and mouth on the face, so as to obtain the registration result.
Step S404, aligning the face according to the registration result and cropping it out.
In this embodiment, the face can be aligned and cropped into a 248 × 248 image based on the face registration result.
Step S405, inputting the cropped face image into a face recognition model to obtain a multi-dimensional Float32 feature.
The face recognition model of this embodiment is a convolutional neural network that can convert a 248 × 248 face image into a 1024-dimensional Float32 feature, and this feature can be used for face comparison in application scenarios such as face verification and face retrieval.
In step S406, a quantization interval is estimated.
In this embodiment, the face feature data is normalized so that the modulus of each face feature vector is 1, which ensures that after the Float32 features are quantized into Int8 features, the cos distance can be computed directly in Int8 space and mapped back into Float space. Optionally, in this embodiment, the squared values of the feature data in each dimension are summed and the square root is taken to obtain the current modulus length, and the value of each dimension of the original feature is then divided by the modulus length to obtain the normalized face feature data.
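The normalization step described above can be sketched as follows (a minimal NumPy sketch; the function name l2_normalize is an illustrative choice, not from the patent):

```python
import numpy as np

def l2_normalize(feature: np.ndarray) -> np.ndarray:
    # Sum the squared values of every dimension and take the square root
    # to obtain the current modulus length, then divide each dimension of
    # the original feature by that length (assumes a non-zero feature).
    modulus = np.sqrt(np.sum(np.square(feature)))
    return feature / modulus
```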
An upper bound max and a lower bound min of the quantization interval are then determined from the normalized face feature data such that 99.8% of the feature data falls between them and the bounds are symmetric, that is, min = -max, thereby determining a quantization interval in which as many values as possible fall.
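A hedged sketch of this interval estimation, assuming the normalized features are stacked into one NumPy array and the symmetric bound is taken as the 99.8% quantile of the absolute feature values (one reasonable reading of making 99.8% of the data fall inside the interval):

```python
import numpy as np

def estimate_quantization_interval(features: np.ndarray, coverage: float = 0.998):
    # `features` is an (N, D) array of normalized Float32 feature vectors.
    # Choose the smallest symmetric bound so that `coverage` of all feature
    # values fall inside [min, max] with min = -max.
    upper = float(np.quantile(np.abs(features), coverage))
    return -upper, upper
```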
The following describes a method for performing quantization processing on Float32 feature data that needs to be compressed based on a quantization interval to obtain compressed face feature data according to an embodiment of the present invention.
Fig. 5 is a flowchart of a method for performing quantization processing on face feature data according to an embodiment of the present invention. As shown in fig. 5, the method comprises the steps of:
Step S501, acquiring the Float32 face feature data that needs to be compressed.
The Float32 face feature data of the face object to be recognized is extracted from an input image. The input image may be an image containing a human face; face detection, face registration and face feature recognition are performed on the input image to obtain the Float32 face feature data of the face object to be recognized.
Step S502, the extracted Float32 face feature data is quantized to obtain Int8 face feature data after quantization.
In this embodiment, a mapping relationship between data in Float32 space and the corresponding data in Int8 space is established.
Optionally, scale = 254/(max - min), where max represents the upper bound of the quantization interval, min represents the lower bound of the quantization interval, and 254 is determined by the width of the data interval into which the Float32 face feature data to be compressed is mapped in Int8 space, for example from -127 to 127.
In this embodiment, the mapped face feature data is feature_int8_value = round(scale * (feature_float32_value - min)) - 127, where feature_float32_value represents the Float32 face feature data before mapping and round represents rounding.
Optionally, the result is clamped to the upper bound: feature_int8_value' = min(127, feature_int8_value).
Optionally, the result is clamped to the lower bound: feature_int8_value' = max(-127, feature_int8_value).
Therefore, the face feature data in Float32 space is mapped into Int8 space through this mapping relationship, realizing the quantization of the Float32 face feature data.
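The mapping above can be sketched as a single function (assumptions: NumPy, and the reconstructed formula feature_int8_value = round(scale * (feature_float32_value - min)) - 127 with clamping to [-127, 127]):

```python
import numpy as np

def quantize_to_int8(feature_float32: np.ndarray, min_v: float, max_v: float) -> np.ndarray:
    scale = 254.0 / (max_v - min_v)
    # feature_int8_value = round(scale * (feature_float32_value - min)) - 127
    q = np.round(scale * (feature_float32 - min_v)) - 127.0
    # Clamp to the Int8 target interval [-127, 127].
    q = np.clip(q, -127, 127)
    return q.astype(np.int8)
```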
The method for comparing the compressed face feature data with the stored target face feature data according to the embodiment of the present invention is described below.
Fig. 6 is a flowchart of a method for comparing face feature data according to an embodiment of the present invention. As shown in fig. 6, the method comprises the steps of:
step S601, a dot product result between the compressed face feature data and the stored target face feature data is obtained.
The data type of the dot product between the compressed face feature data and the stored target face feature data in this embodiment is Int8.
In step S602, the dot product result is mapped back to Float32 space, and the similarity is obtained.
In this embodiment, the dot product result between the compressed face feature data and the stored target face feature data may be divided by the square of scale to map it back into Float32 space, and the obtained value may be determined as the similarity between the compressed face feature data and the stored target face feature data.
Step S603, determining whether the similarity is greater than a target threshold.
In step S604, it is determined that the faces corresponding to the two face feature data participating in the comparison come from the same person.
After judging whether the similarity is greater than the target threshold, if the similarity is greater than the target threshold, it is determined that the faces corresponding to the two face feature data participating in the comparison come from the same person.
In step S605, it is determined that the faces corresponding to the two face feature data participating in the comparison come from different persons.
After judging whether the similarity is greater than the target threshold, if the similarity is not greater than the target threshold, it is determined that the faces corresponding to the two face feature data participating in the comparison come from different persons.
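Note why dividing by the square of scale recovers the Float32 similarity: with a symmetric interval (min = -max), scale = 127/max, so the offset scale * min + 127 is zero; each quantized value is then approximately scale times the original value, and the Int8 dot product is approximately scale squared times the Float32 cos distance of the normalized features. A minimal sketch under these assumptions (the widened Int32 accumulator and the function name compare_int8 are ours, not from the patent):

```python
import numpy as np

def compare_int8(feat_a: np.ndarray, feat_b: np.ndarray,
                 scale: float, threshold: float) -> bool:
    # Dot product of the two Int8 features; the accumulator is widened to
    # Int32 here to avoid overflow, which the patent text leaves implicit.
    dot = int(np.dot(feat_a.astype(np.int32), feat_b.astype(np.int32)))
    # Map the dot product back into Float32 space: divide by scale squared.
    similarity = dot / (scale * scale)
    # Greater than the threshold: same person; otherwise: different persons.
    return similarity > threshold
```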
Fig. 7 is a schematic diagram of quantization-based face feature compression according to an embodiment of the present invention. As shown in fig. 7, the quantization-based face feature compression includes a quantization interval estimation stage, a feature quantization stage and a feature comparison stage. Optionally, in the quantization interval estimation stage, face detection is performed on the input million-level face images through a face detection network model to obtain face detection results; registration is performed according to the face detection results through a face registration network model; the faces are aligned according to the registration results and cropped into 248 × 248 images; and the cropped face images are input into a face recognition model. The face recognition model is a convolutional neural network that can convert each 248 × 248 face image into a 1024-dimensional Float32 feature, yielding million-level face features that can be used for face comparison in application scenarios such as face verification and face retrieval.
In this embodiment, the normalization makes the modulus of each face feature vector 1, which ensures that after the Float32 features are quantized into Int8 features, the cos distance can be computed directly in Int8 space and mapped back into Float space. Specifically, the squared values of the face feature data in each dimension may be summed and the square root taken to obtain the current modulus length, and the face feature data in each dimension of the original feature is then divided by the modulus length to obtain the normalized features.
An upper bound max and a lower bound min are then obtained from the normalized features such that 99.8% of the feature data falls between them and the bounds are symmetric, that is, min = -max, thereby determining a quantization interval in which as many values as possible fall.
In the feature quantization stage, for the Float32 feature data that needs to be compressed, the mapping relationship from the Float32 space to the Int8 space may be: scale = 254/(max-min);
the mapped face feature data is feature_int8_value = round(scale * (feature_float32_value - min)) - 127, where feature_float32_value represents the Float32 face feature data before mapping and round represents rounding.
Optionally, the result is clamped to the upper bound: feature_int8_value' = min(127, feature_int8_value).
Optionally, the result is clamped to the lower bound: feature_int8_value' = max(-127, feature_int8_value).
Therefore, by means of the mapping relation, the feature quantization processing is carried out on the Float32 feature data needing to be compressed, and the compressed Int8 feature data is obtained.
In the feature comparison stage, the inner product of the features to be compared is obtained and mapped back into Float32 space, where the comparison distance indicates how similar the compared features are. Optionally, the comparison distance in Float32 space can be obtained by dividing the dot product of the two face feature data in Int8 space by the square of scale. After the mapping, if the comparison distance is higher than the decision threshold, the faces corresponding to the two face feature data participating in the comparison are judged to come from the same person; otherwise, they come from different persons.
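The three stages can be tied together in a short usage sketch (assuming the helpers l2_normalize, estimate_quantization_interval, quantize_to_int8 and compare_int8 from the earlier sketches; the gallery size, random stand-in data and 0.5 threshold are placeholders, not values from the patent):

```python
import numpy as np

# Stand-in for extracted gallery features (the patent uses million-level
# galleries; 10,000 vectors keep the sketch small).
rng = np.random.default_rng(0)
gallery = rng.standard_normal((10_000, 1024)).astype(np.float32)
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)    # normalization

min_v, max_v = estimate_quantization_interval(gallery)       # interval estimation stage
scale = 254.0 / (max_v - min_v)
gallery_int8 = np.stack([quantize_to_int8(f, min_v, max_v) for f in gallery])

probe = l2_normalize(rng.standard_normal(1024).astype(np.float32))
probe_int8 = quantize_to_int8(probe, min_v, max_v)           # feature quantization stage

same = compare_int8(probe_int8, gallery_int8[0], scale, threshold=0.5)  # comparison stage
```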
The application environment of the embodiment of the present invention may refer to the application environment in the above embodiments and is not described here again. The embodiment of the invention provides an optional specific application of the above method for processing human face features.
Fig. 8 is a schematic view of a face verification scene according to an embodiment of the present invention. As shown in fig. 8, in this embodiment, a face object to be recognized may be detected through a terminal. For example, face detection is performed on an input image of the face object to be recognized through the terminal to obtain a face image, the position of the face is accurately located in the face image, and facial feature points are detected and located according to the face detection result, for example the positions of features such as the eyes, nose and mouth are found on the face. The face is then aligned and cropped, and 1024-dimensional Float32 feature data is finally recognized from the cropped face image. The 1024-dimensional Float32 feature data is quantized based on the quantization interval to obtain compressed Int8 face feature data, and the similarity between the compressed face feature data and the stored target feature data of a target face object may be obtained, where the target face object may be a face object whose legal identity information has been entered in advance.
After the similarity between the compressed face feature data and the stored target face feature data of the target face object is obtained, if the similarity is greater than the threshold, the face object to be recognized is determined to be the target face object, and a result indicating that the identity of the person from whom the face object to be recognized comes meets the requirement is displayed; optionally, the user may further choose to perform the next operation or return to the previous operation. If the similarity is not greater than the threshold, the face object to be recognized is determined not to be the target face object, and a result indicating that the identity of the person does not meet the requirement is displayed. The user may then choose to end the current operation, or return to the previous operation to verify again, in which case the terminal performs face detection on the input image of the face object to be recognized once more and the above method is repeated until the identity of the person meets the requirement and a result indicating that the verification passes is displayed, or face verification is ended when the number of verification attempts is greater than a preset number.
Fig. 9 is a schematic view of a face retrieval scene according to an embodiment of the present invention. As shown in fig. 9, a face object to be recognized is detected through a terminal, Float32 feature data of the face image can be recognized and acquired through the face recognition method described above, and the Float32 feature data is quantized to obtain discrete Int8 feature data.
In this embodiment, multiple sets of face feature data of multiple face objects are stored in a database in advance, for example the face feature data of face object A, the face feature data of face object B, and the face feature data of face object C. The multiple face objects may be faces whose identity information has been entered into the database in advance. Each set of face feature data is traversed; the set traversed each time is determined as the target face feature data, and the Int8 feature data is compared with the target face feature data determined each time, so as to obtain the similarities between the face object to be recognized and the multiple face objects, for example similarity A, similarity B and similarity C, where similarity A > similarity B > similarity C.
Optionally, similarity A is 96%, similarity B is 89% and similarity C is 82%. It is judged whether the maximum of the similarities is greater than the threshold of 90%; for example, it is judged whether similarity A is greater than the threshold. When similarity A is greater than the threshold, it is determined that the face object to be recognized and face object A come from the same person, face object A is determined as the retrieval result for the face object to be recognized, and the retrieval result A is displayed. The user may further choose to return to the previous operation or perform the next operation.
Optionally, if similarity A is 70%, similarity B is 65% and similarity C is 55%, the maximum similarity of 70% is not greater than the 90% threshold, so it is determined that the face object to be recognized is not any of the stored face objects and that no face object close to the face object to be recognized has been retrieved. The user may further choose to return to the previous operation; the terminal performs face detection on the input image of the face object to be recognized again, and the above method is repeated until a retrieval result for the face object to be recognized is obtained, or face retrieval is ended when the number of retrieval attempts is greater than a preset number.
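A minimal sketch of this 1:N traversal (assuming the quantization helpers from the earlier sketches; the function name retrieve and the 0.90 default threshold mirror the example above but are illustrative):

```python
import numpy as np

def retrieve(probe_int8: np.ndarray, database: dict, scale: float,
             threshold: float = 0.90):
    # Traverse every stored Int8 feature, compare it with the probe, and
    # keep the most similar one; return it only if the maximum similarity
    # exceeds the threshold, otherwise report that nothing was retrieved.
    best_name, best_sim = None, -1.0
    for name, feat in database.items():
        dot = int(np.dot(probe_int8.astype(np.int32), feat.astype(np.int32)))
        sim = dot / (scale * scale)  # map back into Float32 space
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim > threshold else None
```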
The method can be applied to any face retrieval scene. For example, a database of suspects is established and searched according to the currently obtained face image; if a face image close to the currently obtained face image can be retrieved, the person from whom the currently obtained face image comes can be determined to be the criminal, and if no close face image can be retrieved, the person can be determined not to be the criminal, so that the efficiency of face comparison is effectively improved.
This embodiment realizes face retrieval by the above method; by converting the continuous Float32 feature data into discrete Int8 feature data, it reduces the pressure of storing the face feature data and the computation required for face retrieval, thereby accelerating face retrieval.
It should be noted that the scenes shown in fig. 8 and fig. 9 are only examples of the embodiment of the present invention and do not mean that the application scenarios of the embodiment are limited to the above; any scene in which face comparison can be performed based on the quantization-based face feature compression method is within the scope of the embodiment of the present invention, and the variants are not enumerated here.
This embodiment extracts Float32 feature data from million-level face images, estimates a quantization interval, quantizes the extracted Float32 feature data based on the quantization interval to obtain compressed face feature data, and compares the compressed face feature data with the stored target face feature data to determine whether the faces corresponding to the two feature data participating in the comparison come from the same person, all without retraining a deep network. The pressure of processing the face feature data is thereby reduced, the effect of the compressed face features is essentially lossless, and a four-fold compression of hard-disk storage and memory occupation and a 1-4 times retrieval acceleration can be realized.
According to another aspect of the embodiment of the present invention, a processing apparatus for face features is further provided, which is used for implementing the above processing method for face features. Fig. 10 is a schematic diagram of a processing device for human face features according to an embodiment of the present invention. As shown in fig. 10, the processing apparatus 100 for human face features may include: a first extraction unit 10, a first processing unit 20, a conversion unit 30, a first acquisition unit 40 and a first determination unit 50.
The first extraction unit 10 is configured to perform feature extraction on a face object to be recognized in a target image to obtain an original feature vector of a first data type.
The first processing unit 20 is configured to perform normalization processing on the original feature vector to obtain a first feature vector.
The conversion unit 30 is configured to perform conversion processing on the first feature vector according to the first type conversion relationship to obtain a second feature vector of the second data type, where a storage space occupied by the second feature vector is smaller than a storage space occupied by the original feature vector.
The first obtaining unit 40 is configured to obtain a pre-stored target feature vector of the target face object, and compare the second feature vector with the target feature vector to obtain a similarity between the face object to be recognized and the target face object.
And the first determining unit 50 is used for determining the face object to be recognized as the target face object when the similarity is greater than the first target threshold value.
Optionally, the apparatus further comprises: the second extraction unit is used for respectively extracting the features of the face object in the plurality of image samples to obtain a plurality of feature vector samples of the first data type before the first feature vector is converted according to the first type conversion relation to obtain a second feature vector of the second data type; the second processing unit is used for carrying out normalization processing on the plurality of feature vector samples; the second acquisition unit is used for acquiring first data intervals of the feature vector samples after normalization processing; the filtering unit is used for filtering the first data interval to obtain a key interval; and the second determining unit is used for determining the first type conversion relation based on the key interval.
Optionally, the second determining unit includes: the device comprises a first acquisition module, a second acquisition module and a first display module, wherein the first acquisition module is used for acquiring first upper-bound data of a key interval, first lower-bound data of the key interval, second upper-bound data of a second data interval and second lower-bound data of the second data interval, and the second data interval is associated with a second data type; the first determining module is used for determining a target model for indicating the first type of conversion relation through the first upper-bound data, the first lower-bound data, the second upper-bound data and the second lower-bound data.
Optionally, the conversion unit comprises: the processing module is used for processing the feature data in the first feature vector through the target model to obtain an output result; the second determining module is used for determining the second upper-bound data as the feature data of the second feature vector under the condition that the output result is larger than the second upper-bound data; determining the second lower bound data as the feature data of a second feature vector under the condition that the output result is smaller than the second lower bound data; determining the output result as feature data of a second feature vector under the condition that the output result is greater than or equal to second lower bound data and less than or equal to second upper bound data; a second feature vector is determined from the feature data of the second feature vector.
Optionally, the second obtaining unit includes: the second acquisition module is used for acquiring feature data of each feature vector sample subjected to normalization processing on multiple dimensions to obtain multiple feature data; and the third determining module is used for determining a first data interval corresponding to the plurality of characteristic data.
It should be noted that the first extracting unit 10 in this embodiment may be configured to execute step S202 in this embodiment, the first processing unit 20 in this embodiment may be configured to execute step S204 in this embodiment, the converting unit 30 in this embodiment may be configured to execute step S206 in this embodiment, the first obtaining unit 40 in this embodiment may be configured to execute step S208 in this embodiment, and the first determining unit 50 in this embodiment may be configured to execute step S210 in this embodiment.
This embodiment performs feature extraction on the face object to be recognized in the target image to obtain an original feature vector of the first data type, normalizes it, and converts the normalized vector into a second feature vector of the second data type according to the first type conversion relation, where the storage space occupied by the second feature vector is smaller than that occupied by the original feature vector. The second feature vector is then compared with the pre-stored target feature vector of the target face object to obtain the similarity between the face object to be recognized and the target face object, so that the face object to be recognized can be determined as the target face object when the similarity is greater than the first target threshold, compressing the face feature data and reducing the cost of processing the face features.
It should be noted here that the above units and modules are the same as the examples and application scenarios realized by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the above units and modules as part of the apparatus may operate in a hardware environment as shown in fig. 1, may be implemented by software, and may also be implemented by hardware, where the hardware environment includes a network environment.
According to another aspect of the embodiment of the present invention, an electronic device for implementing the above-mentioned processing method of human face features is also provided.
Fig. 11 is a block diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 11, the electronic device comprises a memory 1102 in which a computer program is stored and a processor 1104 arranged to perform the steps of any of the method embodiments described above by means of the computer program.
Optionally, in this embodiment, the electronic apparatus may be located in at least one network device of a plurality of network devices of a computer network.
Alternatively, in this embodiment, the processor 1104 may be configured to execute the following steps by a computer program:
s1, extracting features of a face object to be recognized in a target image to obtain an original feature vector of a first data type;
s2, normalizing the original feature vector to obtain a first feature vector;
s3, converting the first feature vector according to the first type conversion relation to obtain a second feature vector of a second data type, wherein the storage space occupied by the second feature vector is smaller than that occupied by the original feature vector;
s4, acquiring a target feature vector of a pre-stored target face object, and comparing the second feature vector with the target feature vector to acquire the similarity between the face object to be recognized and the target face object;
and S5, determining the face object to be recognized as the target face object under the condition that the similarity is greater than the first target threshold value.
Alternatively, it can be understood by those skilled in the art that the structure shown in fig. 11 is only an illustration, and the electronic apparatus may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 11 does not limit the structure of the electronic apparatus; for example, the electronic apparatus may also include more or fewer components (e.g., network interfaces, etc.) than shown in fig. 11, or have a different configuration from that shown in fig. 11.
The memory 1102 may be used to store software programs and modules, such as program instructions/modules corresponding to the method and apparatus for processing human face features in the embodiments of the present invention, and the processor 1104 executes various functional applications and data processing by running the software programs and modules stored in the memory 1102, that is, implements the above-described method for processing human face features. The memory 1102 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1102 can further include memory located remotely from the processor 1104 and such remote memory can be coupled to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 1102 may be specifically, but not limited to, used for storing information such as extracted face feature data for identifying a face object. As an example, as shown in fig. 11, the memory 1102 may include, but is not limited to, the first extracting unit 10, the first processing unit 20, the converting unit 30, the first acquiring unit 40, and the first determining unit 50 in the processing apparatus 100 including the human face features. In addition, the present invention may further include, but is not limited to, other module units in the processing apparatus for human face features, which are not described in detail in this example.
The transmission device 1106 is used for receiving or transmitting data via a network. Examples of the network may include a wired network and a wireless network. In one example, the transmission device 1106 includes a Network adapter (NIC) that can be connected to a router via a Network cable to communicate with the internet or a local area Network. In one example, the transmitting device 1106 is a Radio Frequency (RF) module used for communicating with the internet in a wireless manner.
In addition, the electronic device further includes: a display 1108, configured to display an execution state of the object code in the first objective function; the connection bus 1110 is used to connect the module components in the electronic device.
According to a further aspect of embodiments of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, extracting features of a face object to be recognized in a target image to obtain an original feature vector of a first data type;
s2, normalizing the original feature vector to obtain a first feature vector;
s3, converting the first feature vector according to the first type conversion relation to obtain a second feature vector of a second data type, wherein the storage space occupied by the second feature vector is smaller than that occupied by the original feature vector;
s4, acquiring a target feature vector of a pre-stored target face object, and comparing the second feature vector with the target feature vector to acquire the similarity between the face object to be recognized and the target face object;
and S5, determining the face object to be recognized as the target face object under the condition that the similarity is greater than the first target threshold value.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, before a first feature vector is converted according to a first type conversion relation to obtain a second feature vector of a second data type, feature extraction is respectively carried out on a face object in a plurality of image samples to obtain a plurality of feature vector samples of the first data type;
s2, normalizing the plurality of feature vector samples;
s3, acquiring a first data interval of the feature vector samples after normalization processing;
s4, filtering the first data interval to obtain a key interval;
s5, determining the first type conversion relation based on the key interval.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring first upper bound data of a key interval, first lower bound data of the key interval, second upper bound data of a second data interval and second lower bound data of the second data interval, wherein the second data interval is associated with a second data type;
and S2, determining a target model for indicating the first type conversion relationship through the first upper bound data, the first lower bound data, the second upper bound data and the second lower bound data.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, processing feature data in a first feature vector through a target model to obtain an output result;
s2, determining the second upper-bound data as the feature data of the second feature vector under the condition that the output result is larger than the second upper-bound data;
s3, determining the second lower bound data as the feature data of a second feature vector under the condition that the output result is smaller than the second lower bound data;
s4, determining the output result as the feature data of the second feature vector under the condition that the output result is greater than or equal to the second lower bound data and less than or equal to the second upper bound data;
and S5, determining a second feature vector through feature data of the second feature vector.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring feature data of each feature vector sample subjected to normalization processing on multiple dimensions to obtain multiple feature data;
and S2, determining a first data interval corresponding to the plurality of characteristic data.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
and filtering feature data which are larger than the first feature data and smaller than the second feature data from the first data interval to obtain a key interval, wherein the proportion of the feature data in the key interval to the feature data in the first data interval is larger than a second target threshold.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring pre-stored feature vectors of a plurality of preset face objects, wherein the feature vector of each preset face object is a normalized feature vector of a second data type;
and S2, determining the traversed feature vector of the preset face object as a target feature vector of the target face object.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring a first cosine distance between a second feature vector and a target feature vector of a second data type;
s2, converting the first cosine distance according to a second type conversion relation to obtain a second cosine distance of the first data type;
and S3, determining the second cosine distance as the similarity.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, read-Only memories (ROMs), random Access Memories (RAMs), magnetic or optical disks, and the like.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and amendments can be made without departing from the principle of the present invention, and these modifications and amendments should also be considered as the protection scope of the present invention.

Claims (15)

1. A method for processing human face features is characterized by comprising the following steps:
extracting features of a face object to be recognized in a target image to obtain an original feature vector of a first data type;
normalizing the original feature vector to obtain a first feature vector;
according to a first type conversion relation determined based on a mapping relation between the first data type and a second data type, quantizing the first feature vector into a second feature vector of the second data type, wherein the storage space occupied by the second feature vector is smaller than that occupied by the original feature vector;
acquiring a target feature vector of a pre-stored target face object, wherein the target feature vector is the second data type;
obtaining a dot product result of the second feature vector and the target feature vector, and converting the dot product result into a numerical value of the first data type to obtain a similarity between the face object to be recognized and the target face object;
and under the condition that the similarity is greater than a first target threshold value, determining the face object to be recognized as the target face object.
2. The method of claim 1, wherein prior to quantizing the first eigenvector into a second eigenvector of the second data type, the method further comprises:
respectively extracting features of the face object in a plurality of image samples to obtain a plurality of feature vector samples of the first data type;
normalizing the plurality of feature vector samples;
acquiring a first data interval of the plurality of feature vector samples after normalization processing;
filtering the first data interval to obtain a key interval;
determining the first type of conversion relationship based on the key interval.
3. The method of claim 2, wherein determining the first type of conversion relationship based on the key interval comprises:
acquiring first upper bound data of the key interval, first lower bound data of the key interval, second upper bound data of a second data interval and second lower bound data of the second data interval, wherein the second data interval is associated with the second data type;
determining a target model for indicating the first type of conversion relationship through the first upper bound data, the first lower bound data, the second upper bound data, and the second lower bound data.
4. The method of claim 3, wherein quantizing the first eigenvector into a second eigenvector of the second data type comprises:
processing the feature data in the first feature vector through the target model to obtain an output result;
determining the second upper bound data as feature data of the second feature vector if the output result is larger than the second upper bound data;
determining the second lower bound data as the feature data of the second feature vector under the condition that the output result is smaller than the second lower bound data;
determining the output result as feature data of the second feature vector if the output result is greater than or equal to the second lower bound data and less than or equal to the second upper bound data;
determining the second feature vector from feature data of the second feature vector.
5. The method of claim 2, wherein obtaining the first data interval of the plurality of normalized feature vector samples comprises:
acquiring feature data of each feature vector sample on multiple dimensions after normalization processing to obtain multiple feature data;
determining the first data interval corresponding to the plurality of feature data.
6. The method of claim 2, wherein filtering the first data interval to obtain the key interval comprises:
and filtering feature data which are larger than first feature data and smaller than second feature data from the first data interval to obtain the key interval, wherein the proportion of the feature data in the key interval to the feature data in the first data interval is larger than a second target threshold.
7. The method of claim 1, wherein obtaining a pre-stored target feature vector of a target face object comprises:
acquiring pre-stored feature vectors of a plurality of preset face objects, wherein the feature vector of each preset face object is a normalized feature vector of the second data type;
and determining the traversed feature vector of one preset face object as the target feature vector of the target face object.
8. The method according to claim 1, wherein obtaining a dot product of the second feature vector and the target feature vector, and converting the dot product into a numerical value of the first data type to obtain a similarity between the face object to be recognized and the target face object comprises:
obtaining a first cosine distance between the second feature vector and the target feature vector of the second data type;
performing conversion processing on the first cosine distance according to a second type conversion relation to obtain a second cosine distance of the first data type;
determining the second cosine distance as the similarity.
9. The method of any of claims 1 to 8, wherein the first data type is a single precision floating point type and the second data type is integer.
10. An apparatus for processing human face features, comprising:
the first extraction unit is used for extracting the features of the face object to be recognized in the target image to obtain an original feature vector of a first data type;
the first processing unit is used for carrying out normalization processing on the original characteristic vector to obtain a first characteristic vector;
the conversion unit is used for quantizing and converting the first feature vector into a second feature vector of the second data type according to a first type conversion relation determined based on the mapping relation between the first data type and the second data type, wherein the storage space occupied by the second feature vector is smaller than that occupied by the original feature vector;
the first acquisition unit is used for acquiring a target feature vector of a target face object stored in advance, wherein the target feature vector is the second data type; obtaining a dot product result of the second feature vector and the target feature vector, and converting the dot product result into a numerical value of the first data type to obtain a similarity between the face object to be recognized and the target face object;
and the first determining unit is used for determining the face object to be recognized as the target face object under the condition that the similarity is greater than a first target threshold value.
11. The apparatus of claim 10, further comprising:
a second extraction unit, configured to perform feature extraction on a face object in multiple image samples respectively before quantizing and converting the first feature vector into a second feature vector of the second data type, so as to obtain multiple feature vector samples of the first data type;
the second processing unit is used for carrying out normalization processing on the plurality of feature vector samples;
the second acquisition unit is used for acquiring first data intervals of the plurality of feature vector samples after normalization processing;
the filtering unit is used for filtering the first data interval to obtain a key interval;
a second determining unit, configured to determine the first type conversion relationship based on the key interval.
12. The apparatus according to claim 11, wherein the second determining unit comprises:
a first obtaining module, configured to obtain first upper bound data of the key interval, first lower bound data of the key interval, second upper bound data of a second data interval, and second lower bound data of the second data interval, where the second data interval is associated with the second data type;
a first determining module, configured to determine, through the first upper bound data, the first lower bound data, the second upper bound data, and the second lower bound data, a target model indicating the first type of conversion relationship.
13. The apparatus of claim 12, wherein the conversion unit comprises:
the processing module is used for processing the feature data in the first feature vector through the target model to obtain an output result;
a second determining module, configured to determine, when the output result is greater than the second upper bound data, the second upper bound data as feature data of the second feature vector; determining the second lower bound data as feature data of the second feature vector when the output result is smaller than the second lower bound data; determining the output result as feature data of the second feature vector if the output result is greater than or equal to the second lower bound data and less than or equal to the second upper bound data; determining the second feature vector from feature data of the second feature vector.
14. The apparatus of claim 11, wherein the second obtaining unit comprises:
the second acquisition module is used for acquiring feature data of each feature vector sample subjected to normalization processing on multiple dimensions to obtain multiple feature data;
a third determining module, configured to determine the first data interval corresponding to the multiple feature data.
15. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 9 when executed.
CN201811506344.4A 2018-12-10 2018-12-10 Method and device for processing human face features and storage medium Active CN110147710B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811506344.4A CN110147710B (en) 2018-12-10 2018-12-10 Method and device for processing human face features and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811506344.4A CN110147710B (en) 2018-12-10 2018-12-10 Method and device for processing human face features and storage medium

Publications (2)

Publication Number Publication Date
CN110147710A CN110147710A (en) 2019-08-20
CN110147710B true CN110147710B (en) 2023-04-18

Family

ID=67588394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811506344.4A Active CN110147710B (en) 2018-12-10 2018-12-10 Method and device for processing human face features and storage medium

Country Status (1)

Country Link
CN (1) CN110147710B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110942014B (en) * 2019-11-22 2023-04-07 浙江大华技术股份有限公司 Face recognition rapid retrieval method and device, server and storage device
CN111178540A (en) * 2019-12-29 2020-05-19 浪潮(北京)电子信息产业有限公司 Training data transmission method, device, equipment and medium
CN111191612B (en) * 2019-12-31 2023-05-12 深圳云天励飞技术有限公司 Video image matching method, device, terminal equipment and readable storage medium
CN111291682A (en) * 2020-02-07 2020-06-16 浙江大华技术股份有限公司 Method and device for determining target object, storage medium and electronic device
CN111428652B (en) * 2020-03-27 2021-06-08 恒睿(重庆)人工智能技术研究院有限公司 Biological characteristic management method, system, equipment and medium
CN111652242B (en) * 2020-04-20 2023-07-04 北京迈格威科技有限公司 Image processing method, device, electronic equipment and storage medium
CN111797746B (en) 2020-06-28 2024-06-14 北京小米松果电子有限公司 Face recognition method, device and computer readable storage medium
CN112241686A (en) * 2020-09-16 2021-01-19 四川天翼网络服务有限公司 Trajectory comparison matching method and system based on feature vectors
CN112633297B (en) * 2020-12-28 2023-04-07 浙江大华技术股份有限公司 Target object identification method and device, storage medium and electronic device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6422467B2 (en) * 1995-12-18 2002-07-23 Metrologic Instruments, Inc. Reading system a variable pass-band
CN104573696B (en) * 2014-12-29 2018-09-21 杭州华为数字技术有限公司 Method and apparatus for handling face characteristic data
CN108090433B (en) * 2017-12-12 2021-02-19 厦门集微科技有限公司 Face recognition method and device, storage medium and processor

Also Published As

Publication number Publication date
CN110147710A (en) 2019-08-20

Similar Documents

Publication Publication Date Title
CN110147710B (en) Method and device for processing human face features and storage medium
CN111950653B (en) Video processing method and device, storage medium and electronic equipment
RU2505856C2 (en) Method and apparatus for representing and identifying feature descriptors using compressed histogram of gradients
CN112163637B (en) Image classification model training method and device based on unbalanced data
CN110532746B (en) Face checking method, device, server and readable storage medium
CN113515988A (en) Palm print recognition method, feature extraction model training method, device and medium
JP6460926B2 (en) System and method for searching for an object in a captured image
CN113157962B (en) Image retrieval method, electronic device, and storage medium
CN113128278A (en) Image identification method and device
CN113743533B (en) Picture clustering method and device and storage medium
CN115359390A (en) Image processing method and device
CN115205613A (en) Image identification method and device, electronic equipment and storage medium
CN110956098B (en) Image processing method and related equipment
CN114359993A (en) Model training method, face recognition device, face recognition equipment, face recognition medium and product
CN113673449A (en) Data storage method, device, equipment and storage medium
CN111316326A (en) Image encoding method, apparatus and computer-readable storage medium
CN112200247B (en) Image processing system and method based on multi-dimensional image mapping
CN112767348B (en) Method and device for determining detection information
CN112149470B (en) Pedestrian re-identification method and device
CN118015386B (en) Image recognition method and device, storage medium and electronic equipment
CN111400680B (en) Mobile phone unlocking password prediction method based on sensor and related device
CN111783711B (en) Skeleton behavior identification method and device based on body component layer
CN117011949A (en) Identity authentication method, model training method, device, equipment and storage medium
CN114359986A (en) Information updating method and device, storage medium and electronic device
CN113782033A (en) Voiceprint recognition method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant