CN110147710A - Face feature processing method, device, and storage medium - Google Patents
Face feature processing method, device, and storage medium
- Publication number
- CN110147710A CN110147710A CN201811506344.4A CN201811506344A CN110147710A CN 110147710 A CN110147710 A CN 110147710A CN 201811506344 A CN201811506344 A CN 201811506344A CN 110147710 A CN110147710 A CN 110147710A
- Authority
- CN
- China
- Prior art keywords
- data
- feature vector
- face
- characteristic
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Collating Specific Patterns (AREA)
Abstract
The invention discloses a face feature processing method, device, and storage medium. The method comprises: performing feature extraction on a face object to be identified in a target image to obtain an original feature vector of a first data type; normalizing the original feature vector to obtain a first feature vector; converting the first feature vector according to a first conversion relationship to obtain a second feature vector of a second data type, where the storage space occupied by the second feature vector is smaller than the storage space occupied by the original feature vector; obtaining a pre-stored target feature vector of a target face object and comparing the second feature vector with the target feature vector to obtain the similarity between the face object to be identified and the target face object; and, when the similarity is greater than a first target threshold, determining that the face object to be identified is the target face object. The invention solves the technical problem in the related art of the high cost of processing face features.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a face feature processing method, device, and storage medium.
Background technique
At present, processing face features requires training a deep network model and modifying the model's feature output dimension to compress the face features, for example, reducing a 1024-dimensional face feature to a 256-dimensional face feature.
Although the above approach can compress face features, switching the feature dimension requires retraining the deep network model, which greatly increases the cost of processing face features.
No effective solution has yet been proposed for the above problem of the high cost of processing face features.
Summary of the invention
The embodiments of the present invention provide a face feature processing method, device, and storage medium, to solve at least the technical problem in the related art of the high cost of processing face features.
According to one aspect of the embodiments of the present invention, a face feature processing method is provided. The method comprises: performing feature extraction on a face object to be identified in a target image to obtain an original feature vector of a first data type; normalizing the original feature vector to obtain a first feature vector; converting the first feature vector according to a first conversion relationship to obtain a second feature vector of a second data type, where the storage space occupied by the second feature vector is smaller than the storage space occupied by the original feature vector; obtaining a pre-stored target feature vector of a target face object and comparing the second feature vector with the target feature vector to obtain the similarity between the face object to be identified and the target face object; and, when the similarity is greater than a first target threshold, determining that the face object to be identified is the target face object.
According to another aspect of the embodiments of the present invention, a face feature processing device is also provided. The device comprises: a first extraction unit, configured to perform feature extraction on a face object to be identified in a target image to obtain an original feature vector of a first data type; a first processing unit, configured to normalize the original feature vector to obtain a first feature vector; a conversion unit, configured to convert the first feature vector according to a first conversion relationship to obtain a second feature vector of a second data type, where the storage space occupied by the second feature vector is smaller than the storage space occupied by the original feature vector; a first acquisition unit, configured to obtain a pre-stored target feature vector of a target face object and compare the second feature vector with the target feature vector to obtain the similarity between the face object to be identified and the target face object; and a first determination unit, configured to determine, when the similarity is greater than a first target threshold, that the face object to be identified is the target face object.
In the embodiments of the present invention, feature extraction is performed on a face object to be identified in a target image to obtain an original feature vector of a first data type; after normalization, the original feature vector is converted according to a first conversion relationship to obtain a second feature vector of a second data type, the storage space occupied by the second feature vector being smaller than that occupied by the original feature vector. The second feature vector is then compared with the pre-stored target feature vector of a target face object, and when the similarity between the face object to be identified and the target face object is greater than a first target threshold, the face object to be identified is determined to be the target face object. In other words, converting the original feature vector of the first data type into the second feature vector of the second data type according to the first conversion relationship compresses the face feature data and reduces the pressure of storing face features. Because the feature-dimension switch that would require retraining the model is avoided, the cost of processing face features is reduced, achieving the technical effect of lowering that cost and thereby solving the technical problem in the related art of the high cost of processing face features.
Description of the drawings
The drawings described here are provided for a further understanding of the present invention and constitute part of this application; the illustrative embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is a schematic diagram of the hardware environment of a face feature processing method according to an embodiment of the present invention;
Fig. 2 is a flow chart of a face feature processing method according to an embodiment of the present invention;
Fig. 3 is a flow chart of a quantization-based face feature compression method according to an embodiment of the present invention;
Fig. 4 is a flow chart of a method for extracting Float32 feature data from a face image according to an embodiment of the present invention;
Fig. 5 is a flow chart of a method for quantizing face feature data according to an embodiment of the present invention;
Fig. 6 is a flow chart of a method for comparing face features according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of quantization-based face feature compression according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of a face identity-verification scenario according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of a face retrieval scenario according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of a face feature processing device according to an embodiment of the present invention; and
Fig. 11 is a structural block diagram of an electronic device according to an embodiment of the present invention.
Specific embodiment
To enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work shall fall within the scope of protection of the present invention.
It should be noted that the terms "first", "second", etc. in the specification, the claims, and the above drawings are used to distinguish similar objects and are not used to describe a particular order or sequence. It should be understood that data so used are interchangeable under appropriate circumstances, so that the embodiments of the present invention described here can be implemented in orders other than those illustrated or described here. In addition, the terms "comprising" and "having", and any variations thereof, are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that contains a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to the process, method, product, or device.
According to one aspect of the embodiments of the present invention, a face feature processing method is provided. Optionally, as an optional embodiment, the above face feature processing method may be, but is not limited to being, applied in the environment shown in Fig. 1, where Fig. 1 is a schematic diagram of the hardware environment of the face feature processing method according to an embodiment of the present invention. As shown in Fig. 1, a user 102 can perform data interaction with a user equipment 104, and the user equipment 104 may include, but is not limited to, a memory 106 and a processor 108.
In this embodiment, the user equipment 104 can receive an input target image and, through the processor 108, execute step S102 of sending the data of the target image to a server 112 via a network 110. The server 112 includes a database 114 and a processor 116.
After the server 112 obtains the data of the target image, the processor 116 performs feature extraction on the face object to be identified in the target image to obtain an original feature vector of a first data type, normalizes the original feature vector to obtain a first feature vector, and converts the first feature vector according to a first conversion relationship to obtain a second feature vector of a second data type, where the storage space occupied by the second feature vector is smaller than that occupied by the original feature vector. The processor 116 obtains the pre-stored target feature vector of a target face object from the database 114 and compares the second feature vector with the target feature vector to obtain the similarity between the face object to be identified and the target face object. When the similarity is greater than a first target threshold, the face object to be identified is determined to be the target face object, and step S104 is executed to return this result to the user equipment 104 via the network 110.
The user equipment 104 can store the result that the face object to be identified is the target face object in the memory 106.
In the related art, when face features are processed, switching the feature dimension requires retraining the deep network model, which greatly increases the cost of processing face features. The embodiment of the present invention instead converts the original feature vector of the first data type into a second feature vector of the second data type according to the first conversion relationship, achieving compression of the face feature data and reducing the pressure of storing face features; when the similarity between the face object to be identified and the target face object is greater than the first target threshold, the face object to be identified is determined to be the target face object. This avoids the high cost caused by retraining the model for a feature-dimension switch, achieves the technical effect of reducing the cost of processing face features, and thereby solves the technical problem in the related art of the high cost of processing face features.
Fig. 2 is a flow chart of a face feature processing method according to an embodiment of the present invention. As shown in Fig. 2, the method may comprise the following steps:
Step S202: perform feature extraction on a face object to be identified in a target image to obtain an original feature vector of a first data type.
In the technical solution provided by step S202, the original feature vector of the first data type is extracted from the target image for the face object to be identified; this original feature vector expresses the face features of the face object to be identified in the first data type.
In this embodiment, the target image can be a currently input image containing a face. Face detection, face registration, and face feature recognition can be performed on the target image to obtain the original feature vector of the first data type of the face object to be identified, where the face object to be identified is the face pending identification.
Optionally, this embodiment performs face detection on the input target image through a face detection network model, which accurately locates the position of the face in the target image, thereby finding the face and obtaining the face detection result of the face object to be identified.
After the face detection result of the face object to be identified is obtained, face key-point registration is performed according to the detection result, that is, facial feature points are detected and located. A face registration network model can perform the key-point registration according to the detection result, finding the positions of features such as the eyebrows, eyes, nose, and mouth to obtain the registration result of the face object to be identified.
After the registration result of the face object to be identified is obtained, face alignment is performed according to it, that is, the face is rectified according to the registration result so that it becomes upright. After alignment, the aligned image is cropped, for example into a 248*248 face image, and the cropped face image is input into a face recognition model. The face recognition model can be a deep neural network, for example a convolutional neural network, which extracts the original feature vector of the first data type from the cropped face image. The original feature vector expresses the face features of the face object to be identified in the first data type; for example, the first data type is a floating-point type, and the original feature vector comprises a group of multi-dimensional floating-point feature data, where the dimension can be 1024 and the floating-point type can be Float32, Float64, etc.
Step S204: normalize the original feature vector to obtain a first feature vector.
In the technical solution provided by step S204, after feature extraction is performed on the face object to be identified in the target image and the original feature vector of the first data type is obtained, the original feature vector is normalized to obtain the first feature vector of the first data type. That is, the feature data of the multiple dimensions included in the original feature vector are standardized, so that each feature data item in the original feature vector is treated on an equal scale and quantized into a unified interval. This ensures that, after the first feature vector is converted according to the first conversion relationship into the second feature vector of the second data type, the second feature vector can be compared directly with the target feature vector to obtain the similarity between the face object to be identified and the target face object, enhancing the comparability between feature data; the target feature vector is likewise a normalized feature vector.
Optionally, when normalizing the original feature vector to obtain the first feature vector, this embodiment obtains the norm of the original feature vector from the feature data of its multiple dimensions, and determines the quotient of each dimension's feature data and the norm as the feature data of the first feature vector.
Optionally, the sum of squares of the feature data of the multiple dimensions of the original feature vector is obtained, and the square root of this sum gives the norm of the original feature vector; the quotient of each dimension's feature data and the norm is then determined as the feature data of the first feature vector. The original feature vector is thereby normalized through the feature data of its multiple dimensions, so that the norm of the resulting vector is 1.
Step S206: convert the first feature vector according to a first conversion relationship to obtain a second feature vector of a second data type.
In the technical solution provided by step S206, after the original feature vector is normalized and the first feature vector is obtained, the first feature vector is compressed: it is converted according to the first conversion relationship to obtain the second feature vector of the second data type, where the second feature vector represents the face features of the face object to be identified in the second data type, the storage space occupied by the second feature vector is smaller than that occupied by the original feature vector, and the second feature vector is a normalized feature vector.
In this embodiment, the first data type makes the original feature data continuous relative to the second feature vector of the second data type, and the second data type makes the second feature vector discrete relative to the original feature vector of the first data type. For example, the second data type is an integer type such as Int8 or Int16, and the first data type of the original feature vector is a floating-point type such as Float32 or Float64; the floating-point original feature vector is continuous relative to the integer second feature vector, and the integer second feature vector is discrete relative to the floating-point original feature vector. The original feature vector of the first data type is thus converted into the relatively discrete second feature vector of the second data type, realizing quantization of the original feature vector and reducing the pressure of storing feature data.
The first conversion relationship of this embodiment is the mapping by which the first feature vector of the first data type is converted into the second feature vector of the second data type, for example the mapping between a first feature vector in the Float32 space and a second feature vector in the Int8 space. As a result, the storage space occupied by the second feature vector is smaller than that occupied by the original feature vector, and the volume occupied by the feature data on disk and in memory can be reduced by a factor of 4. This reduces the pressure of storing feature data without retraining the deep network, lowers the cost of processing face features, and accelerates face comparison, while the effect of converting the first feature vector according to the first conversion relationship is ensured: the resulting second feature vector of the second data type is substantially lossless.
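The Float32-to-Int8 conversion described above can be sketched as a symmetric linear mapping from a key interval [-max_abs, max_abs] onto the Int8 range [-127, 127]. The concrete bound 0.2 below is an invented placeholder, since the patent derives the real bound from sample statistics in a later step.

```python
import numpy as np

MAX_ABS = 0.2  # placeholder key-interval bound; the patent estimates it from samples

def quantize_to_int8(x: np.ndarray, max_abs: float = MAX_ABS) -> np.ndarray:
    """Map a normalized Float32 vector onto the Int8 range [-127, 127].

    Values outside the key interval [-max_abs, max_abs] are clipped first,
    matching the interval-filtering step described in the text."""
    clipped = np.clip(x, -max_abs, max_abs)
    return np.round(clipped / max_abs * 127.0).astype(np.int8)

v = np.array([0.1, -0.05, 0.3, 0.0], dtype=np.float32)
q = quantize_to_int8(v)   # 0.3 lies outside the interval and is clipped to 127
```

Since Int8 takes 1 byte against 4 bytes for Float32, `q.nbytes` is exactly a quarter of `v.nbytes`, matching the 4x reduction claimed in the text.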
Step S208: obtain the pre-stored target feature vector of a target face object, and compare the second feature vector with the target feature vector to obtain the similarity between the face object to be identified and the target face object.
In the technical solution provided by step S208, after the first feature vector has been converted according to the first conversion relationship and the second feature vector of the second data type has been obtained, face recognition comparison is performed through the second feature vector: the pre-stored target feature vector of the target face object can be obtained and compared with the second feature vector to obtain the similarity between the face object to be identified and the target face object.
The target face object of this embodiment can be a face whose identity information has been entered in advance; the target feature vector expresses the face features of the target face object in the second data type and is a normalized feature vector. After the pre-stored target feature vector of the target face object is obtained, the second feature vector and the target feature vector serve as the two feature vectors participating in the comparison, yielding the similarity between the face object to be identified and the target face object; this similarity indicates the degree of resemblance between the two.
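Because both vectors were unit-normalized before quantization, the comparison can be sketched as a cosine-style similarity computed directly on the integer encodings; the rescaling by (max_abs/127)^2 and the toy values below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

MAX_ABS = 0.8  # must match the bound used at quantization time (toy value)

def int8_similarity(a: np.ndarray, b: np.ndarray, max_abs: float = MAX_ABS) -> float:
    """Approximate the cosine similarity of the original unit vectors using
    only their Int8 encodings: accumulate the dot product in int32 to avoid
    overflow, then undo the quantization scale on both operands."""
    dot = int(np.dot(a.astype(np.int32), b.astype(np.int32)))
    return dot * (max_abs / 127.0) ** 2

# Two toy unit vectors, quantized with the same bound:
q_second = np.array([95, 127, 0, 0], dtype=np.int8)   # from (0.6, 0.8, 0, 0)
q_target = np.array([127, 95, 0, 0], dtype=np.int8)   # from (0.8, 0.6, 0, 0)
sim = int8_similarity(q_second, q_target)
is_match = sim > 0.75  # first target threshold from the text
```

Here the exact cosine of the two floating-point vectors is 0.96, and the integer approximation comes out close to it, illustrating why the conversion is described as substantially lossless.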
Step S210: when the similarity is greater than a first target threshold, determine that the face object to be identified is the target face object.
In the technical solution provided by step S210, after the similarity between the face object to be identified and the target face object is obtained, it is judged whether the similarity is greater than the first target threshold. The first target threshold, that is, the judgment threshold, can be a preset critical value for measuring the magnitude of the similarity, for example, 75%. When the similarity is greater than the first target threshold, that is, when the degree of resemblance between the face object to be identified and the target face object is high, the face object to be identified is determined to be the target face object, and the two can be considered to come from the same person. Optionally, when the similarity is not greater than the first target threshold, that is, when the degree of resemblance between the face object to be identified and the target face object is low, the gap between them is determined to be large, and the two can be considered not to come from the same person. Recognition comparison of faces is thereby realized; while the storage pressure of face feature data is reduced, the computation of face comparison is also reduced, which in turn accelerates face comparison.
Through the above steps S202 to S210, the original feature vector of the first data type is converted according to the first conversion relationship to obtain the second feature vector of the second data type, achieving compression of the face feature data and reducing the pressure of storing face features; then, when the similarity between the face object to be identified and the target face object is greater than the first target threshold, the face object to be identified is determined to be the target face object. The high cost caused by retraining the model for a feature-dimension switch is avoided, the technical effect of reducing the cost of processing face features is achieved, and the technical problem in the related art of the high cost of processing face features is thereby solved.
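The whole pipeline of steps S202 to S210 can be sketched end to end. The deep-network extraction of step S202 is stubbed with random vectors here (running a real model is outside the scope of a sketch), and the key-interval bound and threshold are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
MAX_ABS = 0.3      # assumed key-interval bound; the patent estimates it from samples
THRESHOLD = 0.75   # first target threshold from the text

def extract_features(image):
    """Stand-in for the deep-network extraction of step S202: returns a
    random 1024-dim Float32 vector instead of running a real model."""
    return rng.normal(size=1024).astype(np.float32)

def normalize(v):      # step S204: divide by the L2 norm
    return v / np.sqrt(np.sum(v ** 2))

def quantize(v):       # step S206: clip to the key interval, map onto Int8
    return np.round(np.clip(v, -MAX_ABS, MAX_ABS) / MAX_ABS * 127).astype(np.int8)

def similarity(a, b):  # step S208: integer dot product, rescaled
    return int(np.dot(a.astype(np.int32), b.astype(np.int32))) * (MAX_ABS / 127) ** 2

probe = quantize(normalize(extract_features("probe.jpg")))
other = quantize(normalize(extract_features("someone_else.jpg")))

self_score  = similarity(probe, probe)   # identical encodings: score near 1.0
cross_score = similarity(probe, other)   # unrelated vectors: score near 0.0
is_match = self_score > THRESHOLD        # step S210
```

Comparing a quantized vector with itself recovers a similarity close to 1, while two unrelated random vectors score near 0, which is the behavior the threshold test in step S210 relies on.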
As an optional implementation, before the first feature vector is converted according to the first conversion relationship in step S206 to obtain the second feature vector of the second data type, the method further comprises: performing feature extraction on the face objects in multiple image samples to obtain multiple feature vector samples of the first data type; normalizing the multiple feature vector samples; obtaining a first data interval of the normalized feature vector samples; filtering the first data interval to obtain a key interval; and determining the first conversion relationship based on the key interval.
In this embodiment, before the first feature vector is converted according to the first conversion relationship to obtain the second feature vector of the second data type, a key interval can be estimated from multiple image samples. The key interval, that is, the quantization interval over which the first feature vector of the first data type is processed, is used to determine the first conversion relationship applied when quantizing the original feature vector, and the second feature vector of the second data type of this embodiment is required to lie within this key interval.
In this embodiment, multiple image samples are input; these can be millions of face images. Feature extraction is performed on the face object in each image sample to obtain multiple feature vector samples of the first data type; face detection, face registration, and face feature recognition can be performed on each image sample to obtain the multiple feature vector samples, which can be millions of face features.
Optionally, this embodiment performs face detection on each input image sample through the face detection network model, accurately locating the position of the face in each image sample to obtain the face detection result of each image sample, and then performs face key-point registration according to the detection result of each image sample, that is, detection and location of facial feature points. The face registration network model can perform key-point registration according to the detection result, obtaining the registration result of the face object in each image sample. After the registration result of the face object in each image sample is obtained, face alignment is performed according to it, that is, the face is rectified so that it becomes upright. After alignment, the aligned image is cropped, the cropped face image is input into the face recognition model, and a feature vector sample is extracted from the cropped face image through the face recognition model; the feature vector sample may comprise 1024-dimensional single-precision floating-point feature data.
After the multiple feature vector samples of the first data type are obtained, they are normalized, that is, the feature data of the multiple dimensions included in each feature vector sample are standardized, so that each feature data item in each feature vector sample is treated on an equal scale and quantized into a unified interval.
After the multiple feature vector samples are normalized, the first data interval of the normalized samples is obtained. The upper bound of the first data interval can be the maximum feature data among the normalized feature vector samples, and its lower bound can be the minimum feature data among them. The first data interval is filtered to obtain the key interval, that is, the key interval is contained in the first data interval, and the first conversion relationship is then determined based on the key interval. Since the feature data in the first data interval follow a normal distribution, the key interval may exclude the small number of larger or smaller feature data in the first data interval, thereby improving the precision of converting the original feature vector of the first data type into the second feature vector of the second data type.
The above-mentioned method of determining the first type conversion relation based on the key interval is introduced below.

As an alternative embodiment, determining the first type conversion relation based on the key interval includes: obtaining the first upper bound data of the key interval, the first lower bound data of the key interval, the second upper bound data of the second data interval, and the second lower bound data of the second data interval, wherein the second data interval is associated with the second data type; and determining, from the first upper bound data, the first lower bound data, the second upper bound data, and the second lower bound data, a target model used to indicate the first type conversion relation.
In this embodiment, the first upper bound data and the first lower bound data of the key interval are symmetric, and the second upper bound data and the second lower bound data of the second data interval associated with the second data type are symmetric. For example, if the first upper bound data of the key interval is max and the first lower bound data is min, then -max = min; if the second data type is the Int8 type, the second upper bound data and the second lower bound data of the second data interval are symmetric, where the second upper bound data may be 127 and the second lower bound data may be -127. The first upper bound data and first lower bound data of the key interval and the second upper bound data and second lower bound data of the second data interval are obtained, and from these four values a target model used to indicate the first type conversion relation is determined. The target model may be an objective function that operates on the first feature vector of the first data type.
Optionally, when determining the target model used to indicate the first type conversion relation from the first upper bound data, the first lower bound data, the second upper bound data, and the second lower bound data, this embodiment proceeds as follows: the difference between the first upper bound data and the first lower bound data is determined as the first difference of the target model; the difference between the second upper bound data and the second lower bound data is determined as the second difference of the target model; the quotient of the second difference and the first difference is determined as the intermediate value of the target model; the difference between the input variable of the target model and the first lower bound data is determined as the third difference of the target model, where the input variable represents a feature datum in the first feature vector; the product of the third difference and the intermediate value is determined as the first product of the target model; the result of rounding the difference between the first product and the second upper bound data is determined as the output result of the target model, where the output result indicates a feature datum in the second feature vector; and the second feature vector is determined from the output results.
In this embodiment, the difference between the first upper bound data and the first lower bound data may be determined as the first difference of the target model; for example, the difference between the first upper bound data max and the first lower bound data min is determined as the first difference (max - min).

The difference between the second upper bound data and the second lower bound data is determined as the second difference of the target model; for example, the difference between the second upper bound data 127 and the second lower bound data -127 is determined as the second difference 127 - (-127) = 254.

The quotient of the second difference and the first difference is determined as the intermediate value of the target model; for example, the quotient of the second difference 254 and the first difference (max - min) is determined as the intermediate value scale = 254 / (max - min). This intermediate value is the quantization precision with which the first feature vector of the first data type is converted into the second feature vector of the second data type.

The difference between the input variable of the target model and the first lower bound data is determined as the third difference of the target model; for example, the difference between the input variable feature_Float32_value and the first lower bound value min is determined as the third difference (feature_Float32_value - min), and the product of the third difference and the intermediate value is determined as the first product of the target model, for example the first product scale * (feature_Float32_value - min).

The result of rounding the difference between the first product and the second upper bound data is determined as the output result of the target model; for example, the rounded result round(scale * (feature_Float32_value - min) - 127) of the difference between the first product scale * (feature_Float32_value - min) and the second upper bound data 127 is determined as the output result, which indicates a feature datum in the second feature vector. That is, the target model may be expressed as feature_Int8_value = round(scale * (feature_Float32_value - min) - 127), where round denotes rounding to the nearest integer. The second feature vector of the second data type is then determined from the output results, thereby achieving the purpose of determining, from the first upper bound data, the first lower bound data, the second upper bound data, and the second lower bound data, the target model used to indicate the first type conversion relation.
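The target model just derived can be sketched in a few lines of NumPy. This is an illustrative sketch only; the interval bounds min = -0.25 and max = 0.25 below are hypothetical values for a symmetric key interval, not values stated in the patent:

```python
import numpy as np

def target_model(feature_float32, min_val, max_val):
    """Target model of the first type conversion relation:
    first difference = max - min, second difference = 254,
    intermediate value scale = 254 / (max - min), and
    feature_Int8_value = round(scale * (feature_Float32_value - min) - 127)."""
    scale = 254.0 / (max_val - min_val)
    return np.round(scale * (feature_float32 - min_val) - 127.0)

# Hypothetical symmetric key interval with min = -0.25, max = 0.25:
# the interval's lower bound, midpoint, and upper bound map to -127, 0, 127.
codes = target_model(np.array([-0.25, 0.0, 0.25]), -0.25, 0.25)
print(codes)
```

With these bounds, scale = 254 / 0.5 = 508, so the three inputs quantize to -127, 0, and 127 respectively.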
It should be noted that the above method of determining the target model used to indicate the first type conversion relation from the first upper bound data, the first lower bound data, the second upper bound data, and the second lower bound data is merely an example of the embodiment of the present invention; it does not mean that the method of determining the target model in the embodiment of the present invention is limited to the above method. Any method that determines the target model from the first upper bound data, the first lower bound data, the second upper bound data, and the second lower bound data falls within the scope of the embodiment of the present invention, and examples are not enumerated one by one here.
The process of obtaining the second feature vector of the second data type in this embodiment is introduced below.

As an alternative embodiment, obtaining the second feature vector of the second data type includes: processing the feature data in the first feature vector through the target model to obtain output results; when an output result is greater than the second upper bound data, determining the second upper bound data as the feature datum of the second feature vector; when an output result is less than the second lower bound data, determining the second lower bound data as the feature datum of the second feature vector; when an output result is greater than or equal to the second lower bound data and less than or equal to the second upper bound data, determining the output result itself as the feature datum of the second feature vector; and determining the second feature vector from the feature data of the second feature vector.
After the target model used to indicate the first type conversion relation is determined from the first upper bound data, the first lower bound data, the second upper bound data, and the second lower bound data, the first feature vector is converted through the target model to obtain the second feature vector of the second data type, and it must be guaranteed that the second feature vector lies between the second lower bound data and the second upper bound data. This embodiment therefore sets a data saturation strategy: when an output result is greater than the second upper bound data, the second upper bound data is directly determined as the feature datum of the second feature vector, for example, with second upper bound data 127, feature_Int8_value = min(127, feature_Int8_value); when an output result is less than the second lower bound data, the second lower bound data is determined as the feature datum of the second feature vector, for example feature_Int8_value = max(-127, feature_Int8_value); when an output result is greater than or equal to the second lower bound data and less than or equal to the second upper bound data, the output result is directly determined as the feature datum of the second feature vector. This ensures that the second feature vector lies between the second lower bound data and the second upper bound data, and the second feature vector is then determined from its feature data.
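The data saturation strategy above amounts to clamping the target model's output into the second data interval. A minimal sketch, assuming the Int8 bounds -127 and 127 used in the example:

```python
import numpy as np

def saturate(output_result):
    """Data saturation strategy: feature_Int8_value = min(127, x) followed by
    max(-127, x). Outputs above 127 become 127, outputs below -127 become
    -127, and in-range outputs pass through unchanged."""
    return np.maximum(-127, np.minimum(127, output_result))

# Outputs that overflow the second data interval are clipped to its bounds.
clipped = saturate(np.array([-300.0, -5.0, 0.0, 140.0]))
print(clipped)  # -300 -> -127, 140 -> 127, in-range values unchanged
```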
After determining the target model used to indicate the first type conversion relation from the first upper bound data, the first lower bound data, the second upper bound data, and the second lower bound data, this embodiment converts the first feature vector through the target model to obtain the second feature vector of the second data type. The original feature vector of the first data type is thus converted into the relatively discrete second feature vector of the second data type, reducing the pressure of storing face features and hence the cost of processing face features, while ensuring that the second feature vector of the second data type, obtained by converting the first feature vector according to the first type conversion relation, is substantially lossless in effect.
As an alternative embodiment, obtaining the first data interval of the normalized feature vector samples includes: obtaining the feature data of each normalized feature vector sample in multiple dimensions to obtain multiple feature data, and determining the first data interval corresponding to the multiple feature data.

In this embodiment, when obtaining the first data interval of the normalized feature vector samples, the feature data of each normalized feature vector sample in multiple dimensions are obtained, yielding multiple feature data. The maximum feature datum and the minimum feature datum are determined from the multiple feature data; the maximum feature datum may be determined as the first upper bound data of the first data interval, and the minimum feature datum as the first lower bound data of the first data interval, thereby determining the first data interval.
The above-mentioned method of filtering the first data interval to obtain the key interval is introduced below.

As an alternative embodiment, filtering the first data interval to obtain the key interval includes: filtering out, from the first data interval, the feature data greater than the first feature datum or less than the second feature datum to obtain the key interval, wherein the proportion of the feature data in the key interval to the feature data in the first data interval is greater than a second target threshold.

In this embodiment, when filtering the first data interval to obtain the key interval, the first feature datum and the second feature datum may first be determined. The first feature datum may be a critical feature datum used to separate out the small number of unusually large feature data in the first data interval, and the second feature datum may be a critical feature datum used to separate out the small number of unusually small feature data in the first data interval. The feature data greater than the first feature datum or less than the second feature datum are filtered out to obtain the key interval. The proportion of the feature data in the key interval to the feature data in the first data interval is greater than the second target threshold; that is, while improving the precision of converting the original feature vector of the first data type into the second feature vector of the second data type, it is also guaranteed that as many feature data as possible in the first data interval fall into the key interval, for example, so that 99.8% of the feature data in the first data interval fall into the key interval.
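One possible realization of this filtering step (an assumption, not a procedure the patent specifies) is to pick the first and second critical feature data as symmetric percentiles, so that the requested coverage, e.g. 99.8%, falls inside the key interval:

```python
import numpy as np

def key_interval(feature_data, coverage=0.998):
    """Filter the first data interval down to a key interval that still
    holds `coverage` of the feature data, discarding the few unusually
    large and unusually small values via percentiles (one possible choice
    of the first and second critical feature data)."""
    tail = (1.0 - coverage) / 2.0 * 100.0
    lower = np.percentile(feature_data, tail)          # second feature datum
    upper = np.percentile(feature_data, 100.0 - tail)  # first feature datum
    return lower, upper

data = np.random.randn(100_000)  # toy normally distributed feature data
lo, hi = key_interval(data)
inside = np.mean((data >= lo) & (data <= hi))
print(inside)  # close to 0.998
```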
As an alternative embodiment, obtaining the pre-stored target feature vector of the target face object includes: obtaining the pre-stored feature vectors of multiple predetermined face objects, wherein the feature vector of each predetermined face object is a normalized feature vector of the second data type; and determining the feature vector of the currently traversed predetermined face object as the target feature vector of the target face object.

In this embodiment, the feature vectors of multiple predetermined face objects may be stored in a database in advance. The multiple predetermined face objects may be faces whose identity information has been entered in advance. Each predetermined face object corresponds to one feature vector, and the feature vector of each predetermined face object is a normalized feature vector of the second data type, which expresses the face features of a face object in the second data type, so that each feature datum in the feature vector of each predetermined face object is treated to the same degree and quantized into a unified interval. The feature vectors of the multiple predetermined face objects are traversed, the feature vector of the currently traversed predetermined face object is determined as the target feature vector of the target face object, and the second feature vector is then compared with the target feature vector determined in each traversal step, thereby obtaining the similarities between the face object to be identified and the multiple face objects.
Optionally, this embodiment judges whether the maximum similarity among the multiple similarities is greater than a first target threshold. If the maximum similarity is greater than the first target threshold, it is determined that the face object to be identified and the face object corresponding to the maximum similarity come from the same person, and the face object corresponding to the maximum similarity may be determined as the search result of retrieving the face object to be identified. Optionally, if the maximum similarity is not greater than the first target threshold, it is determined that the face object to be identified is not any of the multiple face objects, and that no face object similar to the face object to be identified was retrieved. Face retrieval is thus realized, and by converting the original feature vector of the first data type into the relatively discrete second feature vector of the second data type, the storage pressure of the face feature data is reduced while the amount of computation of face retrieval is also reduced, thereby accelerating face retrieval.

Optionally, this embodiment may also be used to verify the identity of the person to which the face object to be identified belongs: if the similarity between the face object to be identified and the target face object is greater than the first target threshold, it is determined that the face object to be identified is the target face object, so the claimed identity of the person to which the face object to be identified belongs is satisfied, or the identity of the person to which the target face object belongs is determined as the identity of the person to which the face object to be identified belongs.
As an alternative embodiment, step S208, comparing the second feature vector with the target feature vector to obtain the similarity between the face object to be identified and the target face object, includes: obtaining a first cosine distance of the second data type between the second feature vector and the target feature vector; converting the first cosine distance according to the second type conversion relation to obtain a second cosine distance of the first data type; and determining the second cosine distance as the similarity.

In this embodiment, when comparing the second feature vector with the target feature vector to obtain the similarity between the face object to be identified and the target face object, a comparison distance between the second feature vector and the target feature vector of the second data type may be calculated. The first cosine distance between the second feature vector and the target feature vector may be calculated; for example, the dot product of the second feature vector and the target feature vector is determined as the first cosine distance. The first cosine distance, that is, the cosine similarity, indicates the degree of similarity between the second feature vector and the target feature vector. The data type of the first cosine distance is also the second data type.
For example, let the second feature vector be [x1, x2, ..., xn] and the target feature vector be [y1, y2, ..., yn], where x1, x2, ..., xn denote the feature data of the second feature vector, y1, y2, ..., yn denote the feature data of the target feature vector, and n denotes the dimension. The first cosine distance may be expressed by the formula:

cos(X, Y) = (x1*y1 + x2*y2 + ... + xn*yn) / (sqrt(x1^2 + x2^2 + ... + xn^2) * sqrt(y1^2 + y2^2 + ... + yn^2))

Since the second feature vector [x1, x2, ..., xn] and the target feature vector [y1, y2, ..., yn] are normalized feature vectors, sqrt(x1^2 + x2^2 + ... + xn^2) = sqrt(y1^2 + y2^2 + ... + yn^2) = 1. Thus, the first cosine distance cos(X, Y) = x1*y1 + x2*y2 + ... + xn*yn; that is, the first cosine distance is the inner product (dot product) of the second feature vector and the target feature vector.
After the first cosine distance of the second data type between the second feature vector and the target feature vector is obtained, the first cosine distance is converted according to the second type conversion relation to obtain a second cosine distance of the first data type. The data type of the second cosine distance is the first data type; that is, the first cosine distance is mapped back into the data space of the first data type, for example, back into the Float space. Optionally, this embodiment determines the quotient of the dot product of the second feature vector and the target feature vector and the square of the aforementioned scale = 254 / (max - min) as the second cosine distance, and then determines the second cosine distance as the similarity between the face object to be identified and the target face object. If the similarity is greater than the first target threshold, the face object to be identified is determined to be the target face object, thereby realizing face identification and comparison and accelerating face comparison, for example, by a factor of 1 to 4 depending on the specific hardware platform.
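The mapping back into the Float space can be sketched as follows. This is an illustrative sketch: the vectors, dimension, and interval bounds (a symmetric interval with max = 0.25, min = -0.25) are hypothetical. With a symmetric interval, scale * (-min) = 127, so the -127 offset in the target model cancels and each quantized code is approximately scale times the original value, which is why dividing the Int8 dot product by scale squared recovers the Float cosine distance:

```python
import numpy as np

def int8_similarity(code_a, code_b, scale):
    """Second type conversion relation: take the Int8 dot product (first
    cosine distance) and divide by scale**2 to map it back into the Float
    space (second cosine distance)."""
    dot = float(np.dot(code_a.astype(np.int32), code_b.astype(np.int32)))
    return dot / (scale * scale)

rng = np.random.default_rng(0)
x = rng.normal(size=1024); x /= np.linalg.norm(x)   # normalized features
y = rng.normal(size=1024); y /= np.linalg.norm(y)
scale = 254.0 / 0.5                    # symmetric interval [-0.25, 0.25]
cx = np.clip(np.round(scale * x), -127, 127)        # code ~ scale * value
cy = np.clip(np.round(scale * y), -127, 127)
sim = int8_similarity(cx, cy, scale)
print(abs(sim - float(np.dot(x, y))))  # small: quantization is near-lossless
```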
It should be noted that the above method of comparing the second feature vector with the target feature vector by obtaining the first cosine distance of the second data type between them, so as to obtain the similarity between the face object to be identified and the target face object, is merely an example of the embodiment of the present invention; it does not mean that the method of obtaining the similarity between the face object to be identified and the target face object in the embodiment of the present invention is limited to the above method. Any method of obtaining the similarity between the face object to be identified and the target face object falls within the scope of the embodiment of the present invention, for example, obtaining the similarity between the face object to be identified and the target face object through the Euclidean distance; examples are not enumerated one by one here.
As an alternative embodiment, the first data type is a single-precision floating-point type and the second data type is an integer type.

In this embodiment, the first data type is a floating-point type, for example a Float type such as Float32 or Float64, used to indicate the data type of the original feature vector. The second data type is an integer type, for example Int8 or Int16, used to indicate the data type of the second feature vector.

It should be noted that Float32 or Float64 as the first data type and Int8 or Int16 as the second data type are merely examples of the embodiment of the present invention; they do not mean that the original data type of the embodiment of the present invention is limited to Float32 or Float64, or that the second data type is limited to Int8 or Int16. Any data type that converts the original feature vector into a relatively discrete second feature vector, so as to reduce the pressure of storing the feature data, falls within the scope of the embodiment of the present invention, and examples are not enumerated one by one here.
This embodiment provides a simple, fast, and effective face feature processing method: the original feature vector of the first data type is converted according to the first type conversion relation to obtain the second feature vector of the second data type, achieving the purpose of compressing the face feature data and reducing the hard-disk and memory footprint of the features by a factor of 4, thereby reducing the pressure of storing face features. Then, when the similarity between the face object to be identified and the target face object is greater than the first target threshold, the face object to be identified is determined to be the target face object, realizing accelerated face retrieval with a speedup of up to 1 to 4 times. This avoids the high cost of retraining the model that would otherwise be incurred when the feature dimension is switched during face feature processing, achieves the technical effect of reducing the cost of processing face features, and ensures that the effect of the face features is substantially lossless.
It should be noted that, for the sake of simple description, each of the foregoing method embodiments is expressed as a series of action combinations. However, those skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.

Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, and certainly also by hardware, though in many cases the former is the preferable implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disc), including several instructions for causing a terminal device (which may be a mobile phone, computer, server, network device, or the like) to perform the methods described in the embodiments of the present invention.
The technical solution of the present invention is illustrated below with reference to a preferred embodiment.

This embodiment provides a simple, fast, and effective algorithm for face feature compression and accelerated face retrieval. The acceleration algorithm mainly quantizes face features from Float32 to Int8, thereby reducing the pressure of storing face features and accelerating face comparison.
The quantization-based face feature compression method of the embodiment of the present invention is introduced below.

Fig. 3 is a flowchart of a quantization-based face feature compression method according to an embodiment of the present invention. As shown in Fig. 3, the method includes the following steps:

Step S301: extract Float32 feature data from million-scale face images and estimate the quantization interval.

Step S302: quantize, based on the quantization interval, the Float32 feature data that need to be compressed, to obtain compressed face feature data.

Step S303: compare the compressed face feature data with the stored target face feature data to determine whether the faces corresponding to the two compared feature data come from the same person.
The extraction of Float32 feature data from million-scale face images and the estimation of the quantization interval in the embodiment of the present invention are introduced below.

Fig. 4 is a flowchart of a method for extracting Float32 feature data from face images according to an embodiment of the present invention. As shown in Fig. 4, the method includes the following steps:

Step S401: obtain million-scale face images.

Step S402: perform face detection on the million-scale face images to obtain face detection results.

This embodiment performs face detection on the input face images through a face detection network model, accurately locating the position of the face in each face image to obtain the face detection results.

Step S403: perform face key point registration according to the face detection results to obtain registration results.

After performing face detection on the million-scale face images, this embodiment detects and locates facial feature points according to the face detection results. Face key point registration may be performed according to the face detection results through a face registration network model, finding the positions of features such as the eyes, nose, and mouth on the face to obtain the registration results.

Step S404: align and crop the faces according to the registration results.

This embodiment may align the faces according to the face registration results and crop them into 248*248 images.

Step S405: input the cropped face images into a face recognition model to obtain multi-dimensional Float32 features.

The face recognition model of this embodiment is a convolutional neural network that can convert a 248*248 face image into a 1024-dimensional Float32 feature, which can be used for face comparison in application scenarios such as identity verification and face retrieval.
Step S406: estimate the quantization interval.

In this embodiment, the face feature data are normalized so that the modulus length of each face feature is 1, which ensures that after the Float32 feature data are quantized into Int8 features, the cosine distance can be computed directly in Int8 and mapped back into the Float space. Optionally, this embodiment sums the squares of the feature data of each dimension of a face feature, takes the square root of the sum to obtain the current modulus length, and then divides the value of each dimension of the original feature by this modulus length, thereby obtaining the normalized face feature data.

The upper bound max and the lower bound min of the quantization interval are determined based on the normalized face feature data, so that 99.8% of the feature data fall between the upper bound and the lower bound, while the upper bound and the lower bound are symmetric, that is, -max = min. The quantization interval is thus determined so that as many values as possible fall within it.
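A possible sketch of this estimation step (an assumption about how the symmetric bounds might be computed; the patent only states the properties -max = min and 99.8% coverage):

```python
import numpy as np

def estimate_quantization_interval(features, coverage=99.8):
    """Estimate a symmetric quantization interval (-max = min) such that
    roughly `coverage` percent of the normalized feature data fall
    between the lower and upper bounds."""
    max_val = float(np.percentile(np.abs(features), coverage))
    return -max_val, max_val

# Toy normalized feature data (each row has modulus length 1).
feats = np.random.randn(10_000, 64).astype(np.float32)
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
mn, mx = estimate_quantization_interval(feats)
print(mn == -mx, np.mean(np.abs(feats) <= mx))  # symmetric, ~0.998 coverage
```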
The quantization, based on the quantization interval, of the Float32 feature data that need to be compressed to obtain compressed face feature data in the embodiment of the present invention is introduced below.

Fig. 5 is a flowchart of a method for quantizing face feature data according to an embodiment of the present invention. As shown in Fig. 5, the method includes the following steps:

Step S501: obtain the Float32 face feature data that need to be compressed.

The Float32 face feature data of the face object to be identified are extracted from the input image. The input image may be an image containing a face; face detection, face registration, and face feature recognition may be performed on the input image to obtain the Float32 face feature data of the face object to be identified.

Step S502: quantize the extracted Float32 face feature data to obtain quantized Int8 face feature data.

In this embodiment, a mapping relation is established between the data in the Float32 space and the corresponding data in the Int8 space.

Optionally, scale = 254 / (max - min), where max indicates the upper bound value of the quantization interval, min indicates the lower bound value of the quantization interval, and 254 is determined by the range of the data interval into which the Float32 face feature data to be compressed are mapped in the Int8 space, for example -127 to 127.

In this embodiment, the mapped face feature data feature_Int8_value = round(scale * (feature_Float32_value - min) - 127), where feature_Float32_value indicates the Float32 face feature data before mapping and round denotes rounding to the nearest integer.

Optionally, feature_Int8_value' = min(127, feature_Int8_value).

Optionally, feature_Int8_value' = max(-127, feature_Int8_value).

Through the above mapping relations, the face feature data in the Float32 space are mapped into the Int8 space, realizing the quantization of the Float32 face feature data.
Below to the target face characteristic by compressed face characteristic data and storage of the embodiment of the present invention
The method being compared is introduced.
Fig. 6 is a kind of flow chart of the method for face characteristic comparing according to an embodiment of the present invention.As shown in fig. 6,
Method includes the following steps:
Step S601 obtains the dot product knot between compressed face characteristic data and the target face characteristic of storage
Fruit.
Dot product result between the compressed face characteristic data of the embodiment and the target face characteristic of storage
Data type be Int8.
Dot product result is mapped back in the space Float32, obtains similarity by step S602.
The embodiment can use the dot product between compressed face characteristic data and the target face characteristic of storage
As a result the quotient between scale square is reflected, and dot product result is mapped back the space Float32 and is hit, obtained value is determined as
Similarity between compressed face characteristic data and the target face characteristic of storage.
Step S603, judges whether similarity is greater than targets threshold.
Step S604: determine that the faces corresponding to the two compared face features are from the same person.
After judging whether the similarity is greater than the target threshold, if the similarity is greater than the target threshold, it is determined that the faces corresponding to the two compared face features are from the same person.
Step S605: determine that the faces corresponding to the two compared face features are from different persons.
After judging whether the similarity is greater than the target threshold, if the similarity is not greater than the target threshold, it is determined that the faces corresponding to the two compared face features are from different persons.
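Steps S601 and S602 can be sketched as follows. This is an illustration under the embodiment's assumptions (a symmetric quantization interval, so each Int8 code is approximately scale times the Float32 value, and unit-norm features, so the Float32 dot product is the cosine similarity); the names are not from the patent.

```python
def int8_similarity(q1, q2, scale):
    # S601: dot product of two Int8 feature vectors.
    # S602: map it back to Float32 by dividing by scale squared,
    # valid because q ≈ scale * float_value for a symmetric interval.
    dot = sum(a * b for a, b in zip(q1, q2))
    return dot / (scale * scale)

scale = 254.0 / 2.0  # interval [-1, 1], since the features are unit-norm
q1 = [76, 102]       # quantized from the unit vector (0.6, 0.8)
q2 = [102, 76]       # quantized from the unit vector (0.8, 0.6)
sim = int8_similarity(q1, q2, scale)  # close to the true Float32 cosine 0.96
```

The similarity from the Int8 dot product stays within the quantization error of the exact Float32 cosine, which is what makes step S603's threshold comparison meaningful.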
Fig. 7 is a schematic diagram of quantization-based face feature compression according to an embodiment of the present invention. As shown in Fig. 7, quantization-based face feature compression includes a quantization-interval estimation stage, a feature quantization stage, and a feature comparison stage. Optionally, in the quantization-interval estimation stage, face detection is performed on millions of input face images by a face detection network model to obtain face detection results; a face registration network model then performs registration according to the face detection results; according to the registration results, the faces are aligned and cropped into 248*248 images; the cropped face images are then input into a face recognition model, which is a convolutional neural network that converts each 248*248 face image into a face feature, for example, a group of 1024-dimensional Float32 features. These features can be used for face comparison in face verification and face retrieval application scenarios.
In this embodiment, feature normalization makes the norm of each face feature equal to 1. This step ensures that, after the Float32 feature is quantized into an Int8 feature, the cosine distance can be computed directly in Int8 and then mapped back into the Float32 space. Specifically, the squares of the face feature data in each dimension are summed and the square root is taken to obtain the norm, and the face feature data in each dimension of the original feature is then divided by the norm to obtain the normalized feature.
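The normalization described above is the standard L2 normalization and can be sketched as follows (a minimal illustration; the function name is not from the patent):

```python
import math

def l2_normalize(feature):
    # Sum the squares of every dimension, take the square root to get
    # the norm, then divide each dimension by that norm.
    norm = math.sqrt(sum(x * x for x in feature))
    return [x / norm for x in feature]

unit = l2_normalize([3.0, 4.0])  # [0.6, 0.8], the norm becomes 1
```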
An upper bound max and a lower bound min are obtained based on the normalized features, such that 99.8% of the feature data falls between the upper bound and the lower bound, and the upper bound and the lower bound are symmetric, i.e. -max = min. The quantization interval is thereby determined so that as many values as possible fall within it.
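One way to realize the interval estimation above is a simple percentile of absolute values; the patent specifies only the 99.8% coverage and the symmetry -max = min, so the exact percentile method below is an assumption for illustration.

```python
def estimate_interval(values, coverage=0.998):
    # Sort absolute values and take the magnitude below which the
    # requested fraction of the data falls; symmetrize so -max = min.
    magnitudes = sorted(abs(v) for v in values)
    idx = min(len(magnitudes) - 1, int(coverage * len(magnitudes)))
    bound = magnitudes[idx]
    return -bound, bound
```

Because the bound ignores the most extreme 0.2% of values, a few outliers cannot stretch the interval and waste Int8 codes; the outliers are simply clamped during quantization.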
In the feature quantization stage, for the Float32 feature data to be compressed, the mapping relationship from the Float32 space to the Int8 space may be: scale = 254/(max - min);
The mapped face feature data feature_Int8_value = round(scale * (feature_Float32_value - min) - 127), where feature_Float32_value denotes the Float32 face feature data before mapping, and round denotes rounding to the nearest integer.
Optionally, feature_Int8_value' = min(127, feature_Int8_value).
Optionally, feature_Int8_value' = max(-127, feature_Int8_value).
Through the above mapping relationship, the Float32 feature data to be compressed is quantized to obtain the compressed Int8 feature data.
In the feature comparison stage, the inner product of the compared features is computed and mapped back into the Float32 space to obtain the comparison distance, which indicates the degree of similarity between the compared features. Optionally, the comparison distance in the Float32 space can be obtained by dividing the dot product of the two face features in the Int8 space by the square of scale. After the mapping is completed, if the comparison distance is higher than the judgment threshold, it is determined that the faces corresponding to the two compared face features are from the same person; otherwise, the faces corresponding to the two compared face features are from different persons.
The application environment of this embodiment of the present invention may be, but is not limited to, the application environment in the above embodiments, which is not repeated here. This embodiment of the present invention provides an optional specific application for implementing the above processing method of face features.
Fig. 8 is a schematic diagram of a face verification scenario according to an embodiment of the present invention. As shown in Fig. 8, in this embodiment a terminal may detect a face object to be identified. For example, the terminal performs face detection on an input image of the face object to be identified to obtain a face image in which the position of the face is accurately located; facial landmarks are then detected and located according to the face detection result, for example, the positions of features such as the eyes, nose and mouth are found on the face; the face is then aligned and cropped; finally, 1024-dimensional Float32 feature data is extracted from the cropped face image, and the 1024-dimensional Float32 feature data is quantized based on the quantization interval to obtain compressed Int8 face feature data. The similarity between the compressed face feature data and the stored target face feature data of the target face object can then be obtained, where the target face object may be a face object whose legal identity information has been entered in advance.
After the similarity between the compressed face feature data and the stored target face feature data of the target face object is obtained, if the similarity is greater than the threshold, it is determined that the face object to be identified is the target face object, i.e. the identity of the person to whom the face object belongs meets the requirement, and a result indicating that the verification has passed is displayed. Optionally, the user may then choose to proceed to the next operation or return to the previous operation. If the similarity is not greater than the threshold, it is determined that the face object to be identified is not the target face object, i.e. the identity of the person to whom the face object belongs does not meet the requirement, and a result indicating that the verification has failed is displayed. The user may then choose to end the current operation, or return to the previous operation and verify again: the terminal performs face detection on the input image of the face object to be identified again, and the above method is repeated until the identity of the person to whom the face object belongs meets the requirement and a result indicating that the verification has passed is displayed, or until the number of verifications exceeds a predetermined number, at which point the face verification ends.
Fig. 9 is a schematic diagram of a face retrieval scenario according to an embodiment of the present invention. As shown in Fig. 9, a terminal detects a face object to be identified; the Float32 feature data of the face image can be obtained by the face recognition method shown in Fig. 9, and the Float32 feature data is quantized to obtain discrete Int8 feature data.
In this embodiment, multiple groups of face feature data of multiple face objects are stored in a database in advance, for example, the face feature data of face object A, the face feature data of face object B, and the face feature data of face object C. The multiple face objects may be faces whose identity information has been entered in advance. Each group of face feature data among the multiple groups is traversed; the group of face features traversed each time is determined as the target face feature data, and the Int8 feature data is compared with the target face feature data determined in each traversal to obtain the similarities between the face object to be identified and the multiple face objects, for example, similarity A, similarity B and similarity C, where similarity A > similarity B > similarity C.
Optionally, similarity A is 96%, similarity B is 89%, and similarity C is 82%. It is judged whether the maximum among the similarities is greater than the threshold of 90%, for example, whether similarity A is greater than the threshold. If similarity A is greater than the threshold, it is determined that the face object to be identified and the face object corresponding to similarity A are from the same person; face object A can then be determined as the retrieval result for the face object to be identified, and the retrieval result A is displayed. The user may then choose to return to the previous operation or proceed to the next operation.
Optionally, if similarity A is 70%, similarity B is 65%, and similarity C is 55%, so that the maximum similarity of 70% is not greater than the threshold of 90%, it is determined that the face object to be identified is not any of the multiple face objects, i.e. no similar face object can be retrieved for the face object to be identified. The user may choose to return to the previous operation, and the terminal performs face detection on the input image of the face object to be identified again; the above method is repeated until a retrieval result for the face object to be identified is obtained, or until the number of retrievals exceeds a predetermined number, at which point the face retrieval ends.
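The traversal-and-threshold flow above can be sketched as follows. The function shape, the dictionary database, and the return convention are illustrative assumptions; in practice the similarity callback would be the Int8 dot-product comparison described earlier.

```python
def retrieve(similarity_of, database, threshold=0.90):
    # Traverse the stored feature groups, treating each in turn as the
    # target feature; keep the best similarity and accept it only if
    # it exceeds the threshold, as in the Fig. 9 flow.
    best_name, best_sim = None, -1.0
    for name, target in database.items():
        sim = similarity_of(target)
        if sim > best_sim:
            best_name, best_sim = name, sim
    if best_sim > threshold:
        return best_name, best_sim
    return None, best_sim
```

With the example similarities from the text, {A: 0.96, B: 0.89, C: 0.82} retrieves A, while {A: 0.70, B: 0.65, C: 0.55} retrieves nothing because 0.70 does not exceed the 0.90 threshold.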
The above method can be applied to all face retrieval scenarios. For example, a suspect database is established, and the currently obtained face image is retrieved in the suspect database. If a similar face image can be retrieved, it can be determined that the person to whom the current face image belongs may be a criminal; if no similar face image can be retrieved, it can be determined that the person to whom the current face image belongs may not be a criminal. The efficiency of face comparison is thus effectively improved.
This embodiment realizes face retrieval by the above method. By converting the Float32 feature data, which is a continuous quantity, into discrete Int8 feature data, the storage pressure on the face feature data is reduced, the amount of computation in face retrieval is lowered, and the speed of face retrieval is thereby accelerated.
It should be noted that the scenario embodiments shown in Fig. 8 and Fig. 9 are only examples of embodiments of the present invention; the application scenarios of the embodiments of the present invention are not limited to the above, and any scenario in which face comparison is performed based on the quantization-based face feature compression method falls within the scope of the embodiments of the present invention and is not illustrated one by one here.
In this embodiment, Float32 feature data is extracted from millions of face images to estimate the quantization interval; the extracted Float32 feature data is quantized based on the quantization interval to obtain compressed face feature data; and the compressed face feature data is compared with the stored target face feature data to determine whether the faces corresponding to the two compared feature data are from the same person. No retraining of the deep network is required, the pressure of processing the face feature data is reduced, and the effect of the compressed face features is substantially lossless; a 4x compression of disk storage and memory occupation and a 1-4x retrieval acceleration can be achieved.
According to another aspect of the embodiments of the present invention, a face feature processing apparatus for implementing the above processing method of face features is further provided. Fig. 10 is a schematic diagram of a face feature processing apparatus according to an embodiment of the present invention. As shown in Fig. 10, the face feature processing apparatus 100 may include: a first extraction unit 10, a first processing unit 20, a conversion unit 30, a first acquisition unit 40 and a first determination unit 50.
The first extraction unit 10 is configured to perform feature extraction on a face object to be identified in a target image to obtain an original feature vector of a first data type.
The first processing unit 20 is configured to normalize the original feature vector to obtain a first feature vector.
The conversion unit 30 is configured to convert the first feature vector according to a first type conversion relationship to obtain a second feature vector of a second data type, where the storage space occupied by the second feature vector is smaller than the storage space occupied by the original feature vector.
The first acquisition unit 40 is configured to obtain a pre-stored target feature vector of a target face object, and compare the second feature vector with the target feature vector to obtain the similarity between the face object to be identified and the target face object.
The first determination unit 50 is configured to determine that the face object to be identified is the target face object when the similarity is greater than a first target threshold.
Optionally, the apparatus further includes: a second extraction unit, configured to perform feature extraction on face objects in multiple image samples respectively to obtain multiple feature vector samples of the first data type, before the first feature vector is converted according to the first type conversion relationship to obtain the second feature vector of the second data type; a second processing unit, configured to normalize the multiple feature vector samples; a second acquisition unit, configured to obtain a first data interval of the multiple normalized feature vector samples; a filter unit, configured to filter the first data interval to obtain a key interval; and a second determination unit, configured to determine the first type conversion relationship based on the key interval.
Optionally, the second determination unit includes: a first acquisition module, configured to obtain first upper-bound data of the key interval, first lower-bound data of the key interval, second upper-bound data of a second data interval and second lower-bound data of the second data interval, where the second data interval is associated with the second data type; and a first determination module, configured to determine, from the first upper-bound data, the first lower-bound data, the second upper-bound data and the second lower-bound data, a target model indicating the first type conversion relationship.
Optionally, the conversion unit includes: a processing module, configured to process the feature data in the first feature vector by the target model to obtain an output result; and a second determination module, configured to determine the second upper-bound data as the feature data of the second feature vector when the output result is greater than the second upper-bound data, determine the second lower-bound data as the feature data of the second feature vector when the output result is less than the second lower-bound data, determine the output result as the feature data of the second feature vector when the output result is greater than or equal to the second lower-bound data and less than or equal to the second upper-bound data, and determine the second feature vector from the feature data of the second feature vector.
Optionally, the second acquisition unit includes: a second acquisition module, configured to obtain the feature data of each normalized feature vector sample in multiple dimensions to obtain multiple pieces of feature data; and a third determination module, configured to determine the first data interval corresponding to the multiple pieces of feature data.
It should be noted that the first extraction unit 10 in this embodiment may be used to perform step S202 in this embodiment of the present application, the first processing unit 20 may be used to perform step S204, the conversion unit 30 may be used to perform step S206, the first acquisition unit 40 may be used to perform step S208, and the first determination unit 50 may be used to perform step S210.
In this embodiment, feature extraction is performed on a face object to be identified in a target image to obtain an original feature vector of a first data type; the normalized original feature vector is converted according to a first type conversion relationship to obtain a second feature vector of a second data type, where the storage space occupied by the second feature vector is smaller than the storage space occupied by the original feature vector; the second feature vector is then compared with the pre-stored target feature vector of the target face object, and when the similarity between the face object to be identified and the target face object is greater than a first target threshold, it is determined that the face object to be identified is the target face object. That is, by converting the original feature vector of the first data type according to the first type conversion relationship into the second feature vector of the second data type, the purpose of compressing the face feature data is achieved and the pressure of storing face features is reduced. This avoids the problem that, when face features are processed, switching the feature dimension requires costly retraining of the model; the technical effect of reducing the cost of processing face features is realized, thereby solving the technical problem in the related art that the cost of processing face features is high.
It should be noted here that the examples and application scenarios realized by the above units and modules are the same as those of the corresponding steps, but are not limited to the contents disclosed in the above embodiments. It should also be noted that, as part of the apparatus, the above units and modules may run in the hardware environment shown in Fig. 1 and may be implemented in software or in hardware, where the hardware environment includes a network environment.
According to another aspect of the embodiments of the present invention, an electronic device for implementing the above processing method of face features is further provided.
Fig. 11 is a structural block diagram of an electronic device according to an embodiment of the present invention. As shown in Fig. 11, the electronic device includes a memory 1102 and a processor 1104. A computer program is stored in the memory, and the processor is configured to perform the steps in any of the above method embodiments by means of the computer program.
Optionally, in this embodiment, the above electronic device may be located in at least one of multiple network devices of a computer network.
Optionally, in this embodiment, the above processor 1104 may be configured to perform the following steps by means of the computer program:
S1: perform feature extraction on a face object to be identified in a target image to obtain an original feature vector of a first data type;
S2: normalize the original feature vector to obtain a first feature vector;
S3: convert the first feature vector according to a first type conversion relationship to obtain a second feature vector of a second data type, where the storage space occupied by the second feature vector is smaller than the storage space occupied by the original feature vector;
S4: obtain a pre-stored target feature vector of a target face object, and compare the second feature vector with the target feature vector to obtain the similarity between the face object to be identified and the target face object;
S5: determine that the face object to be identified is the target face object when the similarity is greater than a first target threshold.
Optionally, those skilled in the art will appreciate that the structure shown in Fig. 11 is only illustrative; the electronic device may also be a terminal device such as a smart phone (e.g. an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID) or a PAD. Fig. 11 does not limit the structure of the above electronic device. For example, the electronic device may include more or fewer components than shown in Fig. 11 (such as a network interface), or have a configuration different from that shown in Fig. 11.
The memory 1102 may be used to store software programs and modules, such as the program instructions/modules corresponding to the processing method and apparatus of face features in the embodiments of the present invention. The processor 1104 runs the software programs and modules stored in the memory 1102, thereby performing various functional applications and data processing, i.e. realizing the above processing method of face features. The memory 1102 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memories or other non-volatile solid-state memories. In some examples, the memory 1102 may further include memories remotely located relative to the processor 1104, and these remote memories may be connected to the terminal through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network and combinations thereof. The memory 1102 may specifically be, but is not limited to being, used to store information such as the extracted face feature data of the face object to be identified. As an example, as shown in Fig. 11, the above memory 1102 may include, but is not limited to, the first extraction unit 10, the first processing unit 20, the conversion unit 30, the first acquisition unit 40 and the first determination unit 50 of the above face feature processing apparatus 100. In addition, it may also include, but is not limited to, other module units of the above face feature processing apparatus, which are not repeated in this example.
The above transmission device 1106 is used to receive or send data via a network. Specific examples of the above network may include wired networks and wireless networks. In one example, the transmission device 1106 includes a network interface controller (NIC), which can be connected to other network devices and a router through a cable so as to communicate with the Internet or a local area network. In another example, the transmission device 1106 is a radio frequency (RF) module, which is used to communicate with the Internet wirelessly.
In addition, the above electronic device further includes: a display 1108, configured to display the execution state of the above object code in the first target function; and a connection bus 1110, configured to connect the module components in the above electronic device.
According to yet another aspect of the embodiments of the present invention, a storage medium is further provided. A computer program is stored in the storage medium, and the computer program is configured to perform the steps in any of the above method embodiments when run.
Optionally, in this embodiment, the above storage medium may be configured to store a computer program for performing the following steps:
S1: perform feature extraction on a face object to be identified in a target image to obtain an original feature vector of a first data type;
S2: normalize the original feature vector to obtain a first feature vector;
S3: convert the first feature vector according to a first type conversion relationship to obtain a second feature vector of a second data type, where the storage space occupied by the second feature vector is smaller than the storage space occupied by the original feature vector;
S4: obtain a pre-stored target feature vector of a target face object, and compare the second feature vector with the target feature vector to obtain the similarity between the face object to be identified and the target face object;
S5: determine that the face object to be identified is the target face object when the similarity is greater than a first target threshold.
Optionally, in this embodiment, the above storage medium may be configured to store a computer program for performing the following steps:
S1: before the first feature vector is converted according to the first type conversion relationship to obtain the second feature vector of the second data type, perform feature extraction on face objects in multiple image samples respectively to obtain multiple feature vector samples of the first data type;
S2: normalize the multiple feature vector samples;
S3: obtain a first data interval of the multiple normalized feature vector samples;
S4: filter the first data interval to obtain a key interval, and determine the first type conversion relationship based on the key interval.
Optionally, in this embodiment, the above storage medium may be configured to store a computer program for performing the following steps:
S1: obtain first upper-bound data of the key interval, first lower-bound data of the key interval, second upper-bound data of a second data interval and second lower-bound data of the second data interval, where the second data interval is associated with the second data type;
S2: determine, from the first upper-bound data, the first lower-bound data, the second upper-bound data and the second lower-bound data, a target model indicating the first type conversion relationship.
Optionally, in this embodiment, the above storage medium may be configured to store a computer program for performing the following steps:
S1: process the feature data in the first feature vector by the target model to obtain an output result;
S2: when the output result is greater than the second upper-bound data, determine the second upper-bound data as the feature data of the second feature vector;
S3: when the output result is less than the second lower-bound data, determine the second lower-bound data as the feature data of the second feature vector;
S4: when the output result is greater than or equal to the second lower-bound data and less than or equal to the second upper-bound data, determine the output result as the feature data of the second feature vector;
S5: determine the second feature vector from the feature data of the second feature vector.
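The steps S1-S5 above amount to applying the target model and clamping its output to the second data interval, and can be sketched as follows (names are illustrative; the example target model uses the Int8 mapping from the earlier embodiment with scale = 1270 and min = -0.1 as an assumed instance):

```python
def convert_feature(feature, target_model, lower, upper):
    result = []
    for x in feature:
        y = target_model(x)            # S1: apply the target model
        y = min(upper, max(lower, y))  # S2-S4: clamp to [lower, upper]
        result.append(y)
    return result                      # S5: the second feature vector

model = lambda x: round(1270 * (x + 0.1)) - 127   # assumed scale=1270, min=-0.1
print(convert_feature([-0.2, 0.0, 0.2], model, -127, 127))  # [-127, 0, 127]
```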
Optionally, in this embodiment, the above storage medium may be configured to store a computer program for performing the following steps:
S1: obtain the feature data of each normalized feature vector sample in multiple dimensions to obtain multiple pieces of feature data;
S2: determine the first data interval corresponding to the multiple pieces of feature data.
Optionally, in this embodiment, the above storage medium may be configured to store a computer program for performing the following step:
from the first data interval, filter out the feature data greater than first feature data and less than second feature data to obtain a key interval, where the proportion of the feature data in the key interval to the feature data in the first data interval is greater than a second target threshold.
Optionally, in this embodiment, the above storage medium may be configured to store a computer program for performing the following steps:
S1: obtain the feature vectors of multiple pre-stored predetermined face objects, where the feature vector of each predetermined face object is a normalized feature vector of the second data type;
S2: determine the feature vector of each traversed predetermined face object as the target feature vector of the target face object.
Optionally, in this embodiment, the above storage medium may be configured to store a computer program for performing the following steps:
S1: obtain a first cosine distance between the second feature vector and the target feature vector of the second data type;
S2: convert the first cosine distance according to a second type conversion relationship to obtain a second cosine distance of the first data type;
S3: determine the second cosine distance as the similarity.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments, and details are not repeated here.
Optionally, in this embodiment, those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be completed by instructing hardware related to the terminal device through a program, and the program may be stored in a computer-readable storage medium, which may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, etc.
The serial numbers of the above embodiments of the present invention are only for description and do not represent the relative merits of the embodiments.
If the integrated units in the above embodiments are realized in the form of software functional units and sold or used as independent products, they may be stored in the above computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for parts not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed client may be realized in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection of units or modules through some interfaces, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
The above descriptions are merely preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art may make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.
Claims (15)
1. A method for processing face features, comprising:
performing feature extraction on a face object to be identified in a target image to obtain an original feature vector of a first data type;
normalizing the original feature vector to obtain a first feature vector;
performing conversion processing on the first feature vector according to a first type conversion relation to obtain a second feature vector of a second data type, wherein a storage space occupied by the second feature vector is smaller than a storage space occupied by the original feature vector;
obtaining a pre-stored target feature vector of a target face object, and comparing the second feature vector with the target feature vector to obtain a similarity between the face object to be identified and the target face object; and
determining, in a case where the similarity is greater than a first target threshold, that the face object to be identified is the target face object.
2. The method according to claim 1, wherein before the conversion processing is performed on the first feature vector according to the first type conversion relation to obtain the second feature vector of the second data type, the method further comprises:
performing feature extraction on face objects in multiple image samples respectively to obtain multiple feature vector samples of the first data type;
normalizing the multiple feature vector samples;
obtaining a first data interval of the multiple feature vector samples after the normalization;
filtering the first data interval to obtain a key interval; and
determining the first type conversion relation based on the key interval.
3. The method according to claim 2, wherein determining the first type conversion relation based on the key interval comprises:
obtaining first upper bound data of the key interval, first lower bound data of the key interval, second upper bound data of a second data interval, and second lower bound data of the second data interval, wherein the second data interval is associated with the second data type; and
determining, from the first upper bound data, the first lower bound data, the second upper bound data, and the second lower bound data, a target model used to indicate the first type conversion relation.
4. The method according to claim 3, wherein obtaining the second feature vector of the second data type comprises:
processing feature data in the first feature vector by the target model to obtain an output result;
determining, in a case where the output result is greater than the second upper bound data, the second upper bound data as feature data of the second feature vector;
determining, in a case where the output result is less than the second lower bound data, the second lower bound data as feature data of the second feature vector;
determining, in a case where the output result is greater than or equal to the second lower bound data and less than or equal to the second upper bound data, the output result as feature data of the second feature vector; and
determining the second feature vector from the feature data of the second feature vector.
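Read together, claims 3 and 4 describe a clamp-to-bounds quantization: a target model maps feature data from the float key interval onto the integer interval, and any output falling outside the second data interval is replaced by the nearest bound. The following is a minimal sketch assuming a linear target model; the patent does not fix the model's concrete form, and all names and bound values below are illustrative:

```python
import numpy as np

def make_target_model(a1: float, b1: float, a2: int, b2: int):
    """Linear map from the float key interval [a1, b1] onto the
    integer interval [a2, b2] -- an assumed form of the target model."""
    scale = (b2 - a2) / (b1 - a1)

    def model(first_feature_vector: np.ndarray) -> np.ndarray:
        out = np.rint((first_feature_vector - a1) * scale + a2)
        # Claim 4: outputs beyond the second data interval are clamped
        # to its upper/lower bound data.
        out = np.clip(out, a2, b2)
        return out.astype(np.int8)

    return model

model = make_target_model(-0.25, 0.25, -127, 127)
v = np.array([-0.5, 0.0, 0.25], dtype=np.float32)  # first feature vector
q = model(v)  # second feature vector; -0.5 lies outside the key interval and is clamped
```

Because the key interval excludes rare extreme values (claim 6), clamping loses little information while letting the integer range resolve the bulk of the data more finely.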
5. The method according to claim 2, wherein obtaining the first data interval of the multiple feature vector samples after the normalization comprises:
obtaining feature data of each feature vector sample in multiple dimensions after the normalization to obtain multiple pieces of feature data; and
determining the first data interval corresponding to the multiple pieces of feature data.
6. The method according to claim 2, wherein filtering the first data interval to obtain the key interval comprises:
selecting, from the first data interval, feature data that is greater than first feature data and less than second feature data to obtain the key interval, wherein a ratio of the feature data in the key interval to the feature data in the first data interval is greater than a second target threshold.
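The filtering in claim 6 can be sketched as percentile trimming: cut the extreme tails of the first data interval while keeping the key interval's share of the feature data above the second target threshold. A hedged sketch assuming a symmetric quantile cut, which the patent does not mandate (the bound names in the comments are assumptions):

```python
import numpy as np

def key_interval(feature_data: np.ndarray, coverage: float = 0.99):
    """Trim the tails of the first data interval so that the remaining
    key interval still covers at least `coverage` of the feature data;
    `coverage` plays the role of the second target threshold (assumed)."""
    tail = (1.0 - coverage) / 2.0
    lower = np.quantile(feature_data, tail)        # assumed "first feature data"
    upper = np.quantile(feature_data, 1.0 - tail)  # assumed "second feature data"
    return float(lower), float(upper)

# Stand-in for normalized feature data collected from image samples.
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 0.1, size=10_000)
a1, b1 = key_interval(samples, coverage=0.99)
```

A tighter key interval gives the later integer mapping more resolution per unit of float range, at the cost of clamping more outliers.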
7. The method according to claim 1, wherein obtaining the pre-stored target feature vector of the target face object comprises:
obtaining pre-stored feature vectors of multiple predetermined face objects, wherein the feature vector of each predetermined face object is a normalized feature vector of the second data type; and
determining, by traversing the feature vectors of the predetermined face objects, the target feature vector of the target face object.
8. The method according to claim 1, wherein comparing the second feature vector with the target feature vector to obtain the similarity between the face object to be identified and the target face object comprises:
obtaining a first cosine distance between the second feature vector and the target feature vector, both of the second data type;
performing conversion processing on the first cosine distance according to a second type conversion relation to obtain a second cosine distance of the first data type; and
determining the second cosine distance as the similarity.
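Claims 8 and 9 suggest computing the first cosine distance directly on the integer (second data type) vectors and only then converting it back to the single-precision first data type. A sketch under two assumptions not fixed by the patent: the key interval is symmetric, so quantization reduces to pure scaling, and the second type conversion relation is simply division by the squared scale:

```python
import numpy as np

SCALE = 508.0  # assumed quantization scale (integer units per float unit)

def int_cosine(q1: np.ndarray, q2: np.ndarray) -> int:
    """First cosine distance accumulated in integer arithmetic; for
    normalized vectors the dot product equals the cosine similarity."""
    return int(np.dot(q1.astype(np.int32), q2.astype(np.int32)))

def to_first_data_type(first_cosine: int) -> np.float32:
    """Assumed second type conversion relation: undo the scale that was
    applied to each factor of the dot product."""
    return np.float32(first_cosine / (SCALE * SCALE))

v1 = np.array([0.6, 0.8], dtype=np.float32)  # normalized float features
v2 = np.array([0.8, 0.6], dtype=np.float32)
q1 = np.rint(v1 * SCALE).astype(np.int16)    # second feature vector
q2 = np.rint(v2 * SCALE).astype(np.int16)    # target feature vector
similarity = to_first_data_type(int_cosine(q1, q2))  # close to v1 . v2 = 0.96
```

Accumulating in `int32` keeps the comparison loop in cheap integer operations; the single division back to float happens once per pair, which is the point of deferring the second type conversion.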
9. The method according to any one of claims 1 to 8, wherein the first data type is a single-precision floating-point type, and the second data type is an integer type.
10. An apparatus for processing face features, comprising:
a first extraction unit, configured to perform feature extraction on a face object to be identified in a target image to obtain an original feature vector of a first data type;
a first processing unit, configured to normalize the original feature vector to obtain a first feature vector;
a conversion unit, configured to perform conversion processing on the first feature vector according to a first type conversion relation to obtain a second feature vector of a second data type, wherein a storage space occupied by the second feature vector is smaller than a storage space occupied by the original feature vector;
a first obtaining unit, configured to obtain a pre-stored target feature vector of a target face object, and compare the second feature vector with the target feature vector to obtain a similarity between the face object to be identified and the target face object; and
a first determining unit, configured to determine, in a case where the similarity is greater than a first target threshold, that the face object to be identified is the target face object.
11. The apparatus according to claim 10, further comprising:
a second extraction unit, configured to, before the conversion processing is performed on the first feature vector according to the first type conversion relation to obtain the second feature vector of the second data type, perform feature extraction on face objects in multiple image samples respectively to obtain multiple feature vector samples of the first data type;
a second processing unit, configured to normalize the multiple feature vector samples;
a second obtaining unit, configured to obtain a first data interval of the multiple feature vector samples after the normalization;
a filtering unit, configured to filter the first data interval to obtain a key interval; and
a second determining unit, configured to determine the first type conversion relation based on the key interval.
12. The apparatus according to claim 11, wherein the second determining unit comprises:
a first obtaining module, configured to obtain first upper bound data of the key interval, first lower bound data of the key interval, second upper bound data of a second data interval, and second lower bound data of the second data interval, wherein the second data interval is associated with the second data type; and
a first determining module, configured to determine, from the first upper bound data, the first lower bound data, the second upper bound data, and the second lower bound data, a target model used to indicate the first type conversion relation.
13. The apparatus according to claim 12, wherein the conversion unit comprises:
a processing module, configured to process feature data in the first feature vector by the target model to obtain an output result; and
a second determining module, configured to: determine, in a case where the output result is greater than the second upper bound data, the second upper bound data as feature data of the second feature vector; determine, in a case where the output result is less than the second lower bound data, the second lower bound data as feature data of the second feature vector; determine, in a case where the output result is greater than or equal to the second lower bound data and less than or equal to the second upper bound data, the output result as feature data of the second feature vector; and determine the second feature vector from the feature data of the second feature vector.
14. The apparatus according to claim 11, wherein the second obtaining unit comprises:
a second obtaining module, configured to obtain feature data of each feature vector sample in multiple dimensions after the normalization to obtain multiple pieces of feature data; and
a third determining module, configured to determine the first data interval corresponding to the multiple pieces of feature data.
15. A storage medium, wherein a computer program is stored in the storage medium, and the computer program is configured to execute, when run, the method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811506344.4A CN110147710B (en) | 2018-12-10 | 2018-12-10 | Method and device for processing human face features and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110147710A true CN110147710A (en) | 2019-08-20 |
CN110147710B CN110147710B (en) | 2023-04-18 |
Family
ID=67588394
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811506344.4A Active CN110147710B (en) | 2018-12-10 | 2018-12-10 | Method and device for processing human face features and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110147710B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020000466A1 (en) * | 1995-12-18 | 2002-01-03 | Mark G. Lucera | Laser scanning bar code symbol reader employing variable pass-band filter structures having frequency response characteristics controlled by time-measurement of laser-scanned bar code symbol |
CN104573696A (en) * | 2014-12-29 | 2015-04-29 | 杭州华为数字技术有限公司 | Method and device for processing face feature data |
CN108090433A (en) * | 2017-12-12 | 2018-05-29 | 厦门集微科技有限公司 | Face identification method and device, storage medium, processor |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110942014A (en) * | 2019-11-22 | 2020-03-31 | 浙江大华技术股份有限公司 | Face recognition rapid retrieval method and device, server and storage device |
CN110942014B (en) * | 2019-11-22 | 2023-04-07 | 浙江大华技术股份有限公司 | Face recognition rapid retrieval method and device, server and storage device |
CN111178540A (en) * | 2019-12-29 | 2020-05-19 | 浪潮(北京)电子信息产业有限公司 | Training data transmission method, device, equipment and medium |
CN111191612A (en) * | 2019-12-31 | 2020-05-22 | 深圳云天励飞技术有限公司 | Video image matching method and device, terminal equipment and readable storage medium |
CN111291682A (en) * | 2020-02-07 | 2020-06-16 | 浙江大华技术股份有限公司 | Method and device for determining target object, storage medium and electronic device |
CN111428652A (en) * | 2020-03-27 | 2020-07-17 | 恒睿(重庆)人工智能技术研究院有限公司 | Biological characteristic management method, system, equipment and medium |
CN111428652B (en) * | 2020-03-27 | 2021-06-08 | 恒睿(重庆)人工智能技术研究院有限公司 | Biological characteristic management method, system, equipment and medium |
CN111652242A (en) * | 2020-04-20 | 2020-09-11 | 北京迈格威科技有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN111652242B (en) * | 2020-04-20 | 2023-07-04 | 北京迈格威科技有限公司 | Image processing method, device, electronic equipment and storage medium |
US11574502B2 (en) | 2020-06-28 | 2023-02-07 | Beijing Xiaomi Pinecone Electronics Co., Ltd. | Method and device for identifying face, and computer-readable storage medium |
CN112241686A (en) * | 2020-09-16 | 2021-01-19 | 四川天翼网络服务有限公司 | Trajectory comparison matching method and system based on feature vectors |
CN112633297B (en) * | 2020-12-28 | 2023-04-07 | 浙江大华技术股份有限公司 | Target object identification method and device, storage medium and electronic device |
CN112633297A (en) * | 2020-12-28 | 2021-04-09 | 浙江大华技术股份有限公司 | Target object identification method and device, storage medium and electronic device |
Also Published As
Publication number | Publication date |
---|---|
CN110147710B (en) | 2023-04-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110147710A (en) | Processing method, device and the storage medium of face characteristic | |
CN110188641B (en) | Image recognition and neural network model training method, device and system | |
CN107844744A (en) | With reference to the face identification method, device and storage medium of depth information | |
JP4553650B2 (en) | Image group representation method, descriptor derived by representation method, search method, apparatus, computer program, and storage medium | |
EP2907058B1 (en) | Incremental visual query processing with holistic feature feedback | |
US11468682B2 (en) | Target object identification | |
CN106228188A (en) | Clustering method, device and electronic equipment | |
CN108875907B (en) | Fingerprint identification method and device based on deep learning | |
CN109190470A (en) | Pedestrian recognition methods and device again | |
CN107423306B (en) | Image retrieval method and device | |
CN111814744A (en) | Face detection method and device, electronic equipment and computer storage medium | |
CN110674677A (en) | Multi-mode multi-layer fusion deep neural network for anti-spoofing of human face | |
CN108874889A (en) | Objective body search method, system and device based on objective body image | |
JP6460926B2 (en) | System and method for searching for an object in a captured image | |
CN112966574A (en) | Human body three-dimensional key point prediction method and device and electronic equipment | |
CN112735437A (en) | Voiceprint comparison method, system and device and storage mechanism | |
CN116229528A (en) | Living body palm vein detection method, device, equipment and storage medium | |
CN105354228A (en) | Similar image searching method and apparatus | |
CN113157962B (en) | Image retrieval method, electronic device, and storage medium | |
CN110457704A (en) | Determination method, apparatus, storage medium and the electronic device of aiming field | |
CN110378304A (en) | Skin condition detection method, device, equipment and storage medium | |
CN113420683A (en) | Face image recognition method, device, equipment and computer readable storage medium | |
CN108647640A (en) | The method and electronic equipment of recognition of face | |
CN108090117A (en) | A kind of image search method and device, electronic equipment | |
CN111339973A (en) | Object identification method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||