CN107798308A - A face recognition method based on a short-video training approach - Google Patents
A face recognition method based on a short-video training approach
- Publication number: CN107798308A (application number CN201711095734.2A)
- Authority
- CN
- China
- Prior art keywords
- face
- target
- reference
- feature value
- target face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a face recognition method based on a short-video training approach. The method includes: obtaining a short video containing a target face; identifying and tracking the target face in the short video and extracting several target face pictures; performing feature extraction on each extracted target face picture to generate target face feature values corresponding to the pictures; combining the groups of target face feature values corresponding to the pictures into a target face feature matrix; and comparing the target face feature matrix with a preset reference face feature matrix to verify the target face. By extracting image feature values directly, the invention achieves accurate face recognition without building a face model; by combining coarse positioning with fine positioning, it achieves fast face matching; the recognition algorithm is therefore both highly accurate and highly resistant to forgery.
Description
Technical field
The present invention relates to the field of video surveillance, and in particular to a face recognition method based on a short-video training approach.
Background art
Identity recognition is a problem people encounter constantly in daily life, and it is particularly important in national defence, scientific research, security, intelligent manufacturing and many other areas. Face recognition, as a major branch of identity recognition, has always been a research focus in industry because of its significance for national security, the safety of people's lives and property, and counter-terrorism. With the rapid development of microelectronics and computer technology and the continuous improvement of digital image processing, pattern recognition and artificial intelligence, the applications of face recognition technology keep expanding, for example to criminal identification, security verification and rapid demographic statistics, and are gradually becoming feasible both technically and economically.
Face recognition generally comprises three steps: face detection, facial feature extraction, and face recognition and verification. Current methods mainly include:
1) Template matching. Several standard face templates are stored to describe the whole face and its features; the correlation between the input image and the stored templates is computed and used for detection.
2) Appearance-based methods. In contrast to template matching, models or templates are learned from a training image set and then used for detection.
Objectively, the shape, size, texture and expression of the human face vary in complex ways that are hard to describe with a unified pattern; attached foreign objects such as glasses, earrings and make-up may be present on the face; imaging conditions such as illumination change, so image quality varies widely; and the image background also changes greatly. As a result the match between the input image and the template is poor, or the trained face-model feature values deviate from the real facial features, so current face recognition algorithms cannot yet be applied perfectly to all situations.
Although, with the development of artificial intelligence and related technologies, substantial results have been achieved in detection and recognition based on frontal static faces, facial feature extraction, and multi-pose face recognition, the dominant approach is still to train on photos of the face to be recognised, generate a recognition model, and then use that model to identify a particular individual. The key issues of this recognition mode are whether enough face pictures are available and whether a high-precision face model can be trained; it belongs to the class of static recognition-model algorithms. Such processing requires that both the test image and the enrolled face image describe the real face target accurately, or nearly so; once either side is described incorrectly, a large recognition error results. In particular, if the test image of the target face is artificially forged, the deception can be extremely serious and recognition fails completely, which is a clear disadvantage.
Patent No. 201410211494.8 (publication date 2017.06.13) discloses a video face recognition method, essentially comprising: S1: performing face detection and tracking on a video to obtain a face sequence; S2: screening the face sequence to obtain a set of typical face frames; S3: optimising the set of typical face frames with frontal-face generation and image super-resolution techniques to obtain an enhanced set of typical face frames; S4: comparing the enhanced set of typical face frames with a preset static face image matching library to perform face recognition or verification. That scheme largely solves the problems of low accuracy and poor anti-forgery performance of single-image comparison. However, before training, the pictures must first be optimised and enhanced: frontal-face generation is used to correct typical frames whose face pose exceeds a predetermined threshold; image super-resolution is used to enhance the resolution of typical frames whose inter-eye distance is less than 60 pixels; and illumination pre-processing is applied to the enhanced typical-frame set and to the static face image matching library. The training therefore learns non-natural face images rather than the underlying pictures, and the learning result contains non-natural facial features left over from the image processing. On the one hand these extra training artefacts affect the recognition and verification of face pictures captured from short videos; on the other hand they add to the workload of face training and reduce verification efficiency.
Summary of the invention
The object of the invention is to provide, in view of the above problems, a face recognition method and system based on a short-video training approach, which solve the following problems: 1. the low accuracy of recognition based on a single template match; 2. the huge demand for source face pictures when building a face model by training; 3. the influence of external foreign objects on the face, which no longer needs to be considered; 4. the need to pre-process the captured images — recognition works directly on natural face images, avoiding both the effect of additional feature values introduced by pre-processing on the recognition result and the cumbersome processing flow and reduced verification efficiency that pre-processing brings; 5. the risk of face recognition being spoofed with a photo instead of a live face.
The technical solution adopted by the present invention is as follows:
A face recognition method based on a short-video training approach comprises the following steps:
S001: building a reference face feature matrix for the face to be searched for;
S100: obtaining a short video containing the target face;
S200: identifying and tracking the target face in the short video and extracting several target face pictures;
S300: performing feature extraction on each extracted target face picture to generate several groups of target face feature values, each corresponding to one of the target face pictures;
S400: combining the groups of target face feature values to generate a target face feature matrix corresponding to the target face;
S500: comparing the target face feature matrix with the reference face feature matrix to verify the target face.
In the above method, because the target face is tracked and extracted from a short video, the problem of sourcing face images is solved and the face is screened from all directions. By extracting feature values directly from the face images, without enhancing or optimising them beforehand, the recognition flow is shortened and recognition stability is improved. By constructing a feature matrix, the face is recognised from multiple aspects based on multiple face feature values, which broadens the comparison and thus increases recognition accuracy.
Further, S400 is specifically:
S4001: judging the similarity between a target face feature value Q1 that is to be stored in the target face feature matrix and each group of target face feature values already in the matrix; if the similarities between Q1 and all target face feature values in the matrix lie within predetermined threshold range one, marking Q1 as a valid target face feature value; otherwise marking Q1 as an invalid target face feature value;
S4002: storing the valid target face feature value in the target face feature matrix and discarding the invalid target face feature value;
S4003: judging whether the number of groups of target face feature values stored in the target face feature matrix has reached predetermined value one; if so, performing S500; otherwise, performing S4001.
This scheme builds a valid target face feature matrix, avoiding both target face feature values that are identical or too similar, which would make the scheme essentially the same as single-template recognition, and values whose similarity is too low, which would mean feature values of different people are stored and the recognition result is misjudged. Meanwhile, by setting predetermined value one to match the demand, the face can be recognised accurately while the size of the target face feature matrix, and hence the feature-extraction workload and subsequent comparison workload, are kept as small as possible, improving recognition efficiency. Preferably, at least two groups of target face feature values are written into the target face feature matrix, with 5-7 groups being the preferred scheme.
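A minimal sketch of the S4001-S4003 gating, assuming a generic pairwise similarity function; the example bounds 0.5-0.95 and the group count 5 are taken from the embodiment described later and are illustrative only.

```python
import numpy as np

def build_feature_matrix(candidate_values, similarity,
                         low=0.5, high=0.95, target_groups=5):
    """Similarity-gated construction of a target face feature matrix (S4001-S4003)."""
    matrix = []
    for q1 in candidate_values:
        # S4001: Q1 must be moderately similar to every value already stored --
        # not near-identical (no redundant templates) and not too different
        # (which would suggest a different person).
        if all(low <= similarity(q1, stored) <= high for stored in matrix):
            matrix.append(q1)              # S4002: keep valid values
        # invalid values are simply discarded
        if len(matrix) >= target_groups:   # S4003: stop at predetermined value one
            break
    return np.array(matrix)
```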
Further, S001 is specifically:
S0001: obtaining a face data source containing the reference face;
S0002: identifying the reference face in the face data source and extracting several reference face pictures;
S0003: performing feature extraction on each extracted reference face picture to generate several reference face feature values, each corresponding to one of the reference face pictures;
S0004: combining the groups of reference face feature values to generate a reference face feature matrix corresponding to the reference face.
Preferably, there are at least two reference face pictures, 5-7 pictures with different angles and different illumination being preferred.
The feature matrix of the face to be found is built by the same procedure, which improves the reliability of matching against the target face. By building a reference face feature matrix, the face to be found is described comprehensively, so that even if the target face appears only briefly, the features extracted from that brief appearance can still be matched accurately, making target matching accurate.
Preferably, the face data source is a short video or several pictures containing the reference face.
This scheme provides a feature-base construction method that works from either a short video or face pictures, so that when no short video of the face to be found is available, the feature matrix can be built from face pictures, enabling search from multiple face sources.
Further, S500 is specifically:
S5001: searching and comparing the target face feature matrix against the reference face feature matrix (preferably searching by matrix cross-multiplication, to improve search efficiency), and judging the similarity between the target face feature values and the reference face feature values;
S5002: confirming the recognition result for the target face in the short video according to the relation between the similarity of the target face feature values and the reference face feature values and predetermined value two.
By setting a similarity threshold, the recognition result for the target face can be judged quickly, improving face recognition efficiency.
Further, S5001 is specifically:
S5001a: applying dimensionality reduction to the reference face feature matrix and the target face feature matrix, i.e. reducing the dimension of each group of reference face feature values and each group of target face feature values, and then comparing each reduced group of reference face feature values for similarity against all reduced target face feature values in the target feature matrix;
S5001b: if the target face feature matrix contains a reduced target face feature value whose similarity to the reduced reference face feature values exceeds predetermined value three, judging the target face feature matrix (before reduction) to which that reduced value belongs to be a valid target face feature matrix; otherwise judging the target face feature matrix before reduction to be an invalid target face feature matrix;
S5001c: comparing the reference face feature matrix against the valid target face feature matrix for similarity.
By improving the cross-search between matrices, this scheme greatly shortens the time required for one-to-one retrieval, shortening the time needed for feature-value comparison and improving face recognition efficiency. Furthermore, the scheme first coarsely locates the target face, narrowing the face to be recognised down to a certain range, and then finely compares the faces within that range, which markedly improves recognition efficiency. During coarse positioning, comparing reduced feature values greatly cuts the amount of comparison computation relative to comparing full feature values.
Predetermined value three can be set according to the actual accuracy requirement for face recognition and should not be read as an unclear part of the description.
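A sketch of the coarse-to-fine comparison described in S5001a-S5001c, assuming generic reduce_fn and similarity helpers; the threshold 0.9 stands in for predetermined value three (the embodiment's example value), and nothing here is the patent's exact implementation.

```python
def coarse_to_fine_match(ref_matrix, target_matrices, reduce_fn, similarity,
                         coarse_threshold=0.9):
    """Two-stage comparison (S5001a-S5001c), sketched with hypothetical helpers.

    reduce_fn maps a full feature value (e.g. 1x128 in the embodiment) to a
    reduced one (e.g. 1x32); coarse_threshold plays the role of predetermined
    value three.
    """
    ref_reduced = [reduce_fn(v) for v in ref_matrix]

    valid = []
    for tm in target_matrices:
        tm_reduced = [reduce_fn(v) for v in tm]
        # S5001a/S5001b: coarse positioning on the reduced values only.
        hit = any(similarity(r, t) > coarse_threshold
                  for r in ref_reduced for t in tm_reduced)
        if hit:
            valid.append(tm)   # kept (before reduction) for fine comparison
        # matrices with no coarse hit are discarded as invalid

    # S5001c: fine comparison on the original, unreduced feature values.
    return [[similarity(r, t) for r in ref_matrix for t in tm] for tm in valid]
```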
Further, S5002 is specifically:
S5002a: according to the similarity between the target face feature values and the reference face feature values, performing S5002b when the similarity exceeds predetermined value two; otherwise judging that the target face corresponding to the target face feature matrix is not the face to be found;
S5002b: filtering out the maximum similarities that exceed predetermined value two, weighting the filtered similarities to obtain a comparison result, and computing a reliability from the comparison result according to a predefined rule;
S5002c: outputting the comparison result and/or the reliability.
Because environmental factors in the comparison process make the comparison result unstable, the above scheme weights the compared similarities and computes a reliability for the comparison result, improving its trustworthiness.
Further, S5002b is specifically:
S50021: computing the ratio of the number of groups of reference face feature values in the reference face feature matrix whose similarity to the target face feature values exceeds predetermined value two to the total number of groups of reference face feature values in the reference face feature matrix; performing S50022 when the ratio meets a predetermined condition; otherwise taking, for each group of target face feature values in the target face feature matrix, the maximum similarity against all reference face feature values in the reference face feature matrix as the comparison result and reliability;
S50022: for each group of target face feature values in the target face feature matrix, filtering out the maximum similarity against all reference face feature values in the reference face feature matrix that exceeds predetermined value two, weighting the filtered similarities according to predefined weights to obtain the comparison result, and computing the reliability from the comparison result and the predefined weights according to a predefined rule.
By screening and weighting the multiple groups of similarities against the target value, this scheme further improves the reliability of the comparison result.
Preferably, the similarity is judged as follows: the similarity between a reference face feature value and a target face feature value is computed with the disclosed similarity formula, in which x is the target face feature value, y is the reference face feature value, n is the length of the target face feature value and m is the length of the reference face feature value.
By basing the similarity computation on the gap between the two values, similarity can be computed efficiently, which speeds up feature-value comparison and enables fast face comparison.
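The similarity formula itself appears only as an image in the original publication, so the following is purely an assumed gap-based form consistent with the symbols described (x, y, n, m); the patent's actual expression may differ.

```python
import numpy as np

def gap_similarity(x, y):
    """Hypothetical gap-based similarity between a target feature value x and a
    reference feature value y. The original formula is not reproduced in the text;
    this stand-in maps the mean element-wise gap of two equal-length, normalised
    vectors into [0, 1], with 1 meaning identical.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n, m = len(x), len(y)   # the text's n and m (feature value lengths)
    assert n == m, "the comparison assumes equal-length feature values (m = n)"
    return 1.0 - float(np.mean(np.abs(x - y)))
```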
Further, the face pictures extracted from the short video and from the face data source must meet a predetermined quality requirement; the quality is computed with the disclosed quality formula, in which Q is the picture quality, A is the image, and the remaining term is image A after Gaussian filtering.
Because features are extracted from high-quality pictures, the differences between feature values are increased, which markedly improves the discrimination between faces and the accuracy of face comparison.
Further, the number of reference face pictures extracted from the face data source is required to be as follows:
if the face data source is a short video, a predetermined number K of face pictures meeting the predetermined quality requirement are extracted from the short video;
if the face data source is pictures containing the reference face, then when the number J of pictures containing the reference face is within the predetermined number K, all J pictures containing the reference face are taken; when the number of pictures containing the reference face exceeds the predetermined number K, K pictures containing the reference face are taken.
Preferably, the number of target face pictures extracted from the short video containing the target face is subject to the same requirement as the number of reference face pictures extracted when the face data source is a short video.
Further, when the number of pictures containing the reference face exceeds the predetermined number K, K pictures containing the reference face are taken in order of quality from high to low.
This scheme constructs a highly discriminative reference face feature matrix, improving the accuracy of matching against the target face.
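A small sketch of the picture-selection rule above, assuming a quality_fn such as the picture_quality() sketch earlier; K is the predetermined number.

```python
def select_reference_pictures(pictures, k, quality_fn):
    """Pick reference face pictures according to the rule above (a sketch).

    pictures: candidate pictures containing the reference face; k: the
    predetermined number K. If there are no more than K candidates, all are
    used; otherwise the K highest-quality pictures are kept.
    """
    if len(pictures) <= k:
        return list(pictures)
    return sorted(pictures, key=quality_fn, reverse=True)[:k]
```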
To solve all or part of the above problems, the invention also provides a face recognition system based on a short-video training approach, comprising:
a data acquisition unit for capturing the short video containing the target face;
an image extraction unit for extracting several target face pictures containing the target face from the short video received from the data acquisition unit;
a face feature extraction unit for extracting the feature values of the target face pictures and outputting several target face feature values, each corresponding to one of the target face pictures;
a feature matrix construction unit for combining the target face feature values extracted by the face feature extraction unit to generate a target face feature matrix;
a face feature library storing the reference face feature matrix corresponding to the face to be found;
a face verification unit for comparing the target face feature matrix generated by the feature matrix construction unit with the reference face feature matrix, so as to verify the target face.
Further, the feature matrix construction unit combines the target face feature values as follows:
judging the similarity between a target face feature value Q2 output by the face feature extraction unit and each group of target face feature values in the target face feature matrix; when the similarities between Q2 and all target face feature values in the matrix lie within predetermined threshold range two, accepting Q2 and writing it into the target face feature matrix; otherwise rejecting Q2;
also judging whether the number of groups of target face feature values written into the target face feature matrix has reached predetermined value A, and if so, no longer accepting target face feature values output by the face feature extraction unit.
Preferably, predetermined value A is an integer of at least 2, with 5-7 preferred.
Further, the face verification unit comprises:
a face matching module for searching and comparing the target face feature matrix against the reference face feature matrix and judging the similarity between the target face feature values and the reference face feature values;
a reliability determination module for computing a reliability, according to a pre-stored rule, from the similarities between the target face feature values and the reference face feature values output by the face matching module;
a result confirmation module for outputting the final recognition result for the target face in the short video according to the relation, computed by the face matching module, between the similarity of the target face feature values and the reference face feature values and predetermined value two, together with the output of the reliability determination module.
Preferably, the face feature extraction unit extracts the feature values of the face images with a convolutional neural network.
Further, the face matching module compares the target face feature matrix with the reference face feature matrix as follows:
applying dimensionality reduction to the reference face feature matrix and the target face feature matrix, i.e. reducing the dimension of each group of reference face feature values and each group of target face feature values, and then comparing each reduced group of reference face feature values for similarity against all reduced target face feature values in the target feature matrix;
if the target face feature matrix contains a reduced target face feature value whose similarity to the reduced reference face feature values exceeds predetermined value three, judging the target face feature matrix before reduction to be a valid target face feature matrix; otherwise judging the target face feature matrix before reduction to be an invalid target face feature matrix;
comparing the reference face feature values of the reference face feature matrix against the target face feature values of the valid target face feature matrix for similarity; this comparison uses the feature values before dimensionality reduction.
This scheme first coarsely locates the target face, narrowing the face to be recognised down to a certain range, and then finely compares the faces within that range, which markedly improves recognition efficiency. During coarse positioning, comparing reduced feature values greatly cuts the amount of comparison computation relative to comparing full feature values.
It should be noted that the numbering after each predetermined threshold range, predetermined value or face feature value above is used only as a serial number to express the technical features of the invention more clearly, and does not refer to any specific predetermined threshold range, predetermined value or face feature value.
Further, the reliability determination module computes the reliability as follows:
according to the similarity output by the face matching module, when the similarity exceeds predetermined value B, weighting the maximum similarity exceeding predetermined value B to obtain a weighted result; then computing and outputting the reliability from the weighted result according to a predefined rule.
Further, the reliability determination module obtains the weighted result specifically by:
computing the ratio of the number of groups of reference face feature values in the reference face feature matrix whose similarity to the target face feature values exceeds predetermined value two to the total number of groups of reference face feature values in the reference face feature matrix; when the ratio meets a predetermined condition, weighting the maximum similarity exceeding predetermined value B with pre-stored weights to obtain the weighted result; when the ratio does not meet the predetermined condition, taking the maximum similarity between the reference face feature values and the target face feature values as the weighted result.
Further, the face matching module computes the similarity between a reference face feature value and a target face feature value with the disclosed similarity formula, in which x is the target face feature value, y is the reference face feature value, n is the length of the target face feature value and m is the length of the reference face feature value.
Further, the image extraction unit is provided with a predetermined value C; when the quality of an extracted face picture is judged to be above predetermined value C, the extracted face picture is sent to the face feature extraction unit, otherwise the extracted face picture is discarded. The picture quality is computed with the disclosed quality formula, in which Q is the picture quality, A is the image, and the remaining term is image A after Gaussian filtering.
Further, the system also comprises a face database holding the face data source containing the face to be found, which sends the face data source to the image extraction unit;
the image extraction unit is further used for extracting several reference face pictures containing the face to be found from the face data source;
the face feature extraction unit is further used for extracting the feature values of the reference face pictures and outputting several reference face feature values, each corresponding to one of the reference face pictures;
the feature matrix construction unit is further used for combining the reference face feature values extracted by the face feature extraction unit to generate a reference face feature matrix and outputting the reference face feature matrix to the face feature library for storage.
Preferably, the face data source is a short video containing the face to be found or several reference face pictures containing the face to be found.
Further, the image extraction unit extracts the reference face pictures specifically as follows:
if the face data source is a short video, a predetermined number D of reference face pictures whose quality is above predetermined value C are extracted from the short video;
if the face data source is pictures containing the reference face, then when the number J of pictures containing the reference face is within the predetermined number D, all pictures containing the reference face are sent to the face feature extraction unit; when the number of pictures containing the reference face exceeds the predetermined number D, D pictures containing the reference face are taken and sent to the face feature extraction unit.
Preferably, when the number of pictures containing the reference face exceeds the predetermined number D, D pictures containing the reference face are taken in order of picture quality from high to low and sent to the face feature extraction unit.
Further, the image extraction unit extracts the target face pictures in the same way as it extracts the reference face pictures when the face data source is a short video.
In summary, by adopting the above technical solution, the beneficial effects of the invention are as follows:
Through the scheme provided by the invention, faces of the person to be identified are captured intelligently from a short video, enlarging the source of face images. By building groups of face feature values from the face images, the target face is described more comprehensively and the robustness of the recognition result is improved. By screening each extracted face feature value against a predetermined threshold range, the distinctiveness and validity of the data in the face feature value groups are guaranteed, increasing the accuracy of the recognition result. The cross-multiplication search over feature matrices, compared with the traditional one-by-one single comparison, greatly shortens feature-value matching time and improves matching efficiency. Furthermore, computing feature-value similarity from the gap between values significantly reduces the difficulty of computing the similarity between the target feature values and the reference feature values, reducing the matching workload and improving recognition efficiency. Because the feature-value extraction and comparison scheme can use neural-network-based deep learning algorithms, it effectively prevents a live face from being spoofed with a picture.
Brief description of the drawings
Embodiments of the present invention will be described with reference to the accompanying drawings, in which:
Fig. 1 is a flow chart of the face recognition method based on a short-video training approach.
Fig. 2 is a flow chart of face feature matrix construction.
Fig. 3 is a flow chart of the construction of the reference face feature matrix.
Fig. 4 is a flow chart of the steps of comparing the target face feature matrix with the reference face feature matrix.
Fig. 5 is a flow chart of the coarse-positioning/fine-positioning verification steps within the comparison of the target face feature matrix with the reference face feature matrix.
Fig. 6 is a flow chart of the reliability calculation.
Fig. 7 is a flow chart of the decision on the reliability calculation mode.
Fig. 8 is a structure diagram of the face recognition system based on a short-video training approach.
Fig. 9 is a structure diagram of the face verification unit.
Embodiment
All features disclosed in this specification, and all steps of any method or process disclosed, may be combined in any way, except for mutually exclusive features and/or steps.
Any feature disclosed in this specification (including any accompanying claims and the abstract), unless specifically stated otherwise, may be replaced by other equivalent features or by alternative features serving a similar purpose. That is, unless specifically stated otherwise, each feature is only one example of a series of equivalent or similar features.
As shown in Fig. 1, the present embodiment discloses a face recognition method based on a short-video training approach, comprising the following steps:
S001: building a reference face feature matrix for the face to be searched for;
S100: obtaining a short video containing the target face;
S200: identifying and tracking the target face in the short video and extracting several target face pictures; the quality of the extracted target face pictures must meet a predetermined quality requirement, computed with the disclosed quality formula, in which Q is the picture quality, A is the image, and the remaining term is image A after Gaussian filtering; the required number can be adjusted to the use case, 5-7 pictures with different angles and different illumination being preferred, and in the present embodiment the picture quality reaches more than 60 pixels;
S300: performing feature extraction on each extracted target face picture to generate several groups of target face feature values, each corresponding to one of the target face pictures; feature values may, for example, be extracted with a neural-network-based method, or by horizontal/vertical histogram projection of the image followed by normalisation; the present embodiment takes 2DPCA as the example for extracting image feature values (a 2DPCA sketch follows this step list);
S400: combining the groups of target face feature values to generate a target face feature matrix corresponding to the target face;
S500: comparing the target face feature matrix with the reference face feature matrix to verify the target face.
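The embodiment names 2DPCA as its example feature extractor. The following is a generic, textbook 2DPCA sketch (project each image onto the top eigenvectors of the image covariance matrix and flatten the result); it is not the patent's exact variant, which further reduces each 102*102 face to a 1*128 feature value.

```python
import numpy as np

def train_2dpca_projection(train_images, k=8):
    """Generic 2DPCA: learn a projection matrix from equal-sized grayscale face images.

    Returns the d x k matrix of the top-k eigenvectors of the image covariance
    matrix (d = image width), i.e. standard 2DPCA rather than the patent's
    specific variant.
    """
    imgs = np.stack([np.asarray(a, dtype=float) for a in train_images])
    mean = imgs.mean(axis=0)
    d = imgs.shape[2]
    cov = np.zeros((d, d))
    for a in imgs:
        diff = a - mean
        cov += diff.T @ diff
    cov /= len(imgs)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :k]           # top-k eigenvectors, d x k

def extract_2dpca_feature(image, projection):
    """Project one face image and flatten the projection into a single feature value."""
    return (np.asarray(image, dtype=float) @ projection).ravel()
```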
Further, referring to Fig. 2, the present embodiment specifically discloses the construction method of the face feature matrix, i.e. S400 of the above embodiment:
S4001: judging the similarity between a target face feature value Q1 that is to be stored in the target face feature matrix and each group of target face feature values already in the matrix; if the similarities between Q1 and all target face feature values in the matrix lie within predetermined threshold range one (e.g. 0.5-0.95), marking Q1 as a valid target face feature value; otherwise marking Q1 as an invalid target face feature value;
S4002: storing the valid target face feature value in the target face feature matrix and discarding the invalid target face feature value;
S4003: judging whether the number of groups of target face feature values stored in the target face feature matrix has reached the predetermined value (e.g. 5); if so, performing S500; otherwise, performing S4001.
Referring to Fig. 3, the present embodiment specifically discloses the construction method of the reference face feature matrix, i.e. S001 of the above embodiment:
S0001: obtaining a face data source containing the reference face; the face data source may be a short video or several pictures containing the reference face;
S0002: identifying the reference face in the face data source and extracting several reference face pictures; specifically, the number of reference face pictures extracted is required as follows: if the face data source is a short video, a predetermined number U (e.g. 5) of face pictures meeting the predetermined quality requirement are extracted from the short video; if the face data source is pictures containing the reference face, then when the number V of the pictures containing the reference face (e.g. 4) is within the predetermined number U, all V pictures containing the reference face are taken; when the number of pictures containing the reference face exceeds the predetermined number U, U pictures containing the reference face are taken in order of quality from high to low, or with different angles and different illumination conditions; preferably 5-7 are taken.
S0003: performing feature extraction on each extracted reference face picture to generate several reference face feature values, each corresponding to one of the reference face pictures;
S0004: combining the groups of reference face feature values to generate a reference face feature matrix corresponding to the reference face.
Preferably, there are at least two reference face pictures, 5-7 pictures with different angles and different illumination being preferred.
By generating, with the same training method as for the target face image, a reference face feature matrix containing several face feature values of the same person for matching, this scheme avoids the situation in which different training methods produce different feature values for the same feature points and thus affect the face recognition result.
Referring to Fig. 4, the present embodiment specifically discloses the similarity comparison process between the target feature matrix and the reference face feature matrix of the above embodiment, i.e. S500:
S5001: searching and comparing the target face feature matrix against the reference face feature matrix (preferably searching by matrix cross-multiplication, to improve search efficiency), and judging the similarity between the target face feature values and the reference face feature values;
S5002: confirming the recognition result for the target face in the short video according to the relation between the similarity of the target face feature values and the reference face feature values and predetermined value two (e.g. 0.95).
Further, the similarity is judged as follows: the similarity between a reference face feature value and a target face feature value is computed with the disclosed similarity formula, in which x is the target face feature value, y is the reference face feature value, n is the length of the target face feature value, m is the length of the reference face feature value, and preferably m = n = 5.
Further, referring to Fig. 5, the present embodiment specifically discloses the coarse-positioning-to-fine-positioning comparison method for the target feature matrix and the reference face feature matrix, i.e. the detailed process of S5001 above:
S5001a: applying dimensionality reduction to the reference face feature matrix and the target face feature matrix, i.e. reducing the dimension of each group of reference face feature values and each group of target face feature values, and then comparing each reduced group of reference face feature values for similarity against all reduced target face feature values in the target feature matrix; the present embodiment extracts image feature values with 2DPCA, reducing each 102*102 picture matrix to 1*128, and the dimensionality reduction here further reduces the image matrix to 1*32 through a slow-feature variance transformation matrix (a sketch of this reduction step follows this list); feature-value comparison is then carried out on this basis, and compared with comparing the 1*128 feature values, the reduced-dimension comparison used for coarse positioning saves a great deal of computation — according to repeated tests on the same hardware configuration (i3 processor, 2 GB memory), the coarse-to-fine comparison method reaches a search speed of 48 ms, whereas the method without coarse positioning has a search speed of 200 ms;
S5001b: if the target face feature matrix contains a reduced target face feature value whose similarity to the reduced reference face feature values exceeds predetermined value three (e.g. 0.9), judging the target face feature matrix before reduction to be a valid target face feature matrix; otherwise judging the target face feature matrix before reduction to be an invalid target face feature matrix;
S5001c: comparing the reference face feature matrix against the valid target face feature matrix for similarity; here the valid target face feature matrix is the matrix composed of the normally extracted feature values.
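A sketch of the reduction step used for coarse positioning, assuming the "slow-feature variance transformation matrix" can be represented as a fixed 128x32 matrix; its actual construction is not given in this text, so a random placeholder is used purely for illustration.

```python
import numpy as np

def make_reduce_fn(transform_128_to_32):
    """Build the reduce_fn used by the coarse_to_fine_match() sketch above.

    transform_128_to_32: a 128x32 matrix standing in for the slow-feature
    variance transformation matrix mentioned in the embodiment.
    """
    def reduce_fn(feature_value_1x128):
        # 1x128 feature value -> 1x32 reduced value used only for coarse positioning
        return np.asarray(feature_value_1x128, dtype=float) @ transform_128_to_32
    return reduce_fn

# Usage: reduced 1x32 comparisons filter candidates first, so only promising
# matrices are compared at 1x128 — which is what produced the 48 ms vs 200 ms
# search times reported above.
reduce_fn = make_reduce_fn(np.random.randn(128, 32))  # placeholder transform
```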
Further, referring to Fig. 6, the comparison results are processed as follows:
S5002a: according to the similarity between the target face feature values and the reference face feature values, performing S5002b when the similarity exceeds predetermined value two (e.g. 0.95); otherwise judging that the target face corresponding to the target face feature matrix is not the face to be found;
S5002b: filtering out the maximum similarities that exceed predetermined value two, weighting the filtered similarities to obtain the comparison result, and computing the reliability from the comparison result according to a predefined rule;
S5002c: outputting the comparison result and/or the reliability.
Referring to Fig. 7, in order to further improve the trustworthiness of the reliability, S5002b above is specifically:
S50021: computing the ratio C/M of the number C of groups of reference face feature values in the reference face feature matrix whose similarity to the target face feature values exceeds predetermined value two to the total number M of groups of reference face feature values in the reference face feature matrix, and performing S50022 when the ratio meets a predetermined condition (e.g. C/M >= 0.4); otherwise taking, for each group of target face feature values in the target face feature matrix, the maximum similarity against all reference face feature values in the reference face feature matrix as the comparison result and reliability;
S50022: for each group of target face feature values in the target face feature matrix, filtering out the maximum similarity against all reference face feature values in the reference face feature matrix that exceeds predetermined value two, weighting the filtered similarities according to predefined weights to obtain the comparison result, and computing the reliability from the comparison result and the predefined weights according to a predefined rule.
For example, suppose the total number of groups of reference face feature values is M = 5, the number of groups whose similarity exceeds 0.95 is C = 2, and the similarities are 0.952 and 0.981 respectively; then C/M = 0.4 >= 0.4, which meets the requirement. The predefined weights divide the interval from 0.95 to 1 into 4 classes [0.95, 0.96], (0.96, 0.97], (0.97, 0.98], (0.98, 1.00], with weights 1.0, 1.05, 1.15 and 1.25 respectively. The maximum similarity 0.981 is then weighted to 0.981*1.25 ≈ 1.23 (two decimal places are kept here; this is not a limitation in practice), which is the comparison result. According to the predefined rule, reliability = comparison result / weight * 100%, giving a reliability of 1.23*100%/1.25 ≈ 98.4%. Clearly, the higher the reliability obtained from comparing the target face feature matrix, the more likely it is that the images of this target face correspond to the face to be found.
As shown in Fig. 8, the present embodiment discloses a face recognition system based on a short-video training approach, comprising:
a data acquisition unit for capturing the short video containing the target face;
an image extraction unit for extracting several target face pictures containing the target face from the short video received from the data acquisition unit;
a face feature extraction unit for extracting the feature values of the target face pictures and outputting several target face feature values, each corresponding to one of the target face pictures; a neural-network-based feature extraction method may be used, for example horizontal/vertical histogram projection of the image followed by normalisation; the present embodiment uses 2DPCA to extract image feature values;
a feature matrix construction unit for combining the target face feature values extracted by the face feature extraction unit to generate a target face feature matrix;
a face database holding the face data source containing the face to be found, the face data source being a short video containing the face to be found or several reference face pictures containing the face to be found; the face database also sends the face data source to the image extraction unit; the image extraction unit is further used for extracting several reference face pictures containing the face to be found from the face data source; the face feature extraction unit is further used for extracting the feature values of the reference face pictures and outputting several reference face feature values, each corresponding to one of the reference face pictures; the feature matrix construction unit is further used for combining the reference face feature values extracted by the face feature extraction unit to generate a reference face feature matrix and outputting the reference face feature matrix to the face feature library for storage;
a face feature library storing the reference face feature matrix corresponding to the face to be found;
a face verification unit for comparing the target face feature matrix generated by the feature matrix construction unit with the reference face feature matrix, so as to verify the target face.
The feature matrix construction unit combines the target face feature values as follows:
judging the similarity between a target face feature value Q2 output by the face feature extraction unit and each group of target face feature values in the target face feature matrix; when the similarities between Q2 and all target face feature values in the matrix lie within predetermined threshold range two, accepting Q2 and writing it into the target face feature matrix; otherwise rejecting Q2;
also judging whether the number of groups of target face feature values written into the target face feature matrix has reached predetermined value A, and if so, no longer accepting target face feature values output by the face feature extraction unit.
Preferably, predetermined value A is an integer of at least 2, with 5-7 preferred.
Referring to Fig. 9, the present embodiment specifically discloses the structure of the face verification unit above, comprising:
a face matching module for searching and comparing the target face feature matrix against the reference face feature matrix and judging the similarity between the target face feature values and the reference face feature values; the similarity here is computed with the disclosed similarity formula, in which x is the target face feature value, y is the reference face feature value, n is the length of the target face feature value, m is the length of the reference face feature value, and preferably n = m = 5;
a reliability determination module for computing a reliability, according to a pre-stored rule, from the similarities between the target face feature values and the reference face feature values output by the face matching module;
a result confirmation module for outputting the final recognition result for the target face in the short video according to the relation, computed by the face matching module, between the similarity of the target face feature values and the reference face feature values and predetermined value two (e.g. 0.95), together with the output of the reliability determination module.
The face matching module compares the target face feature matrix with the reference face feature matrix as follows:
applying dimensionality reduction to the reference face feature matrix and the target face feature matrix, i.e. reducing the dimension of each group of reference face feature values and each group of target face feature values, and then comparing each reduced group of reference face feature values for similarity against all reduced target face feature values in the target feature matrix;
if the target face feature matrix contains a reduced target face feature value whose similarity to the reduced reference face feature values exceeds predetermined value three, judging the target face feature matrix before reduction to be a valid target face feature matrix; otherwise judging the target face feature matrix before reduction to be an invalid target face feature matrix;
comparing the reference face feature values of the reference face feature matrix against the target face feature values of the valid target face feature matrix for similarity; this comparison uses the feature values before dimensionality reduction.
For the specific implementation, refer to the slow-feature variance transformation matrix dimensionality reduction described above, which is not repeated here.
The reliability determination module computes the reliability as follows:
according to the similarity output by the face matching module, when the similarity exceeds predetermined value B, weighting the maximum similarity exceeding predetermined value B to obtain a weighted result; then computing the weighted result according to a predefined rule (e.g. weighted result * 100% / weight) and outputting the reliability. The weighted result is obtained specifically by:
computing the ratio C/M of the number C of groups of reference face feature values in the reference face feature matrix whose similarity to the target face feature values exceeds predetermined value two to the total number M of groups of reference face feature values in the reference face feature matrix; when the ratio meets the predetermined condition (e.g. C/M >= 0.4), weighting the maximum similarity exceeding predetermined value B (e.g. of the similarities 0.952 and 0.981 that meet the condition, 0.981 is taken) with the pre-stored weights (the interval from 0.95 to 1 is divided into 4 classes [0.95, 0.96], (0.96, 0.97], (0.97, 0.98], (0.98, 1.00], with weights 1.0, 1.05, 1.15 and 1.25 respectively) to obtain the weighted result (0.981*1.25 ≈ 1.23); when the ratio does not meet the predetermined condition, taking the maximum similarity between the reference face feature values and the target face feature values as the weighted result.
Further, the image extraction unit is provided with a predetermined value C; when the quality of an extracted face picture is judged to be above predetermined value C, the extracted face picture is sent to the face feature extraction unit, otherwise the extracted face picture is discarded; the picture quality is computed with the disclosed quality formula, in which Q is the picture quality, A is the image, and the remaining term is image A after Gaussian filtering.
The image extraction unit extracts the reference face pictures specifically as follows:
if the face data source is a short video, a predetermined number D of reference face pictures whose quality is above predetermined value C are extracted from the short video;
if the face data source is pictures containing the reference face, then when the number J of pictures containing the reference face is within the predetermined number D, all pictures containing the reference face are sent to the face feature extraction unit; when the number of pictures containing the reference face exceeds the predetermined number D, D pictures containing the reference face are taken in order of picture quality from high to low and sent to the face feature extraction unit. The image extraction unit extracts the target face pictures in the same way as it extracts the reference face pictures when the face data source is a short video.
The invention is not limited to the foregoing embodiments. The invention extends to any new feature or any new combination disclosed in this specification, and to the steps of any new method or process disclosed or any new combination thereof.
Claims (10)
1. A face recognition method based on a short video training method, characterized by comprising the following steps:
S001: constructing a benchmark face characteristic matrix for the face to be searched;
S100: obtaining a short video containing a target face;
S200: recognizing and tracking the target face in the short video, and extracting a number of target face pictures;
S300: performing characteristic value extraction on each of the extracted target face pictures, and generating groups of target face characteristic values respectively corresponding to the target face pictures;
S400: combining the groups of target face characteristic values to generate a target face characteristic matrix corresponding to the target face;
S500: comparing the target face characteristic matrix with the benchmark face characteristic matrix so as to verify the target face.
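Purely as an illustration of how steps S100 to S500 fit together (the claim does not prescribe any particular implementation), the pipeline can be sketched as follows, with the detector, feature extractor and matrix comparison supplied by the caller:

```python
from typing import Callable, List, Sequence

def recognise_target(frames: Sequence,                                   # S100: frames of the short video
                     detect_and_track: Callable[[Sequence], List],
                     extract_feature: Callable[[object], List[float]],
                     compare: Callable[[List[List[float]], List[List[float]]], bool],
                     benchmark_matrix: List[List[float]]) -> bool:
    target_pictures = detect_and_track(frames)                           # S200: extract target face pictures
    target_features = [extract_feature(p) for p in target_pictures]      # S300: one group per picture
    target_matrix = target_features                                      # S400: combine groups into the matrix
    return compare(target_matrix, benchmark_matrix)                      # S500: verify the target face
```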
2. The method according to claim 1, characterized in that S400 is specifically:
S4001: judging the similarity between a target face characteristic value Q1 to be stored in the target face characteristic matrix and each group of target face characteristic values already in the target face characteristic matrix; if the similarity between the target face characteristic value Q1 and every group of target face characteristic values in the target face characteristic matrix lies within predetermined threshold range one, marking the target face characteristic value Q1 as an effective target face characteristic value; otherwise, marking the target face characteristic value Q1 as an invalid target face characteristic value;
S4002: storing the effective target face characteristic value in the target face characteristic matrix, and discarding the invalid target face characteristic value;
S4003: judging whether the number of groups of target face characteristic values stored in the target face characteristic matrix reaches predetermined value one; if so, performing S500; otherwise, performing S4001.
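A minimal sketch of the S4001-S4003 loop follows. The claim does not give concrete values for predetermined threshold range one or predetermined value one, and does not fix a similarity measure, so the defaults and the cosine similarity below are assumptions.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def build_target_matrix(feature_stream, threshold_range_one=(0.6, 0.98),
                        predetermined_value_one=10):
    low, high = threshold_range_one
    matrix = []
    for q1 in feature_stream:                                   # candidate value Q1 (S4001)
        if all(low <= cosine(q1, v) <= high for v in matrix):   # within threshold range one for every stored group
            matrix.append(q1)                                   # effective value is stored (S4002)
        # an invalid value is simply discarded
        if len(matrix) >= predetermined_value_one:              # enough groups collected (S4003)
            break
    return np.array(matrix)
```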
3. The method according to claim 2, characterized in that S001 is specifically:
S0001: obtaining a face data source containing the benchmark face;
S0002: recognizing the benchmark face in the face data source and extracting a number of benchmark face pictures;
S0003: performing characteristic value extraction on each of the extracted benchmark face pictures, and generating groups of benchmark face characteristic values respectively corresponding to the benchmark face pictures;
S0004: combining the groups of benchmark face characteristic values to generate the benchmark face characteristic matrix corresponding to the benchmark face.
4. The method according to claim 3, characterized in that S500 is specifically:
S5001: searching and comparing the target face characteristic matrix against the benchmark face characteristic matrix, and judging the similarity between the target face characteristic matrix and the benchmark face characteristic matrix;
S5002: confirming the recognition result for the target face in the short video according to the relation between the similarity of the target face characteristic values to the benchmark face characteristic values and predetermined value two.
5. The method according to claim 4, characterized in that S5001 is specifically:
S5001a: performing dimension reduction on the benchmark face characteristic matrix and the target face characteristic matrix, that is, reducing the dimension of each group of benchmark face characteristic values and each group of target face characteristic values, and then comparing each group of dimension-reduced benchmark face characteristic values with all dimension-reduced target face characteristic values in the target characteristic matrix for similarity;
S5001b: if the target face characteristic matrix contains dimension-reduced target face characteristic values whose similarity with the dimension-reduced benchmark face characteristic values exceeds predetermined value three, judging the target face characteristic matrix to which those values belonged before dimension reduction to be an effective target face characteristic matrix; otherwise, judging the target face characteristic matrix before dimension reduction to be an invalid target face characteristic matrix;
S5001c: comparing the benchmark face characteristic matrix with the effective target face characteristic matrix for similarity.
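The coarse-then-fine comparison of this claim can be sketched as below. The claim names neither a dimension-reduction technique nor the value of predetermined value three; PCA, cosine similarity and the 0.8 threshold are stand-ins used only for illustration (and assume enough samples and features for 8 principal components).

```python
import numpy as np
from sklearn.decomposition import PCA

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def coarse_then_fine(benchmark_matrix: np.ndarray, target_matrix: np.ndarray,
                     predetermined_value_three: float = 0.8, n_components: int = 8):
    # S5001a: reduce every characteristic value in both matrices to n_components dimensions.
    pca = PCA(n_components=n_components).fit(np.vstack([benchmark_matrix, target_matrix]))
    bench_low, target_low = pca.transform(benchmark_matrix), pca.transform(target_matrix)

    # S5001b: the target matrix is "effective" only if some reduced pair clears the threshold.
    effective = any(cosine(b, t) > predetermined_value_three
                    for b in bench_low for t in target_low)
    if not effective:
        return None                                   # invalid target face characteristic matrix

    # S5001c: full-dimension similarity comparison on the effective matrix.
    return [[cosine(b, t) for t in target_matrix] for b in benchmark_matrix]
```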
6. The method according to claim 5, characterized in that S5002 is specifically:
S5002a: according to the similarity between the target face characteristic values and the benchmark face characteristic values, performing S5002b when the similarity exceeds predetermined value two; otherwise, judging that the target face corresponding to the target face characteristic matrix is not the face to be searched;
S5002b: selecting the maximum similarity among the similarities exceeding predetermined value two, and weighting the selected similarity to obtain a comparison result; calculating a reliability from the comparison result according to a pre-defined rule;
S5002c: outputting the comparison result and/or the reliability.
7. The method according to claim 6, characterized in that S5002b is specifically:
S50021: calculating the group number of benchmark face characteristic values in the benchmark face characteristic matrix whose similarity with the target face characteristic values exceeds predetermined value two, and its ratio to the total number of benchmark face characteristic value groups in the benchmark face characteristic matrix; performing S50022 when the ratio meets a predetermined condition; otherwise, taking the maximum similarity between the target face characteristic values in each group of the target face characteristic matrix and all benchmark face characteristic values in the benchmark face characteristic matrix as the comparison result and the reliability;
S50022: selecting the maximum similarity, among the similarities between the target face characteristic values in each group of the target face characteristic matrix and all benchmark face characteristic values in the benchmark face characteristic matrix, that exceeds predetermined value two, and weighting the selected similarity according to a predefined weight to obtain the comparison result; calculating the reliability from the comparison result and the predefined weight according to a pre-defined rule.
8. The method according to claim 7, characterized in that the similarity is determined by:
calculating the similarity between a benchmark face characteristic value and a target face characteristic value by a formula (not reproduced in this text) in which x is the target face characteristic value, y is the benchmark face characteristic value, n is the length of the target face characteristic value, and m is the length of the benchmark face characteristic value.
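Since the formula itself is not reproduced here, the following is only one plausible reading of the variables x, y, n and m: a cosine similarity computed over the common length of the two characteristic values. It is an assumption, not the claimed formula.

```python
import numpy as np

def similarity(x: np.ndarray, y: np.ndarray) -> float:
    """x: target face characteristic value (length n); y: benchmark value (length m)."""
    k = min(len(x), len(y))                 # compare over the common length of the two values
    x, y = x[:k], y[:k]
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))
```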
9. The method according to claim 8, characterized in that the quality of the face pictures extracted from the short video and the face data source meets a predetermined quality requirement, the quality being calculated by a formula (not reproduced in this text) in which Q is the picture quality, A is the image, and A' is the image A after Gaussian filtering.
10. The method according to claim 9, characterized in that the number of benchmark face pictures extracted from the face data source is required to be:
if the face data source is a short video, a predetermined number K of face pictures meeting the predetermined quality requirement are extracted from the short video;
if the face data source consists of pictures containing the benchmark face, then when the number J of pictures containing the benchmark face is within the predetermined number K, all J pictures containing the benchmark face are selected; when the number of pictures containing the benchmark face exceeds the predetermined number K, K pictures containing the benchmark face are selected.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711095734.2A CN107798308B (en) | 2017-11-09 | 2017-11-09 | Face recognition method based on short video training method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107798308A true CN107798308A (en) | 2018-03-13 |
CN107798308B CN107798308B (en) | 2020-09-22 |
Family
ID=61548007
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711095734.2A Active CN107798308B (en) | 2017-11-09 | 2017-11-09 | Face recognition method based on short video training method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107798308B (en) |
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101187975A (en) * | 2007-12-25 | 2008-05-28 | 西南交通大学 | A face feature extraction method with illumination robustness |
US20130004028A1 (en) * | 2011-06-28 | 2013-01-03 | Jones Michael J | Method for Filtering Using Block-Gabor Filters for Determining Descriptors for Images |
CN102693418A (en) * | 2012-05-17 | 2012-09-26 | 上海中原电子技术工程有限公司 | Multi-pose face identification method and system |
CN104008370A (en) * | 2014-05-19 | 2014-08-27 | 清华大学 | Video face identifying method |
CN105184238A (en) * | 2015-08-26 | 2015-12-23 | 广西小草信息产业有限责任公司 | Human face recognition method and system |
CN105354543A (en) * | 2015-10-29 | 2016-02-24 | 小米科技有限责任公司 | Video processing method and apparatus |
CN105426860A (en) * | 2015-12-01 | 2016-03-23 | 北京天诚盛业科技有限公司 | Human face identification method and apparatus |
CN105956518A (en) * | 2016-04-21 | 2016-09-21 | 腾讯科技(深圳)有限公司 | Face identification method, device and system |
CN106503691A (en) * | 2016-11-10 | 2017-03-15 | 广州视源电子科技股份有限公司 | Identity labeling method and device for face picture |
CN106778522A (en) * | 2016-11-25 | 2017-05-31 | 江南大学 | A kind of single sample face recognition method extracted based on Gabor characteristic with spatial alternation |
CN106845357A (en) * | 2016-12-26 | 2017-06-13 | 银江股份有限公司 | A kind of video human face detection and recognition methods based on multichannel network |
CN106971158A (en) * | 2017-03-23 | 2017-07-21 | 南京邮电大学 | A kind of pedestrian detection method based on CoLBP symbiosis feature Yu GSS features |
Non-Patent Citations (2)
Title |
---|
DEEPSHIKHA BHATI et al.: "Survey – A Comparative Analysis of Face Recognition Technique", International Journal of Engineering Research and General Science *
YANG Xin et al.: "Weighted adaptive face recognition based on class matrix and feature fusion", Journal of Image and Graphics (中国图象图形学报) *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109325964A (en) * | 2018-08-17 | 2019-02-12 | 深圳市中电数通智慧安全科技股份有限公司 | A kind of face tracking methods, device and terminal |
CN109508701A (en) * | 2018-12-28 | 2019-03-22 | 北京亿幕信息技术有限公司 | A kind of recognition of face and method for tracing |
CN109508701B (en) * | 2018-12-28 | 2020-09-22 | 北京亿幕信息技术有限公司 | Face recognition and tracking method |
CN111507143A (en) * | 2019-01-31 | 2020-08-07 | 北京字节跳动网络技术有限公司 | Expression image effect generation method and device and electronic equipment |
US12020469B2 (en) | 2019-01-31 | 2024-06-25 | Beijing Bytedance Network Technology Co., Ltd. | Method and device for generating image effect of facial expression, and electronic device |
CN110084162A (en) * | 2019-04-18 | 2019-08-02 | 上海钧正网络科技有限公司 | A kind of peccancy detection method, apparatus and server |
CN110415424A (en) * | 2019-06-17 | 2019-11-05 | 众安信息技术服务有限公司 | A kind of authentication method, apparatus, computer equipment and storage medium |
CN110415424B (en) * | 2019-06-17 | 2022-02-11 | 众安信息技术服务有限公司 | Anti-counterfeiting identification method and device, computer equipment and storage medium |
CN116935286A (en) * | 2023-08-03 | 2023-10-24 | 广州城市职业学院 | Short video identification system |
CN116935286B (en) * | 2023-08-03 | 2024-01-09 | 广州城市职业学院 | Short video identification system |
Also Published As
Publication number | Publication date |
---|---|
CN107798308B (en) | 2020-09-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107798308A (en) | Face recognition method based on short video training method | |
CN106780906B (en) | ID-and-face unified recognition method and system based on deep convolutional neural networks | |
CN104866829B (en) | Cross-age face verification method based on feature learning | |
CN109165566A (en) | Face recognition convolutional neural network training method based on a novel loss function | |
CN107590452A (en) | Identity recognition method and device based on gait and face fusion | |
CN110728225B (en) | High-speed face searching method for attendance checking | |
CN104700078B (en) | Robot scene recognition method based on scale-invariant feature extreme learning machine | |
Yao et al. | Robust CNN-based gait verification and identification using skeleton gait energy image | |
CN111274916A (en) | Face recognition method and face recognition device | |
CN108009482A (en) | Method for improving face recognition efficiency | |
CN110516616A (en) | Dual-authentication face anti-spoofing method based on large-scale RGB and near-infrared data sets | |
CN113221655B (en) | Face spoofing detection method based on feature space constraint | |
CN109255289B (en) | Cross-aging face recognition method based on unified generation model | |
CN109409297A (en) | Identity recognition method based on dual-channel convolutional neural networks | |
CN112989889B (en) | Gait recognition method based on gesture guidance | |
CN109063572A (en) | Fingerprint liveness detection method based on multi-scale and multi-convolutional-layer feature fusion | |
CN108171223A (en) | Face recognition method and system based on multiple models and multiple channels | |
CN111783748A (en) | Face recognition method and device, electronic equipment and storage medium | |
CN108428231A (en) | Multi-parameter part surface roughness learning method based on random forest | |
CN107977439A (en) | Face image database construction method | |
CN106650574A (en) | Face identification method based on PCANet | |
CN112926522B (en) | Behavior recognition method based on skeleton pose and spatio-temporal graph convolutional network | |
CN107038400A (en) | Face recognition device and method, and target person tracking device and method using the same | |
CN108846269A (en) | Manifold-oriented identity authentication method and identity authentication system | |
CN112949468A (en) | Face recognition method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |