CN108932468A - A face recognition method suitable for psychological applications - Google Patents
- Publication number
- CN108932468A (application numbers CN201810395355.3A / CN201810395355A)
- Authority
- CN
- China
- Prior art keywords
- image
- module
- face
- vector
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
The invention belongs to the field of face recognition technology and discloses a face recognition method suitable for psychological applications. Image acquisition is carried out to obtain a high-definition image; a face image is identified and segmented from the high-definition image; the facial image produced by the segmentation module is processed; a to-be-identified feature vector of facial detail features is generated; feature vectors of known faces are pre-stored by a face feature library module; the to-be-identified feature vector is matched against all feature vectors in the feature library, and the identity corresponding to the to-be-identified feature vector is output; the acquired image information and matching result information are displayed. The invention constructs a facial feature database through the face feature library module and carries out rapid matching through the feature matching module, so that the identity of a facial image to be identified can be judged in real time even on a system with relatively limited computing resources, with short processing time and high accuracy.
Description
Technical field
The invention belongs to the field of face recognition technology, and in particular relates to a face recognition method suitable for psychological applications.
Background technique
At present, the prior art commonly used in the trade is as follows:
Face recognition, also known as facial recognition or face identification, uses an ordinary video camera as the identification-information acquisition device. The face image of the identification object is obtained in a non-contact manner, and the computer system completes the identification process by comparing the acquired image with database images. However, existing face recognition extracts face images inaccurately, which affects identification accuracy; at the same time, it is inefficient and time-consuming.
Cognition and understanding of figures are an important foundation for humans to obtain external information and to judge and react. Automatic identification of figure similarity is one of the key technologies for improving the efficiency of human visual cognition and expanding the domain of intelligent cognition. It is widely used in industrial technology, graphics and image processing, pattern recognition and artificial intelligence, and exerts a subtle but profound influence on our daily life. Developing a set of shape-similarity identification techniques is therefore very necessary. With the continuous development of computer digitization and graphics technology, the efficiency of digitally processing descriptive geometric feature information has also greatly increased. The support of reasonable, efficient algorithms and of the hardware environment gives this research sufficient feasibility.
Existing shape-similarity recognition methods often rely on probability-statistics algorithms, the least mean-square error of characteristic values, and weighted-average algorithms over necessary geometric appearance features. Although they achieve a certain efficiency, they still have shortcomings: the matching realized by the algorithm does not correspond intuitively to visual discrimination; the algorithms are complex, which leads to a large amount of data processing and high operating cost; and averaging in the analysis dilutes the influence of changes in important geometric features on the overall similarity, so that stability and accuracy suffer a certain deviation.
In conclusion problem of the existing technology is:
Existing face recognition is inaccurate to facial image zooming-out, influences the identification accuracy to face;Face recognition simultaneously
Low efficiency, time-consuming.
In existing figure segmentation, similarity often realizes that the matching of process and visual discrimination is not intuitive with recognition methods,
Algorithm is complicated, causes data processing amount big, and operating cost is high, and Stability and veracity is caused to there are problems that certain deviation.
The data processing accuracy of the prior art is poor.
Summary of the invention
In view of the problems of the existing technology, the present invention provides a face recognition method suitable for psychological applications.
The invention is realized as follows: a face recognition system suitable for psychological applications, comprising:
an image acquisition module, connected with the image segmentation module, for carrying out image acquisition through a high-definition camera to obtain a high-definition image;
an image segmentation module, connected with the image acquisition module and the main control module, for receiving the high-definition image and identifying and segmenting a face image from it.
A mathematical model of the two figures is established: an eigenmatrix corresponding to each figure is built from a complete vector group describing the figure; the angle between adjacent sides is calculated; the minimum distance between the two figures is calculated; and the calculated result is enhanced.
The side lengths and adjacent angles of the polygon in the mathematical model are represented counterclockwise by constructing a vector $S_1$:

$S_1 = (l_1, \alpha_1, l_2, \alpha_2, \dots, l_{N-1}, \alpha_{N-1}, l_N, \alpha_N)$;

$S_1$ has a one-to-one mapping relation with the polygon, and the representation is independent of the starting corner.
The complete vector group consists of the $2N$ counterclockwise vectors $S_1, S_2, \dots, S_{2N-1}, S_{2N}$, each in one-to-one mapping relation with the polygon; together they constitute a complete vector group of the polygon, expressed as follows:

$S_1 = (l_1, \alpha_1, l_2, \alpha_2, \dots, l_{N-1}, \alpha_{N-1}, l_N, \alpha_N)$;
$S_2 = (\alpha_1, l_2, \alpha_2, \dots, l_{N-1}, \alpha_{N-1}, l_N, \alpha_N, l_1)$;
……
$S_{2N-1} = (l_N, \alpha_N, l_1, \alpha_1, l_2, \alpha_2, \dots, l_{N-1}, \alpha_{N-1})$;
$S_{2N} = (\alpha_N, l_1, \alpha_1, l_2, \alpha_2, \dots, l_{N-1}, \alpha_{N-1}, l_N)$.

The complete vector group is represented by a matrix $S_E$, defined as the eigenmatrix of the polygon and expressed as follows:

$S_E = [S_1^T\; S_2^T\; \dots\; S_{2N-1}^T\; S_{2N}^T]$;
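As a minimal illustration (not part of the patent text), the complete vector group and eigenmatrix above can be built with NumPy; the function name `eigenmatrix` and the list-based inputs are assumptions of this sketch:

```python
import numpy as np

def eigenmatrix(lengths, angles):
    """Build the 2N x 2N eigenmatrix S_E of a polygon from its N side
    lengths and N adjacent angles: interleave (l1, a1, ..., lN, aN)
    into S_1 and stack all 2N cyclic shifts S_1 ... S_2N."""
    s1 = np.ravel(np.column_stack([lengths, angles]))  # (l1, a1, l2, a2, ...)
    n2 = s1.size                                       # 2N entries
    # row k is s1 rotated left by k positions, i.e. S_{k+1}
    return np.stack([np.roll(s1, -k) for k in range(n2)])

# unit square: all sides 1, all interior angles 90 degrees
SE = eigenmatrix([1, 1, 1, 1], [90, 90, 90, 90])
```

Because every cyclic shift appears as a row, the matrix is independent of which corner the traversal starts from, matching the one-to-one mapping claim above.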
a main control module, connected with the image acquisition module, image segmentation module, image processing module, feature generation module, face feature library module, feature matching module and display module, for scheduling each module to work normally;
an image processing module, connected with the main control module, for processing the facial image segmented by the segmentation module;
a feature generation module, connected with the main control module, for generating the to-be-identified feature vector of facial detail features from the processed facial image;
a face feature library module, connected with the main control module, for pre-storing the feature vectors of known faces;
a feature matching module, connected with the main control module, for matching the to-be-identified feature vector output by the feature generation module against all feature vectors in the feature library and outputting the identity corresponding to the to-be-identified feature vector.
The matching calculation method comprises:
Step 1: collect $N$ samples as training set $X$ and find the sample mean $m$ using the following formula:

$m = \frac{1}{N}\sum_{i=1}^{N} x_i$, where $x_i \in$ the sample training set $X = (x_1, x_2, \dots, x_N)$.

Step 2: find the scatter matrix $S$:

$S = \sum_{i=1}^{N} (x_i - m)(x_i - m)^T$.

Find the eigenvalues $\lambda_i$ and corresponding eigenvectors $e_i$ of the scatter matrix, where the $e_i$ are the principal components, and arrange the eigenvalues from large to small: $\lambda_1, \lambda_2, \dots$. Take the first $p$ values $\lambda_1, \lambda_2, \dots, \lambda_p$ to determine the face space $E = (e_1, e_2, \dots, e_p)$. Each element of the training sample set $X$ is projected onto this face space by the following formula:

$x'_i = E^T x_i$, $i = 1, 2, \dots, N$;

what is obtained by the above formula is the $p$-dimensional vector produced from the original vector by PCA dimensionality reduction.
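The two steps above can be sketched with NumPy as follows. This is a minimal PCA ("eigenface") reduction, not the patent's exact implementation; note that the sketch subtracts the mean before projecting, a standard choice that the projection formula above leaves implicit:

```python
import numpy as np

def pca_project(X, p):
    """PCA reduction of N samples (one per column of X) to p dimensions.
    Returns the sample mean m, the face space E, and the projections."""
    m = X.mean(axis=1, keepdims=True)       # sample mean m
    S = (X - m) @ (X - m).T                 # scatter matrix S
    vals, vecs = np.linalg.eigh(S)          # eigh: S is symmetric
    order = np.argsort(vals)[::-1]          # eigenvalues, large to small
    E = vecs[:, order[:p]]                  # face space E = (e1, ..., ep)
    return m, E, E.T @ (X - m)              # x'_i = E^T (x_i - m)
```

For samples that lie near a line, a single principal component already reconstructs them: `E @ proj + m` recovers `X`.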
Multi-face identification is carried out using the SRC face recognition algorithm, including:
performing face detection on the current frame and obtaining the recognition result of each face of the current frame, ordered by coordinates; according to the recognition result of each face in the current frame, calculating the recognition results of the corresponding face over the adjacent $n$ frames; and counting the identities of each face, the final identity of the target being determined by the unified identity that exceeds half, $n/2$, of the frames.
Furthermore, the reconstruction errors $\{r_1, r_2, \dots, r_n\}$, $r_1 < r_2 < \dots < r_n$, between the picture to be identified and each category of the face database are calculated, and the final recognition result is determined from the obtained similarity value according to a threshold rule, where $T_1$ is the threshold value, $T_1 = 0.6$.
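The per-face identity smoothing described above (a strict majority over n adjacent frames) is simple to state in code. This is a sketch of that voting step only, not of the SRC classifier itself; the function name and the `"unknown"` fallback are assumptions:

```python
from collections import Counter

def final_identity(frame_results, unknown="unknown"):
    """Return the identity that wins a strict majority (> n/2) of the
    n adjacent-frame recognition results; otherwise leave the face
    unresolved, as the text above requires."""
    n = len(frame_results)
    label, count = Counter(frame_results).most_common(1)[0]
    return label if count > n / 2 else unknown
```

For example, four frames split 2/1/1 produce no majority, so the target's identity stays unresolved until more frames agree.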
a display module, connected with the main control module, for displaying the acquired image information and the matching result information.
Further, preprocessing the source figure and the target figure comprises:
setting an appropriate threshold according to the length-width ratio of the figure's minimum bounding rectangle, and filtering;
setting a threshold according to the minimum ratio of each side length to the perimeter in the source figure, and removing the singular parts of the target figure;
simplifying the number of edges of the target figure so that it has the same number of edges as the source figure.
The simplification is processed as follows:
1) The source signal from which the singular parts of the target figure have been removed is given a low-energy pretreatment: at each sampling instant $p$, every time-frequency value whose amplitude is less than the threshold $\varepsilon$ is set to 0. The setting of the threshold $\varepsilon$ is determined according to the average energy of the received signal.
2) The non-zero time-frequency data at each instant $p$ ($p = 0, 1, 2, \dots, P-1$) are found, together with the frequency indices corresponding to the non-zero time-frequency responses at instant $p$. These non-zero values are normalized, giving the pretreated vector $b(p,q) = [b_1(p,q), b_2(p,q), \dots, b_M(p,q)]^T$.
3) A clustering algorithm is used to estimate the hopping instant of each hop, the normalized mixing-matrix column vector corresponding to each hop, and the hopping frequencies. At each instant $p$ ($p = 0, 1, 2, \dots, P-1$) the represented frequency values are clustered; the number of cluster centres obtained indicates the number of carrier frequencies present at instant $p$, and the cluster centres themselves indicate the sizes of the carrier frequencies. For each sampling instant $p$, the clustering algorithm likewise yields a corresponding set of cluster centres for the mixing vectors. Averaging over all instants and rounding gives the estimate of the number of source signals.
The instants at which the cluster count changes are found and denoted $p_h$; for each section of consecutive values of $p_h$ the median is taken, the median of the $l$-th connected section giving the estimate of the $l$-th frequency-hopping instant.
From the estimates at the instants $p \ne p_h$ and the frequency-hopping instants estimated in the previous step, the mixing-matrix column vectors corresponding to each hop are estimated, giving the estimated mixing-matrix column vectors of the $l$-th hop; the carrier frequency corresponding to each hop is then estimated, giving the frequency estimates corresponding to the $l$-th hop.
4) The time-frequency-domain frequency-hopping source signals are estimated from the normalized mixing-matrix column vectors obtained by the estimation.
5) The time-frequency-domain frequency-hopping source signals between different hopping points are spliced. The incidence angles corresponding to the $l$-th hop are estimated, the incidence angle of the $n$-th source signal of the $l$-th hop being computed from the $m$-th element of the $n$-th mixing-matrix column vector estimated for the $l$-th hop, where $c$ denotes the speed of light, $c = 3 \times 10^8$ m/s. The correspondence between the source signals estimated for the $l$-th hop ($l = 2, 3, \dots$) and those estimated for the first hop is then judged: the $m_n^{(l)}$-th signal of the $l$-th hop's estimate and the $n$-th signal of the first hop's estimate belong to the same source signal. The signals estimated at different hopping points that belong to the same source signal are stitched together as the final time-frequency-domain source-signal estimate, $Y_n(p,q)$ denoting the time-frequency-domain estimate of the $n$-th source signal at time-frequency point $(p,q)$, $p = 0, 1, 2, \dots, P$, $q = 0, 1, 2, \dots, N_{fft}-1$.
6) The time-domain frequency-hopping source signals are restored from the source-signal time-frequency-domain estimates: for each sampling instant $p$, the frequency-domain data $Y_n(p,q)$, $q = 0, 1, 2, \dots, N_{fft}-1$, are given an $N_{fft}$-point IFFT, yielding the time-domain frequency-hopping source signal $y_n(p, q_t)$ ($q_t = 0, 1, 2, \dots, N_{fft}-1$) corresponding to sampling instant $p$; the time-domain signals obtained at all instants are then merged to obtain the final time-domain frequency-hopping source-signal estimate, with $K_c = N_{fft}/C$, where $C$ is the number of sampling points in the windowing interval of the short-time Fourier transform and $N_{fft}$ is the length of the FFT.
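Step 1) of the procedure above (zeroing low-energy time-frequency values) can be sketched in a few lines of NumPy. The array layout and the energy-based rule in `threshold_from_energy` are assumptions of this sketch; the text only says that ε is set from the average energy of the received signal:

```python
import numpy as np

def low_energy_preprocess(tf_data, eps):
    """Zero every time-frequency value whose amplitude is below the
    threshold eps, as in step 1 of the splicing procedure.
    tf_data is a complex STFT array (any shape)."""
    out = tf_data.copy()
    out[np.abs(out) < eps] = 0
    return out

def threshold_from_energy(tf_data, factor=0.1):
    """One plausible rule (an assumption, not the patent's exact one):
    a fixed fraction of the RMS amplitude of the received signal."""
    return factor * np.sqrt(np.mean(np.abs(tf_data) ** 2))
```

After this step only the significant time-frequency responses survive, which is what makes the later clustering of frequency indices meaningful.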
Further, obtaining the Euclidean distance of the most similar vectors and the maximum similarity coefficient in the eigenmatrices of the source figure and the target figure specifically comprises:
First, the eigenmatrices $P_E$ and $Q_E$ of source figure P and target figure Q are established counterclockwise:

$P_E = [P_1^T\; P_2^T\; \dots\; P_{2N-1}^T\; P_{2N}^T]$;
$Q_E = [Q_1^T\; Q_2^T\; \dots\; Q_{2N-1}^T\; Q_{2N}^T]$.

The Euclidean distance formula $d(x,y)$ and the included-angle cosine formula $\mathrm{sim}(x,y)$ are as follows:

$d(x,y) = \sqrt{\textstyle\sum_k (x_k - y_k)^2}$; $\quad \mathrm{sim}(x,y) = \dfrac{x \cdot y}{\|x\|\,\|y\|}$.

On the basis of $d(x,y)$ and $\mathrm{sim}(x,y)$, two matrices $D$ and $S$ are defined, with $D_{ij} = d(P_i, Q_j)$ and $S_{ij} = \mathrm{sim}(P_i, Q_j)$, and the extrema in $D$ and $S$ are found:

$Eu_e = \min\{D_{ij}\}$, $1 \le i, j \le 2N$; $\quad Sim_e = \max\{S_{ij}\}$, $1 \le i, j \le 2N$.

Then the eigenmatrices of figures P and Q are constructed again in the clockwise direction, and the above calculation is repeated to find the extrema $Eu_c$ and $Sim_c$ between the most similar vectors of the two eigenmatrices. Finally:

$Eu = \min\{Eu_e, Eu_c\}$;
$Sim = \max\{Sim_e, Sim_c\}$.

$Eu$ and $Sim$ are the Euclidean distance of the corresponding most similar vectors of the two figures P and Q and their maximum similarity coefficient.
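The pairwise search over the two eigenmatrices can be written compactly with NumPy broadcasting. This sketch assumes each eigenmatrix is given with one cyclic-shift vector per row, and covers one traversal direction (the clockwise pass is the same call on the re-ordered matrices):

```python
import numpy as np

def match_measures(PE, QE):
    """Return (Eu, Sim): the Euclidean distance of the most similar
    row pair of PE and QE, and the maximum cosine similarity S_ij,
    taken over all 2N x 2N row pairs."""
    # D_ij = d(P_i, Q_j) for every pair of rows
    D = np.linalg.norm(PE[:, None, :] - QE[None, :, :], axis=2)
    # S_ij = cos angle between P_i and Q_j
    norms = np.linalg.norm(PE, axis=1)[:, None] * np.linalg.norm(QE, axis=1)[None, :]
    S = (PE @ QE.T) / norms
    return D.min(), S.max()
```

Two identical figures give `Eu = 0` and `Sim = 1`, which is the intuition the measure is built on: the closer the best-matching cyclic shifts, the more similar the shapes.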
Another object of the present invention is to provide a face recognition method suitable for psychological applications, comprising the following steps:
Step 1: carry out image acquisition through the image acquisition module to obtain a high-definition image; identify and segment a face image from the high-definition image through the image segmentation module.
Step 2: the main control module schedules the image processing module to process the facial image segmented by the segmentation module.
Step 3: generate the to-be-identified feature vector of facial detail features through the feature generation module; pre-store the feature vectors of known faces through the face feature library module.
Step 4: match, through the feature matching module, the to-be-identified feature vector output by the feature generation module against all feature vectors in the feature library, and output the identity corresponding to the to-be-identified feature vector.
Step 5: display the acquired image information and matching result information through the display module.
Further, the processing method of the image processing module is as follows:
First, the image to be processed is segmented by the segmentation module to obtain a portrait image.
Second, face detection is carried out on the portrait image to obtain the N characteristic points of the chin contour of the face.
Then, an enclosed region is formed from the N characteristic points and multiple preset key points in the image to be processed.
Finally, matting is applied to the portrait image according to the enclosed region to obtain the face-region image.
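The final matting step amounts to rasterizing the enclosed region into a mask and keeping only the pixels inside it. As a stand-in for a real matting library, the sketch below fills a polygon with even-odd ray casting; the function name and the (x, y) vertex convention are assumptions:

```python
import numpy as np

def polygon_mask(h, w, polygon):
    """Rasterize the enclosed region (chin-contour points joined with
    the preset key points) into a boolean matte of shape (h, w),
    using even-odd ray casting per pixel."""
    ys, xs = np.mgrid[0:h, 0:w]
    inside = np.zeros((h, w), dtype=bool)
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # pixels whose horizontal ray crosses this edge
        crosses = (ys < y1) != (ys < y2)
        with np.errstate(divide="ignore", invalid="ignore"):
            xint = x1 + (ys - y1) * (x2 - x1) / (y2 - y1)
        inside ^= crosses & (xs < xint)   # toggle on each crossing
    return inside
```

Applying `image * polygon_mask(...)` then keeps the face region and zeroes everything below the neck, which is the separation the text describes.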
Advantages and positive effects of the present invention:
The image processing module provided by the present invention takes the portrait image with the background removed, carries out face detection on it, and obtains the N characteristic points of the chin contour of the face, so that the boundary between the face and the neck can be divided accurately. An enclosed region is formed from the N characteristic points and multiple preset key points in the image to be processed, determining the range to be removed in the next step; matting is applied to the portrait image according to the enclosed region to obtain the face-region image, which is accurately separated from the body parts below the neck. Thus, on the basis of existing green-screen matting, the face of a portrait image can be accurately separated from the body parts below the neck. At the same time, a facial feature database is constructed through the face feature library module, and rapid matching is carried out through the feature matching module, so that the identity of the facial image to be identified can be judged in real time even on a system with relatively limited computing resources, with short processing time and high accuracy.
The method of extracting facial-image eigenvectors in the present invention improves face recognition to a certain extent and is beneficial to image acquisition and identification.
The present invention improves the machine's visual discrimination of face-shape similarity, and is especially helpful for the difficult case of highly similar figures that are not easy to distinguish manually; the test results have strong stability and reliability; the detection time is short, the operation is efficient, and the implementation cost is low. The present invention only queries the sides of the figure, reducing the amount of data processing. By constructing the eigenmatrix of the figure, choosing suitable decision criteria, and applying multiple enhancing nonlinear transformations to the eigenmatrix elements, a similarity measure is established using extreme values and a multi-criteria weighted average, so that the algorithm is efficient and strongly stable.
The preprocessing of the source figure and the target figure described above makes it possible to process the data accurately.
Brief description of the drawings
Fig. 1 is a flow chart of the face recognition method suitable for psychological applications provided by an embodiment of the present invention.
Fig. 2 is a structural block diagram of the face recognition system suitable for psychological applications provided by an embodiment of the present invention.
In the figures: 1, image acquisition module; 2, image segmentation module; 3, main control module; 4, image processing module; 5, feature generation module; 6, face feature library module; 7, feature matching module; 8, display module.
Fig. 3 shows a group of 12 test figures provided by an embodiment of the present invention under the AutoCAD 2002 environment.
Specific embodiments
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further elaborated below with reference to embodiments. It should be understood that the specific embodiments described here only serve to illustrate the present invention and are not intended to limit it.
The application principle of the invention is further described below with reference to the drawings and specific embodiments.
As shown in Fig. 1, the face recognition method suitable for psychological applications provided by the invention comprises the following steps:
S101: carry out image acquisition through the image acquisition module to obtain a high-definition image; identify and segment a face image from the high-definition image through the image segmentation module.
S102: the main control module schedules the image processing module to process the facial image segmented by the segmentation module.
S103: generate the to-be-identified feature vector of facial detail features through the feature generation module; pre-store the feature vectors of known faces through the face feature library module.
S104: match, through the feature matching module, the to-be-identified feature vector output by the feature generation module against all feature vectors in the feature library, and output the identity corresponding to the to-be-identified feature vector.
S105: display the acquired image information and matching result information through the display module.
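The flow S101–S105 can be sketched as a single pipeline function. The patent only names the modules, so every callable signature here is a hypothetical stand-in, and the nearest-neighbour match over the feature library uses plain squared Euclidean distance as an assumption:

```python
def recognize(frame, segment, process, featurize, database, display):
    """End-to-end flow S101-S105: each stage is passed in as a
    callable; `database` maps identity -> stored feature vector."""
    face = segment(frame)             # S101: detect and segment the face
    face = process(face)              # S102: image processing
    query = featurize(face)           # S103: to-be-identified feature vector
    # S104: match against every stored vector, keep the nearest identity
    identity = min(database, key=lambda k: sum((a - b) ** 2
                   for a, b in zip(query, database[k])))
    display(frame, identity)          # S105: show image and result
    return identity
```

The point of the sketch is the data flow: the main control module's scheduling reduces to calling each stage in order and threading the face image and feature vector through them.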
As shown in Fig. 2, the face recognition system suitable for psychological applications provided by the invention comprises: image acquisition module 1, image segmentation module 2, main control module 3, image processing module 4, feature generation module 5, face feature library module 6, feature matching module 7 and display module 8.
The image acquisition module 1 is connected with the image segmentation module 2 and carries out image acquisition through a high-definition camera to obtain a high-definition image.
The image segmentation module 2 is connected with the image acquisition module 1 and the main control module 3, receives the high-definition image, and identifies and segments a face image from it.
The main control module 3 is connected with the image acquisition module 1, image segmentation module 2, image processing module 4, feature generation module 5, face feature library module 6, feature matching module 7 and display module 8, and schedules each module to work normally.
The image processing module 4 is connected with the main control module 3 and processes the facial image segmented by the segmentation module 2.
The feature generation module 5 is connected with the main control module 3 and generates the to-be-identified feature vector of facial detail features from the processed facial image.
The face feature library module 6 is connected with the main control module 3 and pre-stores the feature vectors of known faces.
The feature matching module 7 is connected with the main control module 3, matches the to-be-identified feature vector output by the feature generation module against all feature vectors in the feature library, and outputs the identity corresponding to the to-be-identified feature vector.
The display module 8 is connected with the main control module 3 and displays the acquired image information and matching result information.
The processing method of the image processing module 4 provided by the invention is as follows:
First, the image to be processed is segmented by the segmentation module to obtain a portrait image.
Second, face detection is carried out on the portrait image to obtain the N characteristic points of the chin contour of the face.
Then, an enclosed region is formed from the N characteristic points and multiple preset key points in the image to be processed.
Finally, matting is applied to the portrait image according to the enclosed region to obtain the face-region image.
Below with reference to concrete analysis, the invention will be further described.
Image segmentation module is connect with image capture module, main control module, for receiving high-definition image, and from high definition figure
It is identified as at and is partitioned into face image;
The mathematical model for establishing two figures establishes eigenmatrix corresponding with figure by the complete Vector Groups of description figure,
Calculate the angle on adjacent both sides;Calculate the minimum distance between two figures;Enhancement processing to calculated result.
The side length of the mathematical model polygon of the foundation and adjacent angle are by one vector S of construction counterclockwise1Indicate polygon
Shape:
S1=(l1,α1,l2,α2…lN-1,αN-1,lN,αN);
S1There are mapping relations one by one with the polygon, indicates unrelated with corner initial order;
The complete Vector Groups have 2N vector S counterclockwise1、S2……S2N-1、S2NHave one with polygon
One mapping relations constitute a complete Vector Groups of the polygon, are expressed as follows:
S1=(l1,α1,l2, α2…lN-1,αN-1,lN,αN);
S2=(α1,l2, α2…lN-1,αN-1,lN,αN,l1);
……
S2N-1=(lN,αN,l1,α1,l2,α2…lN-1,αN-1);
S2N=(αN,l1,α1,l2, α2…lN-1,αN-1,lN);
With matrix SEIt indicates complete vector, and defines SEFor the eigenmatrix of the polygon, SEIt is expressed as follows:
Main control module, it is special with image capture module, image segmentation module, image processing module, feature generation module, face
Library module, characteristic matching module, display module connection are levied, is worked normally for dispatching modules;
Image processing module is connect with main control module, is handled for the facial image to segmentation module segmentation;
Feature generation module, connect with main control module, generates facial detail feature for carrying out to facial image after processing
Feature vector to be identified;
Face characteristic library module, connect with main control module, for being pre-stored with the feature vector of known face;
Characteristic matching module is connect with main control module, the feature to be identified for exporting the feature generation module
All feature vectors in vector and the feature database carry out matching primitives, export the corresponding body of the feature vector to be identified
Part result;
Matching primitives method includes:
Step 1: collecting N number of sample as training set X, sample mean m is found out using following formula:
Wherein, xi ∈ sample training collection X=(x1, x2 ..., xN);
Step 2: finding out scatter matrix S:
Find out the eigenvalue λ i and corresponding feature vector ei of scatter matrix, wherein ei is principal component, by characteristic value from
It arrives greatly and small is arranged successively λ 1, λ 2 ...;
P value is taken out, λ 1, λ 2 ..., λ p determine face space E=(e1, e2 ..., eP), on this face space, training sample
In this X, the point that each element projects to the space is obtained by following formula:
X'i=Etxi, t=1,2 ..., N;
What is obtained by above formula is p dimensional vector by former vector after PCA dimensionality reduction;
Plurality of human faces identification is carried out using SRC face recognition algorithms, including:
The recognition result of each face of present frame is obtained to present frame Face datection and by coordinate sequence;It is each according to present frame
The recognition result of a face calculates corresponding each face respectively adjacent n frame recognition result;The identity for counting each face, by surpassing
The Unified Identity of more than half n/2 determines the final identity of target;
Wherein, calculate picture and face database to be identified it is of all categories between reconstruction error { r1, r2 ... rn }, r1<r2<……
<Rn, by obtained similarity value according toRule determine final recognition result;Wherein
T1 is rate value, T1=0.6;
Display module is connect with main control module, for showing acquisition image information and matching result information.
Source figure and targeted graphical, which are made to pre-process, in the figure includes:
Appropriate thresholding is set according to figure minimum containment rectangle length-width ratio, is filtered;
Thresholding is set according to side length each in the figure of source and the minimum value of perimeter ratio, removes the surpriseization part in targeted graphical;
Abbreviation processing is made to targeted graphical number of edges, makes that there is identical number of edges with source figure;
Abbreviation is handled:
1) to the source signal of the surpriseization part in removal targeted graphicalIt carries out low
Energy pretreatment will that is, in each sampling instant pValue of the amplitude less than thresholding ε sets 0,
It obtainsThe setting of thresholding ε is determined according to the average energy for receiving signal;
2) the time-frequency numeric field data of p moment (p=0,1,2, P-1) non-zero is found out, is used
It indicates, whereinIndicate the response of p moment time-frequencyCorresponding frequency indices, right when non-zero
The normalization pretreatment of these non-zeros, obtains pretreated vector b (p, q)=[b1(p,q),b2(p,q),,bM(p,q)
]T, wherein
3) A clustering algorithm is used to estimate the hop instant of each hop, the normalized mixing-matrix column vector corresponding to each hop, and the hop frequencies. At each instant p (p = 0, 1, 2, …, P−1) the represented frequency values are clustered; the number of cluster centres obtained indicates the number of carrier frequencies present at instant p, and the cluster centres themselves indicate the carrier-frequency values. For each sampling instant p (p = 0, 1, 2, …, P−1), the clustering algorithm likewise yields the same number of cluster centres. Averaging over all instants and rounding gives the estimate of the number of source signals, i.e.:
The instants at which this occurs are found and denoted ph; for each run of consecutive ph values the median is taken, the median of the l-th run of consecutive ph giving the estimate of the l-th frequency-hopping instant;
From the estimates obtained at p ≠ ph and the frequency-hopping instants estimated in step 4), the mixing-matrix column vectors corresponding to each hop are estimated; the specific formula is:
Here the result denotes the estimated mixing-matrix column vector corresponding to hop l; the carrier frequency corresponding to each hop is then estimated, the frequency estimate corresponding to hop l being calculated as follows:
4) The time-frequency-domain frequency-hopping source signals are estimated from the normalized mixing-matrix column vectors obtained above;
5) The time-frequency-domain frequency-hopping source signals between different hops are spliced. The incidence angles corresponding to hop l are estimated, the incidence angle of the n-th source signal in hop l being calculated as follows:
where the symbol denotes the m-th element of the n-th mixing-matrix column vector estimated for hop l, and c denotes the speed of light, i.e. c = 3 × 10^8 m/s. The correspondence between the source signals estimated in hop l (l = 2, 3, …) and those estimated in the first hop is judged; the judgment formula is as follows:
Here m_n^(l) indicates that the m_n^(l)-th signal estimated in hop l and the n-th signal estimated in the first hop belong to the same source signal; the signals estimated at different hops that belong to the same source signal are stitched together as the final time-frequency-domain source-signal estimate, Y_n(p, q) denoting the time-frequency-domain estimate of the n-th source signal at time-frequency point (p, q), p = 0, 1, 2, …, P, q = 0, 1, 2, …, N_fft − 1, i.e.:
6) The time-domain frequency-hopping source signals are recovered from the source-signal time-frequency-domain estimates: for each sampling instant p (p = 0, 1, 2, …), an N_fft-point IFFT is applied to the frequency-domain data Y_n(p, q), q = 0, 1, 2, …, N_fft − 1, giving the time-domain frequency-hopping source signal corresponding to sampling instant p, denoted y_n(p, q_t) (q_t = 0, 1, 2, …, N_fft − 1); the time-domain frequency-hopping source signals y_n(p, q_t) obtained at all instants are merged to obtain the final time-domain frequency-hopping source-signal estimate; the specific formula is as follows:
Here K_c = N_fft / C, where C is the number of samples in the windowing interval of the short-time Fourier transform and N_fft is the length of the FFT.
Obtaining the Euclidean distance of the most similar vectors and the maximum similarity coefficient in the eigenmatrices of the source graphic and the target graphic specifically includes:
Firstly, the eigenmatrices PE and QE of the source graphic P and the target graphic Q are established counter-clockwise:
PE = [P1^T P2^T … P2N-1^T P2N^T];
QE = [Q1^T Q2^T … Q2N-1^T Q2N^T];
The Euclidean distance formula d(x, y) and the included-angle cosine formula sim(x, y) are as follows:
d(x, y) = sqrt(Σi (xi − yi)^2); sim(x, y) = Σi xi·yi / (sqrt(Σi xi^2) · sqrt(Σi yi^2));
On the basis of d(x, y) and sim(x, y), two matrices D and S are redefined such that Dij = d(Pi, Qj) and Sij = sim(Pi, Qj);
The minimum value in D and the maximum value in S are found, letting respectively:
Eue = min{Dij}, 1 ≤ i, j ≤ 2N; Sime = max{Sij}, 1 ≤ i, j ≤ 2N;
Then the eigenmatrices of graphics P and Q are constructed clockwise, and the above calculation is repeated to find the Euclidean distance Euc and the similarity coefficient Simc between the most similar vectors in the two eigenmatrices;
Finally, let Eu = min{Eue, Euc};
Sim = max{Sime, Simc};
Eu and Sim are, respectively, the Euclidean distance between the most similar corresponding vectors of the two graphics P and Q and their maximum similarity coefficient.
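The pairwise comparison between two eigenmatrices can be sketched as follows, with each matrix holding the 2N cyclic vectors as rows. The function name is illustrative; the distance and cosine formulas are the standard ones named in the text:

```python
import numpy as np

def best_match(PE, QE):
    """Return the Euclidean distance of the most similar pair of vectors
    and the maximum cosine (similarity coefficient) between two
    eigenmatrices whose rows are the 2N cyclic vectors of each polygon."""
    # D[i, j] = d(P_i, Q_j): pairwise Euclidean distances
    D = np.linalg.norm(PE[:, None, :] - QE[None, :, :], axis=2)
    # S[i, j] = sim(P_i, Q_j): pairwise included-angle cosines
    S = (PE @ QE.T) / (np.linalg.norm(PE, axis=1)[:, None] *
                       np.linalg.norm(QE, axis=1)[None, :])
    return D.min(), S.max()

# identical polygons -> distance 0, cosine 1
P = np.array([[1.0, 2.0, 3.0], [3.0, 1.0, 2.0]])
Eu, Sim = best_match(P, P)
```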
The invention is further described below with reference to a concrete analysis.
Multi-graphic similarity calculated using the Euclidean distance algorithm:
In the AutoCAD 2002 environment, a group of 12 test graphics was provided, as shown in Figure 3.
The calculated Euclidean distances are shown in Table 1:
Table 1: Euclidean distances between the target samples and the standard sample
Table 2: Similarity values after weighted averaging
As the data in the two tables show, the weighted results apply a local correction to the unweighted results and better match human visual discrimination. The calculation for Table 1 took 0.472 s.
The results show that the calculation results of the present invention are reliable, timely and efficient. The present invention can be used for acquiring target positions in detection and tracking in computer vision: according to an existing template, the region closest to it is found in the image and then tracked continuously. Existing algorithms such as Blob Tracking, Meanshift, Camshift and particle filtering all require theoretical support of this kind. There is also content-based image retrieval, commonly called searching images by image: for example, finding the images that best match a given one in a massive image database. This can be done by abstracting each image into several feature values, for example a Trace transform, an image hash or SIFT feature vectors, and matching these features against those stored in the database to return the corresponding images, thereby improving efficiency.
The foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent replacements and improvements made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (5)
1. A face recognition system suitable for psychology, characterized in that the face recognition system suitable for psychology comprises:
An image capture module, connected with the image segmentation module, for performing image acquisition by a high-definition camera to obtain high-definition images;
An image segmentation module, connected with the image capture module and the main control module, for receiving the high-definition images and identifying and segmenting face images from them;
Mathematical models of the two graphics are established: eigenmatrices corresponding to the graphics are established through the complete vector groups describing the graphics, and the angles between adjacent sides are calculated; the minimum distance between the two graphics is calculated; and enhancement processing is applied to the calculation results.
In the established mathematical model, the side lengths and adjacent angles of the polygon are represented by constructing a vector S1 counter-clockwise:
S1 = (l1, α1, l2, α2, …, lN-1, αN-1, lN, αN);
S1 has a one-to-one mapping relationship with the polygon, and the representation is independent of the starting corner;
The complete vector group consists of 2N counter-clockwise vectors S1, S2, …, S2N-1, S2N, each having a one-to-one mapping relationship with the polygon; together they constitute the complete vector group of the polygon, expressed as follows:
S1 = (l1, α1, l2, α2, …, lN-1, αN-1, lN, αN);
S2 = (α1, l2, α2, …, lN-1, αN-1, lN, αN, l1);
……
S2N-1 = (lN, αN, l1, α1, l2, α2, …, lN-1, αN-1);
S2N = (αN, l1, α1, l2, α2, …, lN-1, αN-1, lN);
The complete vector group is represented by the matrix SE, and SE is defined as the eigenmatrix of the polygon; SE is expressed as follows:
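The 2N vectors S1 … S2N above are exactly the cyclic rotations of the interleaved sequence (l1, α1, l2, α2, …, lN, αN); a minimal sketch (the function name is illustrative):

```python
def complete_vector_group(lengths, angles):
    """Build the 2N cyclic vectors S1..S2N of a polygon from its N side
    lengths l1..lN and N adjacent angles a1..aN: interleave them as
    (l1, a1, l2, a2, ..., lN, aN) and rotate one element at a time."""
    base = [v for pair in zip(lengths, angles) for v in pair]
    n = len(base)  # n = 2N
    return [base[i:] + base[:i] for i in range(n)]

group = complete_vector_group([1, 2, 3], [10, 20, 30])
# group[0] == [1, 10, 2, 20, 3, 30]  (S1)
# group[1] == [10, 2, 20, 3, 30, 1]  (S2)
```

Stacking these rows gives the eigenmatrix SE of the polygon.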
A main control module, connected with the image capture module, the image segmentation module, the image processing module, the feature generation module, the face feature library module, the feature matching module and the display module, for scheduling the modules so that they work normally;
An image processing module, connected with the main control module, for processing the face images segmented by the image segmentation module;
A feature generation module, connected with the main control module, for generating, from the processed face images, the feature vectors of facial detail features to be identified;
A face feature library module, connected with the main control module, for pre-storing the feature vectors of known faces;
A feature matching module, connected with the main control module, for performing matching computation between the feature vector to be identified output by the feature generation module and all feature vectors in the feature library, and outputting the identity result corresponding to the feature vector to be identified;
The matching computation method comprises:
Step 1: N samples are collected as the training set X, and the sample mean m is found using the following formula:
m = (1/N) Σ xi, wherein xi ∈ the sample training set X = (x1, x2, …, xN);
Step 2: the scatter matrix S is found:
S = Σ (xi − m)(xi − m)^T;
The eigenvalues λi and the corresponding eigenvectors ei of the scatter matrix are found, where the ei are the principal components; the eigenvalues are arranged in descending order λ1, λ2, …;
The first p eigenvalues λ1, λ2, …, λp are taken, determining the face space E = (e1, e2, …, ep); on this face space, the point onto which each element of the training sample set X projects is obtained by the following formula:
x'i = E^T xi, i = 1, 2, …, N;
What the above formula yields is the p-dimensional vector obtained from the original vector after PCA dimensionality reduction;
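Steps 1 and 2 above are standard PCA; a compact NumPy sketch follows. The function name and the sample-per-column layout are assumptions; the projection here subtracts the mean before applying E^T, which is the usual PCA convention (the patent's formula projects xi directly):

```python
import numpy as np

def pca_project(X, p):
    """PCA as in the matching computation: X holds one sample per column.
    Returns the p-dimensional projections of the samples onto the face
    space E spanned by the top-p eigenvectors of the scatter matrix."""
    m = X.mean(axis=1, keepdims=True)        # sample mean m
    S = (X - m) @ (X - m).T                  # scatter matrix S
    vals, vecs = np.linalg.eigh(S)           # eigh: ascending eigenvalues
    E = vecs[:, np.argsort(vals)[::-1][:p]]  # top-p principal components
    return E.T @ (X - m)                     # x'_i = E^T (x_i - m)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 20))                 # 20 samples of dimension 5
Y = pca_project(X, 2)                        # reduced to dimension p = 2
```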
Multi-face recognition is performed using the SRC face recognition algorithm, including:
Face detection is performed on the current frame, and the recognition results of the faces in the current frame are obtained and ordered by coordinates; from the recognition result of each face in the current frame, the recognition results over the adjacent n frames are computed for the corresponding face; the identities of each face are counted, and the final identity of the target is determined by the unified identity appearing in more than half (n/2) of the frames;
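The majority-vote step over n adjacent frames can be sketched as follows (the function name and the undecided return value are illustrative assumptions):

```python
from collections import Counter

def final_identity(frame_ids, n=None):
    """Majority vote over the per-frame recognition results of one face:
    the identity appearing in more than half of the n adjacent frames is
    taken as the final identity; otherwise the result is undecided."""
    n = len(frame_ids) if n is None else n
    label, count = Counter(frame_ids).most_common(1)[0]
    return label if count > n / 2 else None

print(final_identity(["alice", "alice", "bob", "alice"]))  # "alice" (3 of 4)
print(final_identity(["alice", "bob"]))                    # None (no majority)
```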
Wherein, the reconstruction errors {r1, r2, …, rn} between the picture to be identified and each category in the face database are calculated, with r1 < r2 < … < rn; the final recognition result is determined from the obtained similarity values according to the rule, where T1 is a threshold value, T1 = 0.6;
A display module, connected with the main control module, for displaying the acquired image information and the matching result information.
2. The face recognition system suitable for psychology according to claim 1, characterized in that pre-processing the source graphic and the target graphic comprises:
Setting an appropriate threshold according to the aspect ratio of the graphic's minimum bounding rectangle, and filtering;
Setting a threshold according to the minimum ratio of each side length in the source graphic to the perimeter, and removing the singular parts of the target graphic;
Simplifying the number of edges of the target graphic so that it has the same number of edges as the source graphic;
The simplification is handled as follows:
1) Low-energy pre-processing is applied to the source signal obtained after removing the singular parts of the target graphic: at each sampling instant p, values whose amplitude is less than the threshold ε are set to 0; the setting of the threshold ε is determined by the average energy of the received signal;
2) The non-zero time-frequency data at each instant p (p = 0, 1, 2, …, P−1) are found, together with the frequency indices corresponding to the non-zero time-frequency responses; these non-zero values are then normalized, giving the pre-processed vector b(p, q) = [b1(p, q), b2(p, q), …, bM(p, q)]^T, normalized to unit norm;
3) A clustering algorithm is used to estimate the hop instant of each hop, the normalized mixing-matrix column vector corresponding to each hop, and the hop frequencies. At each instant p (p = 0, 1, 2, …, P−1) the represented frequency values are clustered; the number of cluster centres obtained indicates the number of carrier frequencies present at instant p, and the cluster centres themselves indicate the carrier-frequency values. For each sampling instant p (p = 0, 1, 2, …, P−1), the clustering algorithm likewise yields the same number of cluster centres. Averaging over all instants and rounding gives the estimate of the number of source signals, i.e.:
The instants at which this occurs are found and denoted ph; for each run of consecutive ph values the median is taken, the median of the l-th run of consecutive ph giving the estimate of the l-th frequency-hopping instant;
From the estimates obtained at p ≠ ph and the frequency-hopping instants estimated in step 4), the mixing-matrix column vectors corresponding to each hop are estimated; the specific formula is:
Here the result denotes the estimated mixing-matrix column vector corresponding to hop l; the carrier frequency corresponding to each hop is then estimated, the frequency estimate corresponding to hop l being calculated as follows:
4) The time-frequency-domain frequency-hopping source signals are estimated from the normalized mixing-matrix column vectors obtained above;
5) The time-frequency-domain frequency-hopping source signals between different hops are spliced. The incidence angles corresponding to hop l are estimated, the incidence angle of the n-th source signal in hop l being calculated as follows:
where the symbol denotes the m-th element of the n-th mixing-matrix column vector estimated for hop l, and c denotes the speed of light, i.e. c = 3 × 10^8 m/s. The correspondence between the source signals estimated in hop l (l = 2, 3, …) and those estimated in the first hop is judged; the judgment formula is as follows:
Here m_n^(l) indicates that the m_n^(l)-th signal estimated in hop l and the n-th signal estimated in the first hop belong to the same source signal; the signals estimated at different hops that belong to the same source signal are stitched together as the final time-frequency-domain source-signal estimate, Y_n(p, q) denoting the time-frequency-domain estimate of the n-th source signal at time-frequency point (p, q), p = 0, 1, 2, …, P, q = 0, 1, 2, …, N_fft − 1, i.e.:
6) The time-domain frequency-hopping source signals are recovered from the source-signal time-frequency-domain estimates: for each sampling instant p (p = 0, 1, 2, …), an N_fft-point IFFT is applied to the frequency-domain data Y_n(p, q), q = 0, 1, 2, …, N_fft − 1, giving the time-domain frequency-hopping source signal corresponding to sampling instant p, denoted y_n(p, q_t) (q_t = 0, 1, 2, …, N_fft − 1); the time-domain frequency-hopping source signals y_n(p, q_t) obtained at all instants are merged to obtain the final time-domain frequency-hopping source-signal estimate; the specific formula is as follows:
Here K_c = N_fft / C, where C is the number of samples in the windowing interval of the short-time Fourier transform and N_fft is the length of the FFT.
3. The face recognition system suitable for psychology according to claim 2, characterized in that obtaining the Euclidean distance of the most similar vectors and the maximum similarity coefficient in the eigenmatrices of the source graphic and the target graphic specifically comprises:
Firstly, the eigenmatrices PE and QE of the source graphic P and the target graphic Q are established counter-clockwise:
PE = [P1^T P2^T … P2N-1^T P2N^T];
QE = [Q1^T Q2^T … Q2N-1^T Q2N^T];
The Euclidean distance formula d(x, y) and the included-angle cosine formula sim(x, y) are as follows:
d(x, y) = sqrt(Σi (xi − yi)^2); sim(x, y) = Σi xi·yi / (sqrt(Σi xi^2) · sqrt(Σi yi^2));
On the basis of d(x, y) and sim(x, y), two matrices D and S are redefined such that Dij = d(Pi, Qj) and Sij = sim(Pi, Qj);
The minimum value in D and the maximum value in S are found, letting respectively:
Eue = min{Dij}, 1 ≤ i, j ≤ 2N; Sime = max{Sij}, 1 ≤ i, j ≤ 2N;
Then the eigenmatrices of graphics P and Q are constructed clockwise, and the above calculation is repeated to find the Euclidean distance Euc and the similarity coefficient Simc between the most similar vectors in the two eigenmatrices;
Finally, let Eu = min{Eue, Euc};
Sim = max{Sime, Simc};
Eu and Sim are, respectively, the Euclidean distance between the most similar corresponding vectors of the two graphics P and Q and their maximum similarity coefficient.
4. A face recognition method suitable for psychology using the face recognition system suitable for psychology according to claim 1, characterized in that the face recognition method suitable for psychology comprises the following steps:
Step 1: image acquisition is performed by the image capture module to obtain high-definition images; face images are identified and segmented from the high-definition images by the image segmentation module;
Step 2: the main control module schedules the image processing module to process the face images segmented by the segmentation module;
Step 3: the feature vectors to be identified of the facial detail features are generated by the feature generation module; the feature vectors of known faces are pre-stored by the face feature library module;
Step 4: matching computation is performed by the feature matching module between the feature vector to be identified output by the feature generation module and all feature vectors in the feature library, and the identity result corresponding to the feature vector to be identified is output;
Step 5: the acquired image information and the matching result information are displayed by the display module.
5. The face recognition method suitable for psychology according to claim 4, characterized in that the processing method of the image processing module is as follows:
First, the image to be processed is subjected to image segmentation by the segmentation module to obtain a portrait image;
Second, face detection is performed on the portrait image to obtain N feature points of the chin contour of the face;
Then, a closed region is formed from the N feature points and a plurality of preset key points in the image to be processed;
Finally, matting is performed on the portrait image according to the closed region to obtain a face-region image.
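The matting step amounts to rasterizing the closed region into a mask and keeping only the pixels inside it. A minimal pure-NumPy sketch using even-odd ray casting follows (in practice a library routine such as OpenCV's polygon fill would be used; the function name and polygon layout are illustrative):

```python
import numpy as np

def region_mask(h, w, polygon):
    """Rasterize a closed polygonal region (e.g. the chin feature points
    plus preset key points, as (x, y) pairs) into a boolean mask via
    even-odd ray casting; the mask then cuts the face region out of the
    portrait image."""
    ys, xs = np.mgrid[0:h, 0:w]
    inside = np.zeros((h, w), dtype=bool)
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if y1 == y2:      # horizontal edge: never crossed by a horizontal ray
            continue
        crosses = ((y1 > ys) != (y2 > ys)) & \
                  (xs < (x2 - x1) * (ys - y1) / (y2 - y1) + x1)
        inside ^= crosses # toggle inside/outside at each edge crossing
    return inside

mask = region_mask(4, 4, [(0, 0), (3, 0), (3, 3), (0, 3)])
# applying the mask: face_region = portrait * mask
```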
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810395355.3A CN108932468B (en) | 2018-04-27 | 2018-04-27 | Face recognition method suitable for psychology |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810395355.3A CN108932468B (en) | 2018-04-27 | 2018-04-27 | Face recognition method suitable for psychology |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108932468A true CN108932468A (en) | 2018-12-04 |
CN108932468B CN108932468B (en) | 2021-10-12 |
Family
ID=64448439
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810395355.3A Active CN108932468B (en) | 2018-04-27 | 2018-04-27 | Face recognition method suitable for psychology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108932468B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110363180A (en) * | 2019-07-24 | 2019-10-22 | 厦门云上未来人工智能研究院有限公司 | A kind of method and apparatus and equipment that statistics stranger's face repeats |
CN110874419A (en) * | 2019-11-19 | 2020-03-10 | 山东浪潮人工智能研究院有限公司 | Quick retrieval technology for face database |
CN113644747A (en) * | 2021-10-18 | 2021-11-12 | 深圳市鑫道为科技有限公司 | A disguised switchgear monitored control system for intelligent monitoring warning |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5913542A (en) * | 1993-09-17 | 1999-06-22 | Bell Data Software Corporation | System for producing a personal ID card |
CN101276404A (en) * | 2007-03-30 | 2008-10-01 | 李季檩 | System and method for quickly and exactly processing intelligent image |
CN101320484A (en) * | 2008-07-17 | 2008-12-10 | 清华大学 | Three-dimensional human face recognition method based on human face full-automatic positioning |
CN101329724A (en) * | 2008-07-29 | 2008-12-24 | 上海天冠卫视技术研究所 | Optimized human face recognition method and apparatus |
CN101593269A (en) * | 2008-05-29 | 2009-12-02 | 汉王科技股份有限公司 | Face identification device and method |
CN102637302A (en) * | 2011-10-24 | 2012-08-15 | 北京航空航天大学 | Image coding method |
CN102663450A (en) * | 2012-03-21 | 2012-09-12 | 南京邮电大学 | Method for classifying and identifying neonatal pain expression and non-pain expression based on sparse representation |
CN103356203A (en) * | 2012-04-10 | 2013-10-23 | 中国人民解放军第四军医大学 | Method and device for detecting human body erect position balancing fatigue |
CN105005755A (en) * | 2014-04-25 | 2015-10-28 | 北京邮电大学 | Three-dimensional face identification method and system |
CN105681920A (en) * | 2015-12-30 | 2016-06-15 | 深圳市鹰硕音频科技有限公司 | Network teaching method and system with voice recognition function |
CN106022317A (en) * | 2016-06-27 | 2016-10-12 | 北京小米移动软件有限公司 | Face identification method and apparatus |
CN107169427A (en) * | 2017-04-27 | 2017-09-15 | 深圳信息职业技术学院 | One kind is applied to psychologic face recognition method and device |
-
2018
- 2018-04-27 CN CN201810395355.3A patent/CN108932468B/en active Active
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5913542A (en) * | 1993-09-17 | 1999-06-22 | Bell Data Software Corporation | System for producing a personal ID card |
CN101276404A (en) * | 2007-03-30 | 2008-10-01 | 李季檩 | System and method for quickly and exactly processing intelligent image |
CN101593269A (en) * | 2008-05-29 | 2009-12-02 | 汉王科技股份有限公司 | Face identification device and method |
CN101320484A (en) * | 2008-07-17 | 2008-12-10 | 清华大学 | Three-dimensional human face recognition method based on human face full-automatic positioning |
CN101329724A (en) * | 2008-07-29 | 2008-12-24 | 上海天冠卫视技术研究所 | Optimized human face recognition method and apparatus |
CN102637302A (en) * | 2011-10-24 | 2012-08-15 | 北京航空航天大学 | Image coding method |
CN102663450A (en) * | 2012-03-21 | 2012-09-12 | 南京邮电大学 | Method for classifying and identifying neonatal pain expression and non-pain expression based on sparse representation |
CN103356203A (en) * | 2012-04-10 | 2013-10-23 | 中国人民解放军第四军医大学 | Method and device for detecting human body erect position balancing fatigue |
CN105005755A (en) * | 2014-04-25 | 2015-10-28 | 北京邮电大学 | Three-dimensional face identification method and system |
CN105681920A (en) * | 2015-12-30 | 2016-06-15 | 深圳市鹰硕音频科技有限公司 | Network teaching method and system with voice recognition function |
CN106022317A (en) * | 2016-06-27 | 2016-10-12 | 北京小米移动软件有限公司 | Face identification method and apparatus |
CN107169427A (en) * | 2017-04-27 | 2017-09-15 | 深圳信息职业技术学院 | One kind is applied to psychologic face recognition method and device |
Non-Patent Citations (2)
Title |
---|
M.HEBERT 等: "A spherical representation for recognition of free-form surfaces", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 * |
周桐: "基于PCA的人脸识别系统的设计与实现", 《中国优秀硕士学位论文全文数据库 信息科技辑》 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110363180A (en) * | 2019-07-24 | 2019-10-22 | 厦门云上未来人工智能研究院有限公司 | A kind of method and apparatus and equipment that statistics stranger's face repeats |
CN110874419A (en) * | 2019-11-19 | 2020-03-10 | 山东浪潮人工智能研究院有限公司 | Quick retrieval technology for face database |
CN110874419B (en) * | 2019-11-19 | 2022-03-29 | 山东浪潮科学研究院有限公司 | Quick retrieval technology for face database |
CN113644747A (en) * | 2021-10-18 | 2021-11-12 | 深圳市鑫道为科技有限公司 | A disguised switchgear monitored control system for intelligent monitoring warning |
Also Published As
Publication number | Publication date |
---|---|
CN108932468B (en) | 2021-10-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhang et al. | Pedestrian detection method based on Faster R-CNN | |
CN102306290B (en) | Face tracking recognition technique based on video | |
CN103632132B (en) | Face detection and recognition method based on skin color segmentation and template matching | |
CN109902590A (en) | Pedestrian's recognition methods again of depth multiple view characteristic distance study | |
CN108681737B (en) | Method for extracting image features under complex illumination | |
CN104361313A (en) | Gesture recognition method based on multi-kernel learning heterogeneous feature fusion | |
CN101996308A (en) | Human face identification method and system and human face model training method and system | |
CN102332084A (en) | Identity identification method based on palm print and human face feature extraction | |
CN1828630A (en) | Manifold learning based human face posture identification method | |
CN102945374A (en) | Method for automatically detecting civil aircraft in high-resolution remote sensing image | |
CN108932468A (en) | One kind being suitable for psychologic face recognition method | |
Li et al. | Gesture recognition algorithm based on image information fusion in virtual reality | |
CN115527269B (en) | Intelligent human body posture image recognition method and system | |
CN106611158A (en) | Method and equipment for obtaining human body 3D characteristic information | |
CN104156690A (en) | Gesture recognition method based on image space pyramid bag of features | |
Song et al. | Fingerprint indexing based on pyramid deep convolutional feature | |
CN107784263A (en) | Based on the method for improving the Plane Rotation Face datection for accelerating robust features | |
CN102184384A (en) | Face identification method based on multiscale local phase quantization characteristics | |
CN105975906A (en) | PCA static gesture recognition method based on area characteristic | |
CN103942572A (en) | Method and device for extracting facial expression features based on bidirectional compressed data space dimension reduction | |
CN102129557A (en) | Method for identifying human face based on LDA subspace learning | |
CN103942545A (en) | Method and device for identifying faces based on bidirectional compressed data space dimension reduction | |
CN109740429A (en) | Smiling face's recognition methods based on corners of the mouth coordinate mean variation | |
Houtinezhad et al. | Off-line signature verification system using features linear mapping in the candidate points | |
CN105955473A (en) | Computer-based static gesture image recognition interactive system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||