CN106650574A - Face identification method based on PCANet - Google Patents
- Publication number
- CN106650574A CN106650574A CN201610832790.9A CN201610832790A CN106650574A CN 106650574 A CN106650574 A CN 106650574A CN 201610832790 A CN201610832790 A CN 201610832790A CN 106650574 A CN106650574 A CN 106650574A
- Authority
- CN
- China
- Prior art keywords
- face
- identification
- pcanet
- carried out
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
Abstract
The invention discloses a face identification method based on PCANet. In the face detection stage, a multi-angle face detection algorithm based on AdaBoost is used to locate faces; the detected face is then subjected to alignment correction; a block-based PCANet method is used to extract more expressive face features; and the extracted features are used for face matching. Compared with conventional face identification methods, the method of the invention uses PCANet to extract more expressive and more effective local face features from the different face regions, thereby achieving a higher identification rate. At the same time, protocol buffer exchange technology is used in the large-scale face matching process, which to a certain extent resolves the tension between the computational complexity of face identification and the time efficiency of identification matching, enabling identification matching to reach millisecond-level speed.
Description
Technical field
The invention belongs to the field of pattern recognition, and in particular relates to theory from computer vision and pattern recognition as applied to face identification systems.
Background art
Driven by people's demand for fast and effective automatic identity authentication, biometric identification technology has developed continuously, promoting the rapid development of biometric techniques based on fingerprint recognition, iris recognition, face recognition and the like. Biometric identification refers to a class of technologies that closely combine computers with high-tech means such as optics, acoustics, biosensors and biostatistics to identify individuals using the intrinsic physiological and behavioural characteristics of the human body. The biological characteristics currently used for biometric identification mainly include the hand, fingerprint, face shape, iris, retina, pulse and auricle. With the development of modern electronics and hardware technology, the production cost of biometric acquisition devices and analysis tools has been significantly reduced, while their accuracy and speed have improved by orders of magnitude, so biometric identification has achieved significant progress and wide application. Moreover, the continuous development of biometric identification has also brought new research opportunities to related fields such as digital image processing, computer vision, pattern recognition and sensor technology.
At present, the most widely applied biometric identification technologies are fingerprint recognition, iris recognition and face recognition. Among them, face recognition has been widely researched and applied owing to advantages such as being non-invasive, low-cost, easy to deploy, and requiring no human intervention.
In practical face recognition, the facial image to be detected is usually affected by unstable factors such as illumination, occlusion, expression and face deflection angle, so the precision of face recognition is often unsatisfactory. At the same time, the extracted face features are the critical data for subsequent face matching and directly determine the feasibility of the whole identification system. Therefore, a face recognition algorithm with good robustness, high accuracy and good real-time performance under complex scenes is particularly important.
Through decades of research, researchers have proposed a large number of face recognition algorithms, among which the more representative ones are:
First, methods based on geometric features. Geometric features were first used for the description and recognition of the face side profile: some significant points are determined from the side-profile curve, from which a set of feature measures for recognition, such as distances and angles, is derived; the side profile is then simulated by the integral projection near the centre line of the frontal grey-scale image. Frontal face recognition using geometric features generally extracts the positions of important organs such as the eyes, mouth and nose, and the geometric shapes of important feature points such as the eyes, as classification features. However, this kind of method has two problems: first, the weight coefficients of the various costs in the energy function can only be determined empirically and are difficult to generalise; second, the optimisation of the energy function is quite time-consuming, which hinders practical application.
Second, the eigenface method (Eigenface), also known as the face identification method based on principal component analysis (PCA). Its basic idea is: from a statistical viewpoint, find the basic elements of the distribution of face images, i.e. the eigenvectors of the covariance matrix of the face image sample set, and use them to approximately characterise face images. These eigenvectors are called eigenfaces. Eigenfaces reflect the information implicit in the face sample set and the structural relations of the face. The eigenvectors of the covariance matrices of sample sets of eyes, cheeks and jaws are correspondingly called eigen-eyes, eigen-jaws and eigen-lips, collectively referred to as eigen sub-faces. The eigen sub-faces span a subspace of the image space, called the sub-face space. The projection distance of a test image window onto the sub-face space is computed, and if the window satisfies a threshold comparison condition, it is judged to be a face. The eigenface method is a simple, fast and practical algorithm based on transform-coefficient features, but because it inherently depends on the grey-level correlation between the training and test images, and requires the test image to be comparable with the training set, it has significant limitations.
Third, neural network methods. Neural network methods have certain advantages over the aforementioned classes of methods in face recognition, because it is quite difficult to describe many of the rules and regularities of face recognition explicitly, whereas a neural network can obtain an implicit expression of these rules through learning; its adaptability is higher and it is generally easier to implement. Neural network methods usually take the face as a one-dimensional vector input, so the number of input nodes is huge, and an important goal of such recognition is dimensionality reduction.
The above face recognition algorithms are all quite mature, but their robustness under complex scenes involving illumination, expression, environmental change, deflection angle and the like still needs improvement, and the contradiction between recognition precision and algorithm speed also remains to be resolved.
Content of the invention
The object of the invention is: in view of the above problems, to provide a PCANet-based face identification method that is applicable to complex scenes and has low computational complexity in face feature extraction.
PCANet can be regarded as the simplest convolutional deep learning network; its core is the use of PCA (the eigenface method) to learn multi-stage filter banks. The process by which PCANet extracts the image features of an input image can be divided into three parts: PCA filters, binary hashing, and block-wise histograms. PCA is used to learn the multi-stage filters; binary hashing and block histograms are then used to index and pool the filter responses; finally the image feature of each input image is obtained, i.e. each image is characterised by a 1 x n feature vector.
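As an illustration of this filter-learning stage, the sketch below learns one stage of PCA filters as the top principal components of mean-removed image patches. The function name `learn_pca_filters`, the patch size and the filter count are illustrative assumptions, not parameters fixed by the patent:

```python
import numpy as np

def learn_pca_filters(images, k=7, num_filters=8):
    """Learn one PCANet stage: top principal components of
    mean-removed k x k patches collected from the training images."""
    patches = []
    for img in images:
        h, w = img.shape
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                p = img[i:i + k, j:j + k].ravel().astype(np.float64)
                patches.append(p - p.mean())  # remove per-patch mean
    X = np.stack(patches, axis=1)             # shape (k*k, num_patches)
    # Eigenvectors of X X^T, sorted by decreasing eigenvalue
    eigvals, eigvecs = np.linalg.eigh(X @ X.T)
    order = np.argsort(eigvals)[::-1][:num_filters]
    return eigvecs[:, order].T.reshape(num_filters, k, k)

filters = learn_pca_filters([np.random.rand(16, 16) for _ in range(4)])
print(filters.shape)  # (8, 7, 7)
```

In the full PCANet a second stage is learned on the filter responses of the first, followed by the binary hashing and block-wise histogramming described above.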
The face identification method based on PCANet of the present invention comprises the following steps:
Step 1: image preprocessing.
Grey-scale conversion is performed on the image to be recognised, and grey-level enhancement is then applied to the resulting grey-scale image, for example histogram equalisation together with image smoothing and de-noising. Grey-level enhancement mainly targets images in which the light is too dark or the illumination too strong, so as to weaken the influence of illumination on subsequent processing.
Step 2: face detection.
Multi-angle face detection is performed on the preprocessed image to be recognised, to judge whether a face is present in the current image and hence whether to proceed to the subsequent steps. For example, multi-angle face detection based on AdaBoost is used, so that face regions under different angles and different postures are segmented as accurately as possible, providing accurate and efficient face localisation and improving the efficiency of the subsequent identification steps. Compared with traditional face detection methods, the present invention can accurately detect multiple faces in the same image frame, and the AdaBoost-based multi-angle face detection (angle range 0-45°) can accurately and effectively detect faces deflected within 30 degrees, and still gives efficient and accurate detection results within 45 degrees of deflection, thereby largely overcoming the inability of conventional methods to handle face deflection in multi-face detection.
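The detector in step 2 builds on AdaBoost; the toy sketch below shows the underlying boosting principle with decision stumps on generic feature vectors. It is not the patent's multi-angle cascade detector itself, only an illustration of how AdaBoost combines weak classifiers:

```python
import numpy as np

def adaboost_stumps(X, y, rounds=10):
    """Minimal AdaBoost with decision stumps (labels in {-1, +1})."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # sample weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for f in range(d):           # exhaustive stump search
            for thr in np.unique(X[:, f]):
                for sign in (1, -1):
                    pred = np.where(sign * (X[:, f] - thr) > 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thr, sign, pred)
        err, f, thr, sign, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)   # re-weight towards mistakes
        w /= w.sum()
        ensemble.append((alpha, f, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(s * (X[:, f] - t) > 0, 1, -1)
                for a, f, t, s in ensemble)
    return np.where(score > 0, 1, -1)

# Toy separable data: "face" iff feature 0 exceeds 0.5
rng = np.random.default_rng(0)
X = rng.random((60, 2))
y = np.where(X[:, 0] > 0.5, 1, -1)
model = adaboost_stumps(X, y, rounds=5)
print((predict(model, X) == y).mean())  # 1.0 on this separable toy set
```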
Step 3: face correction.
Taking the eyebrows, eyes, nose and mouth as references, multiple facial feature points are selected from a standard frontal face image, and the standard position information of the facial feature points is built from the position distribution of the selected feature points. The standard position information can be obtained by statistics over the position distributions of the facial feature points of a large number of standard frontal face images. For a detected face, rotational reconstruction is carried out using the standard position information of the facial feature points, realising face correction, so that a frontal face is obtained even when the face was detected at an angle; this makes subsequent face matching more precise and efficient. In addition, compared with the traditional face correction mode, this face correction based on the constructed standard position information also greatly reduces the amount of data processing.
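The rotational reconstruction can be illustrated with two eye landmarks: estimate the roll angle of the eye line, then rotate the landmarks back so that the line becomes horizontal. This is a simplified two-point sketch of the 49-point scheme; the function names are illustrative:

```python
import numpy as np

def alignment_angle(left_eye, right_eye):
    """Roll angle (degrees) of the eye line relative to horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return np.degrees(np.arctan2(dy, dx))

def rotate_points(points, angle_deg, center):
    """Rotate landmark coordinates about `center` by -angle to undo roll."""
    t = np.radians(-angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return (np.asarray(points) - center) @ R.T + center

angle = alignment_angle((30.0, 40.0), (70.0, 50.0))
print(round(angle, 1))  # 14.0
eyes = rotate_points([(30.0, 40.0), (70.0, 50.0)], angle, np.array([50.0, 45.0]))
print(abs(eyes[0][1] - eyes[1][1]) < 1e-6)  # eye line is now horizontal
```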
Step 4: face matching.
After step 3, a standard face image (frontal face image) is obtained, and feature extraction is then performed on the facial image to be identified (the frontal face image) according to the PCANet feature extraction scheme. Unlike the existing approach of applying PCANet directly to the entire image, the present invention first partitions the facial image to be identified into image blocks based on the face organs, dividing it into different face regions, and then uses PCANet to extract the face features of each image block (face region) separately. The face features of all image blocks of the same facial image constitute the face feature data of the image to be identified, referred to as the face feature data to be identified.
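The block-wise extraction just described might be sketched as follows, with horizontal bands standing in for true organ-based segmentation and a simple intensity histogram standing in for the per-block PCANet feature:

```python
import numpy as np

def blockwise_features(face, extract):
    """Split an aligned face into organ-level horizontal bands
    (a simplified stand-in for true organ segmentation) and
    concatenate the per-block feature vectors."""
    h = face.shape[0]
    bands = [face[:h // 3], face[h // 3: 2 * h // 3], face[2 * h // 3:]]
    return np.concatenate([extract(b) for b in bands])

# Stand-in extractor: a 16-bin intensity histogram per block
hist16 = lambda b: np.bincount((b.ravel() // 16).astype(int), minlength=16)[:16]
face = (np.random.rand(60, 48) * 255).astype(np.uint8)
feat = blockwise_features(face, hist16)
print(feat.shape)  # (48,)
```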
Using traversal matching, the face feature data to be identified are matched against the face feature data in the face database, and the best match is taken as the face recognition result. For example, among all the face features in the face database, the one with the minimal chi-square distance to the face feature data to be identified is taken as the best match. The face feature data in the face database are likewise obtained block-wise: each facial image is first partitioned into image blocks based on the face organs, and PCANet is then used to extract the face features of each block, yielding the face feature data of each facial image. The read-write mode of the face feature data in the face database is serialised writing and deserialised reading, i.e. using Google's open-source protocol buffer exchange technology. The face feature data in the face database are serialised so that during matching the reading and writing are carried out directly on binary streams, reducing the time overhead of converting raw data to byte streams.
In addition, to further guarantee recognition precision, a threshold is predetermined (an empirical value, depending on the practical application). When the chi-square distance between the best match and the face feature data to be identified is less than or equal to the predetermined threshold, the match is considered accurate and the best match is taken as the face recognition result; otherwise face recognition is considered to have failed, i.e. the current facial image to be identified cannot be recognised.
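The traversal matching with chi-square distance and rejection threshold can be sketched as follows (the threshold value is an empirical, application-dependent choice, and the gallery vectors are toy data):

```python
import numpy as np

def chi_square(a, b, eps=1e-10):
    """Chi-square distance between two histogram-style feature vectors."""
    return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))

def match(query, gallery, threshold):
    """Traversal matching: take the gallery entry with minimal
    chi-square distance; reject if the best distance exceeds
    the predetermined threshold."""
    dists = [chi_square(query, g) for g in gallery]
    best = int(np.argmin(dists))
    if dists[best] <= threshold:
        return best, dists[best]
    return None, dists[best]       # recognition failure

gallery = [np.array([4.0, 2.0, 1.0]), np.array([1.0, 1.0, 5.0])]
idx, d = match(np.array([4.0, 2.0, 2.0]), gallery, threshold=1.0)
print(idx, round(d, 3))  # 0 0.167
```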
When extracting image features, the present invention uses a block-based PCANet processing mode that can extract more expressive face features for the different face regions, thereby achieving a higher recognition rate. At the same time, because PCA dimensionality reduction is applied repeatedly to the different face regions, the final feature dimension is lower than in existing face identification methods, which greatly reduces the computation time and storage overhead of identification. During face matching, since the feature information of each face to be compared is stored as a 1*n matrix whose n is typically of the order of 10^4 or more, the volume of face data in the database against which the face feature data to be identified must be matched is very large. While guaranteeing the high accuracy of face matching, the present invention uses Google's open-source protocol buffer exchange technology when reading and writing the face feature data files of all database entries, which further reduces the time overhead of matching and speeds up recognition, so that the time overhead of the whole face matching process reaches the millisecond level.
In summary, owing to the adoption of the above technical solutions, the beneficial effects of the invention are:
(1) The present invention uses the hashing-based feature extraction mode of PCANet, performing face feature extraction separately in the different face regions, so that more expressive face features can be extracted for each region; at the same time, because PCA dimensionality reduction is applied repeatedly to the different face regions, the final feature dimension is lower than that of the existing approach of applying PCANet directly to the entire image, reducing the time and storage overhead of identification;
(2) During face detection, a multi-angle detection mode is used, and, to ensure the high accuracy of subsequent identification, a frontal-face position correction mode based on the eyes, nose and mouth is added, making the whole identification process better suited to practical applications;
(3) During matching, serialised writing and deserialised reading are applied to the face feature data of the database, further reducing the time overhead of identification, so that the time efficiency of the identification process of the present invention reaches the millisecond level.
Description of the drawings
Fig. 1 is the processing flowchart of the present invention;
Fig. 2 is a schematic diagram of the facial feature point annotation of the present invention.
Specific embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the embodiments and accompanying drawings.
Referring to Fig. 1, the face identification method based on PCANet of the present invention mainly comprises: image preprocessing, face detection, face correction, face feature extraction, and face matching.
The input image to be recognised (front-end video stream or picture input) is first preprocessed and then subjected to face detection. If a face is present, it is further judged whether it is a frontal face image; if so, face feature extraction is carried out directly; otherwise, rotation correction is performed first and face feature extraction follows.
When performing face correction, the present invention builds the standard position information of the facial feature points based on the position distribution of the facial organ feature points of standard frontal face images. As shown in Fig. 2, 49 facial organ feature points are set in total. The left and right eyebrows each have 5 feature points: the brow head, the brow tail, and 3 points marking the eyebrow. The left and right eyes each have 6 feature points: 2 eye corners and 2 points each marking the upper and lower eyelids. The nose has 9 feature points: 4 marking the nose bridge and 5 marking the nose tip. The mouth has 18 feature points: the left and right mouth corners, and 8 points each on the outer contours of the upper and lower lips. The facial feature points in Fig. 2 correspond to the open-mouth state; in the closed state the number of feature points can be correspondingly reduced, i.e. in the closed state the annotated feature points of the inner lip contours may overlap, in which case the redundancy can be removed to reduce the number of feature points.
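The per-organ point counts above indeed sum to the 49-point total:

```python
# Feature-point budget of the 49-point annotation scheme
landmarks = {"eyebrows": 2 * 5, "eyes": 2 * 6, "nose": 9, "mouth": 18}
print(sum(landmarks.values()))  # 49
```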
A non-frontal facial image to be identified is then rotationally corrected based on the standard position information of the 49 facial feature points, yielding a frontal facial image to be identified.
The facial image to be identified is then divided into different face regions, for example face sub-regions respectively containing the eyebrows, eyes, nose and mouth. Face feature extraction is performed on the face sub-regions using PCANet, and matching comparison with the face feature data in the local database then yields the identification result.
For example, 100 persons were randomly selected from the AR face database of Purdue University, each with 14 face pictures under different illumination and different expressions, as the test set, i.e. the test set size is 100*14. The scale of the local database is 100*7, i.e. 100 target persons, each with 7 matching images.
The above test set was verified with the PCANet face identification method of the present invention, with the following results:
Total number of identifications: 700; identification failures: 36; recognition rate: 95%; total identification time: 218 seconds; average identification time: 0.311429 seconds. The present invention therefore has high recognition accuracy and good robustness and recognition efficiency.
The above are only specific embodiments of the present invention. Any feature disclosed in this specification, unless specifically stated, may be replaced by other equivalent or similarly purposed alternative features; and all of the disclosed features, or all of the steps of the methods or processes, may be combined in any way, except for mutually exclusive features and/or steps.
Claims (5)
1. A face identification method based on PCANet, characterised by comprising the following steps:
Step 1: performing image preprocessing on the image to be recognised, the image preprocessing including grey-scale conversion and grey-level enhancement;
Step 2: performing multi-angle face detection on the preprocessed image to be recognised, and, if a face is present, executing step 3;
Step 3: if the face to be identified is a non-frontal face, performing face correction:
building the standard position information of the facial feature points based on the position distribution of the facial organ feature points of standard frontal face images;
performing rotational reconstruction on the non-frontal face based on the standard position information to obtain a frontal face;
Step 4: partitioning the facial image to be identified into image blocks based on the face organs, then using PCANet to extract the face features of each image block separately to obtain the face feature data of the facial image to be identified, performing traversal face matching with the face feature data in the face database, and taking the best match as the face recognition result;
wherein the read-write mode of the face feature data in the face database is serialised writing and deserialised reading.
2. The method of claim 1, characterised in that the face organs described in steps 3 and 4 include the eyebrows, eyes, nose and mouth.
3. The method of claim 1 or 2, characterised in that in step 2, multi-angle face detection is performed using AdaBoost, with an angle range of 0-45°.
4. the method for claim 1, it is characterised in that in step 4, Optimum Matching is that the face in face database is special
Levy data minimum with card side's distance of the face characteristic data of images to be recognized.
5. The method of claim 4, characterised in that step 4 further comprises: judging whether the chi-square distance between the best match and the face feature data of the image to be recognised is less than a predetermined threshold; if so, taking the best match as the face recognition result; otherwise, outputting face recognition failure.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610832790.9A CN106650574A (en) | 2016-09-19 | 2016-09-19 | Face identification method based on PCANet |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106650574A true CN106650574A (en) | 2017-05-10 |
Family
ID=58852345
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610832790.9A Pending CN106650574A (en) | 2016-09-19 | 2016-09-19 | Face identification method based on PCANet |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106650574A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102332086A (en) * | 2011-06-15 | 2012-01-25 | Xia Dong | Facial identification method based on dual-threshold local binary pattern |
CN102842033A (en) * | 2012-08-17 | 2012-12-26 | Suzhou Liangjiang Technology Co., Ltd. | Human expression and emotion semantic recognition method based on face recognition |
CN104298973A (en) * | 2014-10-09 | 2015-01-21 | 北京工业大学 | Face image rotation method based on autoencoder |
CN104573729A (en) * | 2015-01-23 | 2015-04-29 | 东南大学 | Image classification method based on kernel principal component analysis network |
CN105512599A (en) * | 2014-09-26 | 2016-04-20 | 数伦计算机技术(上海)有限公司 | Face identification method and face identification system |
-
2016
- 2016-09-19 CN CN201610832790.9A patent/CN106650574A/en active Pending
Non-Patent Citations (4)
Title |
---|
WU, Dan: "Kernel principal component analysis network for image classification", Journal of Southeast University (English Edition) * |
LIU, Dongliang: "Face recognition algorithm based on PCANet", Information & Computer * |
CHEN, Min: "The Big Data Wave: Integrated Big Data Solutions and Key Technology Exploration", 31 October 2015 * |
CHEN, Hao: "Frontal multi-angle face detection based on the Adaboost algorithm", Science & Technology Information * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107909008A (en) * | 2017-10-29 | 2018-04-13 | 北京工业大学 | Video target tracking method based on multi-channel convolutional neural network and particle filter |
CN107767335A (en) * | 2017-11-14 | 2018-03-06 | 上海易络客网络技术有限公司 | Image fusion method and system based on face recognition feature point localisation |
WO2019137131A1 (en) * | 2018-01-10 | 2019-07-18 | Oppo广东移动通信有限公司 | Image processing method, apparatus, storage medium, and electronic device |
CN109117724A (en) * | 2018-07-06 | 2019-01-01 | 深圳虹识技术有限公司 | Iris recognition method and apparatus |
CN109254654A (en) * | 2018-08-20 | 2019-01-22 | 杭州电子科技大学 | Driving fatigue feature extraction method combining PCA and PCANet |
CN109254654B (en) * | 2018-08-20 | 2022-02-01 | 杭州电子科技大学 | Driving fatigue feature extraction method combining PCA and PCANet |
CN109398074A (en) * | 2018-10-23 | 2019-03-01 | 湖北工业大学 | Fuel tank security system and method based on face recognition and fingerprint recognition |
CN109920422A (en) * | 2019-03-15 | 2019-06-21 | 百度国际科技(深圳)有限公司 | Voice interaction method and device, vehicle-mounted voice interaction device and storage medium |
CN110458009A (en) * | 2019-07-04 | 2019-11-15 | 浙江大华技术股份有限公司 | Processing method for picture information, face detection and picture searching by picture, and related equipment |
CN110458009B (en) * | 2019-07-04 | 2022-02-18 | 浙江大华技术股份有限公司 | Processing method for picture information, face detection and picture searching by picture, and related equipment |
CN110458098A (en) * | 2019-08-12 | 2019-11-15 | 上海天诚比集科技有限公司 | Face comparison method for face angle measurement |
CN110458098B (en) * | 2019-08-12 | 2023-06-16 | 上海天诚比集科技有限公司 | Face comparison method for face angle measurement |
CN113269137A (en) * | 2021-06-18 | 2021-08-17 | 常州信息职业技术学院 | Non-cooperative face recognition method combining PCANet and occlusion localisation |
CN113269137B (en) * | 2021-06-18 | 2023-10-31 | 常州信息职业技术学院 | Non-cooperative face recognition method combining PCANet and occlusion localisation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20170510 |