CN108171834A - An intelligent access control system - Google Patents

An intelligent access control system

Info

Publication number
CN108171834A
CN108171834A (application CN201711416630.7A)
Authority
CN
China
Prior art keywords
face
access control
module
control system
ultrasonic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711416630.7A
Other languages
Chinese (zh)
Inventor
杨泽霖
马雁祥
罗红亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen He Zhongcheng Technology Co Ltd
Original Assignee
Shenzhen He Zhongcheng Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen He Zhongcheng Technology Co Ltd filed Critical Shenzhen He Zhongcheng Technology Co Ltd
Priority to CN201711416630.7A priority Critical patent/CN108171834A/en
Publication of CN108171834A publication Critical patent/CN108171834A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 Individual registration on entry or exit
    • G07C9/00174 Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
    • G07C9/00563 Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys using personal physical data of the operator, e.g. finger prints, retinal images, voice patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of intelligent building access control systems, and in particular relates to an intelligent access control system based on face recognition using RGB-image deep learning. The system comprises a face recognition module, an ultrasonic module, and an access control module; the access control module is electrically connected with the face recognition module, and the ultrasonic module is electrically connected with the face recognition module. The invention solves the power-consumption imbalance problem of 1:N face recognition schemes in access control and improves the multi-view robustness of face recognition. The auxiliary ultrasonic module performs liveness detection, reduces the computational load of the access control system, improves the thermal stability of the system and the thermal-aging resistance of the PCB, prolongs service life, and substantially reduces the failure rate and maintenance cost caused by heating. For multi-pose facial modeling, the invention uses a video-stream-based feature extraction scheme, making enrollment into the access control gallery more convenient and recognition more accurate.

Description

An intelligent access control system
Technical field
The invention belongs to the field of intelligent building access control systems, and in particular relates to an intelligent access control system based on face recognition using RGB-image deep learning.
Background technology
Traditional access control systems rely on RFID and fingerprint authentication. A small number of products use face recognition, but mostly with traditional (non-deep-learning) algorithms such as Fisherfaces and Eigenfaces, which inevitably suffer from poor robustness and poor stability.
Face recognition systems based on RGB-image deep learning are applied in the access control field in two modes, 1:1 and 1:N. In 1:1 mode, the user's ID is obtained by a contactless method such as RFID, and the information corresponding to that ID is then verified against the live video stream. The 1:N mode eliminates the card-swiping step: the system acquires the video stream directly in real time, detects the appearance of a face, extracts features from the detected face, compares the extracted features with the features in the ID database, and then sends the corresponding pulse signal to the access controller to open the door lock.
The 1:N mode carries a heavy video-processing load and requires high computing capability to achieve uninterrupted real-time processing. In practice, however, the workload is concentrated in particular time periods; in an office building, for example, traffic peaks in the morning and at noon and is low at night. Real scenarios therefore exhibit imbalance in computing load, heating of electronic components, power consumption, and service life, which in turn affects the economy and reliability of the system.
Face recognition systems based on RGB-image deep-learning models still leave much room for improvement in technical details such as image acquisition quality and model training method. Because face RGB information is subject to perspective effects, different face poses map to different RGB flat-image information, and an access control system based on 2D face recognition needs to mitigate the error this causes.
2D face recognition lacks depth information, and therefore lacks a robust liveness detection algorithm against attacks with 3D-printed items. Algorithms based on LBP (local binary patterns) are easily affected by ambient light and by the RGB camera itself, and infrared cameras also cannot resist printed-matter attacks. In addition, since the present invention targets the 1:N mode without additional RFID or fingerprint verification, a more reliable auxiliary liveness detection flow is required.
Summary of the invention
In view of the problems in the prior art, the object of the present invention is to provide an intelligent access control system that solves the power-consumption imbalance of 1:N access control schemes, improves the accuracy degradation caused by viewing-angle error in face recognition, and provides auxiliary liveness detection.
To achieve these goals, the present invention adopts the following technical scheme:
An intelligent access control system comprises a face recognition module, an ultrasonic module, and an access control module; the access control module is electrically connected with the face recognition module, and the ultrasonic module is electrically connected with the face recognition module.
As a further optimization of the technical scheme, the ultrasonic module is a dedicated-band ultrasonic device.
As a further optimization, the ultrasonic module emits ultrasonic pulses; the receiving end of the ultrasonic module collects the wave signals reflected by the face, and a computer algorithm analyzes the reflected waveform data to obtain the characteristic waves of the various facial structures in the echo sequence. A naive Bayes (NB) model is trained for classification on small data sets; K-nearest-neighbor and support vector machine (SVM) models are trained for classification on larger data sets, distinguishing real faces from various kinds of non-real faces and assisting liveness detection.
As a further optimization, the ultrasonic module continuously detects within a certain angle and a certain normal distance of the RGB camera's field of view and performs liveness detection; only after the NB or SVM model verification passes does the two-dimensional image captured by the RGB camera enter the face capture flow of the computer vision face recognition module.
As a further optimization, the angle is a sector of 55-65 degrees and the distance is 35-135 centimetres.
As a further optimization, the face recognition module is a 1:N-mode face recognition system based on RGB-image deep learning, used to extract facial features and to identify and verify visitors.
As a further optimization, the face recognition module uses a video-stream-based feature extraction scheme for multi-pose faces; during enrollment into the feature gallery, the registered person is asked to turn the head up, down, left, and right while a continuous video of several seconds is recorded.
As a further optimization, the face recognition module adds a multiple-judgment mechanism and uses the DLIB and MTCNN models together for feature point extraction.
As a further optimization, the face recognition module uses the Facenet Inception feature extraction model for feature extraction.
As a further optimization, the Facenet Inception feature extraction model consists of CNN convolutional layers, pooling layers, fully connected layers, and a softmax classification layer.
The beneficial effects of the present invention are:
(1) The invention introduces an ultrasonic module that emits ultrasonic pulses in a specific way; the receiving end of the ultrasonic module collects the wave signals reflected by the face, and a computer algorithm analyzes the reflected waveform data to obtain the characteristic waves of the various facial structures in the echo sequence. K-nearest-neighbor and SVM algorithms are trained for classification on larger data sets, distinguishing real faces from non-real faces more accurately.
(2) The auxiliary ultrasonic module reduces the computational load of the access control system, improves the thermal stability of the system and the thermal-aging resistance of the PCB, prolongs service life, and substantially reduces the failure rate and maintenance cost caused by high power consumption. Stream processing is also optimized, achieving high efficiency and low power consumption in real production environments.
(3) For multi-pose facial modeling, the invention uses a video-stream-based feature extraction scheme, making enrollment into the access control gallery more convenient and recognition more accurate.
(4) The invention solves the power-consumption imbalance of 1:N access control schemes and improves the robustness of face recognition to different viewing angles.
Description of the drawings
Fig. 1 is a flow chart of ultrasonic-assisted face recognition according to the present invention;
Fig. 2 is a schematic diagram of the video-based face feature enrollment flow according to the present invention;
Fig. 3 is an overall schematic diagram of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
An intelligent access control system comprises a face recognition module, an ultrasonic module, and an access control module; the access control module is electrically connected with the face recognition module, and the ultrasonic module is electrically connected with the face recognition module. The ultrasonic module is a dedicated-band ultrasonic device. The ultrasonic module emits ultrasonic pulses; its receiving end collects the wave signals reflected by the face, and a computer algorithm analyzes the reflected waveform data to obtain the characteristic waves of the various facial structures in the echo sequence. An NB model is trained for classification on small data sets; K-nearest-neighbor and SVM models are trained for classification on larger data sets, distinguishing real faces from non-real faces and producing the liveness judgment. The ultrasonic module continuously detects within a certain angle and a certain normal distance of the RGB camera's field of view and performs liveness detection; only after the NB or SVM model verification passes does the two-dimensional image captured by the RGB camera enter the face capture flow of the computer vision face recognition module. The angle is a sector of 55-65 degrees, and the distance is 35-135 centimetres.
The face recognition module is a 1:N-mode face recognition system based on RGB-image deep learning, used to extract facial features and to identify and verify visitors. For multi-pose faces, the face recognition module uses a video-stream-based feature extraction scheme; during enrollment into the feature gallery, the registered person is asked to turn the head up, down, left, and right while a continuous video of several seconds is recorded. The face recognition module adds a multiple-judgment mechanism and uses the DLIB and MTCNN models together for feature point extraction.
As shown in Fig. 1, the present invention uses a narrowband ultrasonic device to actively probe the face. The emitted ultrasonic frequency is 60 kHz, and the launch angle of the transducer sound field is 30°. The ultrasonic wave is reflected by the face, and the reflected wave is collected by one or more receivers. The receiving transducer is unidirectional with an acceptance angle of about 30°; incident waves beyond this angle produce only weak feedback, which suppresses signal contamination from secondary reflections and improves the signal-to-noise ratio. The CMOS/CCD sensor of the RGB camera and the ultrasonic module lie in the same vertical plane; the central axis of the camera's field of view is aligned to nearly coincide with the central axis of the ultrasonic detection region, and the viewing angle of the camera is adjusted so that the emission path of the ultrasonic device coincides with the incidence angle at the receiver. The sounding distance of the ultrasonic device is estimated from the echo time difference:
distance (centimetres) = 0.017 × time (microseconds);
The reflected pulses are cropped with a time window, and the cropping interval is obtained from the distance equation. This patent analyzes echoes at distances of 35-85 centimetres, i.e. within a pulse-echo round-trip time window of 4.0 milliseconds to 10.0 milliseconds. This restricts users to a relatively close range in front of the access control camera before the recognition service is started; outside this range the RGB video processing flow is not started.
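As a small illustration of this distance gating, the following Python sketch converts an echo delay into a distance with the equation above and checks it against a configurable window before the RGB pipeline is started; the function names and default bounds are illustrative assumptions, not part of the patent.

```python
# Sketch of the echo time-window gate described above (illustrative only).
CM_PER_US = 0.017  # distance (cm) = 0.017 x round-trip time (us), per the equation above

def echo_distance_cm(echo_delay_us: float) -> float:
    """Convert a pulse-echo round-trip delay in microseconds to distance in centimetres."""
    return CM_PER_US * echo_delay_us

def in_detection_window(echo_delay_us: float,
                        min_cm: float = 35.0,
                        max_cm: float = 85.0) -> bool:
    """Return True if the echo corresponds to a target inside the analysed distance range."""
    d = echo_distance_cm(echo_delay_us)
    return min_cm <= d <= max_cm

# Example: only echoes inside the window trigger the RGB video processing flow.
if in_detection_window(3000.0):   # 3000 us round trip -> 51 cm
    pass  # start the RGB face-capture pipeline (see Fig. 1)
```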
To realize detection at a lower cost, the pulse width and pulse interval of the rectangular pulses are limited by the following conditions:
tw < 0.5 × tg and tg > 10 milliseconds;
That is, the pulse interval is more than twice the pulse width and is greater than 10 milliseconds; the pulse rate is about 20 per second. By setting the pulse parameters in this way, the transducer can operate at relatively low ultrasonic power, with an average power within 0.5 watt. Faces and other objects such as paper and plastic differ considerably in their reflectance characteristics; the nose, forehead, ears, and other parts differ in the phase of the reflected ultrasound; different face poses also reflect ultrasound very differently; and each facial organ reflects ultrasound with a different intensity, the reflected energy of the nose being relatively low and early in phase, while the energy peak of the forehead is higher. The ultrasonic data detected at the receiving end is transformed with an FFT (fast Fourier transform) to isolate the ultrasonic spectral region (the present invention uses 60 kHz ultrasound) and then inverse-transformed, filtering out part of the environmental noise. A threshold method is then applied, with the threshold defined as:
filtering threshold = noise mean + (5 × noise standard deviation);
All signals above the threshold are treated as candidate signals and passed to the next step, and signals below the threshold are regarded as noise. The noise standard deviation is computed by random discrete sampling: 100 signal peak values are uniformly sampled within the pulse width and the statistic is calculated from them. A typical face-reflected ultrasonic signal sequence contains, in order, the reflection signals of the nose, forehead, cheeks, and ears. The invention collects echo signal data for various printed paper faces, paper masks, hard plastic masks, latex masks, and real faces; after filtering, transformation, denoising, and amplitude and time-window normalization, an echo matrix data set of the measured objects is obtained, and finally a machine learning model is trained for classification.
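The band isolation and thresholding described above might be sketched in Python as follows; the sampling rate, bandwidth, and array shapes are assumptions for illustration rather than values taken from the patent.

```python
import numpy as np
from scipy.fft import rfft, irfft, rfftfreq

def isolate_band(signal: np.ndarray, fs: float, f0: float = 60e3, bw: float = 5e3) -> np.ndarray:
    """Keep only the spectral region around the 60 kHz carrier, then inverse-transform (assumed bandwidth)."""
    spectrum = rfft(signal)
    freqs = rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < f0 - bw) | (freqs > f0 + bw)] = 0.0
    return irfft(spectrum, n=len(signal))

def candidate_mask(filtered: np.ndarray, noise_samples: np.ndarray) -> np.ndarray:
    """Threshold = noise mean + 5 x noise standard deviation; above-threshold samples are candidates."""
    threshold = noise_samples.mean() + 5.0 * noise_samples.std()
    return np.abs(filtered) > threshold
```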
As shown in Fig. 1, the present invention trains a naive Bayes (NB) model for classification on small data sets and a support vector machine (SVM) model for classification on larger data sets. The SVM model, trained on 5,000 samples, distinguishes and excludes paper, hard plastic, latex, and other non-real faces, assisting liveness detection.
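A scikit-learn sketch of this two-classifier arrangement might look as follows; the file paths, the size cut-off, and the choice of GaussianNB/SVC are illustrative assumptions, not the patented implementation.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# X: echo matrix data set (one row per normalised echo sequence); y: 1 = real face, 0 = non-real face.
X = np.load("echo_features.npy")   # placeholder path
y = np.load("echo_labels.npy")     # placeholder path
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

if len(X_tr) < 500:                 # small data set: naive Bayes
    clf = GaussianNB()
else:                               # larger data set: SVM (a KNN classifier would be another option named in the text)
    clf = SVC(kernel="rbf", probability=True)
clf.fit(X_tr, y_tr)
print("liveness classification accuracy:", clf.score(X_te, y_te))
```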
The ultrasonic module of the present invention continuously performs moving-object detection and liveness detection within a certain angle of the RGB camera's field of view (about 30 degrees, as described above) and within a certain distance (35-135 centimetres, as described above). Only after model verification passes does the two-dimensional image captured by the RGB camera enter the computer vision face capture flow; this avoids unnecessary algorithm invocations and reduces power consumption.
In the next step of the auxiliary judgment flow after liveness detection, the open-source Haar Cascade face detection model of OpenCV performs preliminary face detection on the RGB image and obtains the coordinates of the face bounding box; within these coordinates, the MTCNN and DLIB models perform a secondary feature point capture. Based on the feature point coordinates obtained by these models in the video frame, an accurate face crop is extracted, and facial texture features are extracted with the open-source Facenet Inception model, generating a 1024-dimensional vector.
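The two-stage capture described above (Haar Cascade box, then landmark refinement, then embedding) could be sketched roughly as follows; the cascade file, the dlib predictor file, and the embed() callable stand in for the patent's MTCNN/DLIB and Facenet Inception models and are assumptions, not the actual implementation.

```python
import cv2
import dlib
import numpy as np

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
landmark_predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed landmark model file

def detect_and_embed(frame_bgr: np.ndarray, embed) -> list:
    """Haar Cascade face boxes -> dlib landmarks -> face crop -> embedding vector (e.g. 1024-d)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        box = dlib.rectangle(int(x), int(y), int(x + w), int(y + h))
        shape = landmark_predictor(gray, box)
        pts = np.array([[p.x, p.y] for p in shape.parts()])   # secondary feature points inside the Haar box
        crop = frame_bgr[y:y + h, x:x + w]
        results.append({"landmarks": pts, "embedding": embed(crop)})  # embed(): assumed Facenet-style model
    return results
```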
The feature vector generated by the flow shown in Fig. 1 is compared against the N target face feature vectors in the target feature database, yielding the similarity between the face in the video frame and each target face.
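A minimal comparison step against the N gallery vectors might be expressed with cosine similarity; the patent does not name the distance metric or the threshold, so both are assumptions here.

```python
import numpy as np

def best_match(query: np.ndarray, gallery: np.ndarray, ids: list, threshold: float = 0.6):
    """Compare a query embedding with N gallery embeddings and return the most similar identity, if any."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                                   # cosine similarity against every gallery vector
    best = int(np.argmax(sims))
    if sims[best] >= threshold:
        return ids[best], float(sims[best])
    return None, float(sims[best])                 # no sufficiently similar target face
```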
Fig. 2 shows the flow for building the target sample data in the access control database. The RGB image is processed by the Haar Cascade and DLIB/MTCNN facial feature point algorithms to obtain dozens of feature coordinate points, including coordinates of the eyes, nose, mouth, eyebrows, and face contour. Using the geometric information of these coordinates, the invention screens multiple face poses with the video-stream-based feature extraction scheme: during enrollment into the feature database, the registered person is asked to turn the head up, down, left, and right continuously while a video of several seconds is recorded. Multiple feature points are taken from the face in each video frame, such as the outer eye corners, mouth corners, nose tip, and chin; from the relative positions of these feature point coordinates, a 2D-to-3D mapping yields the head orientation (a sketch of this mapping is given below). The invention uses a multiple-judgment mechanism, i.e. feature points are extracted with both the DLIB and MTCNN models, to avoid the shortcomings of a single model, such as insufficient MTCNN accuracy or the slow speed and distorted regression results of DLIB. By tracking the frame-to-frame changes of the feature point ratios, multiple typical face poses are captured, so that multi-angle, multi-pose images of the given face are obtained: about ten images are uniformly sampled within 30° of yaw and 30° of pitch, and if the head pose variation is too small, the access control system gives a corresponding prompt.
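The 2D-to-3D head orientation mapping mentioned above is commonly implemented as a PnP solve against a generic 3D face model; the sketch below, using OpenCV's solvePnP with an assumed set of six landmark correspondences and rough camera intrinsics, illustrates that idea and is not the patented mapping.

```python
import cv2
import numpy as np

# Generic 3D reference points (mm) for nose tip, chin, eye corners, mouth corners (assumed model).
MODEL_3D = np.array([
    [0.0, 0.0, 0.0],        # nose tip
    [0.0, -63.6, -12.5],    # chin
    [-43.3, 32.7, -26.0],   # left outer eye corner
    [43.3, 32.7, -26.0],    # right outer eye corner
    [-28.9, -28.9, -24.1],  # left mouth corner
    [28.9, -28.9, -24.1],   # right mouth corner
], dtype=np.float64)

def head_pose(image_pts: np.ndarray, frame_size: tuple) -> np.ndarray:
    """Return (pitch, yaw, roll) in degrees from six detected 2D landmarks via solvePnP."""
    h, w = frame_size
    cam = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)  # rough intrinsics
    ok, rvec, _ = cv2.solvePnP(MODEL_3D, image_pts.astype(np.float64), cam, None)
    rot, _ = cv2.Rodrigues(rvec)
    angles, *_ = cv2.RQDecomp3x3(rot)   # Euler angles in degrees
    return np.array(angles)
```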
For a given user, about ten facial photos of different poses are used for multi-view facial feature extraction. The Facenet Inception feature extraction model consists of CNN convolutional layers, pooling layers, fully connected layers, and a softmax classification layer; the training data come from tens of thousands of people from east and west, with photos covering different viewing angles. During training, Haar Cascades obtain the face ROI (region of interest), the MTCNN and DLIB feature points are used to crop the face while ignoring the forehead region that is easily covered by hats or hair, the RGB matrix of the crop is converted to a 96×112 grayscale matrix, and the region outside the face crop is zero-filled to reduce background interference. The model is configured to output a 1024-dimensional facial feature vector. The feature vectors are stored in the database; each user stores about ten feature vectors in the gallery, corresponding to 2D images of that person from different viewing angles. During recognition, the gallery is searched for the features most similar to the face data captured in the video frame, using a nearest-neighbor search (NNS) algorithm (sketched below).
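The nearest-neighbor search over the per-user gallery vectors can be sketched as follows; this is a brute-force NNS under the same cosine-similarity assumption as above, and a larger gallery would typically use an index structure instead.

```python
import numpy as np

class FeatureGallery:
    """Stores ~10 embeddings per enrolled user; a query returns the user whose nearest vector is most similar."""

    def __init__(self):
        self.vectors, self.owners = [], []

    def enroll(self, user_id: str, embeddings: np.ndarray) -> None:
        for v in embeddings:                         # about ten multi-pose vectors per user
            self.vectors.append(v / np.linalg.norm(v))
            self.owners.append(user_id)

    def query(self, embedding: np.ndarray):
        q = embedding / np.linalg.norm(embedding)
        sims = np.stack(self.vectors) @ q            # nearest-neighbor search by cosine similarity
        i = int(np.argmax(sims))
        return self.owners[i], float(sims[i])
```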
The above is only a preferred embodiment of the present invention and is not intended to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art can still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements of some of the technical features; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. An intelligent access control system, characterized in that it comprises a face recognition module, an ultrasonic module, and an access control module, wherein the access control module is electrically connected with the face recognition module, and the ultrasonic module is electrically connected with the face recognition module.
2. The intelligent access control system according to claim 1, characterized in that the ultrasonic module is a dedicated-band ultrasonic device.
3. The intelligent access control system according to claim 1, characterized in that the ultrasonic module emits ultrasonic pulses; the receiving end of the ultrasonic module collects the wave signals reflected by the face, and a computer algorithm analyzes the reflected waveform data to obtain the characteristic waves of the various facial structures in the echo sequence; a naive Bayes (NB) model is trained for classification on small data sets; K-nearest-neighbor and SVM models are trained for classification on larger data sets, distinguishing real faces from non-real faces to obtain the liveness judgment.
4. The intelligent access control system according to claim 3, characterized in that the ultrasonic module continuously detects within a certain angle and a certain normal distance of the RGB camera's field of view and performs liveness detection; only after the NB or SVM model verification passes does the two-dimensional image captured by the RGB camera enter the face capture flow of the computer vision face recognition module.
5. The intelligent access control system according to claim 4, characterized in that the angle is a sector of 55-65 degrees and the distance is 35-135 centimetres.
6. The intelligent access control system according to any one of claims 1-5, characterized in that the face recognition module is a 1:N-mode face recognition system based on RGB-image deep learning, used to extract facial features and to identify and verify visitors.
7. The intelligent access control system according to claim 6, characterized in that the face recognition module uses a video-stream-based feature extraction scheme for multi-pose faces, and during gallery enrollment the registered person is asked to turn the head up, down, left, and right while a continuous video of several seconds is recorded.
8. The intelligent access control system according to claim 6, characterized in that the face recognition module adds a multiple-judgment mechanism and uses the DLIB and MTCNN models together for feature point extraction.
9. The intelligent access control system according to claim 6, characterized in that the face recognition module uses the Facenet Inception feature extraction model for feature extraction.
10. The intelligent access control system according to claim 9, characterized in that the Facenet Inception feature extraction model consists of CNN convolutional layers, pooling layers, fully connected layers, and a softmax classification layer.
CN201711416630.7A 2017-12-25 2017-12-25 An intelligent access control system Pending CN108171834A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711416630.7A CN108171834A (en) 2017-12-25 2017-12-25 An intelligent access control system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711416630.7A CN108171834A (en) 2017-12-25 2017-12-25 An intelligent access control system

Publications (1)

Publication Number Publication Date
CN108171834A true CN108171834A (en) 2018-06-15

Family

ID=62520187

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711416630.7A Pending CN108171834A (en) An intelligent access control system

Country Status (1)

Country Link
CN (1) CN108171834A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040081020A1 (en) * 2002-10-23 2004-04-29 Blosser Robert L. Sonic identification system and method
CN101396277A (en) * 2007-09-26 2009-04-01 中国科学院声学研究所 Ultrasonics face recognition method and device
CN102622588A (en) * 2012-03-08 2012-08-01 无锡数字奥森科技有限公司 Dual-certification face anti-counterfeit method and device
CN105096420A (en) * 2015-07-31 2015-11-25 北京旷视科技有限公司 Access control system and data processing method for same
CN105869251A (en) * 2016-05-17 2016-08-17 珠海格力电器股份有限公司 Switchgear equipment and authentication method thereof as well as switch system

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109472894A (en) * 2018-10-24 2019-03-15 常熟理工学院 Distributed human face recognition door lock system based on convolutional neural networks
CN109885994A (en) * 2019-01-08 2019-06-14 深圳禾思众成科技有限公司 A kind of offline identity authorization system, equipment and computer readable storage medium
CN110647797A (en) * 2019-08-05 2020-01-03 深圳市大拿科技有限公司 Visitor detection method and device
CN110599129A (en) * 2019-09-16 2019-12-20 世纪海航(厦门)科技有限公司 Campus attendance checking method, device, identification terminal and system based on image tracking
CN110874588A (en) * 2020-01-17 2020-03-10 南京甄视智能科技有限公司 Method and device for dynamically optimizing light influence in face recognition
CN110874588B (en) * 2020-01-17 2020-04-14 南京甄视智能科技有限公司 Method and device for dynamically optimizing light influence in face recognition
CN112801038A (en) * 2021-03-02 2021-05-14 重庆邮电大学 Multi-view face living body detection method and system
CN112801038B (en) * 2021-03-02 2022-07-22 重庆邮电大学 Multi-view face in-vivo detection method and system
CN115345280A (en) * 2022-08-16 2022-11-15 东北林业大学 Face recognition attack detection system, method, electronic device and storage medium

Similar Documents

Publication Publication Date Title
CN108171834A (en) An intelligent access control system
Shao et al. Deep convolutional dynamic texture learning with adaptive channel-discriminability for 3D mask face anti-spoofing
CN102375970B (en) A kind of identity identifying method based on face and authenticate device
CN102332095B (en) Face motion tracking method, face motion tracking system and method for enhancing reality
CN110837784B (en) Examination room peeping and cheating detection system based on human head characteristics
US6671391B1 (en) Pose-adaptive face detection system and process
CN107330371A (en) Acquisition methods, device and the storage device of the countenance of 3D facial models
CN105447432B (en) A kind of face method for anti-counterfeit based on local motion mode
Lu et al. [Retracted] Face Detection and Recognition Algorithm in Digital Image Based on Computer Vision Sensor
Cherla et al. Towards fast, view-invariant human action recognition
CN105243376A (en) Living body detection method and device
CN111582197A (en) Living body based on near infrared and 3D camera shooting technology and face recognition system
Zhang et al. A survey on face anti-spoofing algorithms
CN104112152A (en) Two-dimensional code generation device, human image identification device and identity verification device
CN107038400A (en) Face identification device and method and utilize its target person tracks of device and method
Ekenel et al. Face recognition for smart interactions
CN108280421A (en) Human bodys&#39; response method based on multiple features Depth Motion figure
CN105760815A (en) Heterogeneous human face verification method based on portrait on second-generation identity card and video portrait
CN114550270A (en) Micro-expression identification method based on double-attention machine system
Singh et al. An overview of face recognition in an unconstrained environment
Singh et al. Face liveness detection through face structure analysis
Gottumukkal et al. Real time face detection from color video stream based on PCA method
Abusham Face verification using local graph stucture (LGS)
Wanjale et al. Use of haar cascade classifier for face tracking system in real time video
Liu Face matching system in multi-pose changing scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180615