CN110992522A - Indoor and outdoor universal human face recognition new algorithm security system - Google Patents

Indoor and outdoor universal human face recognition new algorithm security system

Info

Publication number
CN110992522A
Authority
CN
China
Prior art keywords
data
module
model
artificial intelligence
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910804903.8A
Other languages
Chinese (zh)
Inventor
陈工
宋经纬
郭二帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
120 Technology Beijing Co Ltd
Original Assignee
120 Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 120 Technology Beijing Co Ltd
Priority to CN201910804903.8A
Publication of CN110992522A
Current legal status: Pending

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 Individual registration on entry or exit
    • G07C9/00174 Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
    • G07C9/00571 Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys operated by interacting with a central unit
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/16 File or folder operations, e.g. details of user interfaces specifically adapted to file systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 Individual registration on entry or exit
    • G07C9/00174 Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
    • G07C9/00563 Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys using personal physical data of the operator, e.g. finger prints, retinal images, voice patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention provides an indoor and outdoor universal face recognition security system based on a new algorithm, comprising: a personal identification data checking/inputting module, a training artificial intelligence model module, an artificial intelligence model verifying and storing module, an artificial intelligence model testing module, a model calculating module, a communication module and a central system authority issuing module. Face photographs are used for deep-learning training, the trained model is tested, evaluated and stored, and the central system sets access permissions and issues them to the intelligent door lock terminals.

Description

Indoor and outdoor universal human face recognition new algorithm security system
The technical field is as follows: the invention relates to the field of community security automation, and in particular to an indoor and outdoor universal face recognition security system based on a new algorithm and designed with artificial intelligence technology.
Background art: in recent years, with the advancement of urbanization and the improvement of living standard of residents, more and more residents live into communities, the communities have larger and larger scales, and the community security becomes one of the key problems concerned by public and government security departments. The existing community security is still mainly based on human security, the automation degree is low, the day and night duty is needed, the human consumption is high, and for community residents, the community residents often forget to take keys and the keys are moved by other people to enter the community. The degree of computer automation is continuously improved, and particularly, the coming of an artificial intelligence technology provides a new idea for solving problems.
The invention content is as follows:
the invention aims to provide an indoor and outdoor universal face recognition security system based on a new algorithm which, compared with traditional community security systems, is more intelligent and has a comprehensive identity recognition database.
In order to achieve the above purpose, the invention adopts the following technical scheme:
the overall structure design comprises the following steps of inputting a third-party face database, applying the third-party face database to deep learning or transfer learning, and carrying out precision evaluation in the process. And migrating the model trained from the migration learning database, evaluating the model by using the test data, and if the test is passed, storing the model, wherein the model is used for identity verification of face recognition. The management end can issue the model to manage the access authority of each door lock.
Personal identification data check/entry. Before data input (including transfer-learning data and test data), the system checks whether the data folders are organized into the three folders train, validation and test, whether the data names within the three folders follow the training naming rule, and whether the data proportions of the three folders and the file format are correct.
Training the artificial intelligence model. The model is trained using data in a standard database and can then be saved and migrated (i.e., what the model has learned is applied to another model). The data checked in the previous step are then used for new model training (applied as the training data set, train). The training data for the artificial intelligence model are drawn from two large-scale face databases (FERET and FRGC v2.0).
Verifying the trained artificial intelligence model. The trained model is verified with a verification data set (the validation data set of step 3.1); the verification accuracy must reach a standard set by the user, typically about 88%. If the requirement is not met, model training is carried out again; if it is met, the model may be saved. The verification data for the artificial intelligence model are drawn from randomly sampled portions of the two large-scale face databases (FERET and FRGC v2.0).
Evaluating the accuracy of the model. Accuracy evaluation runs throughout the whole process and covers the training, validation and test data sets; precision and recall are calculated and a precision-recall curve is plotted to assess model accuracy.
Model test/compute data functions. In the testing process, part of the target data is used as a test data set to test and evaluate the accuracy of the trained model. If the set threshold is not reached, the model is judged not to meet the requirements and is trained again; if the test accuracy meets the requirement, the model is applied to the target data. When the model test accuracy meets the requirement, the target data are calculated and classified, the final calculation results are retained and stored, and a calculation and classification accuracy report can also be retained.
Issuing authority from the central system. The central system authority issuing function comprises transmitting the trained model over the network to the access control terminals, where the terminals include the community's access control system and the access control systems installed by individual users of the community. The authority issuing function also includes transmitting personal information to the access control terminals to control whether a specific individual is allowed to unlock. The system authority further includes instructing a terminal to transmit part of the personal identity information it stores back to the central system. The system may also issue a command to upgrade terminal firmware.
Description of the drawings:
FIG. 1 flow chart of personal identification data checking/inputting function
FIG. 2 functional flow diagram for training artificial intelligence models
FIG. 3 is a functional flow diagram of a verification training artificial intelligence model
FIG. 4 flow chart of model accuracy assessment
FIG. 5 model test/calculate data function flow diagram
FIG. 6 flow chart of the central system rights issuing function
The specific implementation mode is as follows:
Personal identification data check/entry. Before data input (including transfer-learning data and test data), the system checks whether the data folders are organized into the three folders train, validation and test, whether the data names within the three folders follow the training naming rule, whether the data proportions of the three folders follow the required ratio, for example 3:1:1, and whether the file format is correct.
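The check can be illustrated with the following minimal sketch. It is only a sketch under assumptions: the root directory layout (dataset/train, dataset/validation, dataset/test), the file-name pattern and the 10% tolerance on the 3:1:1 ratio are hypothetical, since the patent does not specify the concrete naming rule.

```python
import os
import re

# Assumed layout and rules; the three folder names come from the patent, while the
# "personid_index.ext" naming pattern and the ratio tolerance are hypothetical.
REQUIRED_FOLDERS = ("train", "validation", "test")
NAME_RULE = re.compile(r"^[A-Za-z0-9]+_\d+\.(jpg|jpeg|png)$", re.IGNORECASE)
TARGET_RATIO = (3, 1, 1)

def check_dataset(root: str, tolerance: float = 0.10) -> bool:
    counts = []
    for folder in REQUIRED_FOLDERS:
        path = os.path.join(root, folder)
        if not os.path.isdir(path):                       # folder classification check
            print(f"missing folder: {folder}")
            return False
        files = os.listdir(path)
        bad = [f for f in files if not NAME_RULE.match(f)]
        if bad:                                           # naming rule / file format check
            print(f"{folder}: {len(bad)} files violate the naming rule or format")
            return False
        counts.append(len(files))
    total = sum(counts) or 1
    for n, target in zip(counts, TARGET_RATIO):           # 3:1:1 proportion check
        if abs(n / total - target / sum(TARGET_RATIO)) > tolerance:
            print(f"folder proportions {counts} deviate from 3:1:1")
            return False
    return True

if __name__ == "__main__":
    print("dataset accepted" if check_dataset("dataset") else "dataset rejected")
```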
Training the artificial intelligence model. The model is trained using data in a standard database and can then be saved and migrated (i.e., what the model has learned is applied to another model). The data checked in the previous step are then used for new model training (applied as the training data set, train).
The training data for the artificial intelligence model are drawn from two large-scale face databases (FERET and FRGC v2.0).
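As an illustration of this training step, the following is a minimal transfer-learning sketch. The patent does not name a network architecture or preprocessing, so the ResNet-18 backbone pretrained on ImageNet, the 128 × 128 input size and the dataset/train path are assumptions, used only to show the idea of reusing what one model has learned in a new model.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Assumed preprocessing; the patent does not specify image size or normalization.
tf = transforms.Compose([transforms.Resize((128, 128)), transforms.ToTensor()])

train_set = datasets.ImageFolder("dataset/train", transform=tf)   # data checked in the previous step
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained backbone (assumed choice)
for p in model.parameters():
    p.requires_grad = False                                       # keep the transferred features
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new identity head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                       # short fine-tuning loop, for illustration only
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "face_model.pt")   # saved for verification and testing
```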
Verifying the trained artificial intelligence model. The trained model is verified with a verification data set (the validation data set of step 3.1); the verification accuracy must reach a standard set by the user, typically about 88%. If the requirement is not met, model training is carried out again; if it is met, the model may be saved.
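A sketch of this verification step, continuing the assumptions above (the same model and preprocessing, a dataset/validation folder, and the roughly 88% bar; the exact standard is user-configurable):

```python
import torch
import torch.nn as nn
from torchvision import datasets, transforms

ACCURACY_STANDARD = 0.88    # user-set standard; about 88% is the typical value mentioned

def validation_accuracy(model: nn.Module, folder: str = "dataset/validation") -> float:
    tf = transforms.Compose([transforms.Resize((128, 128)), transforms.ToTensor()])
    val_set = datasets.ImageFolder(folder, transform=tf)
    loader = torch.utils.data.DataLoader(val_set, batch_size=32)
    correct = total = 0
    model.eval()
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total

# acc = validation_accuracy(model)
# if acc >= ACCURACY_STANDARD:
#     torch.save(model.state_dict(), "face_model_verified.pt")   # standard reached: keep the model
# else:
#     ...                                                        # below standard: retrain
```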
Evaluating the accuracy of the model. Accuracy evaluation runs throughout the whole process and covers the training, validation and test data sets; precision and recall are calculated and a precision-recall curve is plotted to assess model accuracy.
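A sketch of this evaluation with scikit-learn, assuming genuine/impostor labels and match scores are available for the data set being evaluated (the labels and scores below are toy values):

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, precision_recall_curve

# 1 = genuine (same identity) pair, 0 = impostor pair -- toy labels and scores.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
scores = np.array([0.91, 0.20, 0.75, 0.66, 0.48, 0.10, 0.83, 0.55])
y_pred = (scores >= 0.5).astype(int)            # decisions at an example operating threshold

precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
p_curve, r_curve, thresholds = precision_recall_curve(y_true, scores)

print(f"precision={precision:.2f}  recall={recall:.2f}")
print("precision-recall curve points:", list(zip(r_curve.round(2), p_curve.round(2))))
```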
Model test/compute data functions. In the testing process, part of the target data is used as a test data set to test and evaluate the accuracy of the trained model. If the set threshold is not reached, the model is judged not to meet the requirements and is trained again; if the test accuracy meets the requirement, the model is applied to the target data. When the model test accuracy meets the requirement, the target data are calculated and classified, the final calculation results are retained and stored, and a calculation and classification accuracy report can also be retained.
In the model test/calculation data function, global and local facial features are integrated through a combination of serial and parallel fusion: global features are first used for coarse matching, and global and local features are then integrated for fine confirmation.
From the viewpoint of spectral analysis, global features correspond to low frequencies and local features to high frequencies.
Extracting global Fourier features. The low-frequency part of the two-dimensional Discrete Fourier Transform (DFT) coefficients is used as the global feature. The two-dimensional discrete Fourier transform of an image can be expressed as:
F(u,v) = \sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\, e^{-j2\pi\left(\frac{ux}{M}+\frac{vy}{N}\right)}, \quad u = 0,1,\dots,M-1,\; v = 0,1,\dots,N-1
where f(x, y) denotes a two-dimensional image of size M × N and u and v are frequency-domain variables. Since the image f(x, y) is a real-valued function, the output of the Fourier transform is complex, i.e.,
F(u,v) = R(u,v) + jI(u,v)
where R(u, v) and I(u, v) denote the real and imaginary parts of F(u, v), respectively. After the Fourier transform, the image is represented by the real and imaginary transform coefficients over all frequency bands; although the coefficients of all bands contain information about the image, most of the global information of the image is contained in the low-frequency coefficients.
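A minimal sketch of extracting the low-frequency DFT coefficients as the global feature; the size of the retained low-frequency block (band) is an illustrative choice, not a value taken from the patent:

```python
import numpy as np

def global_fourier_feature(image: np.ndarray, band: int = 16) -> np.ndarray:
    """Real and imaginary low-frequency DFT coefficients of a 2-D face image."""
    F = np.fft.fft2(image)                     # F(u, v) = R(u, v) + j I(u, v)
    F = np.fft.fftshift(F)                     # move low frequencies to the centre
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    low = F[cy - band:cy + band, cx - band:cx + band]   # keep only the low-frequency block
    return np.concatenate([low.real.ravel(), low.imag.ravel()])

face = np.random.rand(80, 64)                  # stand-in for a 64 x 80 face image
print(global_fourier_feature(face).shape)      # (2048,) for band = 16
```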
Extracting local Gabor features. The two-dimensional Gabor wavelet transform (2D Gabor wavelet transform, GWT for short) was originally proposed by Daugman [11] to model the spatial receptive fields of simple cells in the primary visual cortex, and has in recent years been regarded as one of the most successful face description methods. The Gabor kernel functions take the form:
\psi_{u,v}(z) = \frac{\lVert k_{u,v}\rVert^{2}}{\sigma^{2}} \exp\left(-\frac{\lVert k_{u,v}\rVert^{2}\lVert z\rVert^{2}}{2\sigma^{2}}\right)\left[\exp(i\,k_{u,v}\cdot z) - \exp\left(-\frac{\sigma^{2}}{2}\right)\right], \qquad k_{u,v} = k_{v}e^{i\phi_{u}}
where k_v denotes the frequency (scale) of the kernel function and φ_u its orientation; by setting different scales and orientations, a family of Gabor wavelet kernel functions is obtained.
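A sketch of building such a kernel family; the 5 scales × 8 orientations and the parameter values (k_max, f, σ) follow a common convention for this kernel and are assumptions, not values given in the patent:

```python
import numpy as np

def gabor_kernel(v: int, u: int, size: int = 31,
                 k_max: float = np.pi / 2, f: float = np.sqrt(2),
                 sigma: float = 2 * np.pi) -> np.ndarray:
    """Gabor kernel psi_{u,v}(z) for scale v and orientation u."""
    k_v, phi_u = k_max / f ** v, np.pi * u / 8
    kx, ky = k_v * np.cos(phi_u), k_v * np.sin(phi_u)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k2, z2 = kx ** 2 + ky ** 2, x ** 2 + y ** 2
    return (k2 / sigma ** 2) * np.exp(-k2 * z2 / (2 * sigma ** 2)) * \
           (np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2))

# 5 scales x 8 orientations -> a bank of 40 kernels to convolve with the face image
kernels = [gabor_kernel(v, u) for v in range(5) for u in range(8)]
print(len(kernels), kernels[0].shape)          # 40 (31, 31)
```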
Constructing global and local classifiers and integrating them serially and in parallel. Global features mainly describe the overall attributes of the face and are used for coarse matching, while local features mainly describe detailed variations of the face and are used for fine confirmation. Therefore, to improve both the accuracy and the speed of recognition, we propose a two-layer classifier: the global classifier performs coarse matching at layer 1, and the integration of the global and local classifiers (i.e., the overall classifier) performs fine confirmation at layer 2. As can be seen from the classifier construction in the previous section, the global classifier uses fewer features and is therefore faster but less accurate, while the overall classifier is composed of N+1 component classifiers, uses more features and is slower. Consequently, using the global classifier at layer 1 improves recognition speed, and adding the local classifiers at layer 2 improves recognition accuracy.
After global and local feature extraction we obtain N+1 feature vectors: 1 global feature vector and N local feature vectors. Because these feature vectors have high dimensionality, we propose to use Linear Discriminant Analysis (LDA) to further reduce the dimension of the N+1 feature vectors. For two face images, comparing the corresponding feature vectors yields N+1 similarities; here the commonly used normalized cross-correlation (NCC) is applied to compute the similarity of corresponding feature vectors. In pattern recognition, once the similarity between samples (or feature vectors) has been computed, the classifier's task becomes very simple (for example, a nearest-neighbour classifier can complete the classification). For convenience of description, the global and local feature vectors after LDA dimension reduction are therefore referred to as the Global Classifier (GC) and the Local Component Classifiers (LCC), respectively. Because these classifiers use different facial features, they differ considerably from one another (diversity), and according to ensemble learning theory, integrating them in an appropriate form can effectively reduce the classification error rate.
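A sketch of the dimension reduction and similarity computation for one feature channel (the GC or one LCC), using scikit-learn LDA and a plain normalized cross-correlation; the feature dimension, number of identities and random data are placeholders:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two feature vectors."""
    a, b = a - a.mean(), b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 512))                 # 60 training samples, 512-D features (toy data)
y = np.repeat(np.arange(6), 10)                # 6 identities

lda = LinearDiscriminantAnalysis(n_components=5)   # at most (number of classes - 1) components
lda.fit(X, y)

probe, gallery = rng.normal(size=512), rng.normal(size=512)
similarity = ncc(lda.transform(probe.reshape(1, -1))[0],
                 lda.transform(gallery.reshape(1, -1))[0])
print(f"similarity for this feature channel: {similarity:.3f}")
```

Repeating this for the global channel and the N local channels yields the N+1 similarities that are fused below.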
In this scheme, classifier integration is performed at the similarity (score) level: the similarities output by the multiple classifiers are weighted and summed to obtain the final similarity.
First, the N local component classifiers are weighted and summed in parallel to obtain a Local Classifier (LC), where S_{LCC_i} denotes the similarity output by the i-th local component classifier and w_i its weight:
S_{LC} = \sum_{i=1}^{N} w_{i}\, S_{LCC_{i}}
Then the global classifier and the local classifier are likewise integrated in parallel by weighted summation to obtain an overall classifier (UC):
S_{UC} = w_{G}\, S_{GC} + (1 - w_{G})\, S_{LC}
where S_{GC} is the similarity output by the global classifier and w_G denotes the weight of the global classifier.
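A minimal sketch of this score-level fusion; the weights are illustrative and would in practice be chosen (e.g., on the validation set) rather than fixed as below:

```python
from typing import Sequence

def local_classifier_score(lcc_scores: Sequence[float], weights: Sequence[float]) -> float:
    """S_LC = sum_i w_i * S_LCC_i  (parallel fusion of the N local component classifiers)."""
    return sum(w * s for w, s in zip(weights, lcc_scores))

def overall_classifier_score(gc_score: float, lc_score: float, w_g: float) -> float:
    """S_UC = w_G * S_GC + (1 - w_G) * S_LC."""
    return w_g * gc_score + (1.0 - w_g) * lc_score

lcc = [0.72, 0.65, 0.80, 0.58]                         # N = 4 local component similarities
s_lc = local_classifier_score(lcc, [0.25] * 4)         # equal weights, for illustration
s_uc = overall_classifier_score(gc_score=0.70, lc_score=s_lc, w_g=0.4)
print(f"S_LC = {s_lc:.3f}   S_UC = {s_uc:.3f}")
```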
In view of the different characteristics of the global and local features, face images of different resolutions are used when extracting the two kinds of features (as shown in FIG. 4). The global features mainly reflect the overall properties of the face, so a lower-resolution face image is sufficient for their extraction.
In the face confirmation (verification) problem, the system receives, together with the input face image, the identity claimed by the user, and judges whether the identity of the input face image is consistent with the claimed identity; the process is therefore one-to-one. In the face recognition (identification) problem, by contrast, the input face image is compared against all candidates in the database, which is a one-to-many process.
For the face recognition problem, the global classifier is first used at layer 1 to compute the similarity between the input face image and all candidate face images in the database; the candidates are ranked by similarity and those differing greatly from the input image are eliminated. The overall classifier is then used at layer 2 to further recognize the remaining candidates that are similar to the input image. In this way the slower overall classifier only needs to process a small subset of the original face database, so recognition speed improves significantly. One issue must be noted in this process: to maintain the accuracy of the whole recognition system, candidates with the same identity as the input face image must, as far as possible, not be eliminated by the global classifier. This requires adjusting the number of candidates (denoted M) retained by the layer-1 classification. Obviously, the larger M is, the more candidates are retained and the less likely it is that the candidate matching the input face image is excluded; however, as the number of retained candidates increases, the speed of the layer-2 classification decreases accordingly.
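A sketch of this coarse-to-fine identification strategy: layer 1 ranks the whole gallery with the fast global-classifier similarity and keeps the top M candidates, and layer 2 re-ranks only those M with the overall-classifier similarity. The similarity functions are passed in as callables, M is the tuning parameter discussed above, and the scalar toy data are only there so the sketch runs on its own.

```python
from typing import Callable, Hashable, List, Tuple

Similarity = Callable[[Hashable, Hashable], float]

def identify(probe: Hashable, gallery: List[Hashable],
             gc_sim: Similarity,            # fast global-classifier similarity (layer 1)
             uc_sim: Similarity,            # slower overall-classifier similarity (layer 2)
             m: int = 10) -> List[Tuple[Hashable, float]]:
    # Layer 1: coarse matching; keep only the M most similar candidates.
    coarse = sorted(gallery, key=lambda g: gc_sim(probe, g), reverse=True)[:m]
    # Layer 2: fine confirmation on the retained subset only.
    fine = [(g, uc_sim(probe, g)) for g in coarse]
    return sorted(fine, key=lambda t: t[1], reverse=True)

gallery = [0.10, 0.40, 0.45, 0.90, 0.52]
sim = lambda a, b: -abs(a - b)               # toy similarity: closer values are more similar
print(identify(0.50, gallery, gc_sim=sim, uc_sim=sim, m=3))   # best overall match first
```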
For the face confirmation problem, the global classifier is first used at layer 1 to obtain the similarity between the input face image and the database face image corresponding to the identity claimed by the user. This similarity is coarse, but it still carries useful decision information. If the similarity is lower than a small threshold (T1), the two face images are very unlikely to belong to the same person, so the system can directly judge that the identities are different; conversely, if the similarity is higher than a large threshold (T2), the two face images are very likely to belong to the same person, so the system can directly judge that the identities are the same. Apart from these two cases, i.e., when the similarity given by the global classifier lies between T1 and T2, a correct decision is difficult to make from this similarity alone, and the overall classifier at layer 2 is used for fine confirmation. Based on this strategy, if in most cases the system can decide from the similarity given by the global classifier rather than having to invoke the slower overall classifier, system speed increases significantly; but if the system always relies on the global classifier alone, accuracy is reduced. The thresholds T1 and T2 therefore also need to be adjusted to balance speed and accuracy.
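A sketch of that threshold logic: the cheap global-classifier similarity decides directly outside the [T1, T2] band, and only the ambiguous band falls through to the overall classifier. The threshold values and the toy similarity are placeholders to be tuned for the speed/accuracy balance described above.

```python
from typing import Callable

def confirm(probe, enrolled,
            gc_sim: Callable, uc_sim: Callable,
            t1: float = 0.3, t2: float = 0.8, t_final: float = 0.5) -> bool:
    s = gc_sim(probe, enrolled)
    if s < t1:                     # clearly different identities: reject at layer 1
        return False
    if s > t2:                     # clearly the same identity: accept at layer 1
        return True
    return uc_sim(probe, enrolled) >= t_final   # ambiguous: use the overall classifier

sim = lambda a, b: 1.0 - abs(a - b)            # toy similarity on scalar "features"
print(confirm(0.51, 0.50, gc_sim=sim, uc_sim=sim))   # True: accepted directly at layer 1
print(confirm(0.10, 0.90, gc_sim=sim, uc_sim=sim))   # False: rejected directly at layer 1
```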
In Fourier feature extraction, the size of the face image is 64 × 80 and the distance between the two eyes is 28 pixels. To use the Fast Fourier Transform (FFT), the image is expanded to 128 × 128; taking symmetry into account, the width of the Fourier spectrum is 64. Global information is mostly contained in the Fourier transform coefficients of the low-frequency band.
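A short sketch of these numbers, assuming the expansion to 128 × 128 is done by zero-padding (the patent does not say how the image is expanded); for a real-valued image the spectrum is conjugate-symmetric, so only a half-plane of width about 64 needs to be kept:

```python
import numpy as np

face = np.random.rand(80, 64)                    # 64 x 80 face image (80 rows x 64 columns)
padded = np.zeros((128, 128))
padded[:face.shape[0], :face.shape[1]] = face    # assumed zero-padding up to 128 x 128

spectrum = np.fft.rfft2(padded)                  # real-input FFT exploits conjugate symmetry
print(spectrum.shape)                            # (128, 65): half-width 64 plus the Nyquist column
```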
Issuing authority from the central system. The central system authority issuing function comprises transmitting the trained model over the network to the access control terminals, where the terminals include the community's access control system and the access control systems installed by individual users of the community. The authority issuing function also includes transmitting personal information to the access control terminals to control whether a specific individual is allowed to unlock. The system authority further includes instructing a terminal to transmit part of the personal identity information it stores back to the central system. The system may also issue a command to upgrade terminal firmware.
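Purely as a hypothetical illustration of this issuing step: the patent only states that the model, personal permissions and firmware-upgrade commands are sent to the terminals over the network, so the HTTP endpoints, field names and terminal addresses below are invented for the sketch and are not part of the disclosed system.

```python
import json
import urllib.request

TERMINALS = ["http://10.0.0.21:8080", "http://10.0.0.22:8080"]   # hypothetical lock terminals

def post(base: str, path: str, payload: bytes, content_type: str) -> int:
    req = urllib.request.Request(base + path, data=payload,
                                 headers={"Content-Type": content_type}, method="POST")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

def issue_to_terminal(base: str) -> None:
    with open("face_model_verified.pt", "rb") as f:               # trained and verified model
        post(base, "/model", f.read(), "application/octet-stream")
    permissions = {"allowed_persons": ["resident_001", "resident_002"]}   # example identities
    post(base, "/permissions", json.dumps(permissions).encode(), "application/json")
    post(base, "/firmware/upgrade", b"{}", "application/json")    # optional upgrade command

# for terminal in TERMINALS:
#     issue_to_terminal(terminal)
```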

Claims (9)

1. An indoor and outdoor universal face recognition new-algorithm security system, characterized in that it comprises: a personal identification data checking/inputting module, a training artificial intelligence model module, an artificial intelligence model verifying and storing module, an artificial intelligence model testing module, a model calculating module, a communication module and a central system authority issuing module.
2. The system of claim 1, wherein, before data input (including transfer-learning data and test data), the personal identification data check/input module checks whether the data folders are organized into the three folders train, validation and test, whether the data naming within the three folders follows the training rules, the data proportions of the three folders, and the file format.
3. The system of claim 1, wherein, before data input (including transfer-learning data and test data), the training artificial intelligence model module checks whether the data folders are organized into the three folders train, validation and test, whether the data naming within the three folders follows the training rules, the data proportions of the three folders, and the file format;
the verifying and storing artificial intelligence model module verifies the trained model with a verification data set against a standard set by the user; if the verification accuracy does not meet the requirement, model training is carried out again; if it does, the model may be saved.
4. The system of claim 1, wherein the artificial intelligence model test module performs accuracy evaluation covering the training data set, the validation data set and the test data set, in which precision and recall are computed and a precision-recall curve is plotted to assess model accuracy.
5. The system of claim 1, wherein, when the model test accuracy meets the requirement, the model calculation module calculates and classifies the target data, the final calculation results are retained and stored, and a calculation and classification accuracy report can also be retained.
6. The system of claim 1, wherein the central system authority issuing module transmits the trained model over a network to the access control terminals, the terminals including the community access control system and the access control systems installed by individual users of the community.
7. The system of claim 6, wherein the authority issuing function further comprises transmitting personal information to the access control terminals to control whether a specific individual is allowed to unlock.
8. The system of claim 6, wherein the system authority further comprises instructing a terminal to transmit part of the personal identity information it stores back to the central system.
9. The system of claim 6, wherein the system may issue a command to upgrade terminal firmware.
CN201910804903.8A 2019-08-29 2019-08-29 Indoor and outdoor universal human face recognition new algorithm security system Pending CN110992522A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910804903.8A CN110992522A (en) 2019-08-29 2019-08-29 Indoor and outdoor universal human face recognition new algorithm security system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910804903.8A CN110992522A (en) 2019-08-29 2019-08-29 Indoor and outdoor universal human face recognition new algorithm security system

Publications (1)

Publication Number Publication Date
CN110992522A true CN110992522A (en) 2020-04-10

Family

ID=70081629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910804903.8A Pending CN110992522A (en) 2019-08-29 2019-08-29 Indoor and outdoor universal human face recognition new algorithm security system

Country Status (1)

Country Link
CN (1) CN110992522A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184962A (en) * 2020-09-29 2021-01-05 中国银行股份有限公司 Control method, system and control device for digital lock of cash box
CN114937320A (en) * 2022-02-15 2022-08-23 百廿科技(北京)有限公司 Door lock system with face recognition and temperature measurement functions

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102393980A (en) * 2011-12-14 2012-03-28 杭州市公安局拱墅区分局 Intelligent door control system
CN106803301A (en) * 2017-03-28 2017-06-06 广东工业大学 A kind of recognition of face guard method and system based on deep learning
CN109902757A (en) * 2019-03-08 2019-06-18 山东领能电子科技有限公司 One kind being based on the improved faceform's training method of Center Loss
CN110148232A (en) * 2019-04-11 2019-08-20 腾讯科技(深圳)有限公司 Visitor management system, method, equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102393980A (en) * 2011-12-14 2012-03-28 杭州市公安局拱墅区分局 Intelligent door control system
CN106803301A (en) * 2017-03-28 2017-06-06 广东工业大学 A kind of recognition of face guard method and system based on deep learning
CN109902757A (en) * 2019-03-08 2019-06-18 山东领能电子科技有限公司 One kind being based on the improved faceform's training method of Center Loss
CN110148232A (en) * 2019-04-11 2019-08-20 腾讯科技(深圳)有限公司 Visitor management system, method, equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184962A (en) * 2020-09-29 2021-01-05 中国银行股份有限公司 Control method, system and control device for digital lock of cash box
CN114937320A (en) * 2022-02-15 2022-08-23 百廿科技(北京)有限公司 Door lock system with face recognition and temperature measurement functions

Similar Documents

Publication Publication Date Title
CN106228142B (en) Face verification method based on convolutional neural networks and Bayesian decision
CN113378632B (en) Pseudo-label optimization-based unsupervised domain adaptive pedestrian re-identification method
CN109636658B (en) Graph convolution-based social network alignment method
CN103605972B (en) Non-restricted environment face verification method based on block depth neural network
CN101419671B (en) Face gender identification method based on fuzzy support vector machine
CN105469034B (en) Face identification method based on Weighting type distinctive sparse constraint Non-negative Matrix Factorization
CN101558431B (en) Face authentication device
CN106022317A (en) Face identification method and apparatus
CN108564040B (en) Fingerprint activity detection method based on deep convolution characteristics
CN111339988B (en) Video face recognition method based on dynamic interval loss function and probability characteristic
Ghorpade et al. Pattern recognition using neural networks
CN103730114A (en) Mobile equipment voiceprint recognition method based on joint factor analysis model
CN107491729B (en) Handwritten digit recognition method based on cosine similarity activated convolutional neural network
CN111104852B (en) Face recognition technology based on heuristic Gaussian cloud transformation
Cui et al. Measuring dataset granularity
CN110992522A (en) Indoor and outdoor universal human face recognition new algorithm security system
CN107220598A (en) Iris Texture Classification based on deep learning feature and Fisher Vector encoding models
CN109726703A (en) A kind of facial image age recognition methods based on improvement integrated study strategy
CN108520201A (en) Robust face recognition method based on weighted mixed norm regression
CN102147862B (en) Face feature extracting method based on survival exponential entropy
Wang et al. Temperature forecast based on SVM optimized by PSO algorithm
CN105550677B (en) A kind of 3D palmprint authentications method
Kazempour et al. I fold you so! An internal evaluation measure for arbitrary oriented subspace clustering
Yan et al. A lightweight face recognition method based on depthwise separable convolution and triplet loss
CN113238197A (en) Radar target identification and data judgment method based on Bert and BiLSTM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination