CN106845460B - Intelligent household system based on face recognition - Google Patents
- Publication number
- CN106845460B (granted publication of application CN201710154767.3A)
- Authority
- CN
- China
- Prior art keywords
- training sample
- face
- face image
- face recognition
- class
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/00174—Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
- G07C9/00563—Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys using personal physical data of the operator, e.g. finger prints, retinal images, voicepatterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/211—Selection of the most significant subset of features
- G06F18/2113—Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/00174—Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
- G07C9/00571—Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys operated by interacting with a central unit
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
Abstract
The invention provides an intelligent home system based on face recognition, which comprises a face recognition subsystem, an access control subsystem, a control center, a smart home and a mobile terminal, wherein the face recognition subsystem, the access control subsystem, the smart home and the mobile terminal are all connected to the control center. The face recognition subsystem is used for acquiring a face image and performing face recognition on it. When face recognition succeeds, the control center controls the access control subsystem to release the access control and operates the smart home according to the user's preset parameters; when face recognition fails, it sends the face image to the mobile terminal through the network to notify the user, who may choose to send an instruction from the mobile terminal to open the access control. When a stranger enters the recognition range and the face data cannot be recognized, the system communicates with the user's mobile terminal, thereby safeguarding the whole system and the household.
Description
Technical Field
The invention relates to the technical field of smart homes, and in particular to an intelligent home system based on face recognition.
Background
A smart home uses advanced computer, network communication and structured cabling technologies to organically combine the various subsystems involved in home life. Through unified management it optimizes people's lifestyle, helps them schedule their time effectively, enhances the safety of home life, and can even reduce spending on energy. Smart home products in the related art mostly rely on common sensor technologies such as infrared sensing and therefore have significant shortcomings in terms of security.
Disclosure of Invention
In order to solve the above problem, the invention provides an intelligent home system based on face recognition.
The object of the invention is achieved by the following technical solution:
The intelligent home system based on face recognition comprises a face recognition subsystem, an access control subsystem, a control center, a smart home and a mobile terminal, wherein the face recognition subsystem, the access control subsystem, the smart home and the mobile terminal are all connected to the control center; the face recognition subsystem is used for acquiring a face image and performing face recognition on it; when face recognition succeeds, the control center controls the access control subsystem to release the access control and operates the smart home according to the user's preset parameters; when face recognition fails, it sends the face image to the mobile terminal through the network to notify the user, who may choose to send an instruction from the mobile terminal to open the access control.
The invention has the beneficial effects that: the smart home is operated according to the user's preset parameters, so the user can adjust the home environment to personal preferences and habits, which better reflects the intelligence of the system; when a stranger enters the recognition range and the face data cannot be recognized, the face recognition subsystem communicates with the user's mobile terminal, which safeguards the whole system and the household.
Drawings
The invention is further illustrated by the accompanying drawings, but the embodiments shown in the drawings do not limit the invention in any way; a person skilled in the art can derive other drawings from the following drawings without inventive effort.
FIG. 1 is a block diagram of the structural connections of the present invention;
fig. 2 is a block diagram of the structural connection of the face recognition subsystem of the present invention.
Reference numerals:
the system comprises a face recognition subsystem 1, an access control subsystem 2, a control center 3, an intelligent home 4, a mobile terminal 5, a face image acquisition module 10 and a face image recognition module 20.
Detailed Description
The invention is further described with reference to the following examples.
Referring to fig. 1, the present embodiment provides an intelligent home system based on face recognition. The system includes a face recognition subsystem 1, an access control subsystem 2, a control center 3, a smart home 4 and a mobile terminal 5, where the face recognition subsystem 1, the access control subsystem 2, the smart home 4 and the mobile terminal 5 are all connected to the control center 3. The face recognition subsystem 1 is used for acquiring a face image and performing face recognition on it. When face recognition succeeds, the control center 3 controls the access control subsystem 2 to release the access control and operates the smart home 4 according to the user's preset parameters; when face recognition fails, it sends the face image to the mobile terminal 5 through the network to notify the user, who may choose to send an instruction from the mobile terminal 5 to open the access control.
Preferably, the access control subsystem 2 comprises a door, a door lock and a controller that controls opening of the door lock; the controller is connected to the control center 3.
Preferably, the smart home 4 includes a lamp, a water heater, and an air conditioner.
According to this embodiment of the invention, the smart home 4 is operated according to the user's preset parameters, so the user can adjust the home environment to personal preferences and habits, which reflects the intelligence of the system; when a stranger enters the recognition range and the face data cannot be recognized, the face recognition subsystem 1 communicates with the user's mobile terminal 5, thereby safeguarding the whole system and the household.
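As an illustration of this decision flow, the minimal Python sketch below walks through the two branches handled by the control center 3; the class names, the 60-second wait for a remote open command and the print-based stubs are hypothetical stand-ins for the real controller, smart home and mobile terminal interfaces, not part of the claimed system.

```python
class DoorLock:
    def unlock(self) -> None:
        print("access control released: door unlocked")

class SmartHome:
    def apply(self, presets: dict) -> None:
        print(f"applying user presets: {presets}")   # lamp, water heater, air conditioner

class MobileTerminal:
    def send_alert(self, face_image: bytes) -> None:
        print(f"sent {len(face_image)} bytes of captured face image to the user's phone")

    def wait_for_open_command(self, timeout_s: int) -> bool:
        return False   # stub: no remote "open the door" instruction received in time

def control_center(recognized: bool, face_image: bytes, door: DoorLock,
                   home: SmartHome, phone: MobileTerminal, presets: dict) -> None:
    if recognized:
        door.unlock()                 # face recognition succeeded: release access control
        home.apply(presets)           # operate the smart home with the user's presets
    else:
        phone.send_alert(face_image)  # unknown face: notify the user over the network
        if phone.wait_for_open_command(timeout_s=60):
            door.unlock()             # user chose to open the door from the mobile terminal

# Example: an unrecognised visitor triggers an alert instead of unlocking.
control_center(False, b"\x00" * 1024, DoorLock(), SmartHome(), MobileTerminal(),
               presets={"air_conditioner_celsius": 24, "water_heater": "on"})
```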
Preferably, as shown in fig. 2, the face recognition subsystem 1 includes a face image acquisition module 10 and a face image recognition module 20 that are connected to each other; the face image acquisition module 10 is configured to acquire a plurality of face images to be recognized and to select, from the acquired images, the face image with the highest image quality as the optimal face image for face recognition; the face image recognition module 20 is used for recognizing the optimal face image and outputting the face recognition result to the control center 3; the image quality metric is defined by the following calculation formula:
where Z_i is the image quality of the i-th of the m face images; ρ_i is the average gray value of the set region of the i-th face image, and ρ is a gray-value threshold set according to the actual situation; v_i is the edge sharpness of the i-th face image, and v is an edge sharpness threshold set according to the actual situation; the formula also uses the average gray value and the average edge sharpness taken over all m images; m is the number of images acquired from the camera system; α_i is the proportion of the face in the i-th face image, and α is a set proportion threshold; f(α_i - α) = 1 when α_i - α ≥ 0, and f(α_i - α) = 0 when α_i - α < 0.
In this preferred embodiment, suitable face images are selected for face recognition according to the user-defined image quality formula, which greatly saves system storage space and speeds up face recognition detection. Because the formula takes the face proportion, the edge sharpness and the gray value of the face image into account, it avoids the limitations of evaluating image quality by a single feature, so high-quality images can be selected accurately for face recognition, the amount of computation in image screening is reduced, and screening efficiency is further improved.
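The Python sketch below shows one way such a screening step could look. The exact quality formula is reproduced only as an image in the original publication, so the way the gray-value, edge-sharpness and face-proportion terms are combined here, as well as the gradient-based sharpness estimate, are assumptions; only the quantities ρ_i, v_i, their set averages, α_i and the indicator f(α_i - α) are taken from the text, and the thresholds ρ and v are omitted because the missing formula does not show how they enter.

```python
import numpy as np

def edge_sharpness(img):
    # Mean gradient magnitude, used here as a stand-in sharpness measure
    # (the patent names edge sharpness v_i but does not fix an estimator).
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def select_best_face(images, face_ratios, alpha_thr):
    """Pick the candidate face image with the highest (assumed) quality score.

    images      : list of m grayscale arrays, the candidate face images
    face_ratios : alpha_i for each image, proportion of the frame taken by the face
    alpha_thr   : the proportion threshold alpha from the text
    """
    rhos = np.array([float(np.mean(im)) for im in images])        # rho_i, mean gray value
    vs = np.array([edge_sharpness(im) for im in images])          # v_i, edge sharpness
    rho_bar, v_bar = rhos.mean(), vs.mean()                       # averages over the m images
    f = (np.asarray(face_ratios) - alpha_thr >= 0).astype(float)  # f(alpha_i - alpha)

    # Assumed combination of the named factors: normalise each term by the set
    # average and zero-out images whose face proportion is below the threshold.
    z = f * (rhos / rho_bar + vs / v_bar)
    return int(np.argmax(z)), z
```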
Preferably, recognizing the optimal face image includes:
(1) selecting N face images from the face database constructed in advance by the face recognition subsystem 1 to construct a training sample set X = [X_1, X_2, …, X_N]; taking the screened face image as the test sample Y; performing filtering preprocessing on the training sample set, retaining the training samples that contribute most to representing and classifying the test sample, and constructing an optimal training sample set from the retained training samples;
(2) dividing each face image in the optimal training sample set equally into R blocks, so that the optimal training sample set is divided into R sub-sample sets A_p, p = 1, …, R, where each sub-sample set consists of the p-th block of each face image;
(3) dividing the test sample equally into R blocks, i.e. Y = [Y_p, p = 1, …, R], and weighting the blocks of the optimal training sample set and of the test sample according to the following formulas:
where ν_p is the mean sparse residual of the p-th block over all face images in the optimal training sample set; ν_1 and ν_2 are set residual thresholds with ν_1 < ν_2; f(ν_p) is the decision function, with f(ν_p) = 1 when ν_p < ν_1 and f(ν_p) = 0 when ν_p > ν_2;
where μ_p is the ratio of the inter-class distance variance to the intra-class distance variance of the p-th block in the optimal training sample set; μ_1 and μ_2 are set discrimination thresholds with μ_1 < μ_2; f(μ_p) is the decision function, with f(μ_p) = 0 when μ_p < μ_2 and f(μ_p) = 1 when μ_p > μ_1;
where ν_p is calculated as follows: for each face image in the optimal training sample set, sparsely represent it using the remaining face images (all images except itself) to obtain the sparse residuals of all of its blocks, and then compute the mean sparse residual of the p-th block over all face images;
(4) performing sparse representation of the weighted test sample over the weighted optimal training sample set, computing the reconstruction residual of each class, and finally assigning the test sample to the class with the smallest reconstruction residual.
In this preferred embodiment, the test sample and the face images in the optimal training sample set are divided into blocks of the same size, which makes it easier to capture the more discriminative information during recognition and detection. Weighting the blocks of the optimal training sample set and of the test sample according to the above formulas allows occluded blocks and discriminative blocks to be selected more accurately, reduces the influence of occluded regions on face recognition performance, improves the recognition rate of face images, and thus improves the security effect of the intelligent home system.
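A compact Python sketch of steps (2) to (4) follows. It assumes the same closed-form ridge-regularized coding S = (X^T X + ξI)^(-1) X^T Y that the filtering step below uses in place of an explicit sparse solver, and it takes the block weights as given, since the formula that combines the ν_p and μ_p tests appears only as an image in the original publication; all function and variable names are illustrative.

```python
import numpy as np

def ridge_code(D, y, xi=0.01):
    # Closed-form coding S = (D^T D + xi I)^-1 D^T y, as in the filtering step.
    return np.linalg.solve(D.T @ D + xi * np.eye(D.shape[1]), D.T @ y)

def classify_blocks(train_blocks, labels, test_blocks, block_weights, xi=0.01):
    """Weighted block-wise classification by minimum reconstruction residual.

    train_blocks  : list of R arrays, each (d_p, N): the p-th block of every
                    image in the optimal training sample set (sub-sample set A_p)
    labels        : (N,) class label per training image
    test_blocks   : list of R arrays, each (d_p,): the blocks Y_p of the test sample
    block_weights : (R,) weights from the nu_p / mu_p tests described above
    """
    labels = np.asarray(labels)
    # Weight every block, then stack the blocks back into single vectors so that
    # occluded blocks (weight 0) drop out and discriminative blocks dominate.
    Xw = np.vstack([w * Ap for w, Ap in zip(block_weights, train_blocks)])
    yw = np.concatenate([w * Yp for w, Yp in zip(block_weights, test_blocks)])

    S = ridge_code(Xw, yw, xi)
    classes = np.unique(labels)
    residuals = np.array([np.linalg.norm(yw - Xw[:, labels == c] @ S[labels == c])
                          for c in classes])
    # Assign the test sample to the class with the smallest reconstruction residual.
    return classes[int(np.argmin(residuals))], residuals
```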
Preferably, performing filtering preprocessing on the training sample set, retaining the training samples that contribute most to representing and classifying the test sample, and constructing the optimal training sample set from the retained training samples specifically includes:
(1) linearly representing the test sample Y with the training sample set X and calculating the representation coefficients S = [S_1, S_2, …, S_N]^T of the training sample vectors in X, where the representation coefficients S are calculated as:
S = (X^T X + ξI)^(-1) X^T Y
where I is the identity matrix and ξ is a set coefficient;
(2) letting the training sample set X contain M classes, with the j-th class containing n_j training samples, and calculating the reconstruction residual of each class as:
where E_j is the reconstruction residual of the j-th class, X_j denotes the training sample subset of the j-th class, and S_k denotes the representation coefficient corresponding to the k-th training sample of the j-th class;
(3) selecting the classes corresponding to the m smallest reconstruction residuals as candidate classes and constructing a neighbor dictionary D = [D_1, D_2, …, D_m] from these m candidate classes, where D_j (j = 1, …, m) denotes the training sample subset of the j-th candidate class; linearly representing the test sample Y with the candidate classes and calculating the representation coefficient corresponding to each candidate class in the neighbor dictionary D:
S′ = (D^T D + ξI)^(-1) D^T Y
where S′ denotes the representation coefficients of the candidate classes, S′ = [S_1′, S_2′, …, S_m′], and S_j′ (j = 1, …, m) is the representation coefficient corresponding to the j-th candidate class;
(4) constructing the optimal training sample set from the retained training samples as follows:
in the formula (I), the compound is shown in the specification,representing the kth training sample in the training sample set of the jth class.
In this preferred embodiment, retaining the training samples that contribute most to representing and classifying the test sample in this way reduces the number of training samples and the computational complexity, thereby shortening face recognition time and improving the security efficiency of the smart home; the training samples of the candidate classes are weighted by the representation coefficients of those classes, and the larger the weight, the stronger the ability of the corresponding training sample to represent the test sample, so the constructed optimal training sample set approximates the test sample more closely.
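The four filtering steps reduce to a few lines of linear algebra, sketched below in Python. The coefficient formulas follow the text; the per-class residual is taken here as E_j = ||Y - X_j S_j||, and the final scaling of the retained samples by their coefficients S′ is an assumed reading, since both of those expressions appear only as images in the original publication.

```python
import numpy as np

def filter_training_set(X, labels, y, xi=0.01, m=3):
    """Steps (1)-(4): keep only the m candidate classes that best represent y.

    X      : (d, N) training face images as column vectors
    labels : (N,) class label of each column
    y      : (d,) test sample (the screened optimal face image, vectorised)
    Returns the indices of the retained columns, their coefficients S', and
    the coefficient-weighted optimal training set.
    """
    labels = np.asarray(labels)

    # (1) representation coefficients over the full set: S = (X^T X + xi I)^-1 X^T y
    S = np.linalg.solve(X.T @ X + xi * np.eye(X.shape[1]), X.T @ y)

    # (2) per-class reconstruction residual, taken here as E_j = ||y - X_j S_j||
    classes = np.unique(labels)
    residuals = np.array([np.linalg.norm(y - X[:, labels == c] @ S[labels == c])
                          for c in classes])

    # (3) keep the m classes with the smallest residuals as candidate classes,
    #     build the neighbour dictionary D and re-code y over it
    candidates = classes[np.argsort(residuals)[:m]]
    keep = np.isin(labels, candidates)
    D = X[:, keep]
    S_prime = np.linalg.solve(D.T @ D + xi * np.eye(D.shape[1]), D.T @ y)

    # (4) optimal training set: each retained sample is scaled by its coefficient
    #     (an assumed reading of the weighting formula missing from the extraction)
    X_opt = D * S_prime
    return np.flatnonzero(keep), S_prime, X_opt
```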
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit its scope of protection. Although the invention is described in detail with reference to the preferred embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made to the technical solutions of the invention without departing from the spirit and scope of those technical solutions.
Claims (3)
1. An intelligent home system based on face recognition, characterized by comprising a face recognition subsystem, an access control subsystem, a control center, a smart home and a mobile terminal, wherein the face recognition subsystem, the access control subsystem, the smart home and the mobile terminal are all connected to the control center; the face recognition subsystem is used for acquiring a face image and performing face recognition on it; when face recognition succeeds, the control center controls the access control subsystem to release the access control and operates the smart home according to the user's preset parameters; when face recognition fails, it sends the face image to the mobile terminal through the network to notify the user, who may choose to send an instruction from the mobile terminal to open the access control; the face recognition subsystem comprises a face image acquisition module and a face image recognition module which are connected; the face image acquisition module is used for acquiring a plurality of face images to be recognized and selecting, from the acquired images, the face image with the highest image quality as the optimal face image for face recognition; the face image recognition module is used for recognizing the optimal face image and outputting the face recognition result to the control center; the image quality metric is defined by the following calculation formula:
where Z_i is the image quality of the i-th of the m face images; ρ_i is the average gray value of the set region of the i-th face image, and ρ is a gray-value threshold set according to the actual situation; v_i is the edge sharpness of the i-th face image, and v is an edge sharpness threshold set according to the actual situation; the formula also uses the average gray value and the average edge sharpness taken over all m images; m is the number of images acquired from the camera system; α_i is the proportion of the face in the i-th face image, and α is a set proportion threshold; f(α_i - α) = 1 when α_i - α ≥ 0, and f(α_i - α) = 0 when α_i - α < 0;
The identification of the optimal face image comprises the following steps:
(1) selecting N face images from the face database constructed in advance by the face recognition subsystem to construct a training sample set X = [X_1, X_2, …, X_N]; taking the screened face image as the test sample Y; performing filtering preprocessing on the training sample set, retaining the training samples that contribute most to representing and classifying the test sample, and constructing an optimal training sample set from the retained training samples;
(2) dividing each face image in the optimal training sample set equally into R blocks, so that the optimal training sample set is divided into R sub-sample sets A_p, p = 1, …, R, where each sub-sample set consists of the p-th block of each face image;
(3) dividing the test sample equally into R blocks, i.e. Y = [Y_p, p = 1, …, R], and weighting the blocks of the optimal training sample set and of the test sample according to the following formulas:
where ν_p is the mean sparse residual of the p-th block over all face images in the optimal training sample set; ν_1 and ν_2 are set residual thresholds with ν_1 < ν_2; f(ν_p) is the decision function, with f(ν_p) = 1 when ν_p < ν_1 and f(ν_p) = 0 when ν_p > ν_2;
where μ_p is the ratio of the inter-class distance variance to the intra-class distance variance of the p-th block in the optimal training sample set; μ_1 and μ_2 are set discrimination thresholds with μ_1 < μ_2; f(μ_p) is the decision function, with f(μ_p) = 0 when μ_p < μ_2 and f(μ_p) = 1 when μ_p > μ_1;
(4) performing sparse representation of the weighted test sample over the weighted optimal training sample set, computing the reconstruction residual of each class, and finally assigning the test sample to the class with the smallest reconstruction residual;
wherein performing filtering preprocessing on the training sample set, retaining the training samples that contribute most to representing and classifying the test sample, and constructing the optimal training sample set from the retained training samples specifically comprises the following steps:
(1) linearly representing the test sample Y with the training sample set X and calculating the representation coefficients S = [S_1, S_2, …, S_N]^T of the training sample vectors in X, where the representation coefficients S are calculated as:
S = (X^T X + ξI)^(-1) X^T Y
where I is the identity matrix and ξ is a set coefficient;
(2) letting the training sample set X contain M classes, with the j-th class containing n_j training samples, and calculating the reconstruction residual of each class as:
where E_j is the reconstruction residual of the j-th class, X_j denotes the training sample subset of the j-th class, and S_k denotes the representation coefficient corresponding to the k-th training sample of the j-th class;
(3) selecting the classes corresponding to the m smallest reconstruction residuals as candidate classes and constructing a neighbor dictionary D = [D_1, D_2, …, D_m] from these m candidate classes, where D_j (j = 1, …, m) denotes the training sample subset of the j-th candidate class; linearly representing the test sample Y with the candidate classes and calculating the representation coefficient corresponding to each candidate class in the neighbor dictionary D:
S′ = (D^T D + ξI)^(-1) D^T Y
where S′ denotes the representation coefficients of the candidate classes, S′ = [S_1′, S_2′, …, S_m′], and S_j′ (j = 1, …, m) is the representation coefficient corresponding to the j-th candidate class;
(4) constructing the optimal training sample set from the retained training samples as follows:
2. The intelligent home system based on face recognition according to claim 1, characterized in that the access control subsystem comprises a door, a door lock and a controller for controlling opening of the door lock, and the controller is connected to the control center.
3. The intelligent home system based on face recognition according to claim 1, characterized in that the smart home comprises a lamp, a water heater and an air conditioner.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710154767.3A CN106845460B (en) | 2017-03-15 | 2017-03-15 | Intelligent household system based on face recognition |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710154767.3A CN106845460B (en) | 2017-03-15 | 2017-03-15 | Intelligent household system based on face recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106845460A CN106845460A (en) | 2017-06-13 |
CN106845460B (en) | 2020-09-25
Family
ID=59145064
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710154767.3A Active CN106845460B (en) | 2017-03-15 | 2017-03-15 | Intelligent household system based on face recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106845460B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107369210A (en) * | 2017-08-16 | 2017-11-21 | 李松 | A kind of vehicle maintenance and maintenance enterprise VR panorama planning and designing methods |
US11776308B2 (en) | 2017-10-25 | 2023-10-03 | Johnson Controls Tyco IP Holdings LLP | Frictionless access control system embodying satellite cameras for facial recognition |
CN108921991A (en) * | 2018-06-26 | 2018-11-30 | 佛山市中格威电子有限公司 | It is a kind of based on solar powered door-locking system |
CN108961497A (en) * | 2018-06-26 | 2018-12-07 | 佛山市中格威电子有限公司 | A kind of door-locking system with warning function |
CN109062064A (en) * | 2018-08-07 | 2018-12-21 | 武汉工程大学 | A kind of intelligent home control device and control method based on electrnic house number plates |
CN109658563A (en) * | 2018-12-12 | 2019-04-19 | 广州小楠科技有限公司 | A kind of effective intelligent access control system |
CN109977658A (en) * | 2019-02-22 | 2019-07-05 | 苏州宏裕千智能设备科技有限公司 | A kind of interface processing method and intelligent terminal based on intelligent terminal |
CN114049718A (en) * | 2021-11-10 | 2022-02-15 | 深圳市巨龙创视科技有限公司 | Access control system based on face recognition |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103310179A (en) * | 2012-03-06 | 2013-09-18 | 上海骏聿数码科技有限公司 | Method and system for optimal attitude detection based on face recognition technology |
CN104536389A (en) * | 2014-11-27 | 2015-04-22 | 苏州福丰科技有限公司 | 3D face identification technology based intelligent household system and realization method thereof |
CN204631474U (en) * | 2015-05-27 | 2015-09-09 | 武汉东湖学院 | Based on the intelligent household management system of recognition of face |
CN105224921A (en) * | 2015-09-17 | 2016-01-06 | 桂林远望智能通信科技有限公司 | A kind of facial image preferentially system and disposal route |
CN105427421A (en) * | 2015-11-16 | 2016-03-23 | 苏州市公安局虎丘分局 | Entrance guard control method based on face recognition |
CN106204523A (en) * | 2016-06-23 | 2016-12-07 | 中国科学院深圳先进技术研究院 | A kind of image quality evaluation method and device |
CN106203294A (en) * | 2016-06-30 | 2016-12-07 | 广东微模式软件股份有限公司 | The testimony of a witness unification auth method analyzed based on face character |
Also Published As
Publication number | Publication date |
---|---|
CN106845460A (en) | 2017-06-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106845460B (en) | Intelligent household system based on face recognition | |
US10565433B2 (en) | Age invariant face recognition using convolutional neural networks and set distances | |
CN107871100B (en) | Training method and device of face model, and face authentication method and device | |
He et al. | Two-stage nonnegative sparse representation for large-scale face recognition | |
CN101271515B (en) | Image detection device capable of recognizing multi-angle objective | |
JP4543423B2 (en) | Method and apparatus for automatic object recognition and collation | |
EP2091021A1 (en) | Face authentication device | |
CN111311809A (en) | Intelligent access control system based on multi-biological-feature fusion | |
CN105956570B (en) | Smiling face's recognition methods based on lip feature and deep learning | |
JP2023138492A (en) | System and method for improving robustness of pre-trained system in deep neural network using randomization and sample rejection | |
CN111950429A (en) | Face recognition method based on weighted collaborative representation | |
CN108520201A (en) | Robust face recognition method based on weighted mixed norm regression | |
CN110175500B (en) | Finger vein comparison method, device, computer equipment and storage medium | |
Abushariah et al. | Automatic person identification system using handwritten signatures | |
CN117253318B (en) | Intelligent self-service payment terminal system and method | |
CN106845461B (en) | Electronic commerce transaction system based on face recognition and password recognition | |
CN106940905B (en) | Classroom automatic roll-calling system based on WIFI and smart phone | |
CN111191598A (en) | Facial expression recognition method based on intelligent accompanying robot | |
Dar et al. | Real time face authentication system using stacked deep auto encoder for facial reconstruction | |
KR100621883B1 (en) | An adaptive realtime face detecting method based on training | |
CN111553202B (en) | Training method, detection method and device for neural network for living body detection | |
Kakarwal et al. | Hybrid feature extraction technique for face recognition | |
Huang et al. | Research on Face Recognition System Based on Deep Convolutional Machine Learning Model | |
CN106650597B (en) | A kind of biopsy method and device | |
Huang et al. | Face detection using a modified radial basis function neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
TA01 | Transfer of patent application right | Effective date of registration: 20200825; Address after: No. 16, Huarui Road, Yanlong Street, Yandu District, Yancheng City, Jiangsu Province; Applicant after: JIANGSU ANWEISHI INTELLIGENT SECURITY Co.,Ltd.; Address before: Room 206, Elite Building, No. 1024 Nanshan Avenue, Nanshan Street, Nanshan District, Shenzhen, Guangdong Province, 518000; Applicant before: SHENZHEN HUITONG INTELLIGENT TECHNOLOGY Co.,Ltd. |
GR01 | Patent grant | |