CN112395901A - Improved face detection, positioning and recognition method in complex environment - Google Patents
Improved face detection, positioning and recognition method in complex environment
- Publication number
- CN112395901A (application CN201910738193.3A)
- Authority
- CN
- China
- Prior art keywords
- sample
- face
- samples
- training
- weight
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
An improved method for face detection, positioning and recognition in a complex environment. Haar-like features are trained on a large data set, and each feature is weighted according to its occurrence rate; a threshold is set on the accumulated weight sum. The higher-weight features vote in the classifier, and a region is passed to the next stage only when its accumulated weight sum meets the threshold condition, which greatly reduces the time cost caused by the large number of features. Combined with an integral image, this enables fast and accurate detection and positioning of face regions in a complex environment. The system detects faces based on haar-like features and achieves high classification accuracy at low computational cost through a combination of weak classifiers; the cascaded classifiers then screen the input samples layer by layer until the face region is located. The detected face region is projected into a feature space built during training, and recognition is performed by a vote among the nearest samples. Detection and recognition of faces in complex environments are thus realized.
Description
Technical Field
The invention belongs to the field of image processing, particularly relates to face detection and recognition applications, and provides an improved face detection, positioning and recognition method for complex environments.
Background
In recent years, with the development of face recognition technology, face recognition has gradually come into wide use in daily life: face-scan unlocking of mobile phones, face-scan entry to dormitory buildings, and so on. Detecting and recognizing a face in a complex environment is very easy for a person, but determining whether a face exists in a complex environment is very complicated for a machine.
Existing face recognition algorithms include: (1) face recognition based on geometric features, which extracts the positions and geometric relations of important feature points such as the eyes, mouth and nose as classification features; however, geometric features alone do not give high recognition accuracy. (2) Face recognition based on eigenfaces: a principal component space is first constructed from a set of prepared face training images, each face is then characterized by its projection weights onto these components, and identifying a particular face only requires comparing its weights with known personal weights. (3) Face recognition based on an elastic model, which combines a global feature description with modelling of local feature key points and is one of the representative sampling-point-based Gabor-wavelet face recognition methods; it preserves the global characteristics of the face while modelling local key features, but its drawbacks are high time complexity, low speed and complex implementation.
With the rise of deep learning, more and more research has been invested in deep-learning-based face detection, and face recognition algorithms based on neural networks keep emerging. For example, Beijing Femto-Search Technology Co., Ltd. proposed a "face detection method and system based on a three-level convolutional network" (application No. 201710078431.3, publication No. CN106874868A), which uses a three-level convolutional neural network for face detection, training the progressively stronger multi-level network stage by stage and feeding the training result of the first n levels into the next level. However, the three-level network must be trained in stages, which is inefficient, makes the training steps cumbersome, prevents joint tasks and gives the network poor generalization ability, so the approach has certain limitations.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an improved face detection, positioning and recognition method for complex environments. It can accurately detect and recognize faces in input pictures that have a complex background and contain several faces at once, solving the problems of low accuracy and poor recognition performance when processing face information in complex environments. By weighting the features according to their occurrence rate in the training sample set, the system computes quickly and solves the problem of long detection times caused by the calculation of large amounts of data.
An improved face detection, positioning and recognition method in a complex environment is disclosed. As shown in fig. 1, the system is divided into three parts: first, the input image is processed and face information is screened out of the complex environment; then a sample set is trained and a feature space is established; finally, the screened face information is identified. Specifically:
firstly, complex environment face detection:
1. detecting face information using haar-like features: the input picture information is read and an initial rectangular frame is set; the rectangle is divided into a black part and a white part, the gray values of the pixels covered by each part are summed, and the sum over the black part is subtracted from the sum over the white part to obtain a haar-like feature; common haar-like features are shown in fig. 3, and each haar-like feature is weighted by its occurrence rate in the training samples;
2. sorting the features by weight and setting a threshold: the higher-weight features of the current sub-window are computed first, and the extracted haar-like features are processed into an integral image, as shown in fig. 4; the feature weights of the same sub-window are accumulated, and the window is passed to the next stage when the accumulated sum exceeds the threshold;
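The integral-image computation used above can be sketched in Python; this is an illustrative sketch (the function names and the specific two-rectangle feature are chosen for illustration, not taken from the patent):

```python
import numpy as np

def integral_image(gray):
    """Cumulative sums over rows then columns; ii[y, x] holds the sum of
    all pixels above and to the left of (y, x), inclusive."""
    return gray.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of the w*h rectangle with top-left corner (x, y), via the
    four-corner trick: only additions/subtractions of four values."""
    total = ii[y + h - 1, x + w - 1]
    if x > 0:
        total -= ii[y + h - 1, x - 1]
    if y > 0:
        total -= ii[y - 1, x + w - 1]
    if x > 0 and y > 0:
        total += ii[y - 1, x - 1]
    return total

def haar_two_rect_horizontal(ii, x, y, w, h):
    """Two-rectangle haar-like feature: white (left half) minus black (right half)."""
    half = w // 2
    white = rect_sum(ii, x, y, half, h)
    black = rect_sum(ii, x + half, y, half, h)
    return white - black
```

On a uniform image the white and black sums cancel, so the feature value is zero; on a real face window the light/dark structure (e.g. eyes versus cheeks) produces nonzero responses.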
3. training a weak classifier:
determining the number of features and training, for each feature f, a weak classifier h(x, f, p, a):

h(x, f, p, a) = 1 if p·f(x) < p·a, and 0 otherwise,

where x denotes a detection window, f the feature, p the polarity indicating the direction of the inequality, and a the threshold. The purpose of training the weak classifier is to determine the optimal feature threshold, i.e. the threshold for which the weak classifier's error over all training samples is lowest. The training process of the weak classifier consists of the following steps:
(1) calculating the characteristic values of all training samples of the characteristic f;
(2) sorting the characteristic values obtained by the previous step;
(3) for each element in the sorted order:
(a) calculating the weight sum T1 of all face samples;
(b) calculating the weight sum T2 of all the non-face samples;
(c) calculating the sum T3 of all weights of the face sample before the element;
(d) calculating the sum T4 of all weights of the non-face sample before the element;
(4) the threshold is selected as a number between the previous feature value and the current feature value, and the classification error at that threshold is

e = min( T3 + (T2 − T4), T4 + (T1 − T3) ),

i.e. the smaller of the errors made when all samples below the threshold are labelled non-face or face, respectively;
a single pass over this sorted table therefore suffices to select the threshold that minimizes the classification error for the weak classifier;
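Steps (1)-(4) above amount to a single scan over the sorted feature values. A minimal sketch of that scan, with running sums playing the roles of T1-T4 (function and variable names are illustrative, not from the patent):

```python
def train_weak_classifier(feature_values, labels, weights):
    """One pass over the feature values sorted ascending, as in steps (1)-(4):
    t_pos / t_neg are the total face / non-face weights (T1 / T2), and the
    running sums s_pos / s_neg are the weights seen so far (T3 / T4).
    Returns (error, threshold, polarity)."""
    order = sorted(range(len(feature_values)), key=lambda i: feature_values[i])
    t_pos = sum(w for w, y in zip(weights, labels) if y == 1)  # T1
    t_neg = sum(w for w, y in zip(weights, labels) if y == 0)  # T2
    s_pos = 0.0  # T3: face weight before the current element
    s_neg = 0.0  # T4: non-face weight before the current element
    best = (float("inf"), None, 1)
    for i in order:
        e1 = s_pos + (t_neg - s_neg)  # label everything below as non-face
        e2 = s_neg + (t_pos - s_pos)  # label everything below as face
        err = min(e1, e2)
        if err < best[0]:
            best = (err, feature_values[i], 1 if e1 < e2 else -1)
        if labels[i] == 1:
            s_pos += weights[i]
        else:
            s_neg += weights[i]
    return best
```

With four linearly separable samples the scan reaches error zero at the boundary between the two classes.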
4. and (3) combining weak classifiers:
First, a weight is defined for each training sample, representing the probability that the sample will be correctly classified; which samples each training round focuses on therefore depends on the sample weights. A sample's weight changes according to whether it was correctly classified in the previous round: if it was, its weight is reduced; if not, its weight is increased, so the next round focuses on the misclassified samples. Second, the weak classifiers are combined into a strong classifier by weighted voting: each weak classifier is assigned a weight, with a larger weight for a classifier with a smaller classification error (so that it has more influence in the vote) and a smaller weight for a classifier with a larger error rate (so that it has less). In summary, the weighted combination of weak classifiers forms a strong classifier with stronger classification capability;
Assume an input training set {(x₁, y₁), …, (x_n, y_n)}, where x_i denotes a training sample and y_i ∈ {0, 1} indicates whether the sample is a face (y_i = 1) or not (y_i = 0); the number of learning cycles is T, the number of face images is m, and the number of non-face images is l. The steps are as follows:
(1) initialize the sample weights: each face sample receives weight w_{1,i} = 1/(2m) and each non-face sample receives weight w_{1,i} = 1/(2l);
(2) for T cycles, t = 1, 2, …, T:
(a) normalize the weights so that w_t forms a probability distribution;
(b) for each feature j, train a weak classifier h_j and calculate its weighted error rate over all samples: ε_j = Σ_i w_i · |h_j(x_i) − y_i|;
(c) find the weak classifier h_t with the minimum weighted error rate ε_t among the weighted error values of all weak classifiers;
(d) redefine the weight of each sample: w_{t+1,i} = w_{t,i} · β_t^{1−e_i}, where β_t = ε_t / (1 − ε_t) and e_i = 0 if sample x_i is classified correctly, e_i = 1 otherwise;
(3) continuously adjusting the weights of the weak classifiers forms the strong classifier:

H(x) = 1 if Σ_{t=1}^{T} α_t·h_t(x) ≥ (1/2)·Σ_{t=1}^{T} α_t, and 0 otherwise, with α_t = log(1/β_t),

where h_t denotes a weak classifier and H is the strong classifier finally formed by the weighted vote of the T weak classifiers. In the first loop the weights of all pictures of the same type are set equal; with each loop the weights of misclassified samples gradually increase, so those samples receive more attention in the next round of classification, which increases the probability of correct classification; combining all weak classifiers yields a strong classifier;
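The weight-update and weighted-voting scheme above is the classical AdaBoost combination; a compact sketch under that assumption (function names and the toy weak learners are illustrative, not from the patent):

```python
import math

def adaboost(train, weak_learners, T):
    """Combine weak classifiers into a strong one per the scheme above:
    misclassified samples gain weight each round, and each selected weak
    classifier votes with weight alpha_t = log(1/beta_t)."""
    X, y = zip(*train)
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []  # list of (alpha_t, weak classifier)
    for _ in range(T):
        total = sum(w)
        w = [wi / total for wi in w]  # (a) normalize the weights
        # (b) weighted error rate of every candidate weak classifier
        errs = [sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
                for h in weak_learners]
        # (c) pick the weak classifier with the minimum weighted error
        t = min(range(len(weak_learners)), key=lambda j: errs[j])
        eps = errs[t]
        if eps >= 0.5:
            break
        beta = eps / (1.0 - eps) if eps > 0 else 1e-10
        h = weak_learners[t]
        # (d) shrink the weights of correctly classified samples
        w = [wi * (beta if h(xi) == yi else 1.0)
             for wi, xi, yi in zip(w, X, y)]
        ensemble.append((math.log(1.0 / beta), h))

    def strong(x):
        score = sum(alpha * h(x) for alpha, h in ensemble)
        return 1 if score >= 0.5 * sum(alpha for alpha, _ in ensemble) else 0
    return strong
```

On a toy one-dimensional problem the ensemble reproduces the best stump's decision boundary.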
5. a cascade classifier:
for the first-stage classifier, the training samples are all of the input training samples; the non-face samples used to train the second stage are those original first-stage non-face samples that were falsely detected as faces. Through this stage-by-stage screening and classification, the cascaded strong classifier is constructed;
when an input image is detected, it must be scanned over multiple regions and at multiple sizes, because the size of a face in the image is not fixed. Multi-region detection obtains information from multiple regions of the image by translating the sampling sub-window, so that every region is examined. The samples used in training have a fixed size, whereas input images do not, so multi-scale detection is needed to handle input faces larger than the training samples; the system performs it by progressively enlarging the sampling sub-window. Computation is optimized with the integral image, so the sum over any rectangular area requires only additions and subtractions of four values. During detection the program samples a large number of sub-windows and screens them level by level: only sub-windows detected as face regions pass to the next stage, and a sub-window is finally judged to be a face region only after passing every classifier in the cascade;
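The multi-region, multi-size cascade scan described above can be sketched as follows; the stage classifiers here are stand-ins for trained strong classifiers, and the window parameters (base size, scale step, stride) are illustrative assumptions:

```python
def cascade_detect(image_shape, stages, base=24, scale_step=1.25, stride=2):
    """Slide a square sub-window over every position (multi-region detection,
    via translation) and every scale (multi-size detection, by enlarging the
    window); keep only the windows that pass every cascaded stage."""
    H, W = image_shape
    detections = []
    size = base
    while size <= min(H, W):
        for y in range(0, H - size + 1, stride):
            for x in range(0, W - size + 1, stride):
                window = (x, y, size)
                # a window counts as a face only if ALL stages accept it;
                # most windows are rejected early by the cheapest stage
                if all(stage(window) for stage in stages):
                    detections.append(window)
        size = int(size * scale_step)
    return detections
```

Ordering the stages from cheapest to most expensive is what makes the cascade fast: the vast majority of sub-windows never reach the later, costlier classifiers.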
step two, sample training:
1. reading the training samples and storing the gray values of each face image as one row, forming a p×n matrix X (p samples, n gray values per sample);
2. subtracting the column mean from X to centre the data;
3. computing the covariance matrix of the centred data;
4. performing eigenvalue decomposition on the covariance matrix to obtain eigenvalues λ₁ ≥ λ₂ ≥ … ≥ λ_n, and selecting the eigenvectors corresponding to the first k eigenvalues to form a new projection matrix A; each eigenvector is an n-dimensional vector, so A is an n×k matrix;
5. projecting the original samples into the new feature space to obtain the dimension-reduced samples Y = X·A, a p×k matrix;
step three, face recognition:
1. initialization: set the threshold distance D to the maximum of the distances between all pairs of training samples;
2. calculate the distance d between the newly input sample and every sample in the training set;
3. select the K nearest samples and find the maximum value d_max of their distances to the input;
4. if all K of these distances are larger than D, the input sample is considered not to belong to the sample set; otherwise the training samples with d ≤ d_max are taken as the K-nearest neighbours;
5. count how often each class occurs among the K-nearest neighbours and assign the input sample the name of the class that occurs most often.
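The recognition steps above amount to K-nearest-neighbour voting with a distance-based rejection threshold; a minimal sketch under that reading (function name and gallery format are illustrative):

```python
import math
from collections import Counter

def knn_identify(query, gallery, k=5):
    """gallery: list of (feature_vector, name) pairs in the projected space.
    Reject the query when even its k nearest neighbours are farther away than
    the largest pairwise distance D in the training set; otherwise vote."""
    # threshold D: maximum distance between any two training samples
    D = max(math.dist(a, b) for a, _ in gallery for b, _ in gallery)
    # k nearest training samples to the query
    neighbours = sorted(gallery, key=lambda s: math.dist(query, s[0]))[:k]
    if all(math.dist(query, v) > D for v, _ in neighbours):
        return None  # the input does not belong to the sample set
    votes = Counter(name for _, name in neighbours)
    return votes.most_common(1)[0][0]
```

A query near one identity's cluster is labelled by majority vote, while a query far from every training sample is rejected instead of being forced into the closest class.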
Drawings
FIG. 1 is a diagram of the basic architecture of the system;
FIG. 2 is a flow chart of face detection in a complex environment;
FIG. 3 is a graph of a conventional haar-like feature;
FIG. 4 is an integral plot model;
fig. 5 is a view of a face region determination process;
FIG. 6 is a sample training flow diagram;
fig. 7 is a diagram of a face recognition process.
Detailed Description
The sample training reads the training samples in MATLAB; the ORL face database is selected to shorten training time. First, the face pictures in the database are read and the gray values of each picture are stored as one row, forming a p×n matrix X, where p is the number of samples and n is the number of gray values per sample (all pixels of one image in order). Then the mean of each column of X is computed and subtracted from the matrix to centre the data, giving a new matrix. The covariance matrix of the centred data, an n×n matrix, is computed, and its eigenvalues and eigenvectors are solved, with the eigenvalues arranged from largest to smallest. With the threshold set to 90%, the sum of all eigenvalues is computed and the eigenvalues are accumulated from largest to smallest; once the running sum divided by the total reaches 90%, the remaining eigenvalues and their corresponding eigenvectors are discarded. The retained eigenvectors form the new projection matrix A, and the original samples are projected into the new feature space.
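The training procedure in this paragraph is essentially PCA with a 90% energy threshold; a sketch under that assumption (function name is illustrative; for real face images n is large, so practical eigenface code usually decomposes the smaller p×p matrix instead of the n×n covariance):

```python
import numpy as np

def train_eigenfaces(X, energy=0.90):
    """X: p x n matrix, one flattened grayscale face per row.  Centre the
    data, eigendecompose the covariance, and keep the leading eigenvectors
    whose eigenvalues account for `energy` of the total variance."""
    mean = X.mean(axis=0)
    Xc = X - mean                            # centre each pixel column
    C = (Xc.T @ Xc) / X.shape[0]             # n x n covariance matrix
    vals, vecs = np.linalg.eigh(C)           # eigh returns ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]   # sort from largest to smallest
    cum = np.cumsum(vals) / vals.sum()       # running sum / total
    k = int(np.searchsorted(cum, energy)) + 1
    A = vecs[:, :k]                          # n x k projection matrix
    Y = Xc @ A                               # p x k dimension-reduced samples
    return mean, A, Y
```

A new face is recognized by centring it with `mean`, projecting with `A`, and comparing against the rows of `Y` in the reduced space.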
Current face recognition mainly targets images in which face information dominates, which is somewhat inconvenient in practice. The system realizes face detection and recognition in complex environments and at longer distances: for example, residential access control equipped with the system can recognize faces without requiring people to come particularly close, and public-transport areas equipped with it can identify faces accurately in crowds, helping to build a safe public environment.
Claims (1)
1. An improved face detection, positioning and recognition method in a complex environment, characterized in that the system is divided into three parts: first, the input image is processed and face information is screened out of the complex environment; then a sample set is trained and a feature space is established; finally, the screened face information is identified. Specifically:
firstly, complex environment face detection:
1. detecting face information using haar-like features: the input picture information is read and an initial rectangular frame is set; the rectangle is divided into a black part and a white part, the gray values of the pixels covered by each part are summed, and the sum over the black part is subtracted from the sum over the white part to obtain a haar-like feature; each haar-like feature is weighted by its occurrence rate in the training samples;
2. sorting the features by weight and setting a threshold: the higher-weight features of the current sub-window are computed first, and the extracted haar-like features are processed into an integral image; the feature weights of the same sub-window are accumulated, and the window is passed to the next stage when the accumulated sum exceeds the threshold;
3. training a weak classifier:
determining the number of features and training, for each feature f, a weak classifier h(x, f, p, a):

h(x, f, p, a) = 1 if p·f(x) < p·a, and 0 otherwise,

where x denotes a detection window, f the feature, p the polarity indicating the direction of the inequality, and a the threshold. The purpose of training the weak classifier is to determine the optimal feature threshold, i.e. the threshold for which the weak classifier's error over all training samples is lowest. The training process of the weak classifier consists of the following steps:
(1) calculating the characteristic values of all training samples of the characteristic f;
(2) sorting the characteristic values obtained by the previous step;
(3) for each element in the sorted order:
(a) calculating the weight sum T1 of all face samples;
(b) calculating the weight sum T2 of all the non-face samples;
(c) calculating the sum T3 of all weights of the face sample before the element;
(d) calculating the sum T4 of all weights of the non-face sample before the element;
(4) the threshold is selected as a number between the previous feature value and the current feature value, and the classification error at that threshold is

e = min( T3 + (T2 − T4), T4 + (T1 − T3) ),

i.e. the smaller of the errors made when all samples below the threshold are labelled non-face or face, respectively;
a single pass over this sorted table therefore suffices to select the threshold that minimizes the classification error for the weak classifier;
4. and (3) combining weak classifiers:
First, a weight is defined for each training sample, representing the probability that the sample will be correctly classified; which samples each training round focuses on therefore depends on the sample weights. A sample's weight changes according to whether it was correctly classified in the previous round: if it was, its weight is reduced; if not, its weight is increased, so the next round focuses on the misclassified samples. Second, the weak classifiers are combined into a strong classifier by weighted voting: each weak classifier is assigned a weight, with a larger weight for a classifier with a smaller classification error (so that it has more influence in the vote) and a smaller weight for a classifier with a larger error rate (so that it has less). In summary, the weighted combination of weak classifiers forms a strong classifier with stronger classification capability;
Assume an input training set {(x₁, y₁), …, (x_n, y_n)}, where x_i denotes a training sample and y_i ∈ {0, 1} indicates whether the sample is a face (y_i = 1) or not (y_i = 0); the number of learning cycles is T, the number of face images is m, and the number of non-face images is l. The steps are as follows:
(1) initialize the sample weights: each face sample receives weight w_{1,i} = 1/(2m) and each non-face sample receives weight w_{1,i} = 1/(2l);
(2) for T cycles, t = 1, 2, …, T:
(a) normalize the weights so that w_t forms a probability distribution;
(b) for each feature j, train a weak classifier h_j and calculate its weighted error rate over all samples: ε_j = Σ_i w_i · |h_j(x_i) − y_i|;
(c) find the weak classifier h_t with the minimum weighted error rate ε_t among the weighted error values of all weak classifiers;
(d) redefine the weight of each sample: w_{t+1,i} = w_{t,i} · β_t^{1−e_i}, where β_t = ε_t / (1 − ε_t) and e_i = 0 if sample x_i is classified correctly, e_i = 1 otherwise;
(3) continuously adjusting the weights of the weak classifiers forms the strong classifier:

H(x) = 1 if Σ_{t=1}^{T} α_t·h_t(x) ≥ (1/2)·Σ_{t=1}^{T} α_t, and 0 otherwise, with α_t = log(1/β_t),

where h_t denotes a weak classifier and H is the strong classifier finally formed by the weighted vote of the T weak classifiers. In the first loop the weights of all pictures of the same type are set equal; with each loop the weights of misclassified samples gradually increase, so those samples receive more attention in the next round of classification, which increases the probability of correct classification; combining all weak classifiers yields a strong classifier;
5. a cascade classifier:
for the first-stage classifier, the training samples are all of the input training samples; the non-face samples used to train the second stage are those original first-stage non-face samples that were falsely detected as faces. Through this stage-by-stage screening and classification, the cascaded strong classifier is constructed;
when an input image is detected, it must be scanned over multiple regions and at multiple sizes, because the size of a face in the image is not fixed. Multi-region detection obtains information from multiple regions of the image by translating the sampling sub-window, so that every region is examined. The samples used in training have a fixed size, whereas input images do not, so multi-scale detection is needed to handle input faces larger than the training samples; the system performs it by progressively enlarging the sampling sub-window. Computation is optimized with the integral image, so the sum over any rectangular area requires only additions and subtractions of four values. During detection the program samples a large number of sub-windows and screens them level by level: only sub-windows detected as face regions pass to the next stage, and a sub-window is finally judged to be a face region only after passing every classifier in the cascade;
step two, sample training:
1. reading the training samples and storing the gray values of each face image as one row, forming a p×n matrix X (p samples, n gray values per sample);
2. subtracting the column mean from X to centre the data;
3. computing the covariance matrix of the centred data;
4. performing eigenvalue decomposition on the covariance matrix to obtain eigenvalues λ₁ ≥ λ₂ ≥ … ≥ λ_n, and selecting the eigenvectors corresponding to the first k eigenvalues to form a new projection matrix A; each eigenvector is an n-dimensional vector, so A is an n×k matrix;
5. projecting the original samples into the new feature space to obtain the dimension-reduced samples Y = X·A, a p×k matrix;
step three, face recognition:
1. initialization: set the threshold distance D to the maximum of the distances between all pairs of training samples;
2. calculate the distance d between the newly input sample and every sample in the training set;
3. select the K nearest samples and find the maximum value d_max of their distances to the input;
4. if all K of these distances are larger than D, the input sample is considered not to belong to the sample set; otherwise the training samples with d ≤ d_max are taken as the K-nearest neighbours;
5. count how often each class occurs among the K-nearest neighbours and assign the input sample the name of the class that occurs most often.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910738193.3A CN112395901A (en) | 2019-08-12 | 2019-08-12 | Improved face detection, positioning and recognition method in complex environment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112395901A true CN112395901A (en) | 2021-02-23 |
Family
ID=74602133
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910738193.3A Pending CN112395901A (en) | 2019-08-12 | 2019-08-12 | Improved face detection, positioning and recognition method in complex environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112395901A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113408804A (en) * | 2021-06-24 | 2021-09-17 | 广东电网有限责任公司 | Electricity stealing behavior detection method, system, terminal equipment and storage medium |
CN115311824A (en) * | 2022-07-05 | 2022-11-08 | 南京邮电大学 | Campus security management system and method based on Internet |
CN115827995A (en) * | 2022-12-13 | 2023-03-21 | 深圳市爱聊科技有限公司 | Social matching method based on big data analysis |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101526997A (en) * | 2009-04-22 | 2009-09-09 | 无锡名鹰科技发展有限公司 | Embedded infrared face image identifying method and identifying device |
CN101964063A (en) * | 2010-09-14 | 2011-02-02 | 南京信息工程大学 | Method for constructing improved AdaBoost classifier |
CN103116756A (en) * | 2013-01-23 | 2013-05-22 | 北京工商大学 | Face detecting and tracking method and device |
CN105550708A (en) * | 2015-12-14 | 2016-05-04 | 北京工业大学 | Visual word bag model constructing model based on improved SURF characteristic |
CN105913053A (en) * | 2016-06-07 | 2016-08-31 | 合肥工业大学 | Monogenic multi-characteristic face expression identification method based on sparse fusion |
CN107316036A (en) * | 2017-06-09 | 2017-11-03 | 广州大学 | A kind of insect recognition methods based on cascade classifier |
CN108898093A (en) * | 2018-02-11 | 2018-11-27 | 陈佳盛 | A kind of face identification method and the electronic health record login system using this method |
Non-Patent Citations (1)
Title |
---|
Yang Chao: "Research and Design of a License Plate Recognition System", China Master's Theses Full-text Database, Information Science and Technology Series * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Sun et al. | Deep learning face representation by joint identification-verification | |
Ma et al. | Robust precise eye location under probabilistic framework | |
US8320643B2 (en) | Face authentication device | |
CN111126482B (en) | Remote sensing image automatic classification method based on multi-classifier cascade model | |
KR101254177B1 (en) | A system for real-time recognizing a face using radial basis function neural network algorithms | |
CN103605972B (en) | Non-restricted environment face verification method based on block depth neural network | |
Sasankar et al. | A study for Face Recognition using techniques PCA and KNN | |
KR101589149B1 (en) | Face recognition and face tracking method using radial basis function neural networks pattern classifier and object tracking algorithm and system for executing the same | |
CN112395901A (en) | Improved face detection, positioning and recognition method in complex environment | |
CN110555386A (en) | Face recognition identity authentication method based on dynamic Bayes | |
CN111832405A (en) | Face recognition method based on HOG and depth residual error network | |
Oleiwi et al. | Integrated different fingerprint identification and classification systems based deep learning | |
Shuai et al. | Multi-source feature fusion and entropy feature lightweight neural network for constrained multi-state heterogeneous iris recognition | |
Cheng et al. | Unified classification and rejection: A one-versus-all framework | |
Hiremath et al. | Human age and gender prediction using machine learning algorithm | |
Pryor et al. | Deepfake detection analyzing hybrid dataset utilizing CNN and SVM | |
CN111898400A (en) | Fingerprint activity detection method based on multi-modal feature fusion | |
Karungaru et al. | Face recognition in colour images using neural networks and genetic algorithms | |
Sukkar et al. | A Real-time Face Recognition Based on MobileNetV2 Model | |
KR100621883B1 (en) | An adaptive realtime face detecting method based on training | |
Navabifar et al. | A short review paper on Face detection using Machine learning | |
CN114743278A (en) | Finger vein identification method based on generation of confrontation network and convolutional neural network | |
Jian et al. | Cascading global and local features for face recognition using support vector machines and local ternary patterns | |
CN111898473A (en) | Driver state real-time monitoring method based on deep learning | |
Shreedevi et al. | An improved local binary pattern algorithm for face recognition applications |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 2021-02-23 |