CN112395901A - Improved face detection, positioning and recognition method in complex environment - Google Patents

Improved face detection, positioning and recognition method in complex environment

Info

Publication number
CN112395901A
CN112395901A (application number CN201910738193.3A)
Authority
CN
China
Prior art keywords
sample, face, samples, training, weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910738193.3A
Other languages
Chinese (zh)
Inventor
徐江涛
王相锋
聂凯明
高志远
查万斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University Marine Technology Research Institute
Original Assignee
Tianjin University Marine Technology Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University Marine Technology Research Institute filed Critical Tianjin University Marine Technology Research Institute
Priority to CN201910738193.3A
Publication of CN112395901A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

An improved method for face detection, positioning and recognition in complex environments. Haar-like features are trained on a large data set, and each feature is weighted according to its occurrence rate in the training samples. A threshold is set on the accumulated weight sum: the features with higher weights vote in the classifier, and a region is passed to the next stage only when its accumulated weight sum meets the threshold condition. This greatly reduces the time cost caused by the large number of features. Combined with an integral image, it enables fast and accurate detection and positioning of face regions in a complex environment. The system detects faces from haar-like features, and a combination of weak classifiers achieves high classification accuracy at low computational cost. A cascade of these classifier combinations then screens the input samples layer by layer and finally locates the face region. The detected face region is projected into a feature space built during training, and the input face is identified by a vote among its nearest samples. Detection and identification of faces in complex environments are thus realized.

Description

Improved face detection, positioning and recognition method in complex environment
Technical Field
The invention belongs to the field of image processing, particularly face detection and recognition, and provides an improved face detection, positioning and recognition method for complex environments.
Background
In recent years, with the development of face recognition technology, face recognition has gradually become widespread in daily life: unlocking a mobile phone or opening a dormitory door with a face scan is now routine. Detecting and recognizing a face in a complex environment is very easy for a person, but determining whether a face exists in a complex scene is very difficult for a machine.
Existing face recognition algorithms include: (1) face recognition based on geometric features, which extracts the positions and geometric relations of important feature points such as the eyes, mouth and nose as classification features; however, geometric features alone do not give high recognition accuracy. (2) Face recognition based on eigenfaces: a principal-component space is first constructed from a set of prepared face training images, and each face is then characterized by its projection weights in this space; identifying a particular face only requires comparing these weights with the known personal weights. (3) Face recognition based on elastic models, which combine a global feature description with modeling of local key feature points; the Gabor-wavelet face recognition algorithm based on sampling points is one of its representative methods. It preserves the global characteristics of the face while modeling local key features, but its drawbacks are high time complexity, low speed and complex implementation.
With the rise of deep learning, more and more effort has been invested in face detection research based on deep learning, and face recognition algorithms based on neural networks keep emerging. For example, Beijing Femto-Search Technology Co., Ltd. proposed a "face detection method and system based on a three-level convolutional network" (patent application No. 201710078431.3, publication No. CN 106874868A), which performs face detection with a three-level convolutional neural network: the levels, of gradually increasing capacity, are trained step by step, with the training result of the first n levels used as the input of the next level. However, the three-level network must be trained in stages, which is inefficient, makes the training steps cumbersome, prevents joint training of the tasks, and gives the network poor generalization ability, so the approach has certain limitations.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an improved face detection, positioning and recognition method for complex environments. It can accurately detect and recognize faces in input pictures that have complex backgrounds and contain several faces at once, solving the problems of low accuracy and poor recognition performance when processing face information in complex environments. By weighting the features according to their occurrence rate in the training sample set, the system achieves fast computation and solves the problem of long detection times caused by the large amount of data to be processed.
An improved face detection, positioning and recognition method in a complex environment. As shown in fig. 1, the system is divided into three parts: first, the input image is processed and face information is screened out of the complex environment; then a sample set is trained and a feature space is established; finally, the screened face information is identified. Specifically:
firstly, complex environment face detection:
1. detecting face information with haar-like features: read the input picture information and set an initial rectangular frame; the rectangle is divided into a black part and a white part, the pixel gray values covered by each part are summed, and the sum over the black part is subtracted from the sum over the white part to obtain a haar-like feature; common haar-like features are shown in fig. 3. A weight is assigned to each haar-like feature according to its occurrence rate in the training samples;
2. sorting the features by weight and setting a threshold: the features with higher weight are computed first for the current window, and the extracted haar-like features are processed into an integral image, as shown in fig. 4; the feature weights of the same window are accumulated, and the region is passed to the next stage only when the accumulated sum exceeds the threshold; a minimal sketch of the integral image computation follows;
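As an illustration of steps 1 and 2, the following Python/NumPy sketch (the function names are ours, not from the patent) builds the integral image of fig. 4 and evaluates a two-rectangle haar-like feature with it; every rectangle sum costs only four array lookups.

```python
import numpy as np

def integral_image(gray):
    """Integral image: ii[y, x] holds the sum of gray[:y, :x].
    The extra zero row/column lets every rectangle sum use 4 lookups."""
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.asarray(gray, dtype=np.int64).cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the pixel gray values inside the rectangle (x, y, w, h)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle haar-like feature: white (left) half minus black (right) half."""
    white = rect_sum(ii, x, y, w // 2, h)
    black = rect_sum(ii, x + w // 2, y, w - w // 2, h)
    return white - black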
3. training a weak classifier:
determining the number of features; for each feature f, a weak classifier h(x, f, p, a) is trained:

$$h(x, f, p, a) = \begin{cases} 1, & p\,f(x) < p\,a \\ 0, & \text{otherwise} \end{cases}$$
where x denotes a detection window, f is a feature, p is a parity indicating the direction of the inequality sign, and a is a threshold. The purpose of training the weak classifier is to determine the optimal threshold for the feature, i.e. the threshold giving the weak classifier the lowest error over all training samples. The training process of the weak classifier consists of the following steps:
(1) calculating the characteristic values of all training samples of the characteristic f;
(2) sorting the characteristic values obtained by the previous step;
(3) for each element in the sorted order:
(a) calculating the weight sum T1 of all face samples;
(b) calculating the weight sum T2 of all the non-face samples;
(c) calculating the sum T3 of all weights of the face sample before the element;
(d) calculating the sum T4 of all weights of the non-face sample before the element;
(4) the threshold is selected as a number between the previous feature value and the current feature value, and its classification error is calculated by the following formula:

$$e = \min\big(T_3 + (T_2 - T_4),\; T_4 + (T_1 - T_3)\big)$$
by scanning through this sorted table once, the threshold minimizing the classification error can be selected for the weak classifier; a sketch of this scan follows;
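A sketch of the single-pass scan under the T1-T4 bookkeeping defined above (a hypothetical helper, not code from the patent): the feature values are sorted once, and at each position the error of calling everything below the candidate threshold non-face, or face, is read off the running sums.

```python
import numpy as np

def train_weak_classifier(values, labels, weights):
    """One pass over the sorted feature values selects the threshold a
    (and parity p) with the lowest weighted classification error."""
    order = np.argsort(values)
    T1 = weights[labels == 1].sum()      # total weight of face samples
    T2 = weights[labels == 0].sum()      # total weight of non-face samples
    T3 = 0.0                             # face weight before current element
    T4 = 0.0                             # non-face weight before current element
    best = (float("inf"), 0.0, 1)        # (error, threshold a, parity p)
    for i in order:
        e_pos = T3 + (T2 - T4)           # "below threshold" classified non-face
        e_neg = T4 + (T1 - T3)           # "below threshold" classified face
        if min(e_pos, e_neg) < best[0]:
            parity = -1 if e_pos < e_neg else 1   # p*f(x) < p*a means "face"
            best = (min(e_pos, e_neg), float(values[i]), parity)
        if labels[i] == 1:               # move the element into the "before" sums
            T3 += weights[i]
        else:
            T4 += weights[i]
    return best                          # lowest-error (e, a, p)
```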
4. and (3) combining weak classifiers:
first, a weight is defined for each training sample, reflecting the probability that the sample is correctly classified; which samples each training round focuses on therefore depends on the sample weights. A sample's weight changes according to whether it was correctly classified in the previous round: if it was, its weight is reduced; if not, its weight is increased, so that the next round focuses on the misclassified samples. Second, the weak classifiers are combined into a strong classifier by weighted voting: each weak classifier is given a weight, larger for classifiers with smaller classification error so that they carry more influence in the vote, and smaller for classifiers with larger error rate so that their influence is reduced. In this way the weak classifiers are combined, by weighting, into a strong classifier with much stronger classification ability;
Assume the input training set is

$$\{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$$

where $x_i$ denotes a training sample and $y_i \in \{0, 1\}$ indicates whether the sample is a face. The number of learning cycles is T, the number of face images is m, and the number of non-face images is l. The method comprises the following steps:
(1) initialize the sample weights: each face sample is initialized to $w_{1,i} = \frac{1}{2m}$ and each non-face sample to $w_{1,i} = \frac{1}{2l}$;
(2) for T cycles, t = 1, 2, 3, …, T:
(a) normalize the weights:

$$w_{t,i} \leftarrow \frac{w_{t,i}}{\sum_{j=1}^{n} w_{t,j}}$$

where $w_{t,i}$ is the weight of the i-th sample in cycle t;
(b) for each feature j, train a classifier $h_j$ and calculate its weighted error rate over all samples:

$$\varepsilon_j = \sum_i w_{t,i}\,\lvert h_j(x_i) - y_i \rvert$$
(c) select the weak classifier $h_t$ with the minimum weighted error rate $\varepsilon_t$;
(d) update the weight of each sample:

$$w_{t+1,i} = w_{t,i}\,\beta_t^{\,1 - e_i}, \qquad \beta_t = \frac{\varepsilon_t}{1 - \varepsilon_t}$$

where $e_i = 0$ if the sample is correctly classified and $e_i = 1$ otherwise;
(3) the weak classifiers, with their continuously adjusted weights, form a strong classifier:

$$H(x) = \begin{cases} 1, & \sum_{t=1}^{T} \alpha_t h_t(x) \ge \frac{1}{2} \sum_{t=1}^{T} \alpha_t \\ 0, & \text{otherwise} \end{cases}, \qquad \alpha_t = \log\frac{1}{\beta_t}$$

where $h_t$ denotes a weak classifier and H is the final strong classifier, formed by the weighted vote of the T weak classifiers with weights $\alpha_t$. In the first cycle, all pictures of the same type are given the same weight; in each subsequent cycle the weights of misclassified samples are gradually increased, so that they receive more attention in the next classification round and the probability of classifying them correctly rises. All weak classifiers are thus combined into a strong classifier; a sketch of the whole boosting loop follows;
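The whole boosting loop of step 4, sketched under the formulas above (the matrix layout and names are our assumptions): `features` holds one row per sample and one column per haar-like feature, and `train_weak_classifier` is the single-pass routine sketched in step 3.

```python
import numpy as np

def adaboost(features, labels, T):
    """Combine weak classifiers into a strong classifier by weighted voting.
    labels: 1 for face, 0 for non-face. Returns [(j, a, p, alpha_t), ...]."""
    m = int((labels == 1).sum())                  # number of face images
    l = int((labels == 0).sum())                  # number of non-face images
    w = np.where(labels == 1, 1.0 / (2 * m), 1.0 / (2 * l))  # (1) init weights
    ensemble = []
    for t in range(T):                            # (2) T cycles
        w = w / w.sum()                           # (a) weight normalization
        best = None
        for j in range(features.shape[1]):        # (b) train on every feature j
            e, a, p = train_weak_classifier(features[:, j], labels, w)
            if best is None or e < best[0]:
                best = (e, j, a, p)               # (c) minimum weighted error
        e, j, a, p = best
        pred = (p * features[:, j] < p * a).astype(int)
        e_i = (pred != labels).astype(float)      # e_i: 0 if correct, else 1
        beta = e / (1.0 - e)
        w = w * beta ** (1.0 - e_i)               # (d) redefine sample weights
        ensemble.append((j, a, p, np.log(1.0 / beta)))  # alpha_t = log(1/beta_t)
    return ensemble
```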
5. a cascade classifier:
for the first-stage classifier, the training samples are all of the input training samples; the non-face samples used to train the second stage are the original non-face samples that the first stage falsely detects as faces. Through this stage-by-stage screening and classification, the cascaded strong classifier is constructed;
when an input image is examined, the detection must cover multiple regions and multiple sizes, because the size of a face in an image is not fixed. Multi-region detection obtains information from multiple regions of the image by translating a sampling sub-window, so that every region is examined. The samples used in training have a fixed size, but input images do not, so a multi-scale detection mode is needed to handle input images larger than the training samples. The system performs multi-size detection by continuously enlarging the sampling sub-window, and optimizes the computation with the integral image, so that the sum over each rectangular region requires only additions and subtractions of four values. During detection, the program samples a large number of sub-windows and screens them stage by stage; only regions detected as faces enter the next stage, and a sub-window is finally determined to be a face region only if it passes all classifiers of the cascade; a sketch of this scan follows;
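A sketch of the multi-region, multi-size scan of step 5. The window size, step and scale factor are illustrative choices, and `extract_features` is a hypothetical helper that evaluates the haar-like features of a patch; only sub-windows that pass every strong classifier of the cascade are kept as face regions.

```python
def strong_classify(stage, feats):
    """H(x): weighted vote of the weak classifiers in one cascade stage."""
    total = sum(alpha for _, _, _, alpha in stage)
    votes = sum(alpha * int(p * feats[j] < p * a) for j, a, p, alpha in stage)
    return votes >= 0.5 * total

def cascade_detect(gray, stages, base=24, step=4, scale=1.25):
    """Slide a sampling sub-window over the image at growing sizes; a
    sub-window is a face region only if it passes all cascaded stages."""
    detections = []
    size = base
    while size <= min(gray.shape):
        for y in range(0, gray.shape[0] - size + 1, step):
            for x in range(0, gray.shape[1] - size + 1, step):
                # hypothetical helper: haar-like features of the patch,
                # rescaled to the training window size
                feats = extract_features(gray[y:y + size, x:x + size], base)
                if all(strong_classify(stage, feats) for stage in stages):
                    detections.append((x, y, size))
        size = int(size * scale)                 # multi-size: enlarge the window
    return detections
```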
step two, sample training:
1. establish a face identity database and read in the face image information, forming a sample matrix $X$ of size $p \times n$ (one image per row);
2. center the data: subtract the mean from every sample, $x_i \leftarrow x_i - \mu$;
3. compute the covariance matrix $C = \frac{1}{p} X^{\mathsf T} X$, where X is the centered data; C is an $n \times n$ matrix;
4. perform an eigenvalue decomposition of the covariance matrix to obtain the eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$, and select the eigenvectors corresponding to the first k eigenvalues to form a new projection matrix $A$, each column of which is an n-dimensional eigenvector;
5. project the original samples into the new feature space to obtain the dimension-reduced samples $Y = XA$, a $p \times k$ matrix; a sketch follows;
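Steps 1 to 5 of the sample training as a NumPy sketch, assuming the dimensions given above (X is p by n, one flattened grayscale image per row; the function name is ours):

```python
import numpy as np

def train_feature_space(X, k):
    """Build the PCA feature space: center, covariance, eigendecomposition,
    projection. Returns the mean, the n-by-k projection matrix A and Y = XA."""
    mu = X.mean(axis=0)
    Xc = X - mu                              # 2. data centralization
    C = (Xc.T @ Xc) / X.shape[0]             # 3. n-by-n covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)     # 4. eigenvalue decomposition
    order = np.argsort(eigvals)[::-1]        # largest eigenvalues first
    A = eigvecs[:, order[:k]]                # projection matrix, n-by-k
    Y = Xc @ A                               # 5. dimension-reduced samples, p-by-k
    return mu, A, Y
```

When the number of pixels n far exceeds the number of samples p, one would in practice decompose the small p-by-p matrix $X X^{\mathsf T}$ instead and recover the eigenvectors of C from it; the direct form above is kept to mirror the steps.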
step three, face recognition:
1. initialization: set the threshold distance to the maximum of the distances between all training samples;
2. calculate the distance d between the newly input sample and each sample in the training set;
3. select the K nearest samples, and find the maximum value D of their distances;
4. if all the distances d are larger than the threshold distance, the input sample is considered not to belong to the sample set; training samples whose distance is smaller than D are taken as the K nearest samples;
the class that occurs most often among the K nearest samples is selected, and its sample name is taken as the name of the input sample; a sketch follows.
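Step three as a sketch (the array names are our assumptions): `Y_train` holds the projected training samples, `names` their identities, and a newly input face region is first projected with the matrix A from training.

```python
import numpy as np
from collections import Counter

def recognize(y_new, Y_train, names, K, threshold):
    """Vote among the K nearest training samples; reject the input when
    every distance exceeds the threshold distance."""
    d = np.linalg.norm(Y_train - y_new, axis=1)  # 2. distances to all samples
    if d.min() > threshold:                      # 4. not in the sample set
        return None
    nearest = np.argsort(d)[:K]                  # 3. K nearest samples
    votes = Counter(names[i] for i in nearest)   # count each class's occurrences
    return votes.most_common(1)[0][0]            # most frequent name wins
```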
Drawings
FIG. 1 is a diagram of the basic architecture of the system;
FIG. 2 is a flow chart of face detection in a complex environment;
FIG. 3 shows common haar-like features;
FIG. 4 is an integral image model;
FIG. 5 shows the face region determination process;
FIG. 6 is a sample training flow chart;
FIG. 7 is a diagram of the face recognition process.
Detailed Description
Sample training consists of reading the training samples in MATLAB; the ORL face database is chosen to shorten the training time. First, the face pictures in the database are read, and the gray information of each face picture is stored as one row, forming a $p \times n$ matrix $X$, where p is the number of samples and n is the number of gray values of one sample, arranged in order. Then the mean of the matrix $X$ is computed and subtracted from all of its values, centering the data to form a new matrix $X'$. The covariance matrix $C = \frac{1}{p} X'^{\mathsf T} X'$, an $n \times n$ matrix, is computed. The eigenvalues and eigenvectors of the covariance matrix are found, and the eigenvalues are arranged from largest to smallest. With a threshold of 90%, the sum of all eigenvalues is computed and the eigenvalues are accumulated from largest to smallest; once the accumulated sum divided by the total reaches 90%, the remaining eigenvalues and their corresponding eigenvectors are discarded. The retained eigenvectors form a new projection matrix A, and the original samples are projected into the new feature space; a sketch of this 90% rule follows.
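The 90% rule from the MATLAB procedure, sketched in Python for consistency with the other examples (the function name is ours):

```python
import numpy as np

def choose_k(eigvals, energy=0.90):
    """Smallest k such that the largest k eigenvalues hold 90% of the total."""
    vals = np.sort(eigvals)[::-1]                # eigenvalues, largest first
    ratio = np.cumsum(vals) / vals.sum()         # accumulated share of the total
    return int(np.searchsorted(ratio, energy) + 1)
```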
Current face recognition mainly targets images in which the face dominates the frame, which is somewhat inconvenient in practice. This system realizes face detection and recognition in complex environments and works at longer distances: mounted on the access control of a residential area, for example, it can recognize a face from farther away, without requiring the person to come particularly close. Deployed in public transport areas, it can identify people accurately in dense crowds and help build a safe public environment.

Claims (1)

1. An improved face detection, positioning and recognition method in a complex environment, characterized in that the system is divided into three parts: first, the input image is processed and face information is screened out of the complex environment; then a sample set is trained and a feature space is established; finally, the screened face information is identified. Specifically:
firstly, complex environment face detection:
1. detecting face information with haar-like features: read the input picture information and set an initial rectangular frame; the rectangle is divided into a black part and a white part, the pixel gray values covered by each part are summed, and the sum over the black part is subtracted from the sum over the white part to obtain a haar-like feature; a weight is assigned to each haar-like feature according to its occurrence rate in the training samples;
2. sorting the features by weight and setting a threshold: the features with higher weight are computed first for the current window, and the extracted haar-like features are processed into an integral image; the feature weights of the same window are accumulated, and the region is passed to the next stage only when the accumulated sum exceeds the threshold;
3. training a weak classifier:
determining the number of features; for each feature f, a weak classifier h(x, f, p, a) is trained:

$$h(x, f, p, a) = \begin{cases} 1, & p\,f(x) < p\,a \\ 0, & \text{otherwise} \end{cases}$$
where x denotes a detection window, f is a feature, p is a parity indicating the direction of the inequality sign, and a is a threshold. The purpose of training the weak classifier is to determine the optimal threshold for the feature, i.e. the threshold giving the weak classifier the lowest error over all training samples. The training process of the weak classifier consists of the following steps:
(1) calculating the characteristic values of all training samples of the characteristic f;
(2) sorting the characteristic values obtained by the previous step;
(3) for each element in the sorted order:
(a) calculating the weight sum T1 of all face samples;
(b) calculating the weight sum T2 of all the non-face samples;
(c) calculating the sum T3 of all weights of the face sample before the element;
(d) calculating the sum T4 of all weights of the non-face sample before the element;
(4) the threshold is selected as a number between the previous feature value and the current feature value, and its classification error is calculated by the following formula:

$$e = \min\big(T_3 + (T_2 - T_4),\; T_4 + (T_1 - T_3)\big)$$
by scanning through this sorted table once, the threshold minimizing the classification error can be selected for the weak classifier;
4. and (3) combining weak classifiers:
first, a weight is defined for each training sample, reflecting the probability that the sample is correctly classified; which samples each training round focuses on therefore depends on the sample weights. A sample's weight changes according to whether it was correctly classified in the previous round: if it was, its weight is reduced; if not, its weight is increased, so that the next round focuses on the misclassified samples. Second, the weak classifiers are combined into a strong classifier by weighted voting: each weak classifier is given a weight, larger for classifiers with smaller classification error so that they carry more influence in the vote, and smaller for classifiers with larger error rate so that their influence is reduced. In this way the weak classifiers are combined, by weighting, into a strong classifier with much stronger classification ability;
Assume the input training set is

$$\{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$$

where $x_i$ denotes a training sample and $y_i \in \{0, 1\}$ indicates whether the sample is a face. The number of learning cycles is T, the number of face images is m, and the number of non-face images is l. The method comprises the following steps:
(1) initialize the sample weights: each face sample is initialized to $w_{1,i} = \frac{1}{2m}$ and each non-face sample to $w_{1,i} = \frac{1}{2l}$;
(2) for T cycles, t = 1, 2, 3, …, T:
(a) normalize the weights:

$$w_{t,i} \leftarrow \frac{w_{t,i}}{\sum_{j=1}^{n} w_{t,j}}$$

where $w_{t,i}$ is the weight of the i-th sample in cycle t;
(b) for each feature j, train a classifier $h_j$ and calculate its weighted error rate over all samples:

$$\varepsilon_j = \sum_i w_{t,i}\,\lvert h_j(x_i) - y_i \rvert$$
(c) select the weak classifier $h_t$ with the minimum weighted error rate $\varepsilon_t$;
(d) update the weight of each sample:

$$w_{t+1,i} = w_{t,i}\,\beta_t^{\,1 - e_i}, \qquad \beta_t = \frac{\varepsilon_t}{1 - \varepsilon_t}$$

where $e_i = 0$ if the sample is correctly classified and $e_i = 1$ otherwise;
(3) the weak classifiers, with their continuously adjusted weights, form a strong classifier:

$$H(x) = \begin{cases} 1, & \sum_{t=1}^{T} \alpha_t h_t(x) \ge \frac{1}{2} \sum_{t=1}^{T} \alpha_t \\ 0, & \text{otherwise} \end{cases}, \qquad \alpha_t = \log\frac{1}{\beta_t}$$

where $h_t$ denotes a weak classifier and H is the final strong classifier, formed by the weighted vote of the T weak classifiers with weights $\alpha_t$. In the first cycle, all pictures of the same type are given the same weight; in each subsequent cycle the weights of misclassified samples are gradually increased, so that they receive more attention in the next classification round and the probability of classifying them correctly rises. All weak classifiers are thus combined into a strong classifier;
5. a cascade classifier:
for the first-stage classifier, the training samples are all of the input training samples; the non-face samples used to train the second stage are the original non-face samples that the first stage falsely detects as faces. Through this stage-by-stage screening and classification, the cascaded strong classifier is constructed;
when an input image is examined, the detection must cover multiple regions and multiple sizes, because the size of a face in an image is not fixed. Multi-region detection obtains information from multiple regions of the image by translating a sampling sub-window, so that every region is examined. The samples used in training have a fixed size, but input images do not, so a multi-scale detection mode is needed to handle input images larger than the training samples. The system performs multi-size detection by continuously enlarging the sampling sub-window, and optimizes the computation with the integral image, so that the sum over each rectangular region requires only additions and subtractions of four values. During detection, the program samples a large number of sub-windows and screens them stage by stage; only regions detected as faces enter the next stage, and a sub-window is finally determined to be a face region only if it passes all classifiers of the cascade;
step two, sample training:
1. establish a face identity database and read in the face image information, forming a sample matrix $X$ of size $p \times n$ (one image per row);
2. center the data: subtract the mean from every sample, $x_i \leftarrow x_i - \mu$;
3. compute the covariance matrix $C = \frac{1}{p} X^{\mathsf T} X$, where X is the centered data; C is an $n \times n$ matrix;
4. perform an eigenvalue decomposition of the covariance matrix to obtain the eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$, and select the eigenvectors corresponding to the first k eigenvalues to form a new projection matrix $A$, each column of which is an n-dimensional eigenvector;
5. project the original samples into the new feature space to obtain the dimension-reduced samples $Y = XA$, a $p \times k$ matrix;
step three, face recognition:
1. initialization: set the threshold distance to the maximum of the distances between all training samples;
2. calculate the distance d between the newly input sample and each sample in the training set;
3. select the K nearest samples, and find the maximum value D of their distances;
4. if all the distances d are larger than the threshold distance, the input sample is considered not to belong to the sample set; training samples whose distance is smaller than D are taken as the K nearest samples;
the class that occurs most often among the K nearest samples is selected, and its sample name is taken as the name of the input sample.
CN201910738193.3A 2019-08-12 2019-08-12 Improved face detection, positioning and recognition method in complex environment Pending CN112395901A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910738193.3A CN112395901A (en) 2019-08-12 2019-08-12 Improved face detection, positioning and recognition method in complex environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910738193.3A CN112395901A (en) 2019-08-12 2019-08-12 Improved face detection, positioning and recognition method in complex environment

Publications (1)

Publication Number Publication Date
CN112395901A 2021-02-23

Family

ID=74602133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910738193.3A Pending CN112395901A (en) 2019-08-12 2019-08-12 Improved face detection, positioning and recognition method in complex environment

Country Status (1)

Country Link
CN (1) CN112395901A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101526997A (en) * 2009-04-22 2009-09-09 无锡名鹰科技发展有限公司 Embedded infrared face image identifying method and identifying device
CN101964063A (en) * 2010-09-14 2011-02-02 南京信息工程大学 Method for constructing improved AdaBoost classifier
CN103116756A (en) * 2013-01-23 2013-05-22 北京工商大学 Face detecting and tracking method and device
CN105550708A (en) * 2015-12-14 2016-05-04 北京工业大学 Visual word bag model constructing model based on improved SURF characteristic
CN105913053A (en) * 2016-06-07 2016-08-31 合肥工业大学 Monogenic multi-characteristic face expression identification method based on sparse fusion
CN107316036A (en) * 2017-06-09 2017-11-03 广州大学 A kind of insect recognition methods based on cascade classifier
CN108898093A (en) * 2018-02-11 2018-11-27 陈佳盛 A kind of face identification method and the electronic health record login system using this method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杨超 (Yang Chao), "Research and Design of a License Plate Recognition System" (车牌识别系统研究与设计), China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408804A (en) * 2021-06-24 2021-09-17 广东电网有限责任公司 Electricity stealing behavior detection method, system, terminal equipment and storage medium
CN115311824A (en) * 2022-07-05 2022-11-08 南京邮电大学 Campus security management system and method based on Internet
CN115827995A (en) * 2022-12-13 2023-03-21 深圳市爱聊科技有限公司 Social matching method based on big data analysis

Similar Documents

Publication Publication Date Title
Sun et al. Deep learning face representation by joint identification-verification
Ma et al. Robust precise eye location under probabilistic framework
US8320643B2 (en) Face authentication device
CN111126482B (en) Remote sensing image automatic classification method based on multi-classifier cascade model
KR101254177B1 (en) A system for real-time recognizing a face using radial basis function neural network algorithms
CN103605972B (en) Non-restricted environment face verification method based on block depth neural network
Sasankar et al. A study for Face Recognition using techniques PCA and KNN
KR101589149B1 (en) Face recognition and face tracking method using radial basis function neural networks pattern classifier and object tracking algorithm and system for executing the same
CN112395901A (en) Improved face detection, positioning and recognition method in complex environment
CN110555386A (en) Face recognition identity authentication method based on dynamic Bayes
CN111832405A (en) Face recognition method based on HOG and depth residual error network
Oleiwi et al. Integrated different fingerprint identification and classification systems based deep learning
Shuai et al. Multi-source feature fusion and entropy feature lightweight neural network for constrained multi-state heterogeneous iris recognition
Cheng et al. Unified classification and rejection: A one-versus-all framework
Hiremath et al. Human age and gender prediction using machine learning algorithm
Pryor et al. Deepfake detection analyzing hybrid dataset utilizing CNN and SVM
CN111898400A (en) Fingerprint activity detection method based on multi-modal feature fusion
Karungaru et al. Face recognition in colour images using neural networks and genetic algorithms
Sukkar et al. A Real-time Face Recognition Based on MobileNetV2 Model
KR100621883B1 (en) An adaptive realtime face detecting method based on training
Navabifar et al. A short review paper on Face detection using Machine learning
CN114743278A (en) Finger vein identification method based on generation of confrontation network and convolutional neural network
Jian et al. Cascading global and local features for face recognition using support vector machines and local ternary patterns
CN111898473A (en) Driver state real-time monitoring method based on deep learning
Shreedevi et al. An improved local binary pattern algorithm for face recognition applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2021-02-23)