CN108875645B - Face recognition method under complex illumination condition of underground coal mine - Google Patents


Info

Publication number
CN108875645B
CN108875645B
Authority
CN
China
Prior art keywords
face
image
feature
recognition
stage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810647665.XA
Other languages
Chinese (zh)
Other versions
CN108875645A (en
Inventor
范伟强
霍跃华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Mining and Technology Beijing CUMTB
Original Assignee
China University of Mining and Technology Beijing CUMTB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology Beijing CUMTB filed Critical China University of Mining and Technology Beijing CUMTB
Priority to CN201810647665.XA priority Critical patent/CN108875645B/en
Publication of CN108875645A publication Critical patent/CN108875645A/en
Application granted granted Critical
Publication of CN108875645B publication Critical patent/CN108875645B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Abstract

The invention discloses a face recognition method under complex illumination conditions in a coal mine, which mainly comprises an initialization stage, a training stage and a recognition stage. The initialization stage comprises image acquisition, image storage, image denoising, image enhancement and feature description; the training stage comprises feature vector dimension reduction and classifier model establishment; and the recognition stage classifies and recognizes the face to be recognized according to the model established by the classifier. Image denoising and image enhancement are realized with a fuzzy enhancement algorithm based on wavelet decomposition, the feature description of the wavelet-processed face image is performed with an ALBP operator, and a classifier is used to construct the face model and build the face sample database. The method can overcome the sharp drop in recognition rate caused by image shadows, bright and dark areas, dim light and highlights under complex underground illumination conditions, and improves the accuracy of face-based attendance recognition in underground coal mines.

Description

Face recognition method under complex illumination condition of underground coal mine
Technical Field
The invention relates to a face recognition method under complex illumination conditions in coal mines, in particular to an adaptive face recognition method that uses image enhancement and performs on-line training through a classifier, and belongs to the technical field of image pattern recognition.
Background
The current general flow of face recognition is as follows: the recognition system takes as input a face image of undetermined identity as the sample to be recognized, together with a number of face images of known identity in a face database as training samples, and outputs through an algorithm the similarity of the sample to be recognized, so as to indicate the identity of the person in the face image of undetermined identity. The face recognition method mainly comprises two parts: feature extraction and similarity calculation.
Face recognition technology is of great significance for underground coal mine applications such as video monitoring, attendance checking and personnel positioning. At present, face recognition systems in practical use need to collect facial images of the recognized person in a constrained environment (such as fixed illumination), but the illumination conditions in underground coal mines are complex, with poor light, uneven illumination and heavy dust; the image shadows, bright and dark areas, dim light and highlights caused by these complex illumination conditions sharply reduce the recognition rate. Researching a face recognition method suited to the complex illumination conditions of underground coal mines is therefore an urgent problem for applying face recognition technology underground.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a face recognition method for underground coal mines, solving the problem that existing face recognition technology cannot cope with the sharp drop in recognition rate caused by image shadows, bright and dark areas, dim light and highlights under complex underground illumination conditions.
The invention specifically adopts the following technical scheme to solve the technical problems:
a face recognition method under a complex illumination condition in a coal mine mainly comprises an initialization stage, a training stage and a recognition stage, wherein the initialization stage comprises image acquisition, image storage, image denoising, image enhancement and feature description, the training stage comprises feature vector dimension reduction and classifier model establishment, the recognition stage is used for classifying and recognizing faces to be recognized according to a model established by a classifier, and the specific steps are as follows:
A. the initialization phase:
(1) firstly, acquiring a face image by using image acquisition equipment, and transmitting the image to an image processing module for image storage;
(2) the image processing module carries out multi-scale wavelet decomposition on the stored image, and carries out image denoising and image enhancement by adopting an image fuzzy enhancement algorithm so as to obtain enhanced human face image characteristic points and carry out wavelet reconstruction on the human face characteristic image;
(3) texture feature description is carried out on the human face feature image obtained after reconstruction through an ALBP operator, and therefore a human face feature vector is formed;
B. the training stage comprises the following steps:
(1) collecting a plurality of sample images of the same face through image collecting equipment;
(2) initializing the obtained multiple sample images according to the processing process of an initialization stage to obtain reconstructed sample face feature vectors;
(3) performing model training on the obtained face feature vector by adopting a pattern recognition classifier, and storing a generated face model (face feature file) into a face sample database;
(4) repeating the processes (1), (2) and (3) in the training stage, storing the face models which are generated by different faces to be identified in sequence into a face sample database, and constructing a complete multi-face sample database;
C. the identification phase comprises the following steps:
(1) acquiring a human face image to be recognized through image acquisition equipment;
(2) initializing the obtained face image according to the processing process of an initialization stage to obtain a reconstructed face feature vector;
(3) modeling the obtained face feature vector, and comparing it for identification with all face templates in the database obtained after classifier training;
(4) and judging whether the face to be identified is in a constructed face sample database or not according to the similarity value obtained after comparison and identification.
Furthermore, a visible-light explosion-proof camera or an infrared explosion-proof camera is used to collect face images underground, and the camera is connected to the image processing module through an RJ45 or USB interface for image storage, image denoising and image enhancement.
Further, in the image fuzzy enhancement algorithm, a face image is decomposed into a low-frequency part and a high-frequency part by adopting wavelet decomposition; processing the low-frequency part by histogram equalization to enhance the overall contrast of the image; filtering the high-frequency part by adopting a wavelet denoising model of the fuzzy membership factor; carrying out fuzzy enhancement on the high-frequency part by adopting a PAL fuzzy enhancement algorithm, obtaining characteristic images with different scales and different directions by adopting nonlinear transformation of different thresholds, carrying out anti-fuzzy processing on the characteristic images, and carrying out wavelet reconstruction on the low-frequency part and the high-frequency part after the anti-fuzzy processing.
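The enhancement flow described above can be pictured with a minimal Python sketch, assuming PyWavelets (pywt) and NumPy. The rank-based equalization, the median/0.6745 noise scale and the plain soft threshold below are stand-ins for the histogram equalization, the fuzzy-membership denoising and the PAL fuzzy enhancement, whose exact expressions appear only as formula images in the detailed description; it is an illustration, not the patented implementation.

    # Minimal sketch of the wavelet-domain enhancement flow described above.
    import numpy as np
    import pywt

    def equalize(band):
        """Histogram-equalize a coefficient band while keeping its value range."""
        flat = band.ravel()
        ranks = flat.argsort().argsort().astype(np.float64)
        eq = ranks / max(len(flat) - 1, 1)                      # uniform in [0, 1]
        return (eq * (flat.max() - flat.min()) + flat.min()).reshape(band.shape)

    def soft_threshold(band, t):
        """Classical soft threshold; the patent scales the threshold with a fuzzy factor s."""
        return np.sign(band) * np.maximum(np.abs(band) - t, 0.0)

    def enhance_face(image, wavelet="db2", level=2):
        coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=level)
        coeffs[0] = equalize(coeffs[0])                         # low-frequency part
        out = [coeffs[0]]
        for (cH, cV, cD) in coeffs[1:]:                         # high-frequency parts
            t = np.median(np.abs(cD)) / 0.6745                  # rough noise scale (assumption)
            out.append(tuple(soft_threshold(b, t) for b in (cH, cV, cD)))
        return pywt.waverec2(out, wavelet)                      # reconstructed face image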
Further, in the feature extraction of the training stage, the ALBP operator
[Formula image: Figure BDA0001703866360000021]
is first used to extract a feature layer from the reconstructed face feature image; contrast values of different intervals and feature values of different intervals are then obtained, and the feature vector is constructed. In the formula, maxC and minC respectively represent the maximum and minimum contrast in a local area with ALBP window radius R and P neighborhood points; g_ci is the central pixel point in the ith area, and g_pi is any neighborhood pixel point in the ith area.
Further, in the process of constructing the feature vector, after the feature layer has been extracted, the expression
[Formula image: Figure BDA0001703866360000031]
is used to obtain the contrast values of the different intervals, where L in the formula is the number of intervals, r is the contrast range of each interval with r = (maxC - minC)/L, and Lr_i is the contrast value of the ith interval; after the contrast values of the different intervals are obtained, the expression
[Formula image: Figure BDA0001703866360000032]
is used to sequentially calculate the ALBP feature value of the ith interval, and the ALBP feature values are connected in sequence to form the face feature vector under the complex illumination condition, where i in the formula denotes the ith ALBP window, A_p = 1 when g_pi - g_ci > 0, and A_p = 0 when g_pi - g_ci ≤ 0.
Further, in the process of constructing the feature vector, the face image is divided into N local regions for the ALBP feature values acquired in each interval, and the per-layer values acquired for each region
[Formula image: Figure BDA0001703866360000033]
are cascaded so that ALBP_{P,R} of each region is obtained; finally, the ALBP_{P,R} of the different regions are connected to obtain the feature vector describing the global multi-contrast levels of the face.
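As a rough illustration of the regional feature-vector construction, the sketch below computes an ordinary LBP code per pixel (A_p = 1 when g_p - g_c > 0) and concatenates per-region histograms. The contrast-interval partition that makes the ALBP operator adaptive is given only as formula images above, so the 4x4 grid and the 256-bin histograms here are assumptions.

    # Sketch of a region-wise LBP feature vector (a stand-in for the ALBP descriptor).
    import numpy as np

    def lbp_codes(img):
        """8-neighbour LBP (P = 8, R = 1): A_p = 1 when g_p - g_c > 0, else 0."""
        g = img.astype(np.float64)
        c = g[1:-1, 1:-1]
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        codes = np.zeros_like(c)
        for p, (dy, dx) in enumerate(offsets):
            nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
            codes += ((nb - c) > 0) * (2 ** p)
        return codes.astype(np.int32)

    def regional_feature_vector(img, grid=(4, 4), bins=256):
        """Split the face into N = grid[0] * grid[1] regions and concatenate histograms."""
        codes = lbp_codes(img)
        feats = []
        for row in np.array_split(codes, grid[0], axis=0):
            for block in np.array_split(row, grid[1], axis=1):
                hist, _ = np.histogram(block, bins=bins, range=(0, bins))
                feats.append(hist / max(block.size, 1))
        return np.concatenate(feats)        # global multi-region descriptor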
Furthermore, in the comparison and identification process in the identification stage, the face image to be identified is scored in a confidence interval of 0-100%, the identification result is subjected to threshold judgment, and when the scored value is smaller than the threshold, the failure of identification is prompted, the face image to be identified is acquired again, and re-identification is carried out.
Drawings
FIG. 1 is a flow chart of classifier model building for face recognition according to the present invention;
FIG. 2 is a flow chart of classifier identification for face recognition according to the present invention;
fig. 3 is a flow chart of feature description of face recognition according to the present invention.
Fig. 4 is a diagram of image enhancement effect of face recognition according to the present invention.
Fig. 5 is a face feature diagram after LBP feature description for face recognition according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions and specific implementation methods of the present invention are described in detail below with reference to the accompanying drawings.
The classifier model establishing process of the face recognition method under the complex illumination condition of the underground coal mine is shown in figure 1; the method mainly comprises an initialization stage and a training stage, wherein the initialization stage comprises image acquisition, image storage, image enhancement, image denoising and feature description, and the training stage comprises feature vector dimension reduction and classifier model establishment;
the specific implementation steps are as follows:
(1) sample image acquisition (101): in an underground complex illumination environment, acquiring a face image by an explosion-proof visible light camera or an infrared camera which is arranged underground, and acquiring a plurality of face images of daily workers as sample images;
(2) image storage (102): after the face image is collected, the collecting equipment is connected to the image processing unit through its RJ45 or USB interface and the image is stored; the image storage unit can use an image acquisition card;
(3) wavelet decomposition (103): first, the miner face image collected and stored by the equipment is loaded into the image processing unit, and multi-layer wavelet decomposition is performed on the image using a wavelet basis function; the wavelet basis functions mainly include the Haar wavelet, the db-series wavelets, the Biorthogonal (biorNr.Nd) wavelets, the Coiflet (coifN) wavelets, the Symlet (symN) wavelets, the Morlet (morl) wavelet, the Mexican hat (mexh) wavelet and the Meyer wavelet, and the number of wavelet decomposition layers is at least 1;
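The wavelet bases listed above correspond to PyWavelets identifiers; a short sketch is given below (the 'db4' basis and the two-level depth are only examples). In PyWavelets the Morlet and Mexican-hat wavelets are continuous wavelets used with pywt.cwt rather than wavedec2, and the discrete Meyer wavelet is available as 'dmey'.

    import numpy as np
    import pywt

    face = np.random.rand(128, 128)                  # stand-in for a stored miner face image
    print(pywt.wavelist(kind='discrete'))            # e.g. 'haar', 'db4', 'bior2.2', 'coif3', 'sym4', 'dmey'
    coeffs = pywt.wavedec2(face, 'db4', level=2)     # multi-layer decomposition, n >= 1
    cA2, (cH2, cV2, cD2), (cH1, cV1, cD1) = coeffs   # low-frequency part plus per-level high-frequency parts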
(4) low-frequency coefficients (104): after the multi-layer wavelet decomposition, the miner face image is decomposed into high-frequency and low-frequency coefficient parts, and the decomposed low-frequency part is extracted with a wavelet low-frequency extraction function to obtain the low-frequency coefficient matrix A;
(5) histogram equalization (105): in order to improve the overall visual effect of the face image, inhibit image noise and keep the original information of the image, the low-frequency information of the underground personnel image after wavelet decomposition is subjected to gray level histogram equalization processing, and a low-frequency coefficient matrix A' after the overall brightness of the face image is enhanced is further obtained;
(6) high-frequency coefficients (106): after the multi-layer wavelet decomposition, the miner face image is decomposed into high-frequency and low-frequency coefficient parts, and coefficient extraction is performed on each high-frequency part of the n decomposition layers with a wavelet high-frequency coefficient extraction function to obtain the different high-frequency coefficient matrices B1, B2, …, Bn, where Bn is the nth-layer wavelet high-frequency coefficient matrix;
(7) wavelet denoising (107): the n wavelet high-frequency coefficient matrices B1, B2, …, Bn obtained by the high-frequency coefficient extraction in step (6) are subjected to wavelet denoising; by introducing a fuzzy membership factor s, the wavelet denoising model can adaptively adjust the wavelet threshold according to the noise distribution of the image acquired by the camera, and the wavelet threshold function in the denoising model is constructed as:
[Formula image: Figure BDA0001703866360000041]
where μ_T is the wavelet threshold function, ω_ij denotes the absolute value of point (i, j) in the wavelet high-frequency coefficients, sgn(·) is the sign function, s is the fuzzy membership factor, and T is the wavelet threshold;
The fuzzy membership factor s in the wavelet denoising model replaces the fixed threshold T of the traditional wavelet soft threshold, which greatly improves the flexibility of the model; s is calculated as:
[Formula image: Figure BDA0001703866360000042]
where a is a regulating factor with a ∈ (0, 1], ω_ij is the absolute value of the (i, j) point in the wavelet high-frequency coefficient matrix, and T is the wavelet threshold. The coefficients carrying noise information differ considerably from those carrying image information in the high-frequency bands, so the mean square error computed over the high-frequency part of the multi-scale wavelet decomposition is large; a coefficient value is then determined such that the mean square error of all coefficients larger than this value is minimal, and that value is taken as the threshold. The threshold T is calculated as T = 2^(-n) σ² / σ', where n is the highest wavelet decomposition level, σ is the image noise variance with σ = mean(|ω_ij|)/0.6745, and σ' is the standard deviation of the wavelet decomposition coefficients of the image, given by:
[Formula image: Figure BDA0001703866360000051]
where M is the number of rows and N the number of columns of the processed image; the high-frequency coefficients at all scales are wavelet-denoised according to the designed wavelet denoising function;
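The adaptive threshold of step (7) can be sketched as below. Because the expressions for μ_T, s and T survive only as formula images and garbled text, the reading T = 2^(-n) σ² / σ' with σ = mean(|ω_ij|)/0.6745 is an assumption, and the fuzzy factor s that would scale the soft threshold is omitted here.

    import numpy as np

    def adaptive_threshold(high_band, n_levels):
        """Threshold for one high-frequency matrix, assuming T = 2**(-n) * sigma**2 / sigma_prime."""
        w = np.abs(high_band)
        sigma = w.mean() / 0.6745           # noise scale from mean(|w_ij|) / 0.6745 (assumed reading)
        sigma_prime = high_band.std()       # standard deviation of the decomposition coefficients
        return (2.0 ** -n_levels) * sigma ** 2 / sigma_prime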
(8) blur enhancement (108): wavelet threshold denoising of the high-frequency information in step (7) generally blurs image details and edge information; in order to enhance effective texture information and suppress noise information, a fuzzy enhancement operator is applied to the high-frequency coefficient part, using the PAL fuzzy enhancement algorithm to fuzzy-enhance the high-frequency coefficients and obtain the enhanced high-frequency coefficient matrices B1', B2', …, Bn';
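Step (8) names the PAL fuzzy enhancement; the sketch below follows the classical Pal-King form as an illustration. Applying it to signed wavelet coefficients requires a normalisation step, and the fuzzifier values fd, fe and the iteration count are assumptions, so this is not the exact operator defined in the patent.

    import numpy as np

    def pal_king_enhance(band, fd=4.0, fe=2.0, iterations=1):
        """Classical Pal-King fuzzy enhancement applied to a band scaled to [0, 1]."""
        lo, hi = band.min(), band.max()
        x = (band - lo) / (hi - lo + 1e-12)              # normalise coefficients to [0, 1]
        mu = (1.0 + (1.0 - x) / fd) ** (-fe)             # fuzzification
        for _ in range(iterations):                      # contrast intensification (INT operator)
            mu = np.where(mu <= 0.5, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)
        x_new = 1.0 - fd * (mu ** (-1.0 / fe) - 1.0)     # defuzzification
        return np.clip(x_new, 0.0, 1.0) * (hi - lo) + lo # map back to the original range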
(9) wavelet reconstruction (109): the low-frequency coefficient matrix A' obtained by the histogram equalization in step (5) and the high-frequency coefficient matrices B1', B2', …, Bn' obtained by the fuzzy enhancement in step (8) are wavelet-reconstructed to obtain a miner face feature image of improved quality;
(10) feature description (110): first, a feature layer is extracted from the wavelet-reconstructed miner face feature image, then the contrast values and feature values of the different intervals are calculated, the feature vector is constructed, and feature-vector dimension reduction is performed; the specific implementation is detailed in the description of FIG. 3 below;
(11) establishing the classifier model (111): the feature vector from step (10) is reduced in dimension to obtain a low-dimensional feature vector, which is input into the selected classifier for model training; training is repeated until an optimal face model is obtained, and the generated face model (face feature file) is stored in the face sample database; the processes of step (3) to step (10) are repeated to obtain the face models of all face sample data to be recognized in turn, which are stored one by one in the constructed face sample database to finally obtain a complete face sample database; the classifier may be a Bayes classifier, a nearest neighbor classifier, a decision tree classifier, a neural network classifier or a support vector machine classifier.
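A compact sketch of steps (10)-(11) with scikit-learn is shown below. The SVM, the number of principal components and the use of probability outputs are assumptions; the patent only requires a pattern-recognition classifier chosen from the families listed above.

    # Sketch of PCA dimensionality reduction followed by classifier model training.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    def train_face_models(feature_vectors, labels, n_components=50):
        X = np.asarray(feature_vectors)
        pca = PCA(n_components=min(n_components, X.shape[0], X.shape[1]))
        X_low = pca.fit_transform(X)                    # low-dimensional feature vectors
        clf = SVC(probability=True).fit(X_low, labels)  # trained face models, stored in the database
        return pca, clf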
The flow of the classifier identification process of the face identification method under the complex illumination condition of the underground coal mine is shown in figure 2; the method mainly comprises an initialization stage and an identification stage, wherein the initialization stage comprises image acquisition, image storage, image enhancement, image denoising and feature description, and the identification stage comprises classifier identification of a face to be identified according to a model established by a classifier, and mainly comprises feature vector dimension reduction, comparison identification and face threshold judgment; the specific implementation steps are as follows:
(1) image acquisition to be identified (201): when a miner stands at a designated position, a visible light camera or an infrared camera on the underground face recognition device collects a complete face image of the miner, and the face image is used as an image to be recognized;
(2) image storage, wavelet decomposition, low-frequency coefficient acquisition, histogram equalization, high-frequency coefficient acquisition, wavelet denoising, fuzzy enhancement, wavelet reconstruction and feature extraction are carried out as described in the implementation of FIG. 1 above;
(3) classifier identification (202): carrying out the process from step (3) to step (10) in the implementation part of the attached figure 1 of the specification on the collected face image of the miner to be recognized, obtaining a low-dimensional feature vector, and comparing the low-dimensional feature vector with all face templates in a face sample database obtained after training of a classifier for recognition;
(4) in the comparison and identification process of the identification stage, comparing and identifying a face model in a face sample database and a low-dimensional feature vector of a face image to be identified, scoring the face similarity within a confidence interval of 0-100%, and judging a threshold value of an obtained identification result score; when the score value is larger than or equal to the threshold value, displaying that the recognition is successful and carrying out the face recognition of the next miner; and when the score value is smaller than the threshold value, prompting that the face recognition fails and repeating the steps 201 to 202 in the figure 2 to perform re-recognition.
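The comparison and threshold decision can be pictured as follows, reusing the hypothetical pca and clf objects from the training sketch above and expressing the 0-100% score as the best class probability; the 80% threshold is an example value only.

    def recognise(feature_vector, pca, clf, threshold=80.0):
        """Score a probe face against the trained models and apply the threshold decision."""
        x_low = pca.transform([feature_vector])
        probs = clf.predict_proba(x_low)[0]
        best = probs.argmax()
        score = 100.0 * probs[best]                 # similarity score in the 0-100% interval
        if score >= threshold:
            return clf.classes_[best], score        # recognised identity
        return None, score                          # failure: re-acquire the face image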
A characteristic description flow chart of a face recognition method under a complex illumination condition in a coal mine is shown in FIG. 3; the feature description comprises feature layer extraction, obtaining the contrast of different intervals, obtaining the feature values of the different intervals, constructing a feature vector and reducing the dimension of the feature vector; the specific implementation steps are as follows:
(1) feature layer extraction (301): the ALBP operator
[Formula image: Figure BDA0001703866360000061]
is used to extract a feature layer from the wavelet-reconstructed miner face feature image; in the formula, maxC and minC respectively represent the maximum and minimum contrast in a local area with ALBP window radius R and P neighborhood points; g_ci is the central pixel point in the ith area, and g_pi is any neighborhood pixel point in the ith area;
(2) obtaining contrasts of different intervals (302): after the feature layer extraction of step (1), the contrast values of the different intervals are calculated using
[Formula image: Figure BDA0001703866360000062]
where L is the number of intervals, r is the contrast range of each interval with r = (maxC - minC)/L, and Lr_i is the contrast value of the ith interval;
(3) obtaining feature values of different intervals (303): after the feature layer extraction of step (1), the feature values of the different intervals are calculated using
[Formula image: Figure BDA0001703866360000071]
which sequentially yields the ALBP feature value of the ith interval, where i in the formula denotes the ith ALBP window, A_p = 1 when g_pi - g_ci > 0, and A_p = 0 when g_pi - g_ci ≤ 0;
(4) constructing the feature vector (304): using the ALBP feature values of each interval obtained in step (3), the face image is divided into N local regions; the per-layer feature values of each region
[Formula image: Figure BDA0001703866360000072]
are cascaded so that ALBP_{P,R} of each region is obtained; finally, the ALBP_{P,R} of all regions are connected to obtain the feature vector describing the global multi-contrast levels of the face.
(5) feature vector dimension reduction (305): after the feature vector is obtained in step (4), principal component analysis (PCA) is performed on it: a linear transformation is used to find a set of optimal unit orthogonal basis vectors (the principal components), and the original sample is reconstructed from a linear combination of part of these basis vectors, which reduces similar, redundant feature values of the reconstructed sample and at the same time improves face recognition efficiency.
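Step (5) can be illustrated with the short sketch below, which projects the feature vectors onto the leading principal components (the optimal unit orthogonal basis) and reconstructs them from that partial basis; the component count is an assumption.

    import numpy as np
    from sklearn.decomposition import PCA

    def reduce_and_reconstruct(feature_vectors, n_components=50):
        X = np.asarray(feature_vectors)
        pca = PCA(n_components=min(n_components, X.shape[0], X.shape[1]))
        X_low = pca.fit_transform(X)             # reduced feature vectors (principal components)
        X_rec = pca.inverse_transform(X_low)     # reconstruction from the partial orthogonal basis
        return X_low, X_rec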

Claims (6)

1. A face recognition method under a complex illumination condition in a coal mine mainly comprises an initialization stage, a training stage and a recognition stage, wherein the initialization stage comprises image acquisition, image storage, image denoising, image enhancement and feature description, the training stage comprises feature vector dimension reduction and classifier model establishment, the recognition stage is used for classifying and recognizing faces to be recognized according to the established classifier model, and the specific steps are as follows:
A. the initialization phase:
(1) firstly, acquiring a face image by using image acquisition equipment, and transmitting the image to an image processing module for image storage;
(2) the image processing module carries out multi-scale wavelet decomposition on the stored image and decomposes the face image into a low-frequency part and a high-frequency part; processing the low-frequency part by histogram equalization to enhance the overall contrast of the image; carrying out image denoising and image enhancement on the high-frequency part by adopting an image blur enhancement algorithm so as to obtain enhanced human face image characteristic points; performing wavelet reconstruction on the low-frequency part after the histogram equalization and the high-frequency part after the fuzzy enhancement to obtain a face characteristic image after the wavelet reconstruction; the image fuzzy enhancement algorithm adopts a wavelet denoising model of a fuzzy membership factor to filter the high-frequency part; carrying out fuzzy enhancement on the high-frequency part after filtering processing by adopting a PAL fuzzy enhancement algorithm, obtaining characteristic images with different scales and different directions by adopting nonlinear transformation of different thresholds, and carrying out anti-fuzzy processing on the characteristic images;
(3) texture feature description is carried out on the human face feature image obtained after reconstruction through an ALBP operator, and therefore a human face feature vector is formed;
B. the training stage comprises the following steps:
(1) collecting a plurality of sample images of the same face through image collecting equipment;
(2) initializing the obtained multiple sample images according to the processing process of an initialization stage to obtain reconstructed sample face feature vectors;
(3) adopting a pattern recognition classifier, carrying out classifier model training after reducing the dimension of the obtained face feature vector, and storing the generated face model into a face sample database; the dimensionality reduction of the feature vector comprises the steps of carrying out principal component analysis on the feature vector, finding a group of optimal unit orthogonal vector bases by adopting linear transformation, and reconstructing an original sample by using linear combination of partial vectors of the optimal unit orthogonal vector bases to reduce a similar feature value of the reconstructed sample;
(4) repeating the processes (1), (2) and (3) in the training stage, storing the face models which are generated by different faces to be identified in sequence into a face sample database, and constructing a complete multi-face sample database;
C. the identification phase comprises the following steps:
(1) acquiring a human face image to be recognized through image acquisition equipment;
(2) initializing the obtained face image according to the processing process of an initialization stage to obtain a reconstructed face feature vector;
(3) reducing the dimension of the obtained portrait feature vector, and comparing and identifying with all face templates in a face sample database obtained after training of a classifier;
(4) and judging whether the face to be identified is in a constructed face sample database or not according to the similarity value obtained after comparison and identification.
2. The method for recognizing the face of the underground coal mine under the complex illumination condition as claimed in claim 1, wherein the method comprises the following steps: the visible light explosion-proof camera or the infrared explosion-proof camera is used for collecting face images underground, and the face images are connected with the image processing module through the RJ45 interface or the USB interface to perform image storage, image denoising and image enhancement.
3. The method for recognizing the face of the underground coal mine under the complex illumination condition as claimed in claim 1, wherein the method comprises the following steps: in the feature extraction of the training stage, the ALBP operator
[Formula image: Figure FDA0003299277700000021]
is first used to extract a feature layer from the reconstructed face feature image; contrast values of different intervals and feature values of different intervals are then obtained, and the feature vector is constructed, wherein maxC and minC in the formula respectively represent the maximum and minimum contrast in a local area with ALBP window radius R and P neighborhood points; g_ci is the central pixel point in the ith area, and g_pi is any neighborhood pixel point in the ith area.
4. The method for recognizing the face of the underground coal mine under the complex illumination condition as claimed in claim 3, wherein the method comprises the following steps: in the process of constructing the feature vector, after the feature layer has been extracted, the expression
[Formula image: Figure FDA0003299277700000022]
is used to obtain the contrast values of the different intervals, wherein L in the formula is the number of intervals, r is the contrast range of each interval with r = (maxC - minC)/L, and Lr_i is the contrast value of the ith interval; after the contrast values of the different intervals are obtained, the expression
[Formula image: Figure FDA0003299277700000023]
is used to sequentially calculate the ALBP feature value of the ith interval, and the ALBP feature values are connected in sequence to form the face feature vector under the complex illumination condition, wherein i in the formula denotes the ith ALBP window, A_p = 1 when g_pi - g_ci > 0, and A_p = 0 when g_pi - g_ci ≤ 0.
5. The method for recognizing the face of the underground coal mine under the complex illumination condition as claimed in claim 3, wherein the method comprises the following steps: in the process of constructing the feature vector, the face image is divided into N local regions for the ALBP feature values acquired in each interval, and the per-layer values acquired for each region
[Formula image: Figure FDA0003299277700000024]
are cascaded so that ALBP_{P,R} of each region is obtained; finally, the ALBP_{P,R} of the different regions are connected to obtain the feature vector describing the global multi-contrast levels of the face.
6. The method for recognizing the face of the underground coal mine under the complex illumination condition as claimed in claim 1, wherein the method comprises the following steps: in the comparison and recognition process of the recognition stage, the face image to be recognized is scored in a confidence interval of 0-100%, the recognition result is subjected to threshold judgment, and when the score value is smaller than the threshold, the recognition failure is prompted, the face image to be recognized is collected again, and re-recognition is carried out.
CN201810647665.XA 2018-06-22 2018-06-22 Face recognition method under complex illumination condition of underground coal mine Active CN108875645B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810647665.XA CN108875645B (en) 2018-06-22 2018-06-22 Face recognition method under complex illumination condition of underground coal mine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810647665.XA CN108875645B (en) 2018-06-22 2018-06-22 Face recognition method under complex illumination condition of underground coal mine

Publications (2)

Publication Number Publication Date
CN108875645A CN108875645A (en) 2018-11-23
CN108875645B true CN108875645B (en) 2021-11-19

Family

ID=64340765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810647665.XA Active CN108875645B (en) 2018-06-22 2018-06-22 Face recognition method under complex illumination condition of underground coal mine

Country Status (1)

Country Link
CN (1) CN108875645B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110502592B (en) * 2019-08-27 2023-08-11 深圳供电局有限公司 Project domain topic analysis system based on big data analysis technology
CN110674803A (en) * 2019-09-12 2020-01-10 常州市维多视频科技有限公司 Method for identifying coal block, coal gangue and rare sensitive material based on multicolor light source
CN111931671A (en) * 2020-08-17 2020-11-13 青岛北斗天地科技有限公司 Face recognition method for illumination compensation in underground coal mine adverse light environment
CN112434678B (en) * 2021-01-27 2021-06-04 成都无糖信息技术有限公司 Face measurement feature space searching system and method based on artificial neural network
CN113705462B (en) * 2021-08-30 2023-07-14 平安科技(深圳)有限公司 Face recognition method, device, electronic equipment and computer readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202711272U (en) * 2012-08-27 2013-01-30 中国矿业大学(北京) Coal mine-entering personnel uniqueness detection device
CN103246883A (en) * 2013-05-20 2013-08-14 中国矿业大学(北京) Coal mine underground thermal infrared image face recognition method
CN104700089A (en) * 2015-03-24 2015-06-10 江南大学 Face identification method based on Gabor wavelet and SB2DLPP
CN106845362A (en) * 2016-12-27 2017-06-13 湖南长城信息金融设备有限责任公司 A kind of face identification method of the rarefaction representation based on multi-scale transform

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006010855A1 (en) * 2004-06-30 2006-02-02 France Telecom Method and device of face signature and recognition based on wavelet transforms
KR100814793B1 (en) * 2005-12-08 2008-03-19 한국전자통신연구원 Face recognition method using linear projection-based ICA with class-specific information and system thereof

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202711272U (en) * 2012-08-27 2013-01-30 中国矿业大学(北京) Coal mine-entering personnel uniqueness detection device
CN103246883A (en) * 2013-05-20 2013-08-14 中国矿业大学(北京) Coal mine underground thermal infrared image face recognition method
CN104700089A (en) * 2015-03-24 2015-06-10 江南大学 Face identification method based on Gabor wavelet and SB2DLPP
CN106845362A (en) * 2016-12-27 2017-06-13 湖南长城信息金融设备有限责任公司 A kind of face identification method of the rarefaction representation based on multi-scale transform

Also Published As

Publication number Publication date
CN108875645A (en) 2018-11-23

Similar Documents

Publication Publication Date Title
CN108875645B (en) Face recognition method under complex illumination condition of underground coal mine
Yeh et al. Multi-scale deep residual learning-based single image haze removal via image decomposition
CN112766160A (en) Face replacement method based on multi-stage attribute encoder and attention mechanism
CN109685045B (en) Moving target video tracking method and system
CN109102475B (en) Image rain removing method and device
CN112464730B (en) Pedestrian re-identification method based on domain-independent foreground feature learning
KR101382892B1 (en) Method of recognizing low-resolution image face and low resolution image face recognition device
CN106909884B (en) Hand region detection method and device based on layered structure and deformable part model
CN106127193B (en) A kind of facial image recognition method
CN108345835B (en) Target identification method based on compound eye imitation perception
CN107516083B (en) Recognition-oriented remote face image enhancement method
Velliangira et al. A novel forgery detection in image frames of the videos using enhanced convolutional neural network in face images
CN113077452B (en) Apple tree pest and disease detection method based on DNN network and spot detection algorithm
CN111666813B (en) Subcutaneous sweat gland extraction method of three-dimensional convolutional neural network based on non-local information
CN113421200A (en) Image fusion method based on multi-scale transformation and pulse coupling neural network
CN110909678B (en) Face recognition method and system based on width learning network feature extraction
Andrei et al. Unsupervised Machine Learning Algorithms Used in Deforested Areas Monitoring
Achban et al. Wrist hand vein recognition using local line binary pattern (LLBP)
CN109165551B (en) Expression recognition method for adaptively weighting and fusing significance structure tensor and LBP characteristics
Liang et al. Deep convolution neural networks for automatic eyeglasses removal
CN114627005B (en) Rain density classification guided double-stage single image rain removing method
CN116079713A (en) Multi-drill-arm cooperative control method, system, equipment and medium for drilling and anchoring robot
CN112380966B (en) Monocular iris matching method based on feature point re-projection
Balamurugan et al. Classification of Land Cover in Satellite Image using supervised and unsupervised Techniques
Revathi et al. An emerging trend of feature extraction method in video processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant