CN113033295A - Face detection speed optimization method and system - Google Patents
- Publication number
- CN113033295A (application CN202110168329.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- face region
- area
- sample
- detection speed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The invention discloses a face detection speed optimization method and system, comprising: obtaining a video image; extracting a face region from the video image; and verifying the extracted face region. The preprocessed grayscale image is detected with the AdaBoost algorithm to extract the face region, and the extracted face region is verified with an area threshold method, so that the important feature information of the original image is retained, the operation speed is increased, and good operation accuracy is maintained. The invention is applicable to the field of image detection.
Description
Technical Field
The disclosure relates to the technical field of face detection and image processing, in particular to a face detection speed optimization method and system.
Background
Face detection is an active research topic in computer vision; its aim is to determine whether a face exists in an image. In any system that processes faces, locating the face is the first operation: it narrows the region that later stages must examine, reduces the amount of data they handle, and thereby raises the overall speed of the face processing system. Improving face detection speed is therefore important for real-time face processing.
Disclosure of Invention
To solve the above problems, the present disclosure provides a face detection speed optimization method and system. According to one aspect of the disclosure, a face detection speed optimization method is provided, comprising the following steps:
s100, acquiring a video image;
s200, extracting a face region from the video image;
and S300, verifying the extracted face region.
Specifically, in S100, the video image is obtained by using a near-infrared camera, and a wavelength range of infrared light of the near-infrared camera is [700,1100] nm.
Specifically, in S200, the method for extracting the face region according to the video image includes:
s111, converting the video image into a gray image;
s112, preprocessing the gray level image;
and S113, detecting the preprocessed gray level image to extract a human face area.
Specifically, in S112, the method for preprocessing the grayscale image is one of a histogram equalization method, a median filtering method, a normalization method, a skin color-based preprocessing method, an edge detection method, and a variance preprocessing method.
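Of the S112 preprocessing options, histogram equalization is easy to sketch with nothing beyond NumPy. This is an illustrative sketch, not the patent's implementation; the function name and the 8-bit image assumption are mine.

```python
import numpy as np

def equalize_histogram(gray: np.ndarray) -> np.ndarray:
    """Histogram-equalize an 8-bit grayscale image (one of the S112 options)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]      # cumulative count at the first occupied level
    span = cdf[-1] - cdf_min
    if span == 0:                  # constant image: nothing to equalize
        return gray.copy()
    # Map each occurring gray level through the normalized cumulative histogram.
    lut = np.clip(np.round((cdf - cdf_min) / span * 255.0), 0, 255).astype(np.uint8)
    return lut[gray]
```

The lookup-table form keeps the per-pixel work to a single indexing operation, which fits the speed-oriented aim of the method.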
Specifically, in S113, the method for detecting the preprocessed gray-scale image to extract the face region is to detect the preprocessed gray-scale image by using an AdaBoost algorithm to extract the face region, and the specific steps are as follows:
s1131, adding 45° rotated rectangle features to form new Haar-like features;
s1132, training on the positive and negative samples with the new Haar-like features to obtain an AdaBoost classifier:
s1132-1, generating 1 classifier for each Haar-like feature:
h_i(x) = 1 if p_i·f_i(x) < p_i·θ_i, and h_i(x) = 0 otherwise; where x is a sample, f_i(x) is the value of the i-th Haar-like feature on sample x, p_i is the polarity sign, h_i is the weak classifier built from f_i(x), θ_i is the threshold of classifier h_i, and i is a natural number greater than 1;
s1132-2, let the input training sample images be (x_1, y_1), (x_2, y_2), …, (x_n, y_n), where y_n = 0 corresponds to a negative sample, y_n = 1 corresponds to a positive sample, (x_n, y_n) is the n-th training sample image, and n is a natural number greater than 0;
s1132-3, for negative samples, set the initial weight w_{1,t} = 1/(2m); for positive samples, set w_{1,t} = 1/(2l); where m and l are the numbers of negative and positive samples respectively, and t is the iteration index;
s1132-4, normalize the weights to a probability distribution: w_{t,j} ← w_{t,j} / Σ_j w_{t,j}, where w_{t,j} is the weight of the j-th sample in the t-th training cycle; train 1 weak classifier h_i for each feature, whose weighted error is ε_j = Σ_i w_j·|h_i(x_i) − y_i|, where w_j is the weight of the j-th sample over all features; select the weak classifier h_t with the smallest weighted error; update the weights of all samples as w_{t+1,j} = w_{t,j}·β_t^(1−e_j), where e_j = 0 when sample j is correctly classified and e_j = 1 otherwise, β_t = ε_t/(1−ε_t), and ε_t = Σ_i w_i·|h_t(x_i) − y_i|; the final strong classifier, i.e. the AdaBoost classifier, is C(x) = 1 if Σ_t α_t·h_t(x) ≥ (1/2)·Σ_t α_t and C(x) = 0 otherwise, where α_t = log(1/β_t).
And S1133, extracting Haar-like characteristics of the preprocessed gray level image, and inputting the Haar-like characteristics into an AdaBoost classifier to perform face detection so as to extract a face region.
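The S1132 training loop can be sketched with simple threshold "stumps" standing in for the Haar-like weak classifiers. This is an illustrative sketch under stated assumptions, not the patented implementation: the per-feature threshold choice (the column mean), the round count, and the function names are arbitrary.

```python
import numpy as np

def train_adaboost(features: np.ndarray, labels: np.ndarray, rounds: int):
    """AdaBoost training following the S1132 weight-update scheme.
    features: (n_samples, n_features) feature values; labels: 1 = face, 0 = non-face.
    Assumes at least one positive and one negative sample."""
    n, d = features.shape
    m = np.sum(labels == 0)                      # number of negative samples
    l = np.sum(labels == 1)                      # number of positive samples
    w = np.where(labels == 0, 1.0 / (2 * m), 1.0 / (2 * l))
    chosen = []                                  # (feature, threshold, polarity, alpha)
    for _ in range(rounds):
        w = w / w.sum()                          # normalize to a distribution
        best = None
        for j in range(d):
            thr = features[:, j].mean()          # crude threshold choice (assumption)
            for p in (1, -1):                    # polarity p_i
                pred = (p * features[:, j] < p * thr).astype(int)
                err = np.sum(w * np.abs(pred - labels))
                if best is None or err < best[0]:
                    best = (err, j, thr, p, pred)
        err, j, thr, p, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)    # keep beta finite
        beta = err / (1 - err)
        e = (pred != labels).astype(int)         # e_j = 0 when correctly classified
        w = w * beta ** (1 - e)                  # down-weight correctly classified samples
        chosen.append((j, thr, p, np.log(1.0 / beta)))
    return chosen

def predict_adaboost(chosen, features: np.ndarray) -> np.ndarray:
    """Strong classifier: weighted vote of the selected weak classifiers."""
    votes = sum(a * (p * features[:, j] < p * thr).astype(int)
                for j, thr, p, a in chosen)
    half = 0.5 * sum(a for _, _, _, a in chosen)
    return (votes >= half).astype(int)
```

A real detector would instead scan exhaustively over Haar-like feature positions and scales and pick the optimal threshold per feature; the weight normalization, the β update, and the half-sum voting rule are the parts that match the text.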
Specifically, in S300, the method for verifying the extracted face region is an area threshold method.
Specifically, the area threshold method comprises the following steps:
s310, calculating the area of the detected face region;
s320, calculating the area of the preprocessed gray level image;
s330, if the ratio of the area of the detected face region to the area of the preprocessed grayscale image is greater than or equal to the area threshold, judging that the detected face region is a face image; otherwise, judging that it is not a face image.
Specifically, the value range of the area threshold is (0.5,1), the area of the detected face region is the number of rows of image pixels of the detected face region × the number of columns of image pixels of the detected face region, and the area of the preprocessed grayscale image is the number of rows of the preprocessed grayscale image pixels × the number of columns of the preprocessed grayscale image pixels.
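The S310–S330 verification reduces to a single ratio comparison on pixel counts. A minimal sketch, assuming the row-count × column-count areas described in the text; the 0.6 default is an arbitrary value inside the stated (0.5, 1) range.

```python
def verify_face_region(face_region_shape, gray_image_shape, area_threshold=0.6):
    """Area-threshold verification (S310-S330): accept the detected region
    only if its pixel area covers at least `area_threshold` of the
    preprocessed grayscale image. Shapes are (rows, cols) pixel counts."""
    face_area = face_region_shape[0] * face_region_shape[1]    # S310
    image_area = gray_image_shape[0] * gray_image_shape[1]     # S320
    return face_area / image_area >= area_threshold            # S330
```

With a threshold above 0.5, the check rejects detections that occupy less than half the frame, which is why the text restricts the threshold to (0.5, 1).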
The invention also provides a face detection speed optimization system, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the units of the following system:
an acquisition unit configured to acquire a video image;
an extraction unit for extracting a face region from the video image;
and the verification unit is used for verifying the extracted face area.
The beneficial effects of the present disclosure are as follows: the invention provides a face detection speed optimization method and system in which the preprocessed grayscale image is detected with the AdaBoost algorithm to extract the face region, with little noise and a small amount of computation.
Drawings
The foregoing and other features of the present disclosure will become more apparent from the following detailed description of embodiments taken in conjunction with the accompanying drawings, in which like reference characters designate the same or similar elements throughout the several views. The drawings described below are merely some examples of the present disclosure; those skilled in the art can derive other drawings from them without inventive effort. In the drawings:
FIG. 1 is a flow chart of a method for optimizing the speed of face detection;
fig. 2 is a structural diagram of a face detection speed optimization system.
Detailed Description
The conception, specific structure and technical effects of the present disclosure will be described clearly and completely below in conjunction with the embodiments and the accompanying drawings, so that the objects, aspects and effects of the disclosure are fully understood. It should be noted that, provided there is no conflict, the embodiments of the present application and the features of the embodiments may be combined with each other.
Fig. 1 is a flow chart of a face detection speed optimization method according to the present disclosure, and a face detection speed optimization method according to an embodiment of the present disclosure is described below with reference to fig. 1.
The present disclosure provides a face detection speed optimization method, which includes the following steps:
s100, acquiring a video image;
s200, extracting a face region from the video image;
and S300, verifying the extracted face region.
Preferably, in S100, the video image is obtained by using a near-infrared camera, and the wavelength range of infrared light of the near-infrared camera is [700,1100] nm.
Preferably, in S200, the method for extracting the face region according to the video image includes:
s111, converting the video image into a gray image;
s112, preprocessing the gray level image;
and S113, detecting the preprocessed gray level image to extract a human face area.
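Step S111 is a standard color-to-grayscale conversion. A minimal sketch assuming the common ITU-R BT.601 luma weights; the source does not specify which conversion it uses, so the weights and function name are assumptions.

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) RGB frame to an 8-bit grayscale image (step S111)
    using BT.601 luma weights (an assumption; the source does not specify)."""
    weights = np.array([0.299, 0.587, 0.114])
    gray = rgb.astype(np.float64) @ weights      # weighted sum over the channel axis
    return np.clip(np.round(gray), 0, 255).astype(np.uint8)
```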
Preferably, in S112, the method for preprocessing the grayscale image is one of a histogram equalization method, a median filtering method, a normalization method, a skin color-based preprocessing method, an edge detection method, and a variance preprocessing method.
Preferably, in S113, the method for detecting the preprocessed gray-scale image to extract the face region is to detect the preprocessed gray-scale image by using an AdaBoost algorithm to extract the face region, and the specific steps are as follows:
s1131, adding 45° rotated rectangle features to form new Haar-like features;
s1132, training on the positive and negative samples with the new Haar-like features to obtain an AdaBoost classifier:
s1132-1, generating 1 classifier for each Haar-like feature:
h_i(x) = 1 if p_i·f_i(x) < p_i·θ_i, and h_i(x) = 0 otherwise; where x is a sample, f_i(x) is the value of the i-th Haar-like feature on sample x, p_i is the polarity sign, h_i is the weak classifier built from f_i(x), θ_i is the threshold of classifier h_i, and i is a natural number greater than 1;
s1132-2, let the input training sample images be (x_1, y_1), (x_2, y_2), …, (x_n, y_n), where y_n = 0 corresponds to a negative sample, y_n = 1 corresponds to a positive sample, (x_n, y_n) is the n-th training sample image, and n is a natural number greater than 0;
s1132-3, for negative samples, set the initial weight w_{1,t} = 1/(2m); for positive samples, set w_{1,t} = 1/(2l); where m and l are the numbers of negative and positive samples respectively, and t is the iteration index;
s1132-4, normalize the weights to a probability distribution: w_{t,j} ← w_{t,j} / Σ_j w_{t,j}, where w_{t,j} is the weight of the j-th sample in the t-th training cycle; train 1 weak classifier h_i for each feature, whose weighted error is ε_j = Σ_i w_j·|h_i(x_i) − y_i|, where w_j is the weight of the j-th sample over all features; select the weak classifier h_t with the smallest weighted error; update the weights of all samples as w_{t+1,j} = w_{t,j}·β_t^(1−e_j), where e_j = 0 when sample j is correctly classified and e_j = 1 otherwise, β_t = ε_t/(1−ε_t), and ε_t = Σ_i w_i·|h_t(x_i) − y_i|; the final strong classifier, i.e. the AdaBoost classifier, is C(x) = 1 if Σ_t α_t·h_t(x) ≥ (1/2)·Σ_t α_t and C(x) = 0 otherwise, where α_t = log(1/β_t).
And S1133, extracting Haar-like characteristics of the preprocessed gray level image, and inputting the Haar-like characteristics into an AdaBoost classifier to perform face detection so as to extract a face region.
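Haar-like feature extraction as in S1133 is usually made fast with an integral image (summed-area table), so any rectangle sum costs four lookups regardless of its size. A sketch of the upright case; the 45° rotated features of S1131 would need a second, diagonally accumulated table, omitted here, and the function names are mine.

```python
import numpy as np

def integral_image(gray: np.ndarray) -> np.ndarray:
    """Summed-area table with a zero border: ii[r, c] = sum of gray[:r, :c]."""
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = gray.astype(np.int64).cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii: np.ndarray, r: int, c: int, h: int, w: int) -> int:
    """Sum of the h-by-w rectangle whose top-left pixel is (r, c): four lookups."""
    return int(ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c])

def two_rect_feature(ii: np.ndarray, r: int, c: int, h: int, w: int) -> int:
    """An upright two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)
```

The constant per-feature cost is what lets a cascade evaluate thousands of Haar-like features per window at video rates, which is the speed argument behind this family of detectors.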
Preferably, in S300, the method for verifying the extracted face region is an area threshold method.
Preferably, the area threshold method comprises the following steps:
s310, calculating the area of the detected face region;
s320, calculating the area of the preprocessed gray level image;
s330, if the ratio of the area of the detected face region to the area of the preprocessed grayscale image is greater than or equal to the area threshold, judging that the detected face region is a face image; otherwise, judging that it is not a face image.
Preferably, the area threshold has a value range of (0.5,1), the area of the detected face region is the number of rows of image pixels of the detected face region × the number of columns of image pixels of the detected face region, and the area of the preprocessed grayscale image is the number of rows of the preprocessed grayscale image pixels × the number of columns of the preprocessed grayscale image pixels.
The embodiment of the present disclosure provides a face detection speed optimization system, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the units of the following system:
an acquisition unit configured to acquire a video image;
an extraction unit for extracting a face region from the video image;
and the verification unit is used for verifying the extracted face area.
The face detection speed optimization system can run on computing devices such as desktop computers, notebooks, palmtop computers, and cloud servers. The runnable system may include, but is not limited to, a processor and a memory. Those skilled in the art will understand that this example merely illustrates the face detection speed optimization system and does not limit it; the system may include more or fewer components than shown, combine certain components, or use different components; for example, it may further include input-output devices, network access devices, a bus, and so on.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, and the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the face detection speed optimization system and connects the parts of the whole system through various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor implements the various functions of the face detection speed optimization system by running or executing the computer programs and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area may store data created according to the use of the device (such as audio data, a phonebook, etc.). In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
While the present disclosure has been described in considerable detail with reference to a few illustrative embodiments, it is not limited to such details or embodiments; the appended claims are to be construed broadly, in view of the prior art, as effectively covering the intended scope of the disclosure. Furthermore, although the disclosure has been described in terms of embodiments foreseeable by the inventor, insubstantial modifications not presently foreseen may nonetheless represent equivalents thereto.
Claims (9)
1. A face detection speed optimization method is characterized by comprising the following steps:
s100, acquiring a video image;
s200, extracting a face region from the video image;
and S300, verifying the extracted face region.
2. The method for optimizing human face detection speed according to claim 1, wherein in S100, the video image is obtained by a near-infrared camera, and the wavelength range of infrared light of the near-infrared camera is [700,1100] nm.
3. The method for optimizing human face detection speed according to claim 1, wherein in S200, the method for extracting the human face region according to the video image comprises:
s111, converting the video image into a gray image;
s112, preprocessing the gray level image;
and S113, detecting the preprocessed gray level image to extract a human face area.
4. The method for optimizing human face detection speed according to claim 3, wherein in S112, the method for preprocessing the gray-scale image is one of histogram equalization, median filtering, normalization, skin color-based preprocessing, edge detection, and variance preprocessing.
5. The method for optimizing the human face detection speed according to claim 3, wherein in the step S113, the method for detecting the preprocessed gray-scale image to extract the human face region is to detect the preprocessed gray-scale image to extract the human face region by adopting an AdaBoost algorithm, and the specific steps are as follows:
s1131, adding a 45-degree rectangular feature to form a new Haar-like feature;
s1132, training on the positive and negative samples with the new Haar-like features to obtain an AdaBoost classifier:
s1132-1, generating 1 classifier for each Haar-like feature:
h_i(x) = 1 if p_i·f_i(x) < p_i·θ_i, and h_i(x) = 0 otherwise; where x is a sample, f_i(x) is the value of the i-th Haar-like feature on sample x, p_i is the polarity sign, h_i is the weak classifier built from f_i(x), θ_i is the threshold of classifier h_i, and i is a natural number greater than 1;
s1132-2, letting the input training sample images be (x_1, y_1), (x_2, y_2), …, (x_n, y_n), where y_n = 0 corresponds to a negative sample, y_n = 1 corresponds to a positive sample, (x_n, y_n) is the n-th training sample image, and n is a natural number greater than 0;
s1132-3, for negative samples, setting the initial weight w_{1,t} = 1/(2m), and for positive samples, setting w_{1,t} = 1/(2l), where m and l are the numbers of negative and positive samples respectively, and t is the iteration index;
s1132-4, normalizing the weights to a probability distribution: w_{t,j} ← w_{t,j} / Σ_j w_{t,j}, where w_{t,j} is the weight of the j-th sample in the t-th training cycle; training 1 weak classifier h_i for each feature, whose weighted error is ε_j = Σ_i w_j·|h_i(x_i) − y_i|, where w_j is the weight of the j-th sample over all features;
selecting the weak classifier h_t with the smallest weighted error; updating the weights of all samples as w_{t+1,j} = w_{t,j}·β_t^(1−e_j), where e_j = 0 when sample j is correctly classified and e_j = 1 otherwise, β_t = ε_t/(1−ε_t), and ε_t = Σ_i w_i·|h_t(x_i) − y_i|; the final strong classifier, i.e. the AdaBoost classifier, is C(x) = 1 if Σ_t α_t·h_t(x) ≥ (1/2)·Σ_t α_t and C(x) = 0 otherwise, where α_t = log(1/β_t);
And S1133, extracting Haar-like characteristics of the preprocessed gray level image, and inputting the Haar-like characteristics into an AdaBoost classifier to perform face detection so as to extract a face region.
6. The method for optimizing human face detection speed according to claim 3, wherein in S300, the method for verifying the extracted human face region is an area threshold method.
7. The method for optimizing the human face detection speed according to claim 6, wherein the area threshold method comprises the following steps:
s310, calculating the area of the detected face region;
s320, calculating the area of the preprocessed gray level image;
s330, if the ratio of the area of the detected face region to the area of the preprocessed grayscale image is greater than or equal to the area threshold, judging that the detected face region is a face image; otherwise, judging that it is not a face image.
8. The method according to claim 7, wherein the area threshold is (0.5,1), the area of the detected face region is the number of rows of image pixels of the detected face region x the number of columns of image pixels of the detected face region, and the area of the pre-processed grayscale image is the number of rows of pre-processed grayscale image pixels x the number of columns of pre-processed grayscale image pixels.
9. A face detection speed optimization system, characterized in that the system comprises: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the units of the following system:
an acquisition unit configured to acquire a video image;
an extraction unit for extracting a face region from the video image;
and the verification unit is used for verifying the extracted face area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110168329.9A CN113033295A (en) | 2021-02-07 | 2021-02-07 | Face detection speed optimization method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110168329.9A CN113033295A (en) | 2021-02-07 | 2021-02-07 | Face detection speed optimization method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113033295A true CN113033295A (en) | 2021-06-25 |
Family
ID=76460296
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110168329.9A Pending CN113033295A (en) | 2021-02-07 | 2021-02-07 | Face detection speed optimization method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113033295A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101739548A (en) * | 2009-02-11 | 2010-06-16 | 北京智安邦科技有限公司 | Eye detection method and system |
CN103605964A (en) * | 2013-11-25 | 2014-02-26 | 上海骏聿数码科技有限公司 | Face detection method and system based on image on-line learning |
CN110046565A (en) * | 2019-04-09 | 2019-07-23 | 东南大学 | A kind of method for detecting human face based on Adaboost algorithm |
- 2021-02-07: application CN202110168329.9A filed; publication CN113033295A; status: Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Nogueira et al. | Evaluating software-based fingerprint liveness detection using convolutional networks and local binary patterns | |
US20190392202A1 (en) | Expression recognition method, apparatus, electronic device, and storage medium | |
WO2021139324A1 (en) | Image recognition method and apparatus, computer-readable storage medium and electronic device | |
Singh et al. | A study of moment based features on handwritten digit recognition | |
US11380010B2 (en) | Image processing device, image processing method, and image processing program | |
CN107967461B (en) | SVM (support vector machine) differential model training and face verification method, device, terminal and storage medium | |
CN112200159A (en) | Non-contact palm vein identification method based on improved residual error network | |
US20230147685A1 (en) | Generalized anomaly detection | |
CN111694954B (en) | Image classification method and device and electronic equipment | |
Li et al. | Multi-view vehicle detection based on fusion part model with active learning | |
CN110796108B (en) | Method, device and equipment for detecting face quality and storage medium | |
CN110175500B (en) | Finger vein comparison method, device, computer equipment and storage medium | |
CN102314598A (en) | Retinex theory-based method for detecting human eyes under complex illumination | |
Narwade et al. | Offline handwritten signature verification using cylindrical shape context | |
Aithal | Two Dimensional Clipping Based Segmentation Algorithm for Grayscale Fingerprint Images | |
Travieso et al. | Bimodal biometric verification based on face and lips | |
Shaheed et al. | A hybrid proposed image quality assessment and enhancement framework for finger vein recognition | |
Vasanthi et al. | A hybrid method for biometric authentication-oriented face detection using autoregressive model with Bayes Backpropagation Neural Network | |
CN112926592A (en) | Trademark retrieval method and device based on improved Fast algorithm | |
Nugroho et al. | Nipple detection to identify negative content on digital images | |
Silva et al. | POEM-based facial expression recognition, a new approach | |
US20230069960A1 (en) | Generalized anomaly detection | |
CN113033295A (en) | Face detection speed optimization method and system | |
US11481881B2 (en) | Adaptive video subsampling for energy efficient object detection | |
CN113688785A (en) | Multi-supervision-based face recognition method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||