CN105426828B - Method for detecting human face, apparatus and system - Google Patents


Info

Publication number: CN105426828B (granted publication of application CN201510761566.0A; published first as CN105426828A)
Authority: CN (China)
Language: Chinese (zh)
Legal status: Active
Prior art keywords: face, target, image, detected, area
Inventors: 邓兵, 颜昌杰, 毛泉涌, 程博, 祝中科
Original assignee (applicant): Zhejiang Uniview Technologies Co Ltd
Current assignee: Jinan Boguan Intelligent Technology Co Ltd
Priority: CN201510761566.0A


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/162Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a face detection method, apparatus, and system. The method includes: extracting foreground targets from a two-dimensional image to be detected; if a foreground target exists in the two-dimensional image to be detected, performing skin color detection on the foreground target region; and if a skin color foreground exists in the foreground target region, performing face detection on the skin color foreground region with a two-dimensional face image classifier. The face detection method provided by the disclosure effectively improves the accuracy of face detection and reduces the false detection rate.

Description

Face detection method, device and system
Technical Field
The invention relates to the technical field of computer vision, in particular to a face detection method, a face detection device and a face detection system.
Background
Face detection originated in face recognition, where it is an essential step in any automatic face recognition system. With the growth of the mobile internet in recent years, the application of face detection has spread to intelligent monitoring and other fields.
At present, intelligent monitoring is applied to numerous fields, such as intelligent transportation, smart parks, urban security and the like. Because the human face can provide effective personnel information, the human face detection technology is a key technology for carrying out information processing on human face snapshot, human face attendance checking, personnel control and the like of an entrance and an exit in an intelligent monitoring system.
In the related art, face detection methods mainly detect faces in two-dimensional images, using skin color detection, template matching, machine learning, and the like. An accurate machine-learning-based face detection method proceeds as follows: first, collect a large number of positive and negative face samples; then train on those samples with a machine learning method such as Adaboost or an SVM (Support Vector Machine) to obtain a face detection classifier; finally, run the trained classifier over the image to be detected to locate any faces. However, because the face detection technology in the related art depends entirely on two-dimensional image information, it resists interference poorly and cannot accurately distinguish a real face from a picture of a face: a face printed on clothing or a face reflected in a glass door is easily misdetected as a real face, so the false detection rate is high.
Disclosure of Invention
In view of this, the present invention provides a method, an apparatus and a system for face detection, so as to improve the accuracy of face detection.
In a first aspect, an embodiment of the present invention provides a face detection method, where the method includes:
extracting a foreground target of the two-dimensional image to be detected;
if a foreground target exists in the two-dimensional image to be detected, carrying out skin color detection on a foreground target area;
and if the foreground target area has a skin color foreground, carrying out face detection on the skin color foreground area by using a face two-dimensional image classifier.
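The three gating steps above can be sketched as a short pipeline. The functions `extract_foreground`, `detect_skin`, and `classify_face` are hypothetical stand-ins for the concrete techniques the description details later (e.g. frame differencing, YCbCr thresholding, and the Adaboost cascade); each returns `None` when its stage finds nothing:

```python
def detect_faces(frame, extract_foreground, detect_skin, classify_face):
    """Cascaded face detection: each cheap stage gates the next, costlier one."""
    region = extract_foreground(frame)   # stage 1: foreground target extraction
    if region is None:
        return None                      # no moving target, so no face search
    skin = detect_skin(region)           # stage 2: skin color detection
    if skin is None:
        return None                      # foreground exists but no skin tones
    return classify_face(skin)           # stage 3: 2D face image classifier
```

The point of the ordering is cost: the expensive classifier only ever runs on a small skin-colored candidate region rather than the whole frame.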
Optionally, performing face detection on the skin color foreground region with the two-dimensional face image classifier includes:
determining a face region to be searched according to the skin color foreground region and the estimated face size;
estimating the face size range in the face region to be searched and determining the scaling coefficient of the face region to be searched;
scaling the face region to be searched according to the scaling coefficient;
calculating an integral image of the face region to be searched after each scaling;
performing face detection on the integral image of the face region to be searched after each scaling with the two-dimensional face image classifier;
when the two-dimensional face image classifier finishes detecting the face region to be searched at every scaling scale, merging the detected adjacent faces and outputting a first target two-dimensional face;
determining the face position and size of the first target two-dimensional face;
and updating the estimated face size and the estimated average face brightness for the next frame to be detected that has a skin color foreground region, according to the face position and size of the first target two-dimensional face.
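The integral image mentioned in the steps above is a summed-area table that lets the classifier evaluate the sum over any rectangle in constant time; a minimal pure-Python sketch:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of the image over the inclusive rectangle [x0..x1] x [y0..y1], O(1)."""
    total = ii[y1][x1]
    if x0 > 0:
        total -= ii[y1][x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1][x0 - 1]
    return total
```

This is why the classifier can afford to evaluate thousands of rectangular Haar-like features per window: each costs four table lookups regardless of rectangle size.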
Optionally, estimating the face size range in the face region to be searched and determining the scaling factor of the face region to be searched includes:
calculating the minimum face estimation size W_Face_min in the face region to be searched;
calculating the maximum face estimation size W_Face_max in the face region to be searched;
where (W + W_Face) represents the width of the face region to be searched; (H + W_Face) represents the height of the face region to be searched; xFace represents the horizontal coordinate and yFace the vertical coordinate of the center position of the face region to be searched; the formulas also reference the lower-right corner position of the face region to be searched, its lower-left corner position, and the intersection of the region's lower edge with the vertical bisector of the two-dimensional image to be detected; and W_I represents the width of the whole frame of the two-dimensional image to be detected;
the face size expanded search range coefficient α is defined in terms of N_Face, the total number of frames in which a face image has been detected at the corresponding position of the region to be detected;
and determining the scaling coefficients of the face region to be searched from the minimum face estimation size W_Face_min, the maximum face estimation size W_Face_max, and the estimated face size W_Face at the center position (xFace, yFace) of the skin color foreground region, respectively.
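The formulas for W_Face_min and W_Face_max are not reproduced in this text; assuming they bracket the estimated size W_Face at the region center, the scaling coefficients can be illustrated as the ratios that map each size estimate onto the classifier's fixed detection window. The 24-pixel window below is an assumed example value, not taken from the patent:

```python
def scaling_coefficients(w_face_min, w_face, w_face_max, window=24):
    """Scale factors mapping each estimated face width onto a fixed
    window x window classifier input (24 px is a common Adaboost choice).
    r = window / estimated_width, so larger faces need smaller factors."""
    assert 0 < w_face_min <= w_face <= w_face_max
    return (window / w_face_max, window / w_face, window / w_face_min)
```

Restricting the scale sweep to this range, instead of scanning every possible size, is what the size estimation buys.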
Optionally, performing face detection with the two-dimensional face image classifier on the integral image of the face region to be searched after each scaling includes:
performing face detection on the integral image of the face region to be searched after each scaling by translating a rectangular frame across it, where the horizontal span Step_W of each translation is computed from: W_Scale, the width of the scaled face region to be searched; (W + W_Face), the width of the face region to be searched before scaling; R_Scale_max, the maximum scaling coefficient of the face region to be searched; and R_Scale_min, the minimum scaling coefficient of the face region to be searched;
and the vertical span Step_H of each translation is computed likewise from H_Scale, the height of the scaled face region to be searched, and (H + W_Face), the height of the face region to be searched before scaling.
Optionally, updating the estimated face size and the estimated average face brightness for the next frame to be detected that has a skin color foreground region, according to the face position and size of the first target two-dimensional face, includes:
updating the estimated face size of the next frame to be detected that has a skin color foreground region with the following formula:

W_{n+1}_Face(x, y) = (1 − β) · W_n_Face(x, y) + β · W_detect_Face(x, y)

where W_{n+1}_Face(x, y) is the estimated face size of the next frame at position (x, y); W_n_Face(x, y) is the estimated face size of the current image to be detected at (x, y); W_detect_Face(x, y) is the face size of the first target two-dimensional face at (x, y); and β is the update coefficient, with value range [0.1, 0.5];
and updating the estimated average face brightness of the next frame to be detected that has a skin color foreground region with the analogous formula:

Y_{n+1}_Face(x, y) = (1 − β) · Y_n_Face(x, y) + β · Y_detect_Face(x, y)

where Y_{n+1}_Face(x, y) is the estimated average face brightness of the next frame at (x, y); Y_n_Face(x, y) is the estimated average face brightness at (x, y) before this detection; and Y_detect_Face(x, y) is the measured average face brightness of the first target two-dimensional face at (x, y).
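Given the update coefficient β ∈ [0.1, 0.5] described above, the updates behave as exponential moving averages; a minimal sketch under that assumption, applied identically to the per-position size map and brightness map:

```python
def ema_update(current_estimate, detected_value, beta=0.2):
    """Blend a new detection into the running per-position estimate.
    beta in [0.1, 0.5]: higher beta tracks new detections faster but is
    noisier. Used for both W_Face(x, y) and the face brightness map."""
    return (1.0 - beta) * current_estimate + beta * detected_value
```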
In a second aspect, another face detection method is provided, the method including:
acquiring a face two-dimensional image classifier and a corresponding face depth image classifier;
carrying out face detection on the input two-dimensional image to be detected according to any one of the face detection methods;
if the face two-dimensional image classifier detects face information in the two-dimensional image to be detected, the face depth image classifier is utilized to carry out face detection on the depth image to be detected corresponding to the two-dimensional image to be detected;
and if the face depth image classifier detects face information in the depth image to be detected, outputting a face detection result.
Optionally, the face detection method further includes:
recording the frame number of a target two-dimensional image in a detected image, wherein the target two-dimensional image is a two-dimensional image to be detected which has a foreground target but no skin color foreground;
when the frame number of the target two-dimensional image reaches a preset threshold value, performing depth normalization processing on a target depth image region in a to-be-detected depth image corresponding to the current target two-dimensional image to obtain a normalized target depth image region, wherein the position of the target depth image region in the to-be-detected depth image is the same as the position of a foreground target region in the target two-dimensional image;
performing face detection on the normalized target depth image area by using the preset face depth image classifier;
if a second target depth face is detected in the normalized target depth image area, adaptively adjusting exposure compensation parameters according to a preset strategy so that the image acquisition device adjusts the exposure compensation level of the acquired image.
Optionally, the adaptively adjusting the exposure compensation parameter according to a preset strategy includes:
determining the face position and the face size of a second target two-dimensional face in the corresponding two-dimensional image according to the position and the face size of the second target depth face in the depth image;
calculating the average brightness of the second target two-dimensional face according to the face position and the size of the second target two-dimensional face;
if the average brightness of the second target two-dimensional face is smaller than the minimum face brightness counted currently, increasing exposure compensation parameters;
and if the average brightness of the second target two-dimensional face is greater than the current statistical maximum face brightness, reducing exposure compensation parameters.
Optionally, the exposure compensation parameter is adaptively adjusted by using the following formula:
where Stage_EV_1 is the adjusted exposure compensation level; Stage_EV_0 is the exposure compensation level before adjustment; Y_Face_avg is the estimated average luminance of the second target two-dimensional face; Y_Face_detect(x, y) is the measured average brightness of the second target two-dimensional face at position (x, y); Y_Face_max is the currently statistical maximum face brightness; Y_Face_min is the currently statistical minimum face brightness; and Stage_EV_TOTAL is the preset total number of exposure compensation levels.
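The adjustment formula itself is not reproduced in this text; under the rules above, it amounts to stepping the compensation level toward the statistical face-brightness band and clamping it to the preset range. A hedged sketch (`stage_ev_total=8` is an assumed example value, not the patent's):

```python
def adjust_exposure(stage_ev, face_brightness, y_min, y_max, stage_ev_total=8):
    """Step the exposure compensation level toward the statistical
    face-brightness band [y_min, y_max], clamped to the valid range."""
    if face_brightness < y_min:
        stage_ev += 1        # detected face too dark: raise compensation
    elif face_brightness > y_max:
        stage_ev -= 1        # detected face too bright: lower it
    return max(0, min(stage_ev, stage_ev_total))
```

Driving exposure from the face region, rather than the whole frame, is what keeps a backlit face from being underexposed by a bright background.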
In a third aspect, corresponding to the face detection method provided in the first aspect, the present disclosure provides a face detection apparatus, where the apparatus includes:
the foreground target extraction module is used for extracting a foreground target of the two-dimensional image to be detected;
the skin color detection module is used for carrying out skin color detection on the foreground target area under the condition that the foreground target exists in the two-dimensional image to be detected;
and the face detection module is used for detecting the face of the skin color foreground area by using a face two-dimensional image classifier under the condition that the skin color foreground exists in the foreground target area.
Optionally, the face detection module includes:
the searching area determining unit is used for determining a face area to be searched according to the skin color foreground area and the face estimation size;
the scaling factor determining unit is used for estimating the face size range in the face area to be searched and determining the scaling factor of the face area to be searched;
the zooming unit is used for zooming the human face area to be searched according to the zooming coefficient;
the integral unit is used for calculating an integral image of the face area to be searched after each zooming;
the face detection unit is used for carrying out face detection on the integral image of the face area to be searched after each zooming by using the face two-dimensional image classifier;
the first result output unit is used for merging detected adjacent faces and outputting a first target two-dimensional face when the face two-dimensional image classifier finishes detecting the face area to be searched in each scaling scale;
the size determining unit is used for determining the face position and size of the first target two-dimensional face;
and the updating unit is used for updating the estimated face size and the estimated face average brightness of the image to be detected in the next frame with the skin color foreground area according to the face position and the size of the first target two-dimensional face.
In a fourth aspect, corresponding to the face detection method provided in the second aspect, a face detection system includes any one of the face detection apparatuses described above; further comprising:
the classifier obtaining module is used for obtaining a face two-dimensional image classifier and a corresponding face depth image classifier;
the first depth face detection module is used for detecting the depth image to be detected corresponding to the two-dimensional image to be detected by using the face depth image classifier under the condition that the face two-dimensional image classifier detects face information in the two-dimensional image to be detected;
and the detection result output module is used for outputting a face detection result under the condition that the face depth image classifier detects face information in the depth image to be detected.
Optionally, the face detection system further includes:
the device comprises a recording module, a judging module and a judging module, wherein the recording module is used for recording the frame number of a target two-dimensional image in a detected image, and the target two-dimensional image is a two-dimensional image to be detected which has a foreground target but no skin color foreground;
the normalization processing module is used for performing depth normalization processing on a target depth image region in a to-be-detected depth image corresponding to the current target two-dimensional image when the frame number of the target two-dimensional image reaches a preset threshold value to obtain a normalized target depth image region, wherein the position of the target depth image region in the to-be-detected depth image is the same as the position of a foreground target region in the target two-dimensional image;
the second depth face detection module is used for carrying out face detection on the normalized target depth image area by utilizing the preset face depth image classifier;
and the exposure compensation module is used for adaptively adjusting exposure compensation parameters according to a preset strategy under the condition that a second target depth face is detected in the normalized target depth image area, so that the image acquisition device adjusts the exposure compensation level of the acquired image.
Therefore, according to the face detection method provided by the disclosure, a two-dimensional image classifier is used for detecting a first target two-dimensional face in a two-dimensional image to be detected, on the basis, a depth image classifier is further used for continuing face detection on a depth image to be detected corresponding to the two-dimensional image to be detected, and if a corresponding face depth image, namely a first target depth face, is also detected in the depth image to be detected, a face detection result is output. By combining the detection of the face depth information, the false face information in the image, such as a face image on clothes, a face reflection on a glass door and the like, can be prevented from being taken as the real face information and taken as the face detection result, so that the accuracy of face detection is effectively improved, and the false detection rate is reduced.
Drawings
FIG. 1 is a flowchart of a first embodiment of a face detection method according to the present disclosure;
FIG. 2 is a flow diagram of an embodiment of the present disclosure for training a two-dimensional image classifier of a human face;
FIG. 3 is a flow diagram of an embodiment of the present disclosure for training a face depth image classifier;
4-a and 4-b are schematic diagrams of the present disclosure for detecting an input two-dimensional image to be detected by using the face two-dimensional image classifier;
FIG. 5 is a flow chart of a specific implementation of step 12 in an embodiment of the present disclosure;
FIG. 6 is a flowchart illustrating an embodiment of step 126 in an embodiment of the present disclosure;
FIG. 7 is a schematic illustration of a face width at a location shown in the present disclosure;
FIG. 8 is a flow chart showing a specific implementation of step 1262 in an embodiment of the present disclosure;
9-1 to 9-3 are schematic distribution positions of the face region to be searched in the two-dimensional image to be detected according to the present disclosure;
FIG. 10 is a schematic diagram of a two-dimensional image of a human face and its corresponding depth image shown in the present disclosure;
FIG. 11 is a flow chart of a specific implementation of step 13 in an embodiment of the present disclosure;
FIG. 12 is a flowchart of a second embodiment of a face detection method according to the present disclosure;
FIG. 13 is a schematic illustration of two-dimensional face images at different exposure levels shown in the present disclosure;
FIG. 14 is a flowchart illustrating a specific implementation of step 18 in a second embodiment of the disclosure;
fig. 15 is a block diagram of a face detection apparatus shown in the present disclosure;
FIG. 16 is a block diagram of an embodiment of a face detection module in an embodiment of the disclosed apparatus;
FIG. 17 is a block diagram of a first embodiment of a face detection system according to the present disclosure;
fig. 18 is a block diagram of a second embodiment of a face detection system according to the present disclosure.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may be referred to as first information, without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to a determination".
The present disclosure provides a face detection method, which can be applied to an intelligent monitoring system for performing face detection on an image collected by a camera capable of collecting object depth information.
Referring to a flowchart of a first embodiment of a face detection method shown in fig. 1, the method may include the following steps:
step 11, acquiring a face two-dimensional image classifier and a corresponding face depth image classifier;
according to the related art, the specific implementation process of step 11 is as follows: firstly, collecting positive and negative samples of a two-dimensional face image and positive and negative samples of a corresponding face depth image; secondly, training positive and negative samples of the face two-dimensional image to obtain a face two-dimensional image classifier; and training positive and negative samples of the corresponding face depth image to obtain a face depth image classifier. From the above process, the training of the image classifier in the present disclosure includes two parts, the first part: training a two-dimensional image classifier of the human face; a second part: and training a face depth image classifier.
Referring to a flowchart of an embodiment of the method for training a two-dimensional face image classifier according to the present disclosure shown in fig. 2, the method for training a two-dimensional face image classifier according to positive and negative samples of a two-dimensional face image may include the following steps:
101, performing graying processing on a positive sample of a two-dimensional face image to obtain a gray image sample of the face;
102, carrying out size normalization processing on the face gray level image samples to obtain face gray level image samples with uniform sizes;
it is worth mentioning that the human face gray level image sample is used as an input image, before feature training is performed by adopting a preset classification algorithm, scale normalization processing can be performed on the human face gray level image sample in the database by taking two eyes as a center, and the human face gray level image sample is processed into an image with a uniform size, so that comparison can be performed conveniently in a human face detection stage. The size of the image adopted in the normalization processing can be set according to actual requirements; for example, the images may be collectively set to 224 × 224 when the normalization process is performed.
103, training the face gray level image samples with the uniform size by adopting a first preset classification algorithm to obtain the face two-dimensional image classifier.
In the embodiment of the present disclosure, the first preset classification algorithm may be the Adaboost method. The trained classifier can be a K_p-stage two-dimensional face image classifier, as shown in the following equation (1):
where H_P(x) represents the cascade classifier, formed by cascading K_p strong classifiers, and H_i(x) represents a single-stage strong classifier.
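The cascade structure can be illustrated as follows: a candidate window is accepted only if every stage H_i accepts it, so the vast majority of non-face windows are rejected cheaply by the first few stages and never reach the expensive later ones:

```python
def cascade_classify(stages, window):
    """Evaluate a K-stage cascade H_1..H_K: reject as soon as any
    stage says 'not a face'; accept only if all K stages pass."""
    for stage in stages:
        if not stage(window):
            return 0   # rejected here; later stages never run
    return 1           # passed every stage: face candidate
```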
According to the related technology, in the two-dimensional image detection, a two-dimensional image to be detected sequentially passes through each level of classifier, and all regions detected by each level of classifier are target regions.
Referring to a flowchart of an embodiment of the method for training a face depth image classifier of the present disclosure shown in fig. 3, the method may include the following steps:
104, carrying out depth information normalization processing on the positive sample of the face depth image to obtain a normalized positive sample of the face depth image;
in the embodiment of the present disclosure, the following formula (2) may be adopted to perform depth information normalization processing on a positive sample of a face depth image:
where D_orig(x, y) is the depth pixel value at depth image position (x, y); D(x, y) is the depth pixel value at (x, y) after the depth image is normalized; W_face is the face sample width in pixels; and h_face is the face sample height in pixels.
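Formula (2) is not reproduced in this text. One plausible reading, shown here purely as an assumption, shifts each depth pixel relative to the nearest face point and scales by the sample's pixel size, so that faces captured at different distances yield comparable relief patterns:

```python
def normalize_depth(depth, w_face, h_face):
    """Assumed form of formula (2): anchor the nearest face point at 0,
    then divide by a size normalizer derived from the sample's pixel
    dimensions, making the face relief roughly distance-invariant."""
    d_min = min(min(row) for row in depth)
    scale = (w_face * h_face) ** 0.5   # assumed size normalizer
    return [[(d - d_min) / scale for d in row] for row in depth]
```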
105, carrying out size normalization processing on the normalized face depth image positive sample to obtain a normalized face depth image sample with a uniform size;
in the same way, before feature training is performed on the face depth image sample serving as an input image, scale normalization processing can be performed on the face depth image sample in the database by taking two eyes as centers, and the face depth image sample is processed into depth images with uniform sizes so as to facilitate comparison in a face recognition stage. The size of the image adopted in the normalization processing can be set according to actual requirements; for example, the images may be collectively set to 224 × 224 when the normalization process is performed.
And 106, training the normalized face depth image sample with the uniform size to obtain the face depth image classifier.
In the embodiment of the disclosure, the Adaboost method may be adopted to train the face depth image samples, yielding a K_D-stage face depth image classifier H_D, as shown in formula (3);
where H_D(x) represents the cascade classifier, formed by cascading K_D strong classifiers, and H_i(x) represents a single-stage strong classifier.
In the depth image detection, the depth image to be detected sequentially passes through each level of classifier, and all regions detected by each level of classifier are target regions.
It should be noted here that, the present disclosure is not limited to the sequence of the training face two-dimensional image classifier and the training face depth image classifier.
Step 12, detecting the input two-dimensional image to be detected by using the human face two-dimensional image classifier; specifically, the following description is made with reference to a schematic diagram of detecting an input two-dimensional image to be detected by using the two-dimensional image classifier for a human face shown in fig. 4-a and 4-b and a flowchart of a specific implementation of step 12 shown in fig. 5, where step 12 may include:
step 121, acquiring a two-dimensional image to be detected;
the two-dimensional image to be detected can be a two-dimensional image acquired by a camera with depth information acquisition in real time or a two-dimensional image which is acquired and then stored in a designated storage position. The two-dimensional image to be detected may or may not have a human face. The two-dimensional image to be detected shown in fig. 4-a has a human face.
Step 122, extracting a foreground target of the two-dimensional image to be detected;
in the embodiment of the disclosure, foreground target extraction may be performed on the two-dimensional image to be detected by using a frame difference or mixed gaussian background or other foreground target extraction methods.
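A minimal frame-difference extractor, the lighter of the two methods mentioned (Gaussian-mixture background modeling being the heavier alternative), can look like this; the threshold value is illustrative:

```python
def frame_difference_mask(prev, curr, threshold=25):
    """Mark pixels whose brightness changed by more than `threshold`
    between consecutive frames as foreground (1) vs background (0)."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]
```

Any nonzero region of the resulting mask is a foreground target candidate to be passed on to skin color detection.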
Step 123, judging whether a foreground target exists in the two-dimensional image to be detected, if so, executing step 124, and if not, returning to step 121;
step 124, carrying out skin color detection on the foreground target area;
in the embodiment of the present disclosure, the following formula (4) may be adopted to perform skin color detection on the foreground target region:
wherein Y(x, y) represents the brightness value of the pixel point with coordinates (x, y) in the foreground target region image in YCbCr format; Cb(x, y) and Cr(x, y) represent the chroma values of the pixel point with coordinates (x, y); S(x, y) represents the skin-color value of the pixel point with coordinates (x, y): if S(x, y) equals 1, the pixel point is a skin-color pixel point; if S(x, y) equals 0, the pixel point is not a skin-color pixel point.
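A minimal sketch in the spirit of formula (4), whose exact thresholds are not reproduced in the text above: the Cb/Cr ranges below are commonly cited literature values for skin detection in YCbCr space, not values taken from the disclosure, and this sketch thresholds only the chroma planes:

```python
# Assumed chroma ranges for skin-color pixels (typical literature values).
CB_RANGE = (77, 127)   # chroma-blue range
CR_RANGE = (133, 173)  # chroma-red range

def skin_mask(y_plane, cb_plane, cr_plane):
    """Return S(x, y): 1 for skin-color pixels, 0 otherwise."""
    mask = []
    for y_row, cb_row, cr_row in zip(y_plane, cb_plane, cr_plane):
        mask.append([
            1 if CB_RANGE[0] <= cb <= CB_RANGE[1]
                 and CR_RANGE[0] <= cr <= CR_RANGE[1] else 0
            for _, cb, cr in zip(y_row, cb_row, cr_row)
        ])
    return mask
```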
Step 125, judging whether the foreground target area has a skin color foreground area, if so, executing step 126; if not, the process returns to step 121.
And step 126, carrying out face detection on the skin color foreground area by using the face two-dimensional image classifier.
Referring to FIG. 6, a flow chart of an embodiment of step 126 is shown, which includes:
1261, determining a face area to be searched according to the skin color foreground area and the estimated face size;
as shown in fig. 4-b, after determining that a skin color foreground region exists in the two-dimensional image to be detected, the skin color foreground region S1 is expanded outward by W_face/2 on each side to determine the face region to be searched S2. The width of the skin color foreground region S1 is W and its height is H, so the width of the face region to be searched S2 is W + W_face and its height is H + W_face, where W_face is the estimated face size at the center position of the skin color foreground region. At the time of initial detection, W_face may take an empirical value W_0.
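The expansion above can be sketched as a small helper; the region tuple layout (top-left x, top-left y, width, height) and integer division are assumptions for illustration:

```python
def search_region(x0, y0, w, h, w_face):
    """Expand the skin-color foreground region (x0, y0, w, h) outward by
    w_face/2 on each side, giving the face region to be searched with
    width w + w_face and height h + w_face."""
    return (x0 - w_face // 2, y0 - w_face // 2, w + w_face, h + w_face)
```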
In the embodiment of the present disclosure, assume that the function W_Face(x, y) represents the face width at the position of pixel point (x, y), that is, the number of face pixel points contained in the row of x within the skin color foreground region containing the pixel point (x, y), as shown in fig. 7. The above face estimation size can then be expressed by the following formula (5):
W_Face = W_Face(x_Face, y_Face) … formula (5)
wherein (x_Face, y_Face) represents the center position of the skin color foreground region S1, which is also the center position of the region to be searched S2; x_Face represents the horizontal coordinate of the pixel point at the center of the face region to be searched, and y_Face represents its vertical coordinate.
1262, estimating the face size range in the face area to be searched, and determining the scaling coefficient of the face area to be searched; fig. 8 shows a flowchart of a specific embodiment of step 1262 of the present disclosure, and step 1262 may comprise:
step 12621, calculating the minimum estimated face size W_Face_min in the face region to be searched;
in the disclosed embodiment, W_Face_min may be calculated by the following formula (6), whose two terms are the estimated face widths evaluated at the upper-left corner position and the upper-right corner position of the region to be searched, respectively.
In the related art, for a camera in a fixed position, a face located at the upper-left or upper-right corner of the acquired image is far from the camera, so the face picture captured by the camera is small. Similarly, for the face region to be searched in one frame of image, the face size at the upper-left or upper-right corner is the smallest; in the embodiment of the present disclosure, the smaller of the two corner estimates is selected and the minimum estimated face size W_Face_min is calculated according to formula (6).
Step 12622, calculating the maximum estimated face size W_Face_max in the face region to be searched;
In the disclosed embodiment, W_Face_max may be calculated using the following formula (7):
wherein (W + W_Face) represents the width of the face region to be searched; (H + W_Face) represents the height of the face region to be searched; the W_Face() terms in formula (7) are evaluated at the lower-right corner position of the face region to be searched, at its lower-left corner position, and at the intersection of its lower edge with the vertical bisector of the image to be detected, respectively; W_I represents the width of the whole frame of the two-dimensional image to be detected.
For the maximum estimated size of the face, the following description is made with reference to the schematic distribution position of the face region to be searched in the two-dimensional image to be detected shown in fig. 9-1 to 9-3, and with reference to the formula (7) and the size of the region to be searched shown in fig. 4-b, and the values thereof may include the following three cases:
as shown in fig. 9-1, assume a coordinate system is established with the center pixel position (x_Face, y_Face) of the face region S2 to be searched as the origin O. If the face region to be searched lies entirely on the left side of the vertical bisector AB of the whole frame of the two-dimensional image to be detected, the lower-right corner position M of the face region S2 to be searched is the position of the largest face, so the estimated face width at M is used in formula (7) to calculate the maximum estimated face size.
As shown in fig. 9-2, if the face region S2 to be searched lies entirely on the right side of the vertical bisector AB of the whole frame of the two-dimensional image to be detected, the lower-left corner position M of the face region S2 to be searched is the position of the largest face, so the estimated face width at M is used in formula (7) to calculate the maximum estimated face size.
As shown in fig. 9-3, if the face region S2 to be searched intersects the vertical bisector AB of the whole frame of the two-dimensional image to be detected, the intersection M of the lower edge of the face region S2 with the vertical bisector AB is the position of the largest face, so the estimated face width at M is used in formula (7) to calculate the maximum estimated face size.
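The three cases of figs. 9-1 to 9-3 can be sketched as follows; absolute image coordinates and the region tuple (top-left x, top-left y, width, height) are illustrative assumptions:

```python
# Hypothetical selection of the position M of the largest possible face:
# lower-right corner when the search region lies left of the image's
# vertical bisector, lower-left corner when it lies right of it, and the
# intersection of the lower edge with the bisector when it straddles it.
def largest_face_position(region, image_width):
    x0, y0, w, h = region
    bisector = image_width / 2
    bottom = y0 + h
    if x0 + w <= bisector:        # entirely left of the bisector (fig. 9-1)
        return (x0 + w, bottom)   # lower-right corner M
    if x0 >= bisector:            # entirely right of the bisector (fig. 9-2)
        return (x0, bottom)       # lower-left corner M
    return (bisector, bottom)     # straddles the bisector (fig. 9-3)
```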
The face size expansion search range coefficient α can be expressed by equation (8):
wherein N_Face represents the total number of frames in which a face has been detected at the corresponding position of the region to be searched.
Step 12623, calculating the ratios of the minimum estimated face size W_Face_min and the maximum estimated face size W_Face_max, respectively, to the estimated face size W_Face at the center position (x_Face, y_Face) of the skin color foreground region, and determining the scaling coefficients of the face region to be searched.
Wherein, the minimum scaling factor of the face region to be searched is expressed by formula (9):
the minimum scaling factor of the face region to be searched is expressed by equation (10):
1263, zooming the face area to be searched according to the zooming coefficient;
1264, calculating an integral image of the face area to be searched after each zooming;
1265, using the face two-dimensional image classifier to perform face detection on the integral image of the face area to be searched after each zooming;
in the embodiment of the present disclosure, face detection may be performed on the integral image of the face region to be searched after each scaling by translating a rectangular frame, where the horizontal span Step_W of the rectangular frame translation may be calculated by formula (11):
in formula (11), W_Scale represents the width of the scaled face region to be searched; (W + W_Face) represents the width of the face region to be searched before scaling; R_Scale_max represents the maximum scaling coefficient of the face region to be searched; R_Scale_min represents the minimum scaling coefficient of the face region to be searched; the floor() function represents rounding down.
The vertical span Step_H of the rectangular frame translation may be calculated by formula (12):
wherein H_Scale represents the height of the scaled face region to be searched; (H + W_Face) represents the height of the face region to be searched before scaling.
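Steps 1264-1265 can be sketched as follows. The integral image makes any rectangle sum available in O(1); the spans step_w and step_h stand in for the results of formulas (11) and (12), which are not reproduced above, and returning the window sum (rather than running the cascade classifier on each window) is a simplification:

```python
def integral_image(img):
    """Build a (h+1) x (w+1) summed-area table for a 2-D list of pixels."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def box_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle with top-left corner (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def scan_windows(ii, win_w, win_h, step_w, step_h):
    """Yield (x, y, sum) for each translated rectangular-frame position;
    a real detector would evaluate the cascade classifier per window."""
    img_h, img_w = len(ii) - 1, len(ii[0]) - 1
    for y in range(0, img_h - win_h + 1, step_h):
        for x in range(0, img_w - win_w + 1, step_w):
            yield (x, y, box_sum(ii, x, y, win_w, win_h))
```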
1266, when the face two-dimensional image classifier finishes the detection of the face area to be searched of each scaling scale, combining the detected adjacent faces and outputting a first target two-dimensional face;
1267, determining the face position and size of the first target two-dimensional face;
and 1268, updating the estimated face size and the estimated face average brightness of the image to be detected in the next frame with the skin color foreground area according to the face position and size of the first target two-dimensional face.
In the embodiment of the present disclosure, formula (13) may be used to update the estimated face size of the image to be detected in the next frame with the skin color foreground region:
wherein W_{n+1}_Face(x, y) is the estimated face size at position (x, y) of the next frame of image to be detected with a skin color foreground region; W_n_Face(x, y) is the estimated face size of the current image to be detected at position (x, y); the remaining term is the face size of the first target two-dimensional face detected at position (x, y); β is the update coefficient, and its value range may be [0.1, 0.5].
In the embodiment of the present disclosure, formula (14) may be adopted to update the estimated average brightness of the face of the image to be detected in the next frame with the skin color foreground region:
wherein the first term is the estimated average face brightness at position (x, y) of the next frame of image to be detected with a skin color foreground region; the second term is the estimated average face brightness at position (x, y) before detection of the current frame; the third term is the average brightness of the first target two-dimensional face centered at (x, y), which may be obtained by averaging the gray values over the entire rectangular region where the first target two-dimensional face is located.
In the embodiment of the disclosure, image detection is an iterative process: when a face is detected in one frame of two-dimensional image, the estimated face size for the next frame of image to be detected is updated according to the actually measured face size, providing data support for detecting the next frame of two-dimensional image with a skin color foreground and its corresponding depth image. All the two-dimensional image detections together constitute an iterative loop.
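The iterative updates of formulas (13) and (14) can be sketched as a running average. The exact formulas are not reproduced in the text above, so weighting the newly detected value by β (and the current estimate by 1 − β) is an assumption about their form:

```python
# Hypothetical running-average update in the spirit of formulas (13)/(14):
# blends the current per-position estimate (face size or average brightness)
# with the newly detected value.
def update_estimate(current, detected, beta=0.3):
    """beta is the update coefficient; the text gives a range of [0.1, 0.5]."""
    if not 0.1 <= beta <= 0.5:
        raise ValueError("beta should lie in [0.1, 0.5]")
    return (1.0 - beta) * current + beta * detected
```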
It should be noted that, in the present disclosure, the step 12 may also be used as an independent technical solution for detecting a two-dimensional image of a human face, and the accuracy of human face detection in the two-dimensional image is effectively improved by using the image detection algorithms shown in the above formulas (5) to (14).
Step 13, if the face two-dimensional image classifier detects face information in the two-dimensional image to be detected, detecting the depth image to be detected corresponding to the two-dimensional image to be detected by using the face depth image classifier; as shown in the schematic diagram of the face images in fig. 10, each frame of two-dimensional image to be detected has a corresponding frame of depth image.
Referring to the flowchart of an embodiment of step 13 shown in fig. 11, step 13 may include:
131, according to the face position and size of the first target two-dimensional face detected in the two-dimensional image to be detected, performing depth information normalization processing on a face region in the corresponding depth image to be detected according to a preset formula to obtain a normalized first target depth image region;
for example, the above formula (2) may be adopted to perform depth information normalization processing on the face region of the depth image to be detected.
And 132, performing face detection on the normalized first target depth image area by using the preset face depth image classifier.
In the embodiment of the present disclosure, the depth image classifier shown in formula (3) is used to perform face detection on the normalized first target depth image region.
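As a hypothetical stand-in for the depth-information normalization of step 131 (formula (2) is not reproduced in the text above), the following min-max normalizes the depth values of the face region to [0, 255]:

```python
# Illustrative depth normalization for a face region given as a 2-D list of
# depth values; the [0, 255] target range is an assumption, not formula (2).
def normalize_depth_region(depth_region):
    flat = [d for row in depth_region for d in row]
    d_min, d_max = min(flat), max(flat)
    if d_max == d_min:  # flat region: avoid division by zero
        return [[0 for _ in row] for row in depth_region]
    scale = 255.0 / (d_max - d_min)
    return [[round((d - d_min) * scale) for d in row] for row in depth_region]
```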
And step 14, if the face depth image classifier detects face information in the depth image to be detected, outputting a face detection result.
With reference to step 132, if a first target depth face is detected in the normalized first target depth image region, it is fused with the first target two-dimensional face information, and a face detection result is output. The first target depth face is the depth information corresponding to the first target two-dimensional face.
And (5) circularly executing the steps 11-14, carrying out face detection on a certain number of two-dimensional images to be detected and corresponding depth images, and outputting detected face detection results.
As can be seen from the foregoing embodiments, in the face detection method provided by the present disclosure, a two-dimensional image classifier is first used to detect a first target two-dimensional face in a two-dimensional image to be detected, on this basis, a depth image classifier is further used to continue face detection on a depth image to be detected corresponding to the two-dimensional image to be detected, and if a corresponding face depth image, that is, a first target depth face, is also detected in the depth image to be detected, a face detection result is output. By combining the detection of the face depth information, the false face information in the image, such as a face image on clothes, a face reflection on a glass door and the like, can be prevented from being taken as the real face information and taken as the face detection result, so that the accuracy of face detection is effectively improved, and the false detection rate is reduced.
Fig. 12 shows a flowchart of a second embodiment of the face detection method of the present disclosure, and on the basis of the embodiment shown in fig. 5, the method further includes:
step 15, recording the frame number of a target two-dimensional image in the detected image, wherein the target two-dimensional image is a two-dimensional image to be detected, which has a foreground target but no skin color foreground;
in the second embodiment of the present disclosure, if the determination result in the step 125 is negative, that is, if the skin color foreground cannot be detected in the foreground target area of one frame of the two-dimensional image to be detected, the current two-dimensional image to be detected is marked as the target two-dimensional image.
In the embodiment of the present disclosure, when the face detection is started or the start time is preset, a counter may be started to record the number of frames of the target two-dimensional image appearing in the two-dimensional image detection process. And adding 1 to the counter every time one frame of target two-dimensional image is detected.
And step 16, when the frame number of the target two-dimensional image reaches a preset threshold value, performing depth normalization processing on a target depth image region in the to-be-detected depth image corresponding to the current target two-dimensional image to obtain a normalized target depth image region, wherein the position of the target depth image region in the to-be-detected depth image is the same as the position of a foreground target region in the target two-dimensional image.
In the implementation of the present disclosure, the depth information normalization processing may be performed on the target depth image region by using formula (2), so as to obtain a normalized target depth image region.
Step 17, performing face detection on the normalized target depth image area by using the preset face depth image classifier;
and step 18, if a second target depth face is detected in the normalized target depth image area, adaptively adjusting exposure compensation parameters according to a preset strategy so that the image acquisition device adjusts the exposure compensation level of the acquired image.
In the embodiment of the present disclosure, if a second target depth face is detected in the normalized target depth image region, it indicates that face information also exists in the corresponding two-dimensional image to be detected, but the acquired face image information is drowned out by the background information because the exposure compensation parameter of the image acquisition device is inappropriate, for example, the exposure compensation level is too high or too low. Fig. 13 shows schematic diagrams of two-dimensional face images at different exposure levels: fig. 13-1 is a two-dimensional face image acquired when the exposure compensation level is low, and fig. 13-4 is a two-dimensional face image acquired when the exposure compensation level is high.
Because the depth image is irrelevant to the ambient illumination and shadow, the pixel points of the depth image clearly express the surface geometry of the scenery. Therefore, when the face information cannot be detected in the two-dimensional image due to the fact that the exposure compensation parameters of the two-dimensional image to be detected are not appropriate, the depth information of the target face, namely the second target depth face, can still be detected in the depth image corresponding to the two-dimensional image, so that the face information can be found in time, the exposure compensation parameters of the image acquisition device can be adjusted in time in an adaptive manner, and the image acquisition and detection device can operate normally.
Referring to fig. 14, which is a flowchart illustrating an embodiment of step 18, adaptively adjusting the exposure compensation parameter according to a preset strategy may include:
step 181, determining the face position and size of a second target two-dimensional face in the corresponding two-dimensional image according to the position and face size of the second target depth face in the depth image;
step 182, calculating the estimated average brightness of the second target two-dimensional face according to the face position and size of the second target two-dimensional face;
Assume that the center position of the second target depth face is (x_d, y_d), its width is W_d, and its height is H_d. Then the position and size of the second target two-dimensional face in the corresponding two-dimensional image are the same as those of the second target depth face, and the estimated average brightness of the second target two-dimensional face is calculated according to the above formula (14).
And 183, comparing the estimated average brightness of the second target two-dimensional face with a preset face brightness threshold value.
In this disclosure, the preset face brightness threshold includes the currently counted maximum face brightness and the currently counted minimum face brightness.
Step 184, if the average brightness of the second target two-dimensional face is smaller than the minimum face brightness counted currently, increasing an exposure compensation parameter;
and 185, if the average brightness of the second target two-dimensional face is greater than the currently counted maximum face brightness, reducing the exposure compensation parameter.
In the above steps 184 and 185, the following formula may be adopted to adaptively adjust the exposure compensation parameter:
wherein Stage_EV_1 is the adjusted exposure compensation level; Stage_EV_0 is the exposure compensation level before adjustment; Y_Face_avg is the estimated average brightness of the second target two-dimensional face; Y_Face_detect(x, y) is the average brightness of the second target two-dimensional face detected at position (x, y); the remaining two terms are the currently counted maximum face brightness and the currently counted minimum face brightness; Stage_EV_TOTAL is the preset total number of exposure compensation levels.
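The adjustment logic of steps 184-185 can be sketched as follows; the single-level step size and the clamping to [0, stage_ev_total] are assumptions, since the exact adjustment formula is not reproduced above:

```python
# Hypothetical adaptive exposure-compensation adjustment: raise the level
# when the detected face is darker than the currently counted minimum face
# brightness, lower it when brighter than the maximum, else leave it alone.
def adjust_exposure(stage_ev, y_face_avg, y_min, y_max, stage_ev_total=10):
    if y_face_avg < y_min:
        stage_ev += 1     # step 184: under-exposed face, increase compensation
    elif y_face_avg > y_max:
        stage_ev -= 1     # step 185: over-exposed face, decrease compensation
    return max(0, min(stage_ev_total, stage_ev))
```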
Therefore, in the embodiment of the disclosure, when an image acquisition device such as a camera capable of acquiring depth information acquires a two-dimensional image with inappropriate exposure compensation parameters, the exposure compensation parameters can be adaptively adjusted, effectively improving the illumination adaptability of the image acquisition device, thereby improving the face detection rate and the device performance.
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present invention is not limited by the illustrated ordering of acts, as some steps may occur in other orders or concurrently with other steps in accordance with the invention.
Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules illustrated are not necessarily required to practice the invention.
Corresponding to the embodiment of the two-dimensional face detection method provided in step 12 of the present disclosure, the present disclosure further provides a face detection apparatus, which can be disposed in an intelligent monitoring system. Referring to the structural block diagram of the face detection apparatus of the present disclosure shown in fig. 15, the face detection apparatus provided by the present disclosure may include:
the foreground target extraction module 21 is configured to extract a foreground target from the two-dimensional image to be detected;
the skin color detection module 22 is configured to perform skin color detection on the foreground target region when the foreground target exists in the two-dimensional image to be detected;
and the face detection module 23 is configured to perform face detection on the skin color foreground region by using a face two-dimensional image classifier under the condition that the foreground target region has a skin color foreground.
Referring to the block diagram of the structure of the embodiment of the face detection module shown in fig. 16, on the basis of the embodiment shown in fig. 15, the face detection module 23 may include:
a search area determining unit 231, configured to determine a face area to be searched according to the skin color foreground area and the face estimation size;
a scaling factor determining unit 232, configured to estimate a face size range in the face region to be searched, and determine a scaling factor of the face region to be searched;
a scaling unit 233, configured to scale the face region to be searched according to the scaling coefficient;
an integral unit 234, configured to calculate an integral image of the face area to be searched after each scaling;
a face detection unit 235, configured to perform face detection on the scaled integral image of the face region to be searched each time by using the face two-dimensional image classifier;
a first result output unit 236, configured to combine detected adjacent faces and output a first target two-dimensional face when the face two-dimensional image classifier completes detection of a face region to be searched at each scaling scale;
a size determining unit 237, configured to determine a face position and a size of the first target two-dimensional face;
and the updating unit 238 is configured to update the estimated face size and the estimated face average brightness of the to-be-detected image in the next frame with the skin color foreground region according to the face position and the size of the first target two-dimensional face.
The working process of the apparatus for detecting a human face shown in fig. 15 and 16 can be referred to the detailed description of step 12, and will not be described herein again.
Corresponding to the above method for detecting a face shown in fig. 1 to 14, the present disclosure provides a face detection system, referring to a structural block diagram of a first embodiment of the face detection system shown in fig. 17, including:
the face detection device 20 is used for detecting a two-dimensional image to be input, and the structure of the face detection device 20 can be seen from the schematic diagram shown in fig. 15 or 16.
A classifier obtaining module 31, configured to obtain a face two-dimensional image classifier and a corresponding face depth image classifier;
the first depth face detection module 32 is configured to detect, by using the face depth image classifier, a depth image to be detected corresponding to the two-dimensional image to be detected when the face two-dimensional image classifier detects face information in the two-dimensional image to be detected;
and a detection result output module 33, configured to output a face detection result when the face depth image classifier detects face information in the depth image to be detected.
Fig. 18 shows a block diagram of a second embodiment of the face detection system, which may further include, on the basis of the first embodiment shown in fig. 17:
the recording module 34 is configured to record the number of frames of a target two-dimensional image in a detected image, where the target two-dimensional image is a two-dimensional image to be detected, which has a foreground target but no skin color foreground;
the normalization processing module 35 is configured to, when the number of frames of the target two-dimensional image reaches a preset threshold, perform depth normalization processing on a target depth image region in a to-be-detected depth image corresponding to the current target two-dimensional image to obtain a normalized target depth image region, where a position of the target depth image region in the to-be-detected depth image is the same as a position of a foreground target region in the target two-dimensional image;
a second depth face detection module 36, configured to perform face detection on the normalized target depth image region by using the preset face depth image classifier;
and the exposure compensation module 37 is configured to adaptively adjust exposure compensation parameters according to a preset strategy when a second target depth face is detected in the normalized target depth image region, so that the image acquisition device adjusts an exposure compensation level of an acquired image.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (11)

1. A face detection method, comprising:
extracting a foreground target of the two-dimensional image to be detected;
if a foreground target exists in the two-dimensional image to be detected, carrying out skin color detection on a foreground target area;
if the foreground target area has a skin color foreground, performing face detection on the skin color foreground area by using a face two-dimensional image classifier;
the method for detecting the face of the skin color foreground area by using the face two-dimensional image classifier comprises the following steps:
determining a face area to be searched according to the skin color foreground area and the face estimation size;
estimating the size range of the face in the face area to be searched, and determining the scaling coefficient of the face area to be searched;
zooming the face area to be searched according to the zooming coefficient;
calculating an integral image of the face area to be searched after each zooming;
performing face detection on the integral image of the face area to be searched after each zooming by using the face two-dimensional image classifier;
when the face two-dimensional image classifier finishes detecting the face area to be searched in each scaling scale, combining the detected adjacent faces and outputting a first target two-dimensional face;
determining the face position and size of the first target two-dimensional face;
and updating the estimated face size and the estimated face average brightness of the image to be detected in the next frame with the skin color foreground area according to the face position and the size of the first target two-dimensional face.
2. The method according to claim 1, wherein the estimating a face size range in the face region to be searched and determining a scaling factor of the face region to be searched comprises:
calculating the minimum estimated face size W_Face_min in the face region to be searched;
calculating the maximum estimated face size W_Face_max in the face region to be searched;
In the above formula, W represents the width of the skin color foreground region; H represents the height of the skin color foreground region; (W + W_Face) represents the width of the face region to be searched; (H + W_Face) represents the height of the face region to be searched; x_Face represents the horizontal coordinate of the center position of the face region to be searched; y_Face represents the vertical coordinate of the center position of the face region to be searched; W_Face() is a function representing the face width; the W_Face() terms are evaluated at the lower-right corner position of the face region to be searched, at its lower-left corner position, and at the intersection of the lower edge of the face region to be searched with the vertical bisector of the two-dimensional image to be detected, respectively; W_I represents the width of the whole frame of the two-dimensional image to be detected;
the face size expansion search range coefficient α is:
wherein N_Face represents the total number of frames in which a face image has been detected at the corresponding position of the region to be searched;
calculating the ratios of the minimum estimated face size W_Face_min and the maximum estimated face size W_Face_max, respectively, to the estimated face size W_Face at the center position (x_Face, y_Face) of the skin color foreground region, and determining the scaling coefficients of the face region to be searched.
3. The method according to claim 1, wherein the performing, by using the face two-dimensional image classifier, face detection on the integral image of the face region image to be searched after each scaling comprises:
and performing face detection on the integral image of the face region image to be searched after each zooming by translating a rectangular frame, wherein the horizontal span Step_W of the rectangular frame translation is calculated by the following formula:
wherein W represents the width of the skin color foreground region; W_Scale represents the width of the zoomed face region to be searched; (W + W_Face) represents the width of the face region to be searched before zooming; R_Scale_max represents the maximum scaling coefficient of the face region to be searched; R_Scale_min represents the minimum scaling coefficient of the face region to be searched;
the vertical span Step_H of the rectangular frame translation is calculated by the following formula:
wherein H represents the height of the skin color foreground region; H_Scale represents the height of the zoomed face region to be searched; (H + W_Face) represents the height of the face region to be searched before zooming.
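The integral-image scan in claim 3 can be sketched as follows. This is a hedged illustration with assumed names; the patent's Step_W/Step_H formulas are not reproduced in the source text, so fixed strides are passed in directly:

```python
import numpy as np

def integral_image(gray):
    # Summed-area table with an extra zero row and column, so any
    # rectangle sum needs only four lookups.
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(gray, axis=0), axis=1)
    return ii

def window_sum(ii, x, y, w, h):
    # Pixel sum of the rectangle with top-left (x, y) and size w x h.
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def window_positions(width, height, win, step_w, step_h):
    # Translate the rectangular detection frame across the zoomed
    # search region with the horizontal span step_w and vertical
    # span step_h.
    return [(x, y)
            for y in range(0, height - win + 1, step_h)
            for x in range(0, width - win + 1, step_w)]
```

A classifier would evaluate its features via `window_sum` at each position returned by `window_positions`.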
4. The method according to claim 1, wherein the updating the estimated face size and the estimated face average brightness of the next frame of the image to be detected having a skin color foreground area according to the face position and size of the first target two-dimensional face comprises:
updating the estimated face size of the next frame of the image to be detected having a skin color foreground area by using the following formula:
wherein W_Face^(n+1)(x, y) is the estimated face size of the next frame of the image to be detected at position (x, y); W_Face^n(x, y) is the estimated face size of the current image to be detected at position (x, y); a further symbol denotes the detected face size of the first target two-dimensional face at position (x, y); β is an update coefficient with value range [0.1, 0.5];
and updating the estimated face average brightness of the next frame of the image to be detected having a skin color foreground area by adopting the following formula:
wherein the symbols denote, respectively: the estimated face average brightness of the next frame of the image to be detected at position (x, y); the estimated face average brightness of the first target two-dimensional face at position (x, y) before the detection; and the measured face average brightness of the first target two-dimensional face at position (x, y).
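The per-position running updates in claim 4 have the shape of an exponential moving average. The patent's exact update formulas are not reproduced in the source text, so the blending form below is an assumption:

```python
def update_estimate(current, detected, beta=0.3):
    # Blend the current per-position estimate with the newly detected
    # value. beta is the update coefficient; claim 4 bounds it to
    # [0.1, 0.5]: larger beta trusts the new detection more.
    if not 0.1 <= beta <= 0.5:
        raise ValueError("beta must lie in [0.1, 0.5]")
    return (1.0 - beta) * current + beta * detected
```

Under this reading, the same blend updates both the face size estimate W_Face(x, y) and the face average brightness estimate at (x, y).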
5. A face detection method, comprising:
acquiring a face two-dimensional image classifier and a corresponding face depth image classifier;
performing face detection on an input two-dimensional image to be detected by using the face detection method according to any one of claims 1 to 4;
if the face two-dimensional image classifier detects face information in the two-dimensional image to be detected, the face depth image classifier is utilized to carry out face detection on the depth image to be detected corresponding to the two-dimensional image to be detected;
if the face depth image classifier detects face information in the depth image to be detected, outputting a face detection result;
wherein the performing face detection on the depth image to be detected corresponding to the two-dimensional image to be detected by using the face depth image classifier comprises the following steps:
according to the face position and the size of the first target two-dimensional face detected in the two-dimensional image to be detected, carrying out depth information normalization processing on a face area in the corresponding depth image to be detected to obtain a normalized first target depth image area;
and carrying out face detection on the normalized first target depth image area by using the face depth image classifier.
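Claim 5's two-stage check can be sketched as below. This is a hedged illustration: the min-max depth normalization and the stub classifier interfaces are assumptions, since the patent does not state its normalization formula in the source text:

```python
import numpy as np

def normalize_depth_region(depth, x, y, w, h):
    # Min-max normalize the depth values inside the detected face box
    # to [0, 255], so the depth classifier sees a patch that does not
    # depend on the subject's absolute distance from the camera.
    region = depth[y:y + h, x:x + w].astype(np.float64)
    lo, hi = region.min(), region.max()
    if hi == lo:
        return np.zeros((h, w), dtype=np.uint8)
    return ((region - lo) / (hi - lo) * 255.0).astype(np.uint8)

def detect_face(image_2d, depth, detect_2d, detect_depth):
    # Stage 1: the 2D classifier proposes a face box (x, y, w, h).
    # Stage 2: the depth classifier must confirm it on the normalized
    # depth patch; only then is a detection reported.
    box = detect_2d(image_2d)
    if box is None:
        return None
    patch = normalize_depth_region(depth, *box)
    return box if detect_depth(patch) else None
```

The depth stage acts as a verifier, which is how the scheme rejects flat (e.g. photographed) faces that pass the 2D classifier.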
6. The method of claim 5, further comprising:
recording the frame number of a target two-dimensional image in a detected image, wherein the target two-dimensional image is a two-dimensional image to be detected which has a foreground target but no skin color foreground;
when the frame number of the target two-dimensional image reaches a preset threshold value, performing depth normalization processing on a target depth image region in a to-be-detected depth image corresponding to the current target two-dimensional image to obtain a normalized target depth image region, wherein the position of the target depth image region in the to-be-detected depth image is the same as the position of a foreground target region in the target two-dimensional image;
carrying out face detection on the normalized target depth image area by using the face depth image classifier;
if a second target depth face is detected in the normalized target depth image area, adaptively adjusting exposure compensation parameters according to a preset strategy so that the image acquisition device adjusts the exposure compensation level of the acquired image.
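Claim 6's fallback path (run the depth classifier once enough frames show a foreground target but no skin color foreground, e.g. under poor exposure) can be sketched as follows. Treating the frames as consecutive and resetting the counter otherwise is an assumption; the claim only states that a frame count reaches a preset threshold:

```python
class NoSkinColorFallback:
    # Counts frames that contain a foreground target but no skin color
    # foreground; once the count reaches the threshold, the caller
    # should run the depth classifier on the foreground target region.
    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 0

    def observe(self, has_foreground, has_skin_color):
        # Returns True when the depth-based fallback should trigger.
        if has_foreground and not has_skin_color:
            self.count += 1
        else:
            self.count = 0
        return self.count >= self.threshold
```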
7. The method of claim 6, wherein the adaptively adjusting the exposure compensation parameter according to the preset strategy comprises:
determining the face position and the face size of a second target two-dimensional face in the corresponding two-dimensional image according to the position and the face size of the second target depth face in the depth image;
calculating the average brightness of the second target two-dimensional face according to the face position and the size of the second target two-dimensional face;
if the average brightness of the second target two-dimensional face is smaller than the minimum face brightness counted currently, increasing exposure compensation parameters;
and if the average brightness of the second target two-dimensional face is greater than the current statistical maximum face brightness, reducing exposure compensation parameters.
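Claim 7's brightness rule can be sketched as a minimal illustration. The step size of one level and the clamping to [0, Stage_EV_TOTAL - 1] are assumptions, since claim 8's exact formula is not reproduced in the source text:

```python
def adjust_exposure(stage_ev, face_avg, stat_min, stat_max, total_stages):
    # Brighten when the detected face is darker than the currently
    # counted minimum face brightness; darken when it is brighter than
    # the currently counted maximum; clamp to the preset level count.
    if face_avg < stat_min:
        stage_ev += 1
    elif face_avg > stat_max:
        stage_ev -= 1
    return max(0, min(stage_ev, total_stages - 1))
```

Faces whose brightness already lies inside the statistical band leave the compensation level unchanged, so the loop converges instead of oscillating.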
8. The method of claim 7, wherein the exposure compensation parameter is adaptively adjusted using the following formula:
wherein Stage_EV_1 is the adjusted exposure compensation level; Stage_EV_0 is the exposure compensation level before adjustment; Y_Face_avg is the estimated average brightness of the second target two-dimensional face; Y_Face_detect(x, y) is the average brightness of the second target two-dimensional face detected at position (x, y); further symbols denote the currently counted maximum face brightness and the currently counted minimum face brightness; Stage_EV_TOTAL is the total number of preset exposure compensation levels.
9. An apparatus for face detection, the apparatus comprising:
the foreground target extraction module is used for extracting a foreground target of the two-dimensional image to be detected;
the skin color detection module is used for carrying out skin color detection on the foreground target area under the condition that the foreground target exists in the two-dimensional image to be detected;
the face detection module is used for detecting the face of the skin color foreground area by using a face two-dimensional image classifier under the condition that the foreground target area has the skin color foreground;
wherein, the face detection module includes:
the searching area determining unit is used for determining a face area to be searched according to the skin color foreground area and the face estimation size;
the scaling factor determining unit is used for estimating the face size range in the face area to be searched and determining the scaling factor of the face area to be searched;
the zooming unit is used for zooming the human face area to be searched according to the zooming coefficient;
the integral unit is used for calculating an integral image of the face area to be searched after each zooming;
the face detection unit is used for carrying out face detection on the integral image of the face area to be searched after each zooming by using the face two-dimensional image classifier;
the first result output unit is used for merging detected adjacent faces and outputting a first target two-dimensional face when the face two-dimensional image classifier finishes detecting the face area to be searched in each scaling scale;
the size determining unit is used for determining the face position and size of the first target two-dimensional face;
and the updating unit is used for updating, according to the face position and size of the first target two-dimensional face, the estimated face size and the estimated face average brightness of the next frame of the image to be detected having a skin color foreground area.
10. A face detection system comprising the face detection apparatus of claim 9, and further comprising:
the classifier obtaining module is used for obtaining a face two-dimensional image classifier and a corresponding face depth image classifier;
the first depth face detection module is used for detecting the depth image to be detected corresponding to the two-dimensional image to be detected by using the face depth image classifier under the condition that the face two-dimensional image classifier detects face information in the two-dimensional image to be detected;
and the detection result output module is used for outputting a face detection result under the condition that the face depth image classifier detects face information in the depth image to be detected.
11. The system of claim 10, further comprising:
the recording module is used for recording the frame number of a target two-dimensional image in a detected image, wherein the target two-dimensional image is a two-dimensional image to be detected which has a foreground target but no skin color foreground;
the normalization processing module is used for performing depth normalization processing on a target depth image region in a to-be-detected depth image corresponding to the current target two-dimensional image when the frame number of the target two-dimensional image reaches a preset threshold value to obtain a normalized target depth image region, wherein the position of the target depth image region in the to-be-detected depth image is the same as the position of a foreground target region in the target two-dimensional image;
the second depth face detection module is used for carrying out face detection on the normalized target depth image area by using the face depth image classifier;
and the exposure compensation module is used for adaptively adjusting exposure compensation parameters according to a preset strategy under the condition that a second target depth face is detected in the normalized target depth image area, so that the image acquisition device adjusts the exposure compensation level of the acquired image.
CN201510761566.0A 2015-11-10 2015-11-10 Method for detecting human face, apparatus and system Active CN105426828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510761566.0A CN105426828B (en) 2015-11-10 2015-11-10 Method for detecting human face, apparatus and system

Publications (2)

Publication Number Publication Date
CN105426828A CN105426828A (en) 2016-03-23
CN105426828B true CN105426828B (en) 2019-02-15

Family

ID=55505028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510761566.0A Active CN105426828B (en) 2015-11-10 2015-11-10 Method for detecting human face, apparatus and system

Country Status (1)

Country Link
CN (1) CN105426828B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI610571B (en) * 2016-10-26 2018-01-01 緯創資通股份有限公司 Display method, system and computer-readable recording medium thereof
CN106991377B (en) * 2017-03-09 2020-06-05 Oppo广东移动通信有限公司 Face recognition method, face recognition device and electronic device combined with depth information
CN107481186B (en) * 2017-08-24 2020-12-01 Oppo广东移动通信有限公司 Image processing method, image processing device, computer-readable storage medium and computer equipment
CN107832677A (en) * 2017-10-19 2018-03-23 深圳奥比中光科技有限公司 Face identification method and system based on In vivo detection
CN108009483A (en) * 2017-11-28 2018-05-08 信利光电股份有限公司 A kind of image collecting device, method and intelligent identifying system
CN110163032B (en) * 2018-02-13 2021-11-16 浙江宇视科技有限公司 Face detection method and device
CN108364267B (en) * 2018-02-13 2019-07-05 北京旷视科技有限公司 Image processing method, device and equipment
CN108885791B (en) * 2018-07-06 2022-04-08 达闼机器人有限公司 Ground detection method, related device and computer readable storage medium
CN109063685A (en) * 2018-08-28 2018-12-21 成都盯盯科技有限公司 The recognition methods of face pattern, device, equipment and storage medium
CN110046595B (en) * 2019-04-23 2022-08-09 福州大学 Cascade multi-scale based dense face detection method
CN113395457B (en) * 2020-03-11 2023-03-24 浙江宇视科技有限公司 Parameter adjusting method, device and equipment of image collector and storage medium
CN111931677A (en) * 2020-08-19 2020-11-13 北京影谱科技股份有限公司 Face detection method and device and face expression detection method and device
CN112560660A (en) * 2020-12-10 2021-03-26 杭州宇泛智能科技有限公司 Face recognition system and preset method thereof
CN113255599B (en) * 2021-06-29 2021-09-24 成都考拉悠然科技有限公司 System and method for user-defined human flow testing face distribution control rate

Citations (3)

Publication number Priority date Publication date Assignee Title
CN103258232A (en) * 2013-04-12 2013-08-21 中国民航大学 Method for estimating number of people in public place based on two cameras
CN103473564A (en) * 2013-09-29 2013-12-25 公安部第三研究所 Front human face detection method based on sensitive area
CN103699888A (en) * 2013-12-29 2014-04-02 深圳市捷顺科技实业股份有限公司 Human face detection method and device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
AUPR541801A0 (en) * 2001-06-01 2001-06-28 Canon Kabushiki Kaisha Face detection in colour images with complex background


Non-Patent Citations (1)

Title
Face detection algorithm for moving video targets against complex backgrounds; Fu Zhaoxia et al.; Proceedings of the 13th China Stereology and Image Analysis Academic Conference; 2014-10-11; pp. 364-367



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200603

Address after: 250001 whole floor, building 3, Aosheng building, 1166 Xinluo street, Jinan area, Jinan pilot Free Trade Zone, Shandong Province

Patentee after: Jinan Boguan Intelligent Technology Co., Ltd.

Address before: Hangzhou City, Zhejiang province 310051 Binjiang District West Street Jiangling Road No. 88 building 10 South Block 1-11

Patentee before: ZHEJIANG UNIVIEW TECHNOLOGIES Co.,Ltd.