CN100377164C - Method, device and storage medium for detecting face complexion area in image - Google Patents


Info

Publication number
CN100377164C
CN100377164C · CNB2004100865046A · CN200410086504A
Authority
CN
China
Prior art keywords
pixel
value
pixels
edge
skin
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2004100865046A
Other languages
Chinese (zh)
Other versions
CN1763765A (en)
Inventor
王建民 (Wang Jianmin)
陈新武 (Chen Xinwu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to CNB2004100865046A
Publication of CN1763765A
Application granted
Publication of CN100377164C
Status: Expired - Fee Related
Anticipated expiration

Links

Images

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a method for detecting a human face skin color region in an image, comprising the steps of: identifying a face region in the image; calculating a skin color reference vector of the image; identifying skin color pixels in the face region; identifying an ellipse that substantially surrounds the skin color pixels; and adjusting the edge of the ellipse to obtain the face skin color region. The present invention also provides a corresponding apparatus and a machine-readable storage medium. According to the method of the present invention, the contour of a human face in an image can be accurately identified, and display or printing quality is thereby improved.

Description

Method, apparatus and storage medium for detecting face skin color region in image
Technical Field
The present invention relates to image processing, and more particularly, to an image processing method, apparatus, and storage medium in which a human face skin color region is detected.
Background
Various techniques are known for detecting regions of interest in an image, such as a human face or other identified objects of interest. Face detection is a particularly interesting problem, since face recognition is important not only for image processing but also for identification and security purposes, as well as for human-machine interfaces. A human-machine interface not only locates the position of a face, but may also recognize a specific face if one is present, and may interpret facial expression and pose.
Recently, many studies on automatic face detection have been reported. For example, reference may be made to "Face Detection and Rotation Estimation Using Color Information" (the 5th IEEE International Workshop on Robot and Human Communication, 1996, pp. 341-346) and "Face Detection from Color Images Using a Fuzzy Pattern Matching Method" (IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 6, June 1999).
Japanese patent application No. H10-293840 discloses a method of detecting a face in an image. According to the method, candidate regions are determined by skin color. For each candidate region, a best-fitting ellipse is calculated; the optimal symmetry axis (the y-axis) is found by comparing the left and right parts of the region, and the x-axis is found by comparing the upper and lower parts.
However, the prior art does not disclose how to accurately identify the contours of the faces in the image.
Disclosure of Invention
An object of the present invention is to solve the above technical problems of the prior art and to provide a method, an apparatus and a storage medium for detecting a skin color region of a human face in an image, so as to be able to accurately recognize an outline of the human face in the image.
The invention provides a method for detecting a human face skin color area in an image, which is characterized by comprising the following steps:
identifying a face region in the image;
calculating a skin color reference vector of the image;
identifying skin color pixels in the face region;
identifying an ellipse that substantially surrounds the skin color pixels;
and adjusting the edge of the ellipse to obtain the human face skin color area.
The invention also provides a device for detecting the human face skin color area in the image, which is characterized by comprising the following components:
a candidate recognition circuit for recognizing a face region in the image;
a calculator for calculating a skin color reference vector for the image;
the skin color pixel identification circuit is used for identifying skin color pixels in the face area;
an ellipse identification and adjustment circuit for identifying an ellipse substantially surrounding the skin color pixels and for adjusting the edge of the ellipse to obtain the face skin color region.
The present invention also provides a storage medium encoded with machine readable computer program code for detecting a human face skin tone region in an image, the storage medium comprising instructions to cause a processor to implement a method according to the present invention.
The method, apparatus and storage medium according to the present invention can accurately recognize the contour of a human face in an image if the image contains the human face. Accordingly, when an image including a human face is displayed or printed, the human face can be displayed or printed with improved quality.
In addition, the method of the present invention can easily be combined with various conventional methods of recognizing a face region (or rectangle) and recognizing skin color pixels, to adapt to different situations.
Other features and advantages of the present invention will become apparent from the following description of the preferred embodiment, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the principles of the invention.
Drawings
FIG. 1 illustrates an example method of identifying an eye region in an image;
FIG. 2 illustrates an example method of identifying face rectangles in an image;
FIG. 3 is a flow diagram of a method of detecting a face skin tone region in an image according to one embodiment of the invention;
FIG. 4 illustrates a face rectangle that includes a skin tone reference portion for obtaining a skin tone reference vector;
FIG. 5 illustrates an example of each pixel used to calculate the edge value of the pixel P;
FIG. 6 is a block diagram of an apparatus for detecting human skin tone regions in an image according to another embodiment of the present invention;
FIG. 7A, 7B, and 7C show examples of face skin color regions;
FIG. 8 schematically illustrates an image processing system capable of implementing the method illustrated in FIG. 3.
Detailed Description
In the following description, as to how to identify candidate face regions and how to identify eye regions in a face, reference may be made to Chinese patent application No. 00127067.2 (filed by the same applicant on September 15, 2000), Chinese patent application No. 01132807.X (filed by the same applicant on September 6, 2001), Chinese patent application No. 02155468.4 (filed by the same applicant on December 13, 2002), Chinese patent application No. 02160016.3 (filed by the same applicant on December 30, 2002), Chinese patent application No. 03137345.3 (filed by the same applicant on June 18, 2003), and so forth. These applications are incorporated herein by reference. However, the methods of recognizing candidate face regions and the methods of recognizing eye regions disclosed in these applications do not limit the present invention. Any conventional method of identifying a face rectangle or an eye region in an image may be used in the present invention.
Fig. 1 illustrates an example method of identifying an eye region in an image. The method starts in step 101. Then, in step 102, each column of the image is divided into a plurality of intervals, from which valley regions (segments whose gray levels are lower than those of the adjacent segments) are obtained.
At step 103, valley regions in adjacent columns are merged to generate candidate eye regions. Then, in step 104, it is determined whether each candidate eye region is a true eye region or a false eye region.
FIG. 2 illustrates an example method of identifying face rectangles in an image. The method starts in step 201. Then, in step 202, two eye regions in the image are identified, and a candidate face rectangle is identified based on the two eye regions.
In step 203, an annular region surrounding the candidate face rectangle is set. In step 204, the gray-level gradient of each pixel in the annular region is calculated. In step 205, a reference gradient is calculated for each pixel in the annular region. In step 206, the angles between the gray-level gradients and the corresponding reference gradients are averaged over all pixels in the annular region. In step 207, it is determined whether the average angle is less than a second threshold. If the determination at step 207 is "no", the process goes to step 210; otherwise, it goes to step 208.
In step 208, it is determined whether the weighted average angle is less than a third threshold. If the determination at step 208 is "no," then processing branches to step 210; otherwise, go to step 209.
At step 209, the candidate face rectangle is classified as a face rectangle (i.e., a true face). At step 210, the candidate face rectangle is classified as a false face (i.e., not a face).
The process ends at step 211.
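To make the annular-region check concrete, here is a minimal numpy sketch of steps 204-207. It is one plausible reading of the referenced application (in particular, it takes the "reference gradient" at a pixel to be the direction pointing away from the rectangle center); it is not code from that application, and the function name is illustrative:

```python
import numpy as np

def average_gradient_angle(gray, ring_pixels, center):
    # gray: 2-D numpy array of gray levels; ring_pixels: (x, y) pairs in
    # the annular region (assumed to be interior pixels); center: (cx, cy)
    # of the candidate face rectangle.
    cx, cy = center
    angles = []
    for x, y in ring_pixels:
        # central-difference gray-level gradient (step 204)
        gx = float(gray[y, x + 1]) - float(gray[y, x - 1])
        gy = float(gray[y + 1, x]) - float(gray[y - 1, x])
        # assumed reference gradient: direction away from the center (step 205)
        rx, ry = x - cx, y - cy
        den = np.hypot(gx, gy) * np.hypot(rx, ry)
        if den > 0:
            cosang = np.clip((gx * rx + gy * ry) / den, -1.0, 1.0)
            angles.append(np.arccos(cosang))  # angle between the two vectors
    # step 206 averages the angle; step 207 compares it to a threshold
    return float(np.mean(angles)) if angles else np.pi
```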
For further description of the methods shown in figs. 1 and 2, reference may be made to Chinese patent application No. 01132807.X.
Fig. 3 is a flow diagram of a method of detecting a face skin tone region in an image according to one embodiment of the invention.
As depicted in fig. 3, the process begins at step 301. Then, in step 302, a face rectangle in the image to be processed is identified. The particular method of identifying face rectangles in an image does not limit the invention.
Then, in step 303, a skin color reference portion is identified in the face rectangle. Fig. 4 shows an example of the skin color reference portion. Various methods of identifying the skin color reference portion can be used; neither the size and shape of the portion nor the method of identifying it limits the present invention.
In step 304, a skin color reference vector is calculated from the skin color reference portion.
Suppose that x̄ = (r̄, ḡ, b̄) is the skin color reference vector. x̄ includes an R (red) value r̄, a G (green) value ḡ, and a B (blue) value b̄. The R value r̄ is the average red value of the pixels included in the skin color reference portion; the G value ḡ is the average green value of those pixels; and the B value b̄ is the average blue value of those pixels. The values r̄, ḡ, and b̄ are calculated based on the following formulas:

r̄ = (1/n) Σ r,  ḡ = (1/n) Σ g,  b̄ = (1/n) Σ b
where r represents the red value of the pixel in the skin color reference portion, g represents the green value of the pixel in the skin color reference portion, b represents the blue value of the pixel in the skin color reference portion, and n represents the number of pixels of the skin color reference portion.
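In numpy terms, the reference vector is just a per-channel mean. A minimal sketch follows; the H x W x 3 array layout and the function name are assumptions for illustration, not prescribed by the patent:

```python
import numpy as np

def skin_reference_vector(ref_patch):
    # ref_patch: H x W x 3 array of (R, G, B) values from the skin color
    # reference portion; returns the reference vector (r_bar, g_bar, b_bar)
    pixels = ref_patch.reshape(-1, 3).astype(float)  # n x 3 list of colors
    return pixels.mean(axis=0)                       # per-channel average
```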
From step 305 to step 309, skin color pixels in the face rectangle are identified with reference to the skin color reference vector.
Specifically, in step 305, a pixel is selected from the face rectangle.
At step 306, the distance (e.g., the Mahalanobis distance) between the skin color reference vector and the color vector of the pixel selected in step 305 is calculated. Alternatively, the distance may be a Euclidean distance. The particular kind of distance and the method of selecting pixels do not limit the invention.
The Mahalanobis distance between the skin color reference vector and the color vector of a pixel is calculated based on the following formula:

d = √( (x − x̄)^T Σ_x⁻¹ (x − x̄) )

where x represents the color vector of the pixel; d represents the Mahalanobis distance between the skin color reference vector and the color vector x; x̄ represents the skin color reference vector calculated in step 304; "T" represents the transpose of a vector or matrix; and Σ_x⁻¹ represents the inverse of the covariance matrix of the pixels in the skin color reference portion.
The covariance matrix is calculated based on the following formula:

Σ_x =
  [ δ_r²   δ_rg²  δ_rb² ]
  [ δ_rg²  δ_g²   δ_gb² ]
  [ δ_rb²  δ_gb²  δ_b²  ]

where δ_r², δ_g², δ_b², δ_rg², δ_rb², δ_gb² are calculated using the following formulas:

δ_r² = (1/n) Σ (r − r̄)²,  δ_g² = (1/n) Σ (g − ḡ)²,  δ_b² = (1/n) Σ (b − b̄)²
δ_rg² = (1/n) Σ (r − r̄)(g − ḡ),  δ_rb² = (1/n) Σ (r − r̄)(b − b̄),  δ_gb² = (1/n) Σ (g − ḡ)(b − b̄)
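The covariance matrix and the distance can be sketched in the same style; `bias=True` in `np.cov` reproduces the 1/n normalization of the formulas above, and the helper names are illustrative, not from the patent:

```python
import numpy as np

def reference_stats(ref_patch):
    # reference vector and inverse covariance of the skin color
    # reference portion (ref_patch: H x W x 3 array, assumed layout)
    pixels = ref_patch.reshape(-1, 3).astype(float)
    x_bar = pixels.mean(axis=0)
    sigma_x = np.cov(pixels, rowvar=False, bias=True)  # 1/n normalization
    return x_bar, np.linalg.inv(sigma_x)

def mahalanobis(color, x_bar, sigma_x_inv):
    # d = sqrt((x - x_bar)^T Sigma_x^-1 (x - x_bar))
    diff = np.asarray(color, dtype=float) - x_bar
    return float(np.sqrt(diff @ sigma_x_inv @ diff))
```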
In step 307, it is determined whether the Mahalanobis distance of the pixel selected in step 305 is less than a first threshold. Depending on experiment, the first threshold may range from 1 to 20; here, the first threshold is set to 5.
If the result of step 307 is "yes", the process goes to step 308; otherwise, processing transfers back to step 305.
Of course, step 307 may be modified to determine whether the Mahalanobis distance calculated in step 306 is greater than the same first threshold. In that case, a "yes" determination at step 307 branches to step 305 and a "no" determination branches to step 308.
At step 308, the pixel selected in step 305 and identified as a skin color pixel in step 307 is recorded.
In step 309, it is determined whether all pixels included in the face rectangle have been tested in steps 305 to 307.
If the result of step 309 is "no," the process returns to step 305, where one of the pixels that has not been tested is selected. If the result of step 309 is "yes," then the process passes to step 310.
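Steps 305 to 309 then reduce to a loop over the face rectangle. This sketch continues the one above, reusing `reference_stats()` and `mahalanobis()`, and assumes the first threshold of 5:

```python
def find_skin_pixels(face_rect, ref_patch, threshold=5.0):
    # face_rect: H x W x 3 array; returns (x, y) of skin color pixels
    x_bar, sigma_x_inv = reference_stats(ref_patch)
    found = []
    h, w = face_rect.shape[:2]
    for y in range(h):            # steps 305/309: visit every pixel once
        for x in range(w):
            d = mahalanobis(face_rect[y, x], x_bar, sigma_x_inv)
            if d < threshold:     # step 307: first-threshold test
                found.append((x, y))  # step 308: record skin color pixel
    return found
```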
At step 310, an ellipse is identified that surrounds substantially all of the skin color pixels.
The ellipse is identified based on the following formula:

(z − z̄)^T Σ_z⁻¹ (z − z̄) = r²

where z represents the coordinates (z_x, z_y) of a skin color pixel; z̄ represents the average coordinates of all skin color pixels, i.e. z̄ = (1/n) Σ z (n represents the number of skin color pixels); "T" represents the transpose of a vector or matrix; Σ_z⁻¹ represents the inverse of the covariance matrix of all skin color pixels; and r represents the size of the ellipse. For example, r may take the value 2.
The covariance matrix is calculated based on the following formula:

Σ_z =
  [ δ_x²   δ_xy² ]
  [ δ_xy²  δ_y²  ]

where δ_x², δ_y², δ_xy² are calculated using the following formulas:

δ_x² = (1/n) Σ (z_x − z̄_x)²,  δ_y² = (1/n) Σ (z_y − z̄_y)²,  δ_xy² = (1/n) Σ (z_x − z̄_x)(z_y − z̄_y)
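A matching sketch for step 310, this time over pixel coordinates rather than colors (again with the 1/n normalization; `r` defaults to the value 2 suggested above, and the names are illustrative):

```python
import numpy as np

def fit_ellipse(skin_coords):
    # skin_coords: n x 2 array of (x, y) skin color pixel coordinates
    z = np.asarray(skin_coords, dtype=float)
    z_bar = z.mean(axis=0)                        # average coordinates
    sigma_z = np.cov(z, rowvar=False, bias=True)  # 2 x 2 covariance
    return z_bar, np.linalg.inv(sigma_z)

def inside_ellipse(point, z_bar, sigma_z_inv, r=2.0):
    # (z - z_bar)^T Sigma_z^-1 (z - z_bar) <= r^2
    d = np.asarray(point, dtype=float) - z_bar
    return float(d @ sigma_z_inv @ d) <= r * r
```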
the ellipse identified as described above surrounds most of the skin pixels that have been identified after step 309.
In step 311, the edges of the ellipses identified in step 310 are adjusted in order to obtain the face skin color regions detected in the image.
To adjust the edge of the ellipse, each pixel on the edge of the ellipse (hereinafter referred to as an edge pixel) is considered in turn. A feature value is calculated for the edge pixel, and a feature value is also calculated for each pixel close to the edge pixel. The feature values are then compared, the largest feature value is selected, and the pixel having the largest feature value becomes the new edge pixel, replacing the original edge pixel. In this way, the edge of the ellipse is updated. The calculation, comparison and replacement continue until all edge pixels have been considered.
After step 311, the edges of the ellipse have been updated and a new graph is obtained that typically has irregular edges. The area enclosed by the edges of the new image is considered to be the face skin tone area in the image detected according to the invention.
Finally, the process ends at step 312.
Fig. 4 shows a face rectangle comprising a skin tone reference portion for obtaining a skin tone reference vector.
As shown in fig. 4, reference numeral 400 represents a face rectangle; reference numeral 401 and reference numeral 402 represent two eyes in the face rectangle 400. A coordinate system is established. In this coordinate system, the x-axis passes through the centers of the left eye 401 and the right eye 402, the origin O is located at the midpoint between the eyes 401 and 402, and the y-axis is perpendicular to the x-axis.
In fig. 4, a rectangular portion 403 is selected as the skin color reference portion. Assume that the distance between the origin O and the eye 401 (or the eye 402) is 1. The skin color reference portion 403 is then defined as the set A = {(x, y) : |x| < 0.8, 0 < y < 1}. Of course, other portions of the face rectangle 400 may also be selected as the skin color reference portion.
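As an illustration only, the portion A could be cut out of the image as follows. The sketch assumes the eyes lie on one image row and that positive y points toward the forehead (decreasing row index); the patent fixes neither, and the helper name is hypothetical:

```python
def cut_reference_portion(image, left_eye, right_eye):
    # left_eye, right_eye: (col, row) eye centers; half the eye distance
    # is the unit length of the coordinate system in fig. 4
    (lx, ly), (rx, ry) = left_eye, right_eye
    ox, oy = (lx + rx) / 2.0, (ly + ry) / 2.0  # origin O between the eyes
    unit = abs(rx - lx) / 2.0                  # distance from O to an eye
    # A = {(x, y) : |x| < 0.8, 0 < y < 1}, mapped to pixel indices
    x0, x1 = int(ox - 0.8 * unit), int(ox + 0.8 * unit)
    y0, y1 = int(oy - unit), int(oy)           # rows above the eye line
    return image[y0:y1, x0:x1]
```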
The following is an example of adjusting the edges of an ellipse.
In this example, an edge value calculated by the Sobel algorithm is used to represent the likelihood that a pixel is located on the edge of a region: the larger the edge value of a pixel, the more likely the pixel is to lie on the edge of a region.
As shown in fig. 5, if the edge value of the pixel P is to be calculated, each pixel in the neighborhood of the pixel P (e.g., 3 × 3 neighborhood) is also considered.
According to the Sobel algorithm, the edge value of a pixel P is calculated from the horizontal and vertical responses M_h and M_v as:

Sbl(P) = √( M_h² + M_v² ) / 255

For simplicity, the edge value is usually calculated by the following formula instead:

Sbl(P) = ( |M_h| + |M_v| ) / 255

where M_h and M_v are calculated using the following formulas:

M_h = (G_3 + 2G_5 + G_8) − (G_1 + 2G_4 + G_6)
M_v = (G_6 + 2G_7 + G_8) − (G_1 + 2G_2 + G_3)

where G_1, G_2, G_3, G_4, G_5, G_6, G_7 and G_8 represent the gray levels of the pixels in the neighborhood of the pixel whose edge value is to be calculated (e.g., P_1, P_2, P_3, P_4, P_5, P_6, P_7 and P_8 in fig. 5).
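A direct transcription of the simplified formula follows. It assumes the G_1..G_8 layout of fig. 5 to be row-major around P (P_1..P_3 the top row, P_4 and P_5 the left and right neighbors, P_6..P_8 the bottom row), which is a guess about the figure:

```python
def sobel_edge_value(gray, x, y):
    # simplified Sobel edge value Sbl(P) = (|M_h| + |M_v|) / 255;
    # gray: 2-D numpy array; (x, y) must not lie on the image border
    g = gray[y - 1:y + 2, x - 1:x + 2].astype(float)  # 3 x 3 neighborhood
    m_h = (g[0, 2] + 2 * g[1, 2] + g[2, 2]) - (g[0, 0] + 2 * g[1, 0] + g[2, 0])
    m_v = (g[2, 0] + 2 * g[2, 1] + g[2, 2]) - (g[0, 0] + 2 * g[0, 1] + g[0, 2])
    return (abs(m_h) + abs(m_v)) / 255.0
```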
Edge values of pixels near the edge of the ellipse are calculated to adjust the edge of the ellipse.
Suppose P₀(x₀, y₀) is the edge pixel under consideration. The pixels close to P₀(x₀, y₀) are defined as the following set:

S = {P(x, y) : y = y₀, |x − x₀| < T}

where T represents a threshold. For example, T may be 1/8 of the major axis of the ellipse identified in step 310.
Alternatively, the pixels close to P₀(x₀, y₀) may be defined as the following set:

S = {P(x, y) : x = x₀, |y − y₀| < T}

For each pixel in the set S, the edge value is calculated using the Sobel algorithm, and the pixel with the maximum edge value is selected as the new edge pixel and used in place of P₀(x₀, y₀).
Alternatively, the edge value may be calculated by the Canny algorithm, the Prewitt algorithm, the Roberts algorithm, the Laplacian-of-Gaussian method, the zero-crossing method, and the like. The particular kind of edge value and the method of calculating it do not limit the present invention.
Fig. 6 is a block diagram of an apparatus for detecting a human skin color region in an image according to another embodiment of the present invention.
In fig. 6, reference numeral 601 denotes a calculator, reference numeral 602 denotes a candidate recognition circuit, reference numeral 603 denotes a skin color pixel identification circuit, and reference numeral 605 denotes an ellipse identification and adjustment circuit.
The calculator 601 receives the image to be processed and calculates a skin color reference vector. The calculator 601 may identify the skin tone reference portion directly in the image, or first identify a face rectangle in the image and then identify the skin tone reference portion in the face rectangle. Then, the calculator 601 calculates a skin color reference vector based on the skin color reference part.
The candidate recognition circuit 602 recognizes a face rectangle in an image.
The skin color pixel identification circuit 603 identifies skin color pixels based on the distance between the skin color reference vector and the color vector of each pixel in the candidate face rectangle; the distances of pixels in the neighborhood of the pixel to be identified are also considered. Reference may be made to steps 306 to 309 in fig. 3.
The ellipse identification and adjustment circuit 605 identifies an ellipse that surrounds substantially all of the skin color pixels identified by the skin color pixel identification circuit 603. Reference may be made to steps 310 and 311 in fig. 3. The output of the ellipse identification and adjustment circuit 605 can be used to further process the image.
Fig. 7A, 7B, and 7C show examples of a face skin color region. Fig. 7A shows an original image. Fig. 7B shows an ellipse based on flesh tone pixels identified in the face rectangle shown in fig. 7A. Fig. 7C shows an image in which a face skin color region has been detected in the image.
An example of identifying flesh tone pixels is described below.
A face rectangle is identified in the image. The width of the face rectangle is 241 pixels, and its height is 297 pixels. A rectangular skin color reference portion is selected in the face rectangle; the reference portion is 24 pixels wide and 20 pixels high. The R, G, B values of the pixels in the face rectangle are (70, 67, 62), (70, 69, 65), (91, 90, 86), (87, 87, 85), (78, 77, 73), (94, 91, 86), (143, 136, 128), (157, 144, 135), (173, 157, 144), (169, 147, 133), (165, 139, 124), (165, 136, 118), (169, 136, 119), (171, 138, 119), (176, 141, 122), and so on.
The skin color reference vector x̄ is calculated as [177.5506, 143.0911, 126.2267]. The covariance matrix Σ_x of the pixels in the skin color reference portion and its inverse Σ_x⁻¹ are then computed, and the Mahalanobis distance of each pixel is calculated as d = √( (x − x̄)^T Σ_x⁻¹ (x − x̄) ).
Let the first threshold be 5.
Then, the Mahalanobis distances of all pixels in the face rectangle 400 are calculated.
Take pixels (111, 56) and (111, 40) as an example. The color vectors of pixels (111, 56) and (111, 40) are [70/100, 67/100, 62/100] and [165/100, 139/100, 124/100], respectively.
Here, the flesh color reference vector and the color vector of the pixel are calculated using the R (red), G (green), and B (blue) values. To reduce the amount of computation, all of the R (red), G (green), and B (blue) values of a pixel are divided by 100.
The Mahalanobis distance of the pixel (111, 56) is 43.187, which is not less than the first threshold. Therefore, the pixel (111, 56) is not identified as a skin color pixel.
The Mahalanobis distance of the pixel (111, 40) is 1.4322, which is less than the first threshold. Thus, the pixel (111, 40) is identified as a skin color pixel.
Examples of identifying and adjusting ellipses are described below.
As shown in fig. 7A, 4735 pixels are identified as skin color pixels. Their coordinates are (−1.8759, 0.5517), (−1.8759, 0.6069), (−1.8759, 0.6621), (−1.8759, 0.7172), (−1.8759, 0.7724), (−1.8759, 0.8276), (−1.8759, 0.8828), (−1.8759, 0.9379), (−1.8759, 0.9931), (−1.8207, 0.2207), and so on.
The average coordinates z̄ of the 4735 skin color pixels are calculated, and the covariance matrix Σ_z and its inverse Σ_z⁻¹ are computed from them as described for step 310. Let r be 2. The formula for the ellipse is then (z − z̄)^T Σ_z⁻¹ (z − z̄) = 4.
an example of calculating the edge value using the sobel algorithm is as follows.
As shown in fig. 5, the pixels (380, 250) are taken as an example. Will M h The value is calculated as-39 and M is added v The value was calculated to be-3. Therefore, the edge value of the pixel is (| -39| + | -3 |)/255, i.e., the edge value is 0.165. For a further explanation of this example, reference may be made to chinese patent application No. 01132807.X (filed by the same applicant on 6/9/2001).
Fig. 8 schematically illustrates an image processing system capable of implementing the method illustrated in fig. 3. The image processing system in fig. 8 includes a CPU (central processing unit) 801, a RAM (random access memory) 802, a ROM (read only memory) 803, a system bus 804, an HD (hard disk) controller 805, a keyboard controller 806, a serial port controller 807, a parallel port controller 808, a display controller 809, a hard disk 810, a keyboard 811, a camera 812, a printer 813, and a display 814. Among these components, the CPU 801, the RAM 802, the ROM 803, the HD controller 805, the keyboard controller 806, the serial port controller 807, the parallel port controller 808, and the display controller 809 are connected to the system bus 804. The hard disk 810 is connected to the HD controller 805, the keyboard 811 to the keyboard controller 806, the camera 812 to the serial port controller 807, the printer 813 to the parallel port controller 808, and the display 814 to the display controller 809.
The functions of the various components in fig. 8 are well known in the art, and the architecture shown in fig. 8 is conventional. This architecture is used not only in personal computers but also in handheld devices such as Palm PCs, PDAs (personal digital assistants), digital cameras, and the like. In different applications, certain components shown in fig. 8 may be omitted. For example, if the entire system is a digital camera, the parallel port controller 808 and the printer 813 may be omitted, and the system may be implemented as a single-chip microcomputer. If the application software is stored in an EPROM or other non-volatile memory, the HD controller 805 and the hard disk 810 may be omitted.
The entire system shown in fig. 8 is controlled by computer-readable instructions, typically stored as software in the hard disk 810 (or, as described above, in an EPROM or other non-volatile memory). The software may also be downloaded from a network (not shown in the figure). Software stored in the hard disk 810 or downloaded from the network may be loaded into the RAM 802 and executed by the CPU 801 to perform the functions defined by the software.
Based on the flowchart of fig. 3, one or more software programs may be developed by a person skilled in the art without inventive effort. The software thus developed performs the image processing method shown in fig. 3.
In a sense, the image processing system shown in fig. 8, if supported by software developed according to the flowchart shown in fig. 3, can realize the same functions as the image processing apparatus shown in fig. 6.
The present invention also provides a storage medium encoded with a machine-readable computer program for detecting a human face skin tone region in an image, the storage medium comprising instructions for causing a processor to implement a method according to the present invention. The storage medium may be any tangible medium, such as a floppy disk, a CD-ROM, or a hard disk drive (e.g., hard disk 810 in FIG. 8).
While the foregoing description refers to particular embodiments of the invention, it will be appreciated by those skilled in the art that these embodiments are merely illustrative and that many changes can be made to them without departing from the principles of the invention, the scope of which is defined by the appended claims.

Claims (17)

1. A method for detecting a face skin color region in an image is characterized by comprising the following steps:
identifying a face region in the image;
determining a skin color reference part in the identified human face region;
calculating a skin color reference vector of the image based on color values of pixels included in the skin color reference portion;
identifying skin color pixels in the face region based on the skin color reference vector;
identifying an ellipse substantially surrounding the skin color pixels; and
adjusting the edge of the ellipse, based on edge values of pixels on the edge of the ellipse and edge values of pixels around the edge, to obtain the face skin color region.
2. The method according to claim 1, characterized in that said skin color reference portion is the area between the two eyes in said face region.
3. The method according to claim 1, characterized in that said skin color reference vector comprises an R value, a G value and a B value, wherein
the R value is an average red value of pixels included in the skin color reference portion,
the G value is an average green value of pixels included in the skin color reference portion, and
the B value is an average blue value of pixels included in the skin color reference portion.
4. A method according to claim 3, characterized in that said step of identifying skin color pixels in said face region comprises:
calculating a distance between the skin color reference vector and a color vector of each pixel included in the face region; and
and if the distance of one pixel and the distance of each pixel in the neighborhood of the pixel are both smaller than a first threshold value, identifying the pixel as a skin color pixel.
5. A method according to claim 4, characterized in that the distance of a pixel is calculated by

d = √( (x − x̄)^T Σ_x⁻¹ (x − x̄) )

where x represents the color vector of the pixel, x̄ represents the skin color reference vector, Σ_x represents the covariance matrix of the pixels in the skin color reference portion, and T represents transposition; and further characterized in that said first threshold is in the range of 1 to 20.
6. A method according to claim 5, characterized in that said first threshold value is 5.
7. The method of claim 1, wherein the ellipse satisfies

(z − z̄)^T Σ_z⁻¹ (z − z̄) = r²

wherein z represents the coordinates of any one of the skin color pixels; z̄ represents the average coordinates of all of the skin color pixels; Σ_z represents the covariance matrix of all of the skin color pixels; r represents the size of the ellipse; and T represents transposition.
8. The method of claim 1, wherein said step of adjusting the edges of said ellipse comprises the steps of:
for each edge pixel located on the edge of the ellipse, calculating a feature value of the edge pixel, calculating a feature value of each pixel close to the edge pixel, selecting the largest feature value, and replacing the edge pixel with the pixel having the largest feature value.
9. A method according to claim 8, characterized in that the feature values are edge values calculated with the Sobel algorithm.
10. An apparatus for detecting a face skin tone region in an image, comprising:
a candidate recognition circuit for recognizing a face region in the image;
means for determining a skin color reference portion in the identified face region;
a calculator for calculating a skin color reference vector for the image based on color values of pixels included in the skin color reference portion;
a skin color pixel identification circuit for identifying skin color pixels in the face region based on the skin color reference vector; and
an ellipse identification and adjustment circuit for identifying an ellipse that substantially surrounds the skin color pixels and for adjusting the edge of the ellipse, based on edge values of pixels on the edge of the ellipse and edge values of pixels around the edge, to obtain the face skin color region.
11. The apparatus of claim 10, wherein said skin color reference portion is the area between the two eyes in said face region.
12. The apparatus according to claim 10, wherein said skin color reference vector includes an R value, a G value and a B value, wherein
the R value is an average red value of pixels included in the skin color reference portion,
the G value is an average green value of pixels included in the skin color reference portion, and
the B value is an average blue value of pixels included in the skin color reference portion.
13. The apparatus of claim 12, wherein said skin color pixel identification circuit comprises:
means for calculating a distance between the skin color reference vector and the color vector of each pixel included in the face region; and
means for identifying a pixel as a skin color pixel if the distance of the pixel and the distance of each pixel in the neighborhood of the pixel are both less than a first threshold.
14. The apparatus of claim 13, wherein the distance of a pixel is calculated by

d = √( (x − x̄)^T Σ_x⁻¹ (x − x̄) )

where x represents the color vector of the pixel, x̄ represents the skin color reference vector, Σ_x represents the covariance matrix of the pixels in the skin color reference portion, and T represents transposition; and further characterized in that said first threshold is in the range of 1 to 20.
15. The apparatus of claim 14 wherein said first threshold is 5.
16. The apparatus of claim 10, wherein said ellipse identification and adjustment circuit uses

(z − z̄)^T Σ_z⁻¹ (z − z̄) = r²

to identify said ellipse, wherein z represents the coordinates of any one of the skin color pixels; z̄ represents the average coordinates of all of the skin color pixels; Σ_z represents the covariance matrix of all of the skin color pixels; r represents the size of the ellipse; and T represents transposition.
17. The apparatus of claim 10, wherein:
for each edge pixel located on the edge of the ellipse, the ellipse identification and adjustment circuit calculates a feature value of the edge pixel, calculates a feature value of each pixel proximate to the edge pixel, selects the largest feature value, and replaces the edge pixel with the pixel having the largest feature value.
CNB2004100865046A 2004-10-21 2004-10-21 Method, device and storage medium for detecting face complexion area in image Expired - Fee Related CN100377164C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2004100865046A CN100377164C (en) 2004-10-21 2004-10-21 Method, device and storage medium for detecting face complexion area in image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2004100865046A CN100377164C (en) 2004-10-21 2004-10-21 Method, device and storage medium for detecting face complexion area in image

Publications (2)

Publication Number Publication Date
CN1763765A CN1763765A (en) 2006-04-26
CN100377164C true CN100377164C (en) 2008-03-26

Family

ID=36747891

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2004100865046A Expired - Fee Related CN100377164C (en) 2004-10-21 2004-10-21 Method, device and storage medium for detecting face complexion area in image

Country Status (1)

Country Link
CN (1) CN100377164C (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8111874B2 (en) * 2007-12-04 2012-02-07 Mediatek Inc. Method and apparatus for image capturing
CN102096802B (en) * 2009-12-11 2012-11-21 华为技术有限公司 Face detection method and device
JP2011133977A (en) * 2009-12-22 2011-07-07 Sony Corp Image processor, image processing method, and program
CN102324036B (en) * 2011-09-02 2014-06-11 北京新媒传信科技有限公司 Method and device for acquiring human face skin color region from image
CN103839250B (en) * 2012-11-23 2017-03-01 诺基亚技术有限公司 The method and apparatus processing for face-image
CN105205437B (en) * 2014-06-16 2018-12-07 浙江宇视科技有限公司 Side face detection method and device based on contouring head verifying
CN105224917B (en) * 2015-09-10 2019-06-21 成都品果科技有限公司 A kind of method and system using color space creation skin color probability map
CN106991360B (en) * 2016-01-20 2019-05-07 腾讯科技(深圳)有限公司 Face identification method and face identification system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1255225A2 (en) * 2001-05-01 2002-11-06 Eastman Kodak Company Method for detecting eye and mouth positions in a digital image
CN1411284A (en) * 2001-10-05 2003-04-16 Lg电子株式会社 Method for testing face by image
JP2003308530A (en) * 2002-04-15 2003-10-31 Canon I-Tech Inc Image recognizer
CN1492379A (en) * 2002-10-22 2004-04-28 中国科学院计算技术研究所 Method for covering face of news interviewee using quick face detection


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Automatic face image detection and recognition. Chen Weibin, Chen Qiquan, Qiu Wenyu. Application Research of Computers, 2003 *
A survey of face recognition technology. He Dongfeng, Ling Jie. Microcomputer Development, vol. 13, no. 12, 2003 *
A face detection method for color images based on support vector machines. Feng Yuanjian, Shi Pengfei. Journal of Shanghai Jiaotong University, vol. 37, no. 6, 2003 *
A face detection and recognition system for color image sequences. Ling Xufeng, Yang Jie, Ye Chenzhou. Acta Electronica Sinica, vol. 31, no. 4, 2003 *

Also Published As

Publication number Publication date
CN1763765A (en) 2006-04-26

Similar Documents

Publication Publication Date Title
US7376270B2 (en) Detecting human faces and detecting red eyes
Chiang et al. A novel method for detecting lips, eyes and faces in real time
US8209172B2 (en) Pattern identification method, apparatus, and program
WO2019232866A1 (en) Human eye model training method, human eye recognition method, apparatus, device and medium
CN106056064B (en) A kind of face identification method and face identification device
US9792494B2 (en) Image processing apparatus, method, and program capable of recognizing hand gestures
US8351708B2 (en) Information processing apparatus, information processing method, computer program, and recording medium
JP4414401B2 (en) Facial feature point detection method, apparatus, and program
JP2002342756A (en) Method for detecting position of eye and mouth in digital image
JPH10214346A (en) Hand gesture recognizing system and its method
WO2006013913A1 (en) Object image detection device, face image detection program, and face image detection method
JP2003030667A (en) Method for automatically locating eyes in image
KR20010103631A (en) System and method for biometrics-based facial feature extraction
JPH10214346A6 (en) Hand gesture recognition system and method
JP2007042072A (en) Tracking apparatus
EP2691915A1 (en) Method of facial landmark detection
JP2007233871A (en) Image processor, control method for computer, and program
US20070014433A1 (en) Image processing apparatus and image processing method
JP2005309765A (en) Image recognition device, image extraction device, image extraction method and program
CN100377164C (en) Method, device and storage medium for detecting face complexion area in image
JP4092059B2 (en) Image recognition device
US7403636B2 (en) Method and apparatus for processing an image
JP2007026308A (en) Image processing method and image processor
JP2006323779A (en) Image processing method and device
JP2007047949A (en) Apparatus for tracking mouse and computer program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080326

Termination date: 20161021

CF01 Termination of patent right due to non-payment of annual fee