CN1635543A - Method and apparatus for detecting human face - Google Patents

Method and apparatus for detecting human face

Info

Publication number
CN1635543A
CN1635543A, CN 200310116033, CN200310116033A
Authority
CN
China
Prior art keywords
face
candidate
person
detect
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200310116033
Other languages
Chinese (zh)
Other versions
CN100418106C (en)
Inventor
陈新武
王健民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to CNB2003101160334A priority Critical patent/CN100418106C/en
Priority to US11/023,965 priority patent/US7376270B2/en
Publication of CN1635543A publication Critical patent/CN1635543A/en
Application granted granted Critical
Publication of CN100418106C publication Critical patent/CN100418106C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Images

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

This invention relates to a method and apparatus for detecting human faces, comprising the following steps: obtaining a sample of the skin color of a candidate face; computing the relative distance in color space between every pixel in the candidate face and the sample; forming a feature vector based on said relative distances; and analyzing the candidate face to determine whether it is a human face. In addition, the method can process human faces of different races.

Description

Method and apparatus for detecting human faces
Technical field
The present invention relates to image processing and, more particularly, to a method and apparatus for detecting human faces.
Background technology
Recently, face detection technology has been widely used in various applications. For example, it can be applied in digital video devices and office equipment such as digital cameras, digital camcorders, printers, and scanners. As is well known, a portrait is generally the major part of a digital image. In order to make an image containing one or more human faces reflect its subject more realistically, some processing should be performed, for example automatically retouching the skin color of the face, removing red eyes, removing glasses, and judging facial expressions. All of the above processing is based on face detection technology.
Face detection is the process of locating the positions of human faces in an image. It is a challenging task, however, because of the variability of scale, location, orientation (upright, rotated), and pose (frontal, profile). Facial expression and lighting conditions also change the overall appearance of a face.
Conventionally, face detection methods are divided into four categories: knowledge-based methods, feature-based methods, template-matching methods, and appearance-based methods.
Knowledge-based methods encode human knowledge of what constitutes a typical face. Usually, the rules capture the relationships between facial features. These methods are mainly used for face localization.
Feature-based methods aim to find structural features that persist even when the pose, viewpoint, or lighting conditions vary, and then use these features to locate faces. These methods are mainly used for face localization.
In the method for template matches, several test patterns of a face of storage making as a whole description face, or are described facial characteristics respectively.In order to detect, calculate the correlativity between a width of cloth input picture and the institute's graphics.These methods have been used for the face location and have detected two aspects.
In appearance-based methods, models are learned from a set of training images that should capture the representative variability of facial appearance. The learned models are then used for detection. These methods are mainly used for face detection.
In feature-based methods, invariant features of the face are found and used for detection. The approach is based on the observation that humans can easily detect faces and objects under different poses and lighting conditions, so there must exist properties or features that are invariant to these variations. Numerous methods have been proposed that first detect facial features and then infer the presence of a face. Facial features such as eyebrows, eyes, nose, mouth, and hairline are commonly extracted using edge detectors. Based on the extracted features, a statistical model is built to describe their relationships and to verify the existence of a face. A problem with these feature-based algorithms is that image features can be severely corrupted by illumination, noise, and occlusion. Feature boundaries of the face can be weakened, and shadows can cause numerous strong edges, which together render perceptual grouping algorithms useless.
Usually, in feature-based methods, facial features, facial texture, and skin color are taken as the invariant features. In addition, several facial properties such as skin color, size, and shape are combined to find candidate faces, which are then verified using local detailed features such as eyebrows, nose, and hair.
Human skin color has been used, and has proven to be an effective feature, in many face detection applications. Color information is an efficient tool for identifying facial regions and specific facial features, provided that the skin-color model can be properly adapted to different lighting environments. However, such skin-color models perform poorly where the spectrum of the light source varies significantly. In other words, the color appearance is often unstable due to changes in background and foreground illumination.
Moreover, many of the above models handle only images of a specified race. If a skin-color model designed for a particular race is used to process an image containing one or more faces of other races, erroneous results may occur.
These existing skin-color models use absolute values of the skin color of candidate faces to determine whether they are real human faces. In other words, these models explicitly define the boundary of the skin cluster in a certain color space.
It is therefore difficult to determine accurately whether a captured image contains a real human face under different lighting conditions. Furthermore, existing skin-color models cannot handle images containing faces of different races.
Summary of the invention
According to the present invention, a method for detecting human faces is provided that can solve the above problems of the prior art.
The method for detecting human faces comprises the following steps: obtaining a sample of the skin color of a candidate face; calculating the relative distance in color space between every pixel within said candidate face and said sample; forming a feature vector of said candidate face based on said relative distances; and analyzing said candidate face to determine whether it is a human face.
In addition, an apparatus for detecting human faces is provided, comprising an input device, an output device, and a computer. The computer comprises: an obtaining means for obtaining a sample of the skin color of a candidate face; a calculating means for calculating the relative distance in color space between every pixel in the candidate face and the sample; a forming means for forming a feature vector of the candidate face based on the relative distances; and an analyzing means for analyzing the candidate face to determine whether it is a human face.
Because the method and apparatus for detecting human faces use this relative skin-color model, they can be used to determine accurately whether a captured image contains a real human face under different lighting conditions; the determination result is not affected by the lighting conditions. They can also be used to handle images containing faces of different races.
The above and other objects, effects, features, and advantages of the present invention will become more apparent from the following description of embodiments taken in conjunction with the accompanying drawings.
Description of drawings
Fig. 1 is a block diagram of an apparatus suitable for implementing the method for detecting human faces according to the present invention.
Fig. 2 is a flowchart of an image processing method that incorporates the method for detecting human faces according to the present invention.
Fig. 3 is a flowchart of the method for detecting human faces according to the present invention.
Fig. 4 shows a candidate face with a given region A used to obtain RGB sample values of the facial skin color.
Figs. 5a to 5d illustrate a process of the method for detecting human faces.
Figs. 6a and 6b show two candidate faces, respectively, one of which is a human face and the other of which is not.
Embodiment
In the following description, a preferred embodiment of the present invention is described as a method for detecting human faces that would ordinarily be implemented as a software program. Those skilled in the art will readily recognize that the equivalent of such software can also be constructed in hardware.
The software program may be stored in a computer-readable storage medium, which may comprise, for example: magnetic storage media, such as a magnetic disk (e.g., a floppy disk or a hard disk drive) or magnetic tape; optical storage media, such as an optical disc; solid-state electronic storage devices, such as random-access memory (RAM) or read-only memory (ROM); or any other physical device or medium used to store a computer program.
Referring to Fig. 1, a typical image processing apparatus suitable for implementing the present invention is illustrated. The image processing apparatus comprises an input device 10 for inputting a digital image, a computer 12 for processing the input digital image to produce an output image, and an output device 14 for receiving and handling the output image. The input device 10 may be a digital camera or scanner, an Internet connection, a separate storage device, or other similar device. The computer 12 may be a personal computer, an MPU, or other similar device. The output device 14 may be a digital printer, a display device, an Internet connection, a separate storage device, or other similar device.
Fig. 2 is a flowchart of an image processing method that incorporates the method for detecting human faces according to the present invention, which will be described in more detail below. This image processing method can be used to locate and detect human faces in a color image from a sequence of video images. The video sequence may, for example, be fed in real time by a video camera, or retrieved from an image database.
As shown generally in Fig. 2, at step S101 a digital image in red, green, and blue (RGB) format is obtained. For example, this step may comprise storing video data from a video camera. At step S102, the image is searched in order to locate regions that constitute candidate faces. At step S103, it is determined whether any candidate face has been found. If not, step S101 is executed, and steps S102 and S103 are repeated until at least one candidate face is found in the latest image.
Steps S101 to S103 above are performed conventionally.
Then the process proceeds to step S104 to analyze a candidate face, in order to determine whether the candidate face is a human face (step S105). If at step S105 the candidate face is determined not to be a human face, the process returns to step S104 to analyze the next candidate face; otherwise, it proceeds to step S106 to further process the image containing the face, for example to remove red eyes from the face.
After that, the process proceeds to step S107 to determine whether all candidate faces have been tested. If the answer is "Yes", the process ends; otherwise, it returns to step S104 to analyze another candidate face.
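The loop of steps S101-S107 can be sketched as follows. Note that `locate_candidate_faces`, `is_human_face`, and `remove_red_eye` are hypothetical stand-ins (not named in the patent) for the conventional candidate locator of step S102, the verification of Fig. 3, and the further processing of step S106:

```python
def process_stream(frames, locate_candidate_faces, is_human_face, remove_red_eye):
    """Scan frames until at least one candidate face is found (S101-S103),
    then verify each candidate (S104-S105) and post-process real faces
    (S106), looping over all candidates (S107)."""
    for image in frames:                                 # S101: acquire an RGB image
        candidates = locate_candidate_faces(image)       # S102: locate candidate faces
        if not candidates:                               # S103: none found -> next frame
            continue
        faces = []
        for candidate in candidates:                     # S107: loop over all candidates
            if is_human_face(candidate):                 # S104/S105: verify the candidate
                faces.append(remove_red_eye(candidate))  # S106: further processing
        return image, faces
    return None, []

# Minimal stub demonstration: frame 0 has no candidates, frame 1 has two,
# of which only the first verifies as a face.
frames = [[], [("c1",), ("c2",)]]
image, faces = process_stream(
    frames,
    locate_candidate_faces=lambda im: im,
    is_human_face=lambda c: c == ("c1",),
    remove_red_eye=lambda c: c,
)
```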
Fig. 3 is a flowchart of the method for detecting human faces, which analyzes a candidate face to determine whether it is a human face. In other words, Fig. 3 shows in detail the processing of steps S104 and S105 of Fig. 2.
First, a sample of the skin color of the candidate face is obtained. In the preferred embodiment, the sample of the skin color consists of the R (red), G (green), and B (blue) values of the skin color of the candidate face. Alternatively, the sample may consist of other values that reflect the skin color of the candidate face.
As shown in Fig. 3, a candidate face comprising a plurality of pixels is read from the memory (not shown) of the computer 12 (Fig. 1) (step S200). The RGB values of all pixels within the candidate face can then be obtained (step S201).
At step S202, a given region within the candidate face is determined, and the mean RGB values and the covariance matrix of the pixels in this region are calculated as the sample of the skin color of the candidate face.
According to the preferred embodiment, the given region is determined as follows:
A coordinate system is set up in which the X axis passes through the two points regarded as the left eye and the right eye of the candidate face, the origin is located at the midpoint between these two points, and the Y axis runs in the direction toward the point regarded as the nose of the candidate face (see Fig. 4).
In Fig. 4, the labels L and R indicate the two eyes of the candidate face, and the label o indicates the origin of the coordinate system.
Assume that the distance between either of the two eye points and the origin is 1. The region A = {(x, y) : |x| < 0.8, 0 < y < 1} then always lies within the face. In this case, the RGB values C_r, C_g, and C_b of all pixels in region A can be obtained; accordingly, the mean RGB values c̄_r, c̄_g, and c̄_b of the pixels in region A, as well as their covariance matrix, can also be obtained and taken as the sample of the skin color of the candidate face (step S202).
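Assuming each pixel carries its (x, y) position in the eye-centered coordinate system together with its RGB values, the sampling of region A in step S202 could be sketched in NumPy as follows (the function name and data layout are illustrative, not from the patent):

```python
import numpy as np

def skin_sample(pixels_xy_rgb):
    """Given rows of (x, y, r, g, b) in the eye-centered coordinate system,
    keep the region A = {(x, y): |x| < 0.8, 0 < y < 1} and return the mean
    RGB vector and covariance matrix of the pixels inside it (step S202)."""
    a = np.asarray(pixels_xy_rgb, dtype=float)
    in_a = (np.abs(a[:, 0]) < 0.8) & (a[:, 1] > 0) & (a[:, 1] < 1)
    rgb = a[in_a, 2:5]
    mean = rgb.mean(axis=0)
    cov = np.cov(rgb, rowvar=False)  # 3x3 covariance of the R, G, B channels
    return mean, cov

# Toy demonstration: the last two pixels fall outside region A.
pixels = [
    ( 0.0, 0.5, 100,  50,  40),
    ( 0.1, 0.9, 110,  60,  50),
    (-0.5, 0.2,  90,  40,  30),
    ( 1.0, 0.5, 255, 255, 255),  # |x| >= 0.8 -> excluded
    ( 0.0, 1.5,   0,   0,   0),  # y >= 1 -> excluded
]
mean, cov = skin_sample(pixels)
```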
Second, the relative distance in color space between every pixel in the candidate face and the sample is calculated (step S203). In the preferred embodiment, the relative distance is the Mahalanobis distance between each pixel in the candidate face and the sample. Alternatively, the relative distance may be the Euclidean distance.
In the preferred embodiment, the Mahalanobis distance in color space between each pixel in the candidate face and the sample is calculated by the following formula:
d = (c − c̄)^T Σ_c^(−1) (c − c̄)      (1)
where d denotes the Mahalanobis distance of a pixel; c̄ denotes the mean RGB vector composed of the mean RGB values c̄_r, c̄_g, and c̄_b of the pixels in the given region A; c denotes the RGB vector composed of the acquired values C_r, C_g, and C_b of the pixel; Σ_c^(−1) denotes the inverse of the covariance matrix of the pixels in the given region A; and "T" denotes the transpose of a vector or matrix.
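A minimal NumPy sketch of formula (1), evaluating d for every pixel at once (the function name and vectorized layout are assumptions of this sketch):

```python
import numpy as np

def mahalanobis_map(rgb_pixels, mean, cov):
    """Evaluate d = (c - c_bar)^T * inv(Sigma_c) * (c - c_bar) of formula (1)
    for an array of RGB pixels against the skin sample (mean, cov)."""
    diff = np.asarray(rgb_pixels, dtype=float) - np.asarray(mean, dtype=float)
    inv = np.linalg.inv(np.asarray(cov, dtype=float))
    # einsum applies the quadratic form to each row of `diff` at once
    return np.einsum('...i,ij,...j->...', diff, inv, diff)

# Demonstration with an identity covariance: d reduces to the squared
# Euclidean distance from the mean.
d = mahalanobis_map([[1.0, 2.0, 2.0], [0.0, 0.0, 0.0]], [0.0, 0.0, 0.0], np.eye(3))
```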
Fig. 5a shows a candidate face. Fig. 5b shows a picture in which the above Mahalanobis distance values are taken as the brightness of the pixels. In Fig. 5b, the darker a pixel, the smaller the Mahalanobis distance between that pixel and the sample of the skin color.
Third, a feature vector of the candidate face is formed based on the relative distances. The following describes the formation of the feature vector in the case where the Mahalanobis distance is taken as the relative distance.
At step S204, the candidate face is divided into m × n blocks; the average Mahalanobis distance of each block can then be calculated, yielding m × n average distances, which constitute a temporary vector of m × n dimensions (step S205).
m and n are positive integers in the range 1 to 40. In the present embodiment, m = 4 and n = 5 (Fig. 5c).
Next, the Mahalanobis distances of all pixels in the candidate face are compared with a first threshold, to obtain those pixels whose Mahalanobis distance is less than the first threshold (step S206).
The first threshold is a real number in the range 2 to 13. In the present embodiment, it is 5.
Then, statistics of the coordinates (x and y coordinates) of the pixels whose Mahalanobis distance is less than the first threshold can be calculated, namely the mean (two dimensions), the variance (two dimensions), and the covariance (one dimension) (step S207).
Let x̄ and Σ_x denote the mean and the covariance matrix of these coordinates, respectively; then (x − x̄)^T Σ_x^(−1) (x − x̄) = 1 describes the ellipse shown in Fig. 5d.
At step S208, the five-dimensional statistics are appended to the temporary vector formed at step S205, to form a full feature vector.
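Steps S204-S208 can be sketched as below. The patent does not state whether population or sample statistics are intended, so this sketch assumes population (biased) variance and covariance, and block boundaries are drawn with simple integer division; both are assumptions:

```python
import numpy as np

def feature_vector(dist_map, coords_x, coords_y, m=4, n=5, first_threshold=5.0):
    """Build the (m*n + 5)-dimensional feature vector of steps S204-S208:
    block-average Mahalanobis distances, plus mean (2), variance (2) and
    covariance (1) of the coordinates of pixels below the first threshold."""
    h, w = dist_map.shape
    # S204/S205: average Mahalanobis distance of each of the m x n blocks
    blocks = [dist_map[i*h//m:(i+1)*h//m, j*w//n:(j+1)*w//n].mean()
              for i in range(m) for j in range(n)]
    # S206: pixels whose distance is below the first threshold
    mask = dist_map < first_threshold
    x, y = coords_x[mask], coords_y[mask]
    # S207: coordinate statistics (population form assumed here)
    stats = [x.mean(), y.mean(), x.var(), y.var(),
             ((x - x.mean()) * (y - y.mean())).mean()]
    # S208: append the five statistics to the temporary vector
    return np.array(blocks + stats)

# Toy 4x5 distance map so that every block is a single pixel; only the
# first row (distances 0..4) falls below the threshold of 5.
dist_map = np.arange(20, dtype=float).reshape(4, 5)
xx, yy = np.meshgrid(np.arange(5.0), np.arange(4.0))
fv = feature_vector(dist_map, xx, yy, m=4, n=5, first_threshold=5.0)
```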
Finally, the candidate face is analyzed to determine whether it is a human face.
At step S209, the inner product of the full feature vector and a given weight vector w is calculated.
At step S210, it is judged whether the inner product is greater than a second threshold. If the answer is "Yes", the candidate face is determined to be a real face, and the process then proceeds to step S106; otherwise, it proceeds to step S200.
The given weight vector w and the second threshold can be obtained by a training procedure using a conventional method, for example the linear Fisher discriminant or a linear support vector machine (SVM).
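As one possible training procedure, a minimal linear Fisher discriminant can be written directly in NumPy. This is only an illustration of the named conventional method under simple assumptions (threshold placed midway between the projected class means), not the patent's actual training code:

```python
import numpy as np

def fisher_weights(pos, neg):
    """Linear Fisher discriminant: w = Sw^{-1} (mu_pos - mu_neg), where Sw
    is the sum of the two within-class covariance matrices; the threshold
    is placed midway between the projected class means."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    mu_p, mu_n = pos.mean(axis=0), neg.mean(axis=0)
    sw = np.cov(pos, rowvar=False) + np.cov(neg, rowvar=False)
    w = np.linalg.solve(sw, mu_p - mu_n)
    thresh = 0.5 * (w @ mu_p + w @ mu_n)
    return w, thresh

# Two toy 2-D classes; the discriminant should separate them.
pos = [[2.0, 2.0], [3.0, 3.0], [2.0, 3.0], [3.0, 2.0]]
neg = [[-2.0, -2.0], [-3.0, -3.0], [-2.0, -3.0], [-3.0, -2.0]]
w, t = fisher_weights(pos, neg)
```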
In this preferred embodiment, the second threshold is −1094, and the given weight vector w is as follows:
w[25] =
{  12.24,    1.29,  -18.39,   -9.88,    9.46,
   -6.53,  -14.41, -189.22,  -10.10,   -6.08,
  -20.15,   -8.89, -210.78,   12.08,   -5.90,
   18.48,   -1.56,  -15.41,   -6.94,    7.27,
   74.87,   12.95, -301.22,    7.61,  -11.36 }
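The decision of steps S209-S210 then reduces to a single inner-product test, sketched here (the function name is assumed, and the 2-D vectors in the demonstration merely mimic the inner products of Examples 1 and 2 below):

```python
import numpy as np

SECOND_THRESHOLD = -1094.0  # value chosen in the preferred embodiment

def is_face(feature_vec, w, second_threshold=SECOND_THRESHOLD):
    """Steps S209-S210: accept the candidate as a human face when the
    inner product of its feature vector with the trained weight vector w
    exceeds the second threshold."""
    return float(np.dot(feature_vec, w)) > second_threshold

# Toy vectors whose inner products equal 617 and -2988.4, the values
# obtained for candidate faces C1 and C2 in the examples.
accepted = is_face([300.0, 317.0], [1.0, 1.0])
rejected = is_face([-1500.0, -1488.4], [1.0, 1.0])
```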
The apparatus for detecting human faces according to the present invention will now be described in detail.
As mentioned above, the apparatus shown in Fig. 1 is suitable for implementing the method for detecting human faces according to the present invention. In order to carry out the present invention, the computer 12 comprises: an obtaining means for obtaining a sample of the skin color of a candidate face; a calculating means for calculating the relative distance in color space between every pixel in the candidate face and the sample; a forming means for forming a feature vector of the candidate face based on the relative distances; and an analyzing means for analyzing the candidate face to determine whether it is a human face.
Taking the two candidate faces C1 and C2 shown in Fig. 6a and Fig. 6b, respectively, as two examples, the method for detecting human faces according to the present invention will now be described in more detail.
Example 1
Table 1 lists the RGB values of some of the pixels in the given region A1 = {(x, y) : |x| < 0.8, 0 < y < 1} of candidate face C1.
Table 1
  R1    G1    B1
 109    68    62
 144   103    97
 171   130   126
 165   124   120
 194   153   149
 194   153   149
 207   166   164
 219   178   176
 ...   ...   ...
 208   163   158
 210   167   161
 212   171   165
 213   176   168
 211   177   168
 211   177   168
 206   172   162
Here, the R1, G1, and B1 values in each row of Table 1 indicate, respectively, the R, G, and B values of one sampled pixel in the given region A1. There are 1369 sampled pixels in region A1, 15 of which are listed in Table 1.
The mean RGB vector c̄, the covariance matrix Σ_c, and the inverse Σ_c^(−1) of the covariance matrix are then as follows:
c̄ = [215  175  169]
Σ_c =
[  861.45   917.81   914.19
   917.81  1009.90  1007.10
   914.19  1007.10  1018.00 ]
Σ_c^(−1) =
[  0.036632  -0.035987   0.0027046
  -0.035987   0.10833   -0.074846
   0.0027046 -0.074846   0.072594 ]
Next, the Mahalanobis distances of all pixels in candidate face C1 are calculated based on formula (1). For simplicity, only the Mahalanobis distances of the 25 pixels at the upper left of candidate face C1 are listed below:
22.58  22.74  17.69  17.81  17.88
22.69  22.69  17.69  18.16  18.66
17.51  17.42  12.97  13.42  14.10
17.95  17.88  13.30  13.49  13.63
18.32  18.57  14.01  13.78  13.42
Candidate face C1 is divided into 4 × 5 blocks; the average Mahalanobis distance of each block can then be calculated, yielding the following 20 average Mahalanobis distances:
 64.75    7.86    1.48    4.76   47.74
 11.37    1.29    1.52    2.52    3.94
 76.73    4.74    1.50    0.50    0.23
120.42   45.33   13.80   29.00   55.92
Thus, the coordinate values of the pixels whose Mahalanobis distance is less than a predetermined first threshold (in this example, 5) can be obtained.
In Example 1, there are 4313 pixels whose Mahalanobis distance is less than 5. Listed below are the coordinates of 15 of those 4313 pixels, where the x and y values in each row of Table 2 indicate, respectively, the coordinate values of one pixel.
Table 2
    x      y
 -1.39  -1.00
 -1.28  -0.50
 -1.17  -0.83
 -1.06  -1.50
 -0.94  -0.94
 -0.83   0.22
 -0.72   1.67
 -0.56   0.72
 -0.39   0.67
 -0.22  -0.83
 -0.11   0.67
  0.00   3.17
  0.17   0.94
  0.33   0.28
  0.50   0.39
Then, the statistics of the coordinates (x and y coordinates) of the pixels whose Mahalanobis distance is less than the first threshold can be obtained, namely the mean, variance, and covariance (step S207).
The above-mentioned 4313 pixels have the following statistics:
Mean of the coordinates: −0.52  1.05
Variance of the coordinates: 1.20  1.25
Covariance of the x and y coordinates: 0.25
Thus, the 25-dimensional feature vector of candidate face C1 can be formed as follows:
 64.75    7.86    1.48    4.76   47.74
 11.37    1.29    1.52    2.52    3.94
 76.73    4.74    1.50    0.50    0.22
120.42   45.33   13.80   29.00   55.92
 -0.52    1.05    1.20    1.25    0.25
The inner product of the feature vector and the given weight vector w can then be obtained. For candidate face C1, the inner product is 617.
As mentioned above, the second threshold is chosen as −1094; thus, the inner product of candidate face C1 is greater than the second threshold. As a result, candidate face C1 is determined to be a human face.
Example 2
Table 3 lists the RGB values of some of the pixels in the given region A2 = {(x, y) : |x| < 0.8, 0 < y < 1} of candidate face C2.
Table 3
  R2    G2    B2
 147   140   147
 146   139   146
 139   132   139
 138   131   138
 145   138   145
 144   137   144
 138   131   138
 148   141   148
 150   143   150
 131   122   127
 ...   ...   ...
  85    89   100
  78    82    93
 102   109   119
  90    97   107
  92    96   105
  91    95   104
  86    90   101
  95    99   110
 114   118   129
Here, the R2, G2, and B2 values in each row of Table 3 indicate, respectively, the R, G, and B values of one sampled pixel in the given region A2. There are 6241 sampled pixels in region A2, 19 of which are listed in Table 3.
For candidate face C2, c̄, Σ_c, and Σ_c^(−1) are as follows:
c̄ = [116.55  117.94  129.18]
Σ_c =
[ 1098.00  1039.70  1021.50
  1039.70  1000.10   982.31
  1021.50   982.31   976.57 ]
Σ_c^(−1) =
[  0.058394  -0.058995  -0.0017409
  -0.058995   0.14285   -0.081978
  -0.0017409 -0.081978   0.085305 ]
Similarly, the 25-dimensional feature vector of candidate face C2 can be formed as follows:
 1.27   3.30   1.21  19.07   3.59
 7.98   4.29  10.82   2.88   3.61
 1.74   3.43   1.80   4.85   0.43
 7.94   4.99   6.48   1.60   4.68
-0.01   0.96   1.03   1.21   0.09
The inner product of the feature vector of candidate face C2 and the weight vector w is −2988.4, which is less than the second threshold of −1094.
As a result, candidate face C2 is determined not to be a human face.
Obviously, the method and apparatus for detecting human faces according to the present invention can also be used to detect and verify multiple human faces in one image.
As described above, according to the present invention, human faces in an image are detected and verified based on the relative distance in color space between every pixel in a candidate face and the sample. Therefore, the result determined by the method and apparatus for detecting human faces according to the present invention is affected neither by the lighting conditions under which the image was captured nor by the race of the person whose face is to be detected. As a result, the accuracy of determining human faces in an image can be greatly increased.
The present invention has been described in detail with respect to a preferred embodiment, and it will now be apparent to those skilled in the art that, in its broader aspects, many changes and modifications may be made without departing from the spirit of the invention; it is therefore intended that the appended claims cover all such changes and modifications as fall within the true spirit of the invention.

Claims (20)

1. A method for detecting a human face, comprising the steps of:
obtaining a sample of the skin color of a candidate face;
calculating the relative distance in color space between every pixel within said candidate face and said sample;
forming a feature vector of said candidate face based on said relative distances; and
analyzing said candidate face to determine whether it is a human face.
2. The method for detecting a human face as claimed in claim 1, wherein said sample is obtained from the RGB values of the pixels within a given region of said candidate face.
3. The method for detecting a human face as claimed in claim 2, wherein said given region is defined by {(x, y) : |x| < 0.8, 0 < y < 1}.
4. The method for detecting a human face as claimed in claim 1, wherein said relative distance is the Mahalanobis distance calculated by the following formula:
d = (c − c̄)^T Σ_c^(−1) (c − c̄)
where d denotes the Mahalanobis distance of a pixel; c̄ denotes the mean RGB vector composed of the mean RGB values c̄_r, c̄_g, and c̄_b of the pixels within the given region; c denotes the RGB vector composed of the acquired values C_r, C_g, and C_b of the pixel; Σ_c^(−1) denotes the inverse of the covariance matrix of the pixels within the given region; and "T" denotes the transpose of a vector or matrix.
5. The method for detecting a human face as claimed in claim 1, wherein the step of forming said feature vector comprises the steps of:
dividing said candidate face into m × n blocks;
calculating the average Mahalanobis distance of each block;
calculating the mean, variance, and covariance of the coordinates of the pixels whose Mahalanobis distance is less than a first threshold; and
forming said feature vector from the m × n average Mahalanobis distances together with said mean, variance, and covariance.
6. The method for detecting a human face as claimed in claim 1, wherein the step of analyzing said candidate face comprises the steps of:
calculating the inner product of said feature vector and a given weight vector; and
comparing the inner product with a second threshold, to determine whether the candidate face is a human face.
7. The method for detecting a human face as claimed in claim 5, wherein m = 4 and n = 5.
8. The method for detecting a human face as claimed in claim 5, wherein the first threshold is a real number in the range 2 to 13.
9. The method for detecting a human face as claimed in claim 5, wherein the first threshold is 5.
10. The method for detecting a human face as claimed in claim 6, wherein said candidate face is determined to be a human face if the inner product is greater than the second threshold.
11. An apparatus for detecting a human face, comprising an input device (10), an output device (14), and a computer (12),
wherein said computer (12) comprises:
an obtaining means for obtaining a sample of the skin color of a candidate face;
a calculating means for calculating the relative distance in color space between every pixel within the candidate face and the sample;
a forming means for forming a feature vector of the candidate face based on the relative distances; and
an analyzing means for analyzing the candidate face to determine whether it is a human face.
12. The apparatus for detecting a human face as claimed in claim 11, wherein said sample is obtained from the RGB values of the pixels within a given region of said candidate face.
13. The apparatus for detecting a human face as claimed in claim 12, wherein said given region is defined by {(x, y) : |x| < 0.8, 0 < y < 1}.
14. The apparatus for detecting a human face as claimed in claim 11, wherein said relative distance is the Mahalanobis distance calculated by the following formula:
d = (c − c̄)^T Σ_c^(−1) (c − c̄)
where d denotes the Mahalanobis distance of a pixel; c̄ denotes the mean RGB vector composed of the mean RGB values c̄_r, c̄_g, and c̄_b of the pixels within the given region; c denotes the RGB vector composed of the acquired values C_r, C_g, and C_b of the pixel; Σ_c^(−1) denotes the inverse of the covariance matrix of the pixels within the given region; and "T" denotes the transpose of a vector or matrix.
15. The apparatus for detecting a human face as claimed in claim 14, wherein said forming means performs the following functions:
dividing said candidate face into m × n blocks;
calculating the average Mahalanobis distance of each block;
calculating the mean, variance, and covariance of the coordinates of the pixels whose Mahalanobis distance is less than a first threshold; and
forming said feature vector from the m × n average Mahalanobis distances together with said mean, variance, and covariance.
16. The apparatus for detecting a human face as claimed in claim 11, wherein said analyzing means performs the following functions:
calculating the inner product of said feature vector and a given weight vector; and
comparing the inner product with a second threshold, to determine whether the candidate face is a human face.
17. The apparatus for detecting a human face as claimed in claim 15, wherein m = 4 and n = 5.
18. The apparatus for detecting a human face as claimed in claim 15, wherein the first threshold is a real number in the range 2 to 13.
19. The apparatus for detecting a human face as claimed in claim 15, wherein the first threshold is 5.
20. The apparatus for detecting a human face as claimed in claim 16, wherein said analyzing means determines said candidate face to be a human face if the inner product is greater than the second threshold.
CNB2003101160334A 2003-12-29 2003-12-29 Method and apparatus for detecting human face Expired - Fee Related CN100418106C (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CNB2003101160334A CN100418106C (en) 2003-12-29 2003-12-29 Method and apparatus for detecting human face
US11/023,965 US7376270B2 (en) 2003-12-29 2004-12-29 Detecting human faces and detecting red eyes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2003101160334A CN100418106C (en) 2003-12-29 2003-12-29 Method and apparatus for detecting human face

Publications (2)

Publication Number Publication Date
CN1635543A true CN1635543A (en) 2005-07-06
CN100418106C CN100418106C (en) 2008-09-10

Family

ID=34843533

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2003101160334A Expired - Fee Related CN100418106C (en) 2003-12-29 2003-12-29 Method and apparatus for detecting human face

Country Status (1)

Country Link
CN (1) CN100418106C (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100354875C (en) * 2005-09-29 2007-12-12 上海交通大学 Red eye removal method based on human face detection
CN102045162A (en) * 2009-10-16 2011-05-04 电子科技大学 Personal identification system of permittee with tri-modal biometric characteristic and control method thereof
CN104143079A (en) * 2013-05-10 2014-11-12 腾讯科技(深圳)有限公司 Method and system for face attribute recognition
CN101965580B (en) * 2007-10-19 2016-06-08 阿泰克集团公司 System and method for identifying humans based on biometric behavioral settings
US9626597B2 (en) 2013-05-09 2017-04-18 Tencent Technology (Shenzhen) Company Limited Systems and methods for facial age identification

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000048184A (en) * 1998-05-29 2000-02-18 Canon Inc Method for processing image, and method for extracting facial area and device therefor
US6263113B1 (en) * 1998-12-11 2001-07-17 Philips Electronics North America Corp. Method for detecting a face in a digital image
CN1352436A (en) * 2000-11-15 2002-06-05 星创科技股份有限公司 Real-time face identification system
TW569148B (en) * 2002-04-09 2004-01-01 Ind Tech Res Inst Method for locating facial features in an image

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100354875C (en) * 2005-09-29 2007-12-12 上海交通大学 Red eye removal method based on human face detection
CN101965580B (en) * 2007-10-19 2016-06-08 阿泰克集团公司 System and method for identifying humans based on biometric behavioral settings
CN102045162A (en) * 2009-10-16 2011-05-04 电子科技大学 Personal identification system of permittee with tri-modal biometric characteristic and control method thereof
US9626597B2 (en) 2013-05-09 2017-04-18 Tencent Technology (Shenzhen) Company Limited Systems and methods for facial age identification
CN104143079A (en) * 2013-05-10 2014-11-12 腾讯科技(深圳)有限公司 Method and system for face attribute recognition
CN104143079B (en) * 2013-05-10 2016-08-17 腾讯科技(深圳)有限公司 Method and system for face attribute recognition
US9679195B2 (en) 2013-05-10 2017-06-13 Tencent Technology (Shenzhen) Company Limited Systems and methods for facial property identification
US10438052B2 (en) 2013-05-10 2019-10-08 Tencent Technology (Shenzhen) Company Limited Systems and methods for facial property identification

Also Published As

Publication number Publication date
CN100418106C (en) 2008-09-10

Similar Documents

Publication Publication Date Title
CN1975759A Human face recognition method based on structural principal component analysis
CN1977286A (en) Object recognition method and apparatus therefor
CN1845126A (en) Information processing apparatus and information processing method
CN1932847A Method for detecting human faces in colour images with complex backgrounds
CN1822024A (en) Positioning method for human face characteristic point
CN1741039A Facial part position detecting apparatus, method and program
CN1950844A (en) Object posture estimation/correlation system, object posture estimation/correlation method, and program for the same
CN1798237A (en) Method of and system for image processing and computer program
CN1928889A (en) Image processing apparatus and method
CN108921057B (en) Convolutional neural network-based prawn form measuring method, medium, terminal equipment and device
JP2002230547A (en) Digital image processing method for detecting human iris in image and computer program product
CN1794265A Method and device for recognizing facial expressions based on video
CN1892702A (en) Tracking apparatus
CN1794264A Method and system for real-time detection and continuous tracking of human faces in video sequences
CN101034481A (en) Method for automatically generating portrait painting
CN1928895A (en) Image recognition apparatus and its method
CN1901672A (en) Camera system, information processing device and information processing method
CN1871622A (en) Image collation system and image collation method
CN1667355A (en) Image recognition method and image recognition apparatus
CN2765259Y (en) Image recognition device and demonstrator for image recognition device
CN1643540A (en) Comparing patterns
CN106297755A Electronic device and method for musical score image recognition
CN112116582A (en) Cigarette detection and identification method under stock or display scene
CN110956184B (en) Abstract graph direction determining method based on HSI-LBP characteristics
CN112729691A (en) Batch workpiece airtightness detection method based on artificial intelligence

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080910

Termination date: 20151229

EXPY Termination of patent right or utility model