CN1333370C - Image processing method and device - Google Patents


Publication number: CN1333370C
Authority: CN (China)
Prior art keywords: eye, candidate, neighboring region, human eye, skin
Legal status: Expired - Fee Related
Application number: CNB031373453A
Other languages: Chinese (zh)
Other versions: CN1567369A
Inventors: 尹志远, 纪新
Assignee: Canon Inc
Application filed by Canon Inc
Priority: CNB031373453A
Publication of application: CN1567369A
Application granted; publication of grant: CN1333370C

Classifications

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides an image processing method characterized by comprising the steps of: identifying candidate eye regions in an image; selecting and processing a neighboring region for each candidate eye region; and classifying the candidate eye regions according to the result of the processing step. According to the method of the present invention, human eyes can be detected from the color information of the eye-neighboring regions, and a human face can be further detected from the detected eyes.

Description

Image processing method and device
Technical field
The present invention relates to image processing, and more particularly to a method and apparatus for processing an image in order to detect human eyes therein and ultimately detect a human face.
Background art
As is well known, many techniques can be used to detect regions of interest in an image, such as human faces or other targets to be recognized. Face detection is a field of particular interest, because face recognition matters not only for image processing but also for identity verification, security, and human-computer interfaces. A human-computer interface can not only locate faces, but also identify a specific face if one is present, and understand facial expressions and postures.
Recently, much research on automatic face detection has been reported. References include, for example, "Face Detection and Rotations Estimation using Color Information", Proceedings of the 5th IEEE International Workshop on Robot and Human Communication, 1996, pp. 341-346, and "Face Detection from Color Images Using a Fuzzy Pattern Matching Method", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 21, No. 6, June 1999.
Among the many conventional methods of detecting faces, one approach is to detect the human eyes first. For example, Chinese Patent Application No. 00127067.2, filed on September 15, 2000 by the same applicant and published on April 10, 2002 under the title "Image processing method and apparatus, image processing system and storage medium", discloses a method of determining a series of candidate eye regions in a given image. That application is incorporated herein by reference.
All conventional methods of detecting faces or eyes have their respective advantages and shortcomings, depending on the algorithms used to process the image. Some methods are accurate but complicated and time-consuming.
Summary of the invention
It is an object of the present invention to provide a method and apparatus for processing an image in order to detect human eyes therein, and ultimately a human face, based on the color information of the region neighboring each eye, in particular on the skin-color ratio of that region.
To achieve these objects, the present invention provides an image processing method characterized by comprising the steps of:
identifying candidate eye regions in the image;
selecting a neighboring region for each candidate eye region;
processing the neighboring region; and
classifying the candidate eye region according to the result of the processing step.
The present invention also provides an image processing apparatus characterized by comprising:
a candidate eye region identifier for identifying candidate eye regions in an image;
an eye-neighboring region selector for selecting a neighboring region for each candidate eye region identified by the candidate eye region identifier;
an eye-neighboring region processor for processing the neighboring region selected by the selector; and
a classifier for classifying, according to the output of the eye-neighboring region processor, the candidate eye regions identified by the candidate eye region identifier.
According to the method and apparatus of the present invention, human eyes can be detected from the color information of the eye-neighboring regions. Using the skin-color ratio of the eye-neighboring region improves the accuracy of eye detection.
The method of the present invention can therefore easily be combined with various conventional methods of determining candidate face regions, so as to suit different situations. A face can be further detected from candidate eye regions classified as true eyes, or as most probably true eyes.
Other features and advantages of the present invention will become clearer from the following description of preferred embodiments, which explain the principle of the invention by way of example, taken in conjunction with the accompanying drawings.
Description of drawings
Figure 1A is a flowchart of an embodiment of the image processing method according to the present invention;
Figure 1B is a flowchart of an implementation of the method shown in Figure 1A;
Figure 2A is a schematic block diagram of the structure of the image processing apparatus according to the present invention;
Figure 2B schematically shows the internal structure of the eye-neighboring region processor shown in Figure 2A;
Figures 3A and 3B schematically show the relationship between a candidate eye region and its neighboring region;
Figures 4A and 4B show an image and its skin-color binary map;
Figures 5A and 5B show another image and its skin-color binary map; and
Figure 6 schematically shows an image processing system in which the method shown in Figure 1 can be implemented.
Embodiment
In the following description, regarding how candidate eye regions are identified in an image, reference may be made to Chinese Patent Application No. 00127067.2, filed on September 15, 2000 by the same applicant and published on April 10, 2002. That application is incorporated herein by reference. However, the method of identifying candidate eye regions disclosed in Chinese Patent Application No. 00127067.2 does not limit the present invention; any conventional method of identifying candidate eye regions in an image may be used.
Figure 1A is the process flow diagram according to the embodiment of image processing method of the present invention.
The flow starts at step 101. In step 102, the image to be processed is input. In step 103, candidate eye regions are identified in the image input in step 102.
In steps 102 and 103, any conventional method of identifying candidate eye regions in an image may be adopted; these methods do not limit the present embodiment.
In step 104, a neighboring region is selected for each candidate eye region identified in step 103. How the neighboring region of a candidate eye region is selected will be described in detail later with reference to Figures 3A and 3B.
Then, the eye-neighboring region is processed in order to compute its color information.
The present embodiment is based on the fact that the pixels neighboring a true eye usually have skin color. A pixel with skin color is called a "skin pixel".
Assuming the image is in HSI format, a pixel satisfying the following condition is considered a skin pixel:
((H < 50 and H ≥ 0) or (H > 290 and H < 358.1)) and I > 25,
where H denotes the hue of the pixel and I denotes its intensity. In this example, the saturation (S) of the pixel is not used. Other conditions for detecting skin pixels may also be set.
If the image is in another format, such as RGB, a corresponding condition for detecting skin pixels can be set.
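As an illustration, the HSI skin-pixel condition above can be written as a small predicate. This is a sketch under the assumptions stated in the text (hue in degrees, intensity on a 0-255 scale); the function name is ours, not the patent's.

```python
def is_skin_pixel(h: float, i: float) -> bool:
    """Skin-pixel test in HSI space, using the example thresholds above.

    h: hue in degrees [0, 360); i: intensity in [0, 255].
    Saturation is deliberately ignored, as in the text.
    """
    return ((0 <= h < 50) or (290 < h < 358.1)) and i > 25
```

For an RGB image, an analogous predicate over (r, g, b) would replace this one.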
The color information computed for the neighboring region of a true eye lies within a certain range. From the color information of the neighboring region it can therefore be determined whether a candidate eye region is a true eye or a false eye.
Any processing method can be applied to the eye-neighboring region, as long as its result is sufficient to classify the candidate eye region as a true eye, a false eye, a candidate eye region that is most probably a true eye, or one that is most probably a false eye.
Different methods of processing the eye-neighboring region, and different kinds of color information, do not limit the invention.
Specifically, Figure 1A uses the "skin-color ratio" of the eye-neighboring region. The "skin-color ratio" of the eye-neighboring region is defined as the number of skin pixels in the region divided by the total number of pixels in the region.
The skin-color ratio in the neighboring region of a true eye lies within a certain range. Therefore, whether a candidate eye region is a true eye or a false eye can be determined from the color information of its neighboring region, in particular from the region's skin-color ratio.
In step 105, the skin-color ratio of the eye-neighboring region is calculated.
Step 105 may comprise two sub-steps (not shown in Figure 1A). First, the number of skin pixels in the eye-neighboring region and the total number of pixels in the region are counted. Then, the number of skin pixels is divided by the total number of pixels, giving the skin-color ratio of the region. Figure 1B introduces another way of calculating the skin-color ratio.
Then, in step 106, it is judged whether the skin-color ratio of the eye-neighboring region is less than a predetermined threshold. If the result of step 106 is "yes", the flow goes to step 107; otherwise it goes to step 108.
The predetermined threshold lies in the range [0.1, 0.9]; preferably, it equals 0.5.
In step 107, the candidate eye region is classified as a false eye, or as a candidate eye region that is most probably a false eye.
In step 108, the candidate eye region is classified as a true eye, or as a candidate eye region that is most probably a true eye.
After step 107 or step 108, the flow ends at step 109.
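Steps 105 to 108 can be sketched as a single function: count the skin pixels, divide by the total, and compare the ratio with the threshold. A minimal sketch assuming the HSI condition given earlier and pixels represented as (hue, intensity) pairs; the names and pixel representation are our own.

```python
def classify_candidate(neighborhood, threshold=0.5):
    """Steps 105-108: skin-color ratio of the eye-neighboring region,
    then comparison with the predetermined threshold (step 106).

    neighborhood: iterable of (hue, intensity) pairs covering the region.
    Returns 'false eye' when ratio < threshold, else 'true eye'.
    """
    pixels = list(neighborhood)
    skin = sum(1 for h, i in pixels
               if ((0 <= h < 50) or (290 < h < 358.1)) and i > 25)
    ratio = skin / len(pixels)  # step 105: skin pixels / total pixels
    return 'false eye' if ratio < threshold else 'true eye'
```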
Once the eyes in an image have been detected, the image or the detected eyes can be processed further. For example, candidate faces can be determined from the detected eyes.
Figure 1B is a flowchart of an implementation of the method shown in Figure 1A. Steps 101b to 104b and 106b to 109b in Figure 1B are identical to steps 101 to 104 and 106 to 109 in Figure 1A. Therefore, only steps 105a, 105b and 105c are described.
In step 105a, skin-color binarization is performed on the eye-neighboring region. After binarization, a black-and-white image called a binary map (or binarized eye-neighboring region) is formed. In this binary map, the gray level of each pixel corresponding to a skin pixel in the original image is set to 255, and the gray level of every other pixel is set to 0.
Although step 105a is described as binarizing the eye-neighboring region, it is also feasible and practical to binarize the entire image containing the region. Figures 4A and 4B show one example, and Figures 5A and 5B show another. Whether the eye-neighboring region or the entire image is binarized does not limit the present embodiment.
Assume the image is in HSI format and the entire image is binarized.
In step 105a, the image is first equalized. For a detailed explanation of "equalization", see Kenneth R. Castleman, "Digital Image Processing", Prentice Hall, 1996, Chapter 6.
Next, from the equalized image, a black-and-white image called a binary map is formed. The binary map comprises pixels whose gray levels are either 255 or 0. Pixels with gray level 255 correspond to the skin pixels in the equalized image, and black pixels with gray level 0 correspond to the non-skin pixels.
The process of forming the binary map comprises judging whether each pixel in the equalized image satisfies the following condition:
((H < 50 and H ≥ 0) or (H > 290 and H < 358.1)) and I > 25,
where H denotes the hue of the pixel and I denotes its intensity.
If a pixel satisfies the above condition, the gray level of the corresponding pixel in the binary map is set to 255.
The above condition is only an example; for different image formats and different ethnic groups (Caucasian, Asian, African, and so on), the condition may differ. Therefore, different methods of judging whether a pixel is a skin pixel do not limit the present embodiment.
In step 105b, the average gray level of the binarized eye-neighboring region is calculated. This can be obtained by summing the gray levels of all pixels in the binarized region and dividing by the total number of pixels in the region.
In step 105c, the average gray level is divided by 255, giving the skin-color ratio of the eye-neighboring region.
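The binary-map route of steps 105a to 105c can be sketched as follows; a pure-Python illustration under the same HSI assumptions (equalization omitted), with names of our own choosing.

```python
def skin_ratio_via_binary_map(region):
    """Steps 105a-105c: skin-color binarization followed by averaging.

    region: 2-D list (rows) of (hue, intensity) pairs for the
    eye-neighboring region. Skin pixels map to gray level 255 and all
    others to 0 (step 105a); the average gray level (step 105b) divided
    by 255 is the skin-color ratio (step 105c).
    """
    binary = [[255 if ((0 <= h < 50) or (290 < h < 358.1)) and i > 25 else 0
               for (h, i) in row]
              for row in region]
    total = sum(sum(row) for row in binary)
    count = sum(len(row) for row in binary)
    return (total / count) / 255
```

The result equals the direct skin-pixel count divided by the total, since each skin pixel contributes exactly 255 to the sum.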
According to the method of the present embodiment, once one or more candidate eye regions have been classified as true eyes, or as most probably true eyes, one or more candidate face regions in the same image can be determined from them. In particular, the following steps may be included between steps 108 and 109 of Figure 1A, or between steps 108b and 109b of Figure 1B:
if, in the classification step (step 108 of Figure 1A or step 108b of Figure 1B), a candidate eye region is classified as a true eye or as most probably a true eye, determining at least one candidate face region in the image from that candidate eye region; and
judging, from the image, whether the at least one candidate face region represents a true face.
Figure 2A is a schematic block diagram of the structure of the image processing apparatus according to the present invention.
As shown in Figure 2A, reference numeral 201 denotes a candidate eye region identifier; 202 denotes a classifier; 203 denotes an eye-neighboring region selector; and 204 denotes an eye-neighboring region processor.
The candidate eye region identifier 201 identifies candidate eye regions in the image to be processed. Any conventional algorithm for identifying candidate eye regions in an image may be adopted in the identifier 201; it does not limit the present embodiment.
The eye-neighboring region selector 203 selects a neighboring region for each candidate eye region identified by the candidate eye region identifier 201. How the neighboring region of a candidate eye region is selected will be described in detail later with reference to Figures 3A and 3B.
The eye-neighboring region processor 204 processes the neighboring region selected by the selector 203 and outputs the result.
The result is normally some kind of feature value (which may be called "color information") of the neighboring region selected by the selector 203, as long as it is sufficient for the classifier 202 to classify the candidate eye region as a true eye, a false eye, a candidate eye region that is most probably a true eye, or one that is most probably a false eye.
The classifier 202 classifies, according to the output of the eye-neighboring region processor 204, each candidate eye region identified by the identifier 201 as a true eye, a false eye, a candidate eye region that is most probably a true eye, or one that is most probably a false eye.
Although Figures 2A and 2B show the candidate eye regions identified by the identifier 201 being input to the classifier 202, this need not actually be the case. What matters is that when the classifier 202 receives the output of the processor 204 (for example, the color information of the neighboring region of a candidate eye region), it knows which candidate eye region is to be classified.
The classification result of the classifier 202 can be used for further processing of the image.
It should be noted that the eye-neighboring region processor 204 may apply any processing method to the neighboring region, as long as its result is sufficient for the classifier 202 to classify the candidate eye region as a true eye, a false eye, or a candidate eye region that is most probably a true eye or a false eye.
Figure 2B schematically shows the internal structure of the eye-neighboring region processor shown in Figure 2A.
The color information used in Figure 2B is related to the skin-color ratio of the eye-neighboring region. The processor 204 comprises at least two parts, namely a skin pixel identifier 205 and a skin-color ratio calculator 206.
The skin pixel identifier 205 identifies all pixels with skin color in the neighboring region selected by the selector 203, i.e., all skin pixels in the region.
The skin-color ratio calculator 206 computes the skin-color ratio of the neighboring region by dividing the number of skin pixels identified in the region by the total number of pixels in the region. The resulting ratio is output to the classifier 202.
For example, the skin pixel identifier 205 may first perform skin-color binarization on the eye-neighboring region (or on the entire image), so that in the binarized region (or binary map) the gray level of each skin pixel is set to a fixed value, for example 255, and the gray level of every other pixel is set to zero. The skin-color ratio calculator 206 then computes the average gray level of the binarized region and divides it by the fixed value (for example 255) to obtain the skin-color ratio of the region. Figures 4A and 4B show one example, and Figures 5A and 5B show another. Skin-color binarization of an image is well known in the prior art.
In the classifier 202, if the skin-color ratio of the neighboring region is less than a predetermined threshold, the classifier 202 classifies the candidate eye region as a false eye, or as most probably a false eye; if the ratio is not less than the threshold, the classifier 202 classifies it as a true eye, or as most probably a true eye.
The predetermined threshold lies in the range [0.1, 0.9]; preferably, it equals 0.5.
According to the apparatus of the present embodiment, once one or more candidate eye regions have been classified as true eyes, or as most probably true eyes, one or more candidate face regions in the same image can be determined from them. The apparatus of the present embodiment may therefore further comprise:
means for determining, if a candidate eye region has been classified by the classifier 202 as a true eye or as most probably a true eye, at least one candidate face region in the image from that candidate eye region; and
means for judging, from the image, whether the at least one candidate face region represents a true face.
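The division of labor among components 205, 206 and 202 might be sketched as small cooperating classes. This is an illustrative decomposition under the HSI assumptions above, not the patent's implementation; class names and the (hue, intensity) pixel representation are ours.

```python
class SkinPixelIdentifier:
    """Component 205: flags the skin pixels in the neighboring region."""
    def identify(self, pixels):
        return [((0 <= h < 50) or (290 < h < 358.1)) and i > 25
                for h, i in pixels]

class SkinRatioCalculator:
    """Component 206: number of skin pixels divided by total pixels."""
    def ratio(self, flags):
        return sum(flags) / len(flags)

class Classifier:
    """Component 202: compares the ratio with the predetermined threshold."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold
    def classify(self, ratio):
        return 'false eye' if ratio < self.threshold else 'true eye'
```

The eye-neighboring region processor 204 would then chain 205 and 206 and pass the resulting ratio to 202.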
Figures 3A and 3B schematically show the relationship between a candidate eye region and its neighboring region.
As shown in Figure 3A, reference numeral 301 denotes a candidate eye region and 302 denotes the neighboring region of candidate eye region 301. The height of candidate eye region 301 is H1 and its width is W1. The height of neighboring region 302 is H2 and its width is W2.
The above widths and heights satisfy the following equations:
H2 = H1 × r1, 1 ≤ r1 ≤ 4 (preferably, r1 = 2.4)
W2 = W1 × r2, 1 ≤ r2 ≤ 4 (preferably, r2 = 2.4)
The selection of the eye-neighboring region in Figure 1, Figure 2A and Figure 2B may satisfy the above equations.
The center of neighboring region 302 may coincide with the center of candidate eye region 301.
Although the neighboring region 302 shown in Figure 3A is rectangular, the neighboring region may take any other, even irregular, shape, as shown in Figure 3B.
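For the rectangular, concentric case, the equations above can be turned into a small helper. A sketch that assumes the scaled sizes are rounded to the nearest whole pixel, which matches the worked examples of Figure 4; the function name is ours.

```python
def neighboring_rect(cx, cy, w1, h1, r1=2.4, r2=2.4):
    """Return (left, top, width, height) of the eye-neighboring rectangle
    for a candidate eye region of size w1 x h1 centered at (cx, cy).

    W2 = W1 * r2 and H2 = H1 * r1, rounded to whole pixels; the two
    rectangles share the same center.
    """
    w2, h2 = round(w1 * r2), round(h1 * r1)
    return cx - w2 // 2, cy - h2 // 2, w2, h2
```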
Figure 4A shows an image. Reference numeral 401 denotes a candidate eye region; 402 denotes the neighboring region of candidate eye region 401; 403 denotes another candidate eye region; and 404 denotes the neighboring region of candidate eye region 403.
Figure 4B shows the skin-color binary map of the image shown in Figure 4A. Reference numerals 401 to 404 have the same meanings as in Figure 4A. Since the image of Figure 4A was binarized with respect to skin color, with the gray level of skin pixels set to 255, the white areas in Figure 4B represent the skin-colored areas of Figure 4A. The binary map can thus be used to identify the skin pixels in the original image.
Take candidate eye region 401 as an example. The predetermined threshold is set to 0.5, and r1 and r2 are set to 2.4.
The height of candidate eye region 401 is 10 pixels and its width is 8 pixels. The height of neighboring region 402 is 10 × 2.4 = 24 pixels and its width is 8 × 2.4 ≈ 19 pixels. From the binary map of Figure 4B, the average gray level of neighboring region 402 is calculated to be 179. The skin-color ratio of region 402 is therefore 179/255 ≈ 0.7. Since this ratio (0.7) is greater than the predetermined threshold (0.5), candidate eye region 401 is classified as a true eye, or as most probably a true eye.
Take candidate eye region 403 as another example. The predetermined threshold is set to 0.45, and r1 and r2 are set to 2.4.
The height of candidate eye region 403 is 17 pixels and its width is 25 pixels. The height of neighboring region 404 is 17 × 2.4 ≈ 41 pixels and its width is 25 × 2.4 = 60 pixels. From the binary map of Figure 4B, the average gray level of neighboring region 404 is calculated to be 61. The skin-color ratio of region 404 is therefore 61/255 ≈ 0.24. Since this ratio (0.24) is less than the predetermined threshold (0.45), candidate eye region 403 is classified as a false eye, or as most probably a false eye.
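Both worked examples can be reproduced in a few lines; a sketch that assumes round-to-nearest for the scaled sizes and two decimal places for the ratio, matching the figures quoted above.

```python
def worked_example(w1, h1, mean_gray, threshold, r1=2.4, r2=2.4):
    """Reproduce the Figure 4 calculations: scaled region size, skin-color
    ratio from the binary map's average gray level, and the verdict."""
    w2, h2 = round(w1 * r2), round(h1 * r1)
    ratio = round(mean_gray / 255, 2)
    verdict = 'true eye' if ratio >= threshold else 'false eye'
    return w2, h2, ratio, verdict
```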
Figure 5A shows another image. This image contains no candidate eye region.
Figure 5B shows the skin-color binary map of the image shown in Figure 5A.
Figure 6 schematically shows an image processing system in which the exemplary method shown in Figure 1 can be implemented. The image processing system shown in Figure 6 comprises a CPU (central processing unit) 601, a RAM (random access memory) 602, a ROM (read-only memory) 603, a system bus 604, an HD (hard disk) controller 605, a keyboard controller 606, a serial interface controller 607, a parallel interface controller 608, a display controller 609, a hard disk 610, a keyboard 611, a camera 612, a printer 613 and a display 614. Of these components, the CPU 601, RAM 602, ROM 603, HD controller 605, keyboard controller 606, serial interface controller 607, parallel interface controller 608 and display controller 609 are connected to the system bus 604. The hard disk 610 is connected to the HD controller 605, the keyboard 611 to the keyboard controller 606, the camera 612 to the serial interface controller 607, the printer 613 to the parallel interface controller 608, and the display 614 to the display controller 609.
The function of each component in Figure 6 is well known in the art, and the architecture shown in Figure 6 is conventional. This architecture applies not only to personal computers but also to handheld devices such as palmtop PCs, PDAs (personal digital assistants), digital cameras, and the like. In different applications, some of the components shown in Figure 6 may be omitted. For example, if the whole system is a digital camera, the parallel interface controller 608 and the printer may be omitted, and the system can be implemented as a single-chip microcomputer. If the application software is stored in an EPROM or other non-volatile memory, the HD controller 605 and hard disk 610 may be omitted.
The whole system shown in Figure 6 is usually controlled by computer-readable instructions stored as software in the hard disk 610 (or, as mentioned above, in an EPROM or other non-volatile memory). The software may also be downloaded from a network (not shown). Whether stored in the hard disk 610 or downloaded from a network, the software can be loaded into the RAM 602 and executed by the CPU 601 to perform the functions it defines.
Those skilled in the art can, without creative work, develop one or more pieces of software based on the exemplary flowchart shown in Figure 1. Software so developed will carry out the image processing method shown in Figure 1.
In a sense, the image processing system shown in Figure 6, supported by software developed according to the flowchart shown in Figure 1, can achieve the same functions as the image processing apparatus shown in Figures 2A and 2B.
Although the foregoing description refers to specific embodiments of the present invention, those skilled in the art will appreciate that these are merely illustrative examples, and that many changes may be made to these embodiments without departing from the principle of the invention, the scope of which is defined by the appended claims.

Claims (18)

1. An image processing method, comprising the steps of:
identifying a candidate eye region in an image;
selecting a neighboring region of said candidate eye region;
identifying the skin pixels of said neighboring region;
calculating color information of said neighboring region from said skin pixels of said neighboring region; and
classifying said candidate eye region according to said color information of said neighboring region.
2. The image processing method according to claim 1, characterized in that said selecting step selects a neighboring region whose width is r2 times the width of said candidate eye region and whose height is r1 times the height of said candidate eye region, wherein r2 is a constant in the range [1, 4] and r1 is a constant in the range [1, 4].
3. The image processing method according to claim 2, characterized in that said r2 equals 2.4 and said r1 equals 2.4.
4. The image processing method according to claim 1, characterized in that said color information is the ratio of said skin pixels within said selected neighboring region.
5. The image processing method according to claim 1, characterized in that said step of calculating color information comprises the steps of:
counting the number of skin pixels in said neighboring region; and
calculating a skin-color ratio by dividing the number of said skin pixels by the total number of pixels in said neighboring region;
and in that said classifying step classifies said candidate eye region by comparing said skin-color ratio with a predetermined threshold.
6. The image processing method according to claim 1, characterized in that said step of calculating color information comprises the steps of:
performing skin-color binarization on at least said neighboring region to obtain a binarized neighboring region in which the gray level of skin pixels equals a fixed value and the gray level of other pixels is zero;
calculating the average gray level of said binarized neighboring region; and
calculating a skin-color ratio by dividing said average gray level by said fixed value;
and in that said classifying step classifies said candidate eye region by comparing said skin-color ratio with a predetermined threshold.
7. The image processing method according to claim 5 or 6, characterized in that said predetermined threshold lies in the range [0.1, 0.9].
8. The image processing method according to claim 7, characterized in that said predetermined threshold equals 0.5.
9. The image processing method according to any one of claims 1, 5 and 6, characterized by further comprising the steps of:
determining, if said candidate eye region is classified in said classifying step as a true eye or as a candidate eye region that is most probably a true eye, at least one candidate face region in said image from said candidate eye region; and
judging, from said image, whether said at least one candidate face region represents a true face.
10. An image processing apparatus, comprising:
a candidate human eye region identifier for identifying candidate human eye regions in an image;
an eye-neighboring region selector for selecting an eye-neighboring region of said candidate human eye region;
a skin-color pixel identifier for identifying a plurality of skin-color pixels in said eye-neighboring region;
a color-information calculator for calculating color information of said eye-neighboring region according to said plurality of skin-color pixels in said eye-neighboring region; and
a classifier for classifying said candidate human eye region according to said color information of said eye-neighboring region.
11. The image processing apparatus according to claim 10, characterized in that said eye-neighboring region selector selects an eye-neighboring region whose width is r2 times the width of said candidate human eye region and whose height is r1 times the height of said candidate human eye region, wherein r2 is a constant in the range [1, 4] and r1 is a constant in the range [1, 4].
12. The image processing apparatus according to claim 11, characterized in that said r2 equals 2.4 and said r1 equals 2.4.
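Claims 11 and 12 fix only the scale of the eye-neighboring region relative to the candidate eye box. A sketch of one plausible reading, assuming the enlarged region shares the candidate box's center and an (x, y, w, h) rectangle convention; both of those details, and the border clipping, are assumptions not stated in the claims:

```python
def eye_neighbor_rect(x, y, w, h, r1=2.4, r2=2.4, img_w=None, img_h=None):
    """Expand a candidate eye box (x, y, w, h) about its center to
    r2 * w wide and r1 * h tall (claims 11-12: r1, r2 in [1, 4],
    e.g. 2.4). Optionally clip to the image bounds."""
    cx, cy = x + w / 2.0, y + h / 2.0
    nw, nh = r2 * w, r1 * h
    nx, ny = cx - nw / 2.0, cy - nh / 2.0
    if img_w is not None and img_h is not None:
        # practical detail: keep the region inside the image
        nx, ny = max(0.0, nx), max(0.0, ny)
        nw = min(nw, img_w - nx)
        nh = min(nh, img_h - ny)
    return (nx, ny, nw, nh)
```

For a 10x10 candidate box at (10, 10), the default factors give a 24x24 region centered on the same point, covering the skin area around the eye that the skin-color ratio is measured over.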
13. The image processing apparatus according to claim 10, characterized in that said color information is the ratio of said skin-color pixels to said selected eye-neighboring region.
14. The image processing apparatus according to claim 10, characterized in that:
said skin-color pixel identifier calculates the number of skin-color pixels in said eye-neighboring region;
said color-information calculator calculates a skin-color ratio by dividing the number of said skin-color pixels by the total number of pixels in said eye-neighboring region;
and said classifier classifies said candidate human eye region by comparing said skin-color ratio with a predetermined threshold.
15. The image processing apparatus according to claim 10, characterized in that:
said skin-color pixel identifier identifies said plurality of skin-color pixels by performing skin-color binarization on at least said eye-neighboring region, obtaining a binarized eye-neighboring region in which the gray level of skin-color pixels equals a fixed value and the gray level of all other pixels equals zero;
said color-information calculator comprises:
a first calculator for calculating the average gray level of said binarized eye-neighboring region; and
a second calculator for calculating a skin-color ratio by dividing said average gray level by said fixed value;
and said classifier classifies said candidate human eye region by comparing said skin-color ratio with a predetermined threshold.
16. The image processing apparatus according to claim 14 or 15, characterized in that said predetermined threshold is within the range [0.1, 0.9].
17. The image processing apparatus according to claim 16, characterized in that said predetermined threshold equals 0.5.
18. The image processing apparatus according to any one of claims 10, 13, 14 and 15, characterized by further comprising:
means for determining, if said candidate human eye region is classified by said classifier as a real human eye or as a candidate human eye region that is most probably a real human eye, at least one candidate face region in said image according to said candidate human eye region; and
means for judging, according to said image, whether said at least one candidate face region represents a real human face.
CNB031373453A 2003-06-18 2003-06-18 Image processing method and device Expired - Fee Related CN1333370C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB031373453A CN1333370C (en) 2003-06-18 2003-06-18 Image processing method and device

Publications (2)

Publication Number Publication Date
CN1567369A CN1567369A (en) 2005-01-19
CN1333370C true CN1333370C (en) 2007-08-22

Family

ID=34470367

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB031373453A Expired - Fee Related CN1333370C (en) 2003-06-18 2003-06-18 Image processing method and device

Country Status (1)

Country Link
CN (1) CN1333370C (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101609500B (en) * 2008-12-01 2012-07-25 公安部第一研究所 Quality estimation method of exit-entry digital portrait photos
CN104463074B (en) * 2013-09-12 2017-10-27 金佶科技股份有限公司 The discrimination method and device for identifying of true and false fingerprint
CN106156689B (en) * 2015-03-23 2020-02-21 联想(北京)有限公司 Information processing method and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS61145690A (en) * 1984-12-19 1986-07-03 Matsushita Electric Ind Co Ltd Recognizing device of characteristic part of face
JPH11250267A (en) * 1998-03-05 1999-09-17 Nippon Telegr & Teleph Corp <Ntt> Method and device for detecting position of eye and record medium recording program for detecting position of eye
EP0984386A2 (en) * 1998-09-05 2000-03-08 Sharp Kabushiki Kaisha Method of and apparatus for detecting a human face and observer tracking display
JP2000235640A (en) * 1999-02-15 2000-08-29 Oki Electric Ind Co Ltd Facial organ detecting device
JP2002342756A (en) * 2001-05-01 2002-11-29 Eastman Kodak Co Method for detecting position of eye and mouth in digital image

Also Published As

Publication number Publication date
CN1567369A (en) 2005-01-19

Similar Documents

Publication Publication Date Title
US7376270B2 (en) Detecting human faces and detecting red eyes
US6639998B1 (en) Method of detecting a specific object in an image signal
EP0807297B1 (en) Method and apparatus for separating foreground from background in images containing text
CN102081734B (en) Object detecting device and its learning device
US9053384B2 (en) Feature extraction unit, feature extraction method, feature extraction program, and image processing device
US8209172B2 (en) Pattern identification method, apparatus, and program
CN102103698B (en) Image processing apparatus and image processing method
US8325998B2 (en) Multidirectional face detection method
CN101211411B (en) Human body detection process and device
US20060018517A1 (en) Image processing methods and apparatus for detecting human eyes, human face, and other objects in an image
US20030179911A1 (en) Face detection in digital images
US20030174869A1 (en) Image processing apparatus, image processing method, program and recording medium
US20090226047A1 (en) Apparatus and Method of Processing Image and Human Face Detection System using the smae
Ryu et al. Automatic extraction of eye and mouth fields from a face image using eigenfeatures and multilayer perceptrons
CN106682473A (en) Method and device for identifying identity information of users
US7831068B2 (en) Image processing apparatus and method for detecting an object in an image with a determining step using combination of neighborhoods of a first and second region
US5838839A (en) Image recognition method
US7403636B2 (en) Method and apparatus for processing an image
CN108460320A (en) Based on the monitor video accident detection method for improving unit analysis
US7715632B2 (en) Apparatus and method for recognizing an image
CN110232381A (en) License Plate Segmentation method, apparatus, computer equipment and computer readable storage medium
CN1333370C (en) Image processing method and device
CN111402185B (en) Image detection method and device
KR100316784B1 (en) Device and method for sensing object using hierarchical neural network
Ishizuka et al. Segmentation of road sign symbols using opponent-color filters

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20070822

Termination date: 20170618
