CN1306456C - Image processing method and apparatus - Google Patents


Info

Publication number
CN1306456C
CN1306456C (application CNB021554684A / CN02155468A)
Authority
CN
China
Prior art keywords
image
probability
candidate face
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB021554684A
Other languages
Chinese (zh)
Other versions
CN1508752A (en)
Inventor
陈新武 (Chen Xinwu)
石田良弘 (Yoshihiro Ishida)
纪新 (Ji Xin)
王立冰 (Wang Libing)
Current Assignee
Canon Inc
Original Assignee
Canon Inc
Priority date
Filing date
Publication date
Application filed by Canon Inc
Priority to CNB021554684A (CN1306456C)
Priority to US10/716,671 (US7415137B2)
Priority to JP2003412157A (JP2004199673A)
Publication of CN1508752A
Application granted
Publication of CN1306456C
Anticipated expiration
Legal status: Expired - Fee Related

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides an image processing method comprising the steps of: recognizing a candidate face region in an image, calculating the probability that the candidate face region represents a human face, and storing the probability in the image as additional information. The present invention also provides a second image processing method comprising the steps of: recognizing a candidate face region in the image, calculating the probability that the candidate face region represents a human face, comparing the probability with a threshold to determine whether the candidate face region represents a human face, and storing the result of the determination step in the image as additional information. With these methods, the result of recognizing the candidate face region is stored in the image itself, so that the image can easily be processed further.

Description

Image processing method and apparatus
Technical field
The present invention relates to image processing, and more particularly to a method and apparatus for processing an image that contains a candidate face region.
Background art
Many well-known techniques exist for detecting regions of interest in an image, such as human faces or other targets to be recognized. Human face detection is a field of particular interest, because face recognition matters not only for image processing but also for identity verification and security, and for human-computer interfaces. A human-computer interface can not only locate a face but, if a face is present, also recognize the particular face and interpret facial expressions and gestures.
Recently, many studies on automatic face detection have been reported. Relevant references include, for example, "Face Detection and Rotations Estimation using Color Information", 5th IEEE International Workshop on Robot and Human Communication, 1996, pp. 341-346, and "Face Detection from Color Images Using a Fuzzy Pattern Matching Method", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 21, No. 6, June 1999.
Every conventional face detection method has its own advantages and shortcomings, depending on the algorithm used to process the image. Some methods are accurate but complicated and time-consuming.
Importantly, no conventional face detection method stores the detection result in the image itself, which makes further processing of the image very inconvenient when the face regions later need special treatment.
Therefore, there is a need in this technical field for a method and apparatus that can recognize face regions and store the recognition result in the image, so that the image can be processed further.
Summary of the invention
A first object of the present invention is to provide an image processing method and apparatus that store information about a candidate face region in the image itself, for use in later processing of the image.
A second object of the present invention is to provide a method and apparatus for processing an image in which candidate face region information has been stored.
To achieve the above objects, the present invention provides an image processing method characterized by comprising the steps of:
recognizing a candidate face region in the image;
calculating the probability that the candidate face region represents a human face; and
storing the probability in the image as additional information.
The present invention also provides an image processing method characterized by comprising the steps of:
recognizing a candidate face region in the image;
calculating the probability that the candidate face region represents a human face;
judging whether the candidate face region represents a human face by comparing the probability with a threshold; and
storing the result of the judging step in the image as additional information.
The present invention also provides a method for processing an image in which the probability of at least one candidate face region has been stored, characterized by comprising the steps of:
retrieving the probability of a candidate face region from the image;
judging whether the candidate face region represents a human face by comparing the retrieved probability with a threshold; and
if the candidate face region is judged to represent a human face, applying to the candidate face region a special processing method optimized for human faces.
The present invention also provides a method for recognizing a person in an image in which the probability of at least one candidate face region has been stored, characterized by comprising the steps of:
retrieving the probability of a candidate face region from the image;
judging whether the candidate face region represents a human face by comparing the retrieved probability with a threshold; and
if the candidate face region is judged to represent a human face, recognizing the person based only on the candidate face region.
The present invention also provides an image processing apparatus characterized by comprising:
a candidate face region selector for recognizing a candidate face region in the image;
a probability calculator for calculating the probability that the candidate face region represents a human face; and
a probability recorder for writing the probability into the image as additional information.
The present invention also provides an image processing apparatus characterized by comprising:
a candidate face region selector for recognizing a candidate face region in the image;
a probability calculator for calculating the probability that the candidate face region represents a human face;
a judging unit for judging whether the candidate face region represents a human face by comparing the probability with a threshold; and
a judgment result recorder for writing the output of the judging unit into the image as additional information.
The present invention also provides an apparatus for processing an image in which the probability of at least one candidate face region has been stored, characterized by comprising:
a probability extractor for extracting the probability of a candidate face region from the data of the image to be processed;
a face processing unit for processing data with an algorithm optimized for handling human faces; and
a judging and control unit for judging whether the candidate face region represents a human face by comparing the probability with a threshold, and, if the candidate face region is judged to represent a human face, starting the face processing unit to process the data of the candidate face region.
According to the methods of the present invention, the recognition result for a candidate face region is stored in the image, which makes it convenient to process the image further.
A conventional image processing apparatus can also be trained to gain the ability to detect human faces. The face detection method according to the present invention is both accurate and fast.
In addition, the method of the present invention can easily be combined with the different algorithms of conventional face detection methods, so as to adapt to different situations.
Other features and advantages of the present invention will become clearer from the following description of preferred embodiments, which, taken in conjunction with the accompanying drawings, explains the principle of the invention by way of example.
Description of drawings
Fig. 1 is a flowchart of a method for training an image processing apparatus in an embodiment of the present invention.
Fig. 2 is a flowchart of an image processing method according to the present invention, which uses an image processing apparatus trained with the method shown in Fig. 1.
Fig. 3 is a flowchart of another image processing method according to the present invention, which uses a plurality of image processing apparatuses trained with the method shown in Fig. 1.
Fig. 4 is a flowchart of another image processing method, in which the probability of at least one candidate face region has been stored in the image being processed.
Fig. 5 is a flowchart of a method for recognizing a person in an image in which the probability of at least one candidate face region has been stored.
Fig. 6 is a schematic block diagram of an image processing apparatus according to the present invention.
Fig. 7 is a schematic block diagram of another image processing apparatus according to the present invention.
Fig. 8 is a schematic block diagram of an apparatus for processing an image in which the probability of at least one candidate face region has been stored.
Fig. 9 shows a training sequence comprising 1000 training samples, namely image regions A1, A2, A3, ..., A1000.
Fig. 10 shows two image regions B1 and B2 to be detected.
Fig. 11 schematically shows an image processing system in which each of the methods shown in Figs. 1 to 5 can be implemented.
Embodiment
The present invention will now be described in detail. In the following description, as to how a candidate face region is recognized in an image, reference may be made to Chinese patent application No. 00127067.2, filed on September 15, 2000 by the same applicant and published on April 10, 2002, which is incorporated herein by reference. However, the method of recognizing a candidate face region disclosed in Chinese patent application No. 00127067.2 does not limit the present invention; any conventional method of recognizing a candidate face region in an image can be used in the present invention.
Fig. 1 is a flowchart of a method for training an image processing apparatus in an embodiment of the present invention. The flow starts at step 101. In step 102, a plurality of image regions are input. These image regions may come from one image or from several images. Some of these image regions represent real human faces, and it is known in advance which ones do. These image regions are called "training samples". In Fig. 1, the number of training samples is N, where N is an integer greater than 1.
In step 103, a predetermined algorithm is applied to each image region input in step 102, generating an M-dimensional vector, where M is an integer equal to or greater than 1. When M is 1, the predetermined algorithm generates a scalar for each input image region.
In this way, a plurality of M-dimensional vectors are generated; their number equals N. Since it is known in advance which training samples (that is, image regions) represent real human faces, it is also known which M-dimensional vectors correspond to real human faces.
The present invention is not concerned with the details of the predetermined algorithm, as long as the algorithm generates an M-dimensional vector for each input image region. Therefore, the predetermined algorithm can be any conventional method of processing image data. The vector generated by the predetermined algorithm expresses some features of the image region to which it was applied. Two examples of such algorithms (see Example One and Example Two) will be given later with reference to Fig. 9.
After step 103, N M-dimensional vectors have been generated, distributed in an M-dimensional space.
Steps 104 to 108 constitute one way of dividing the M-dimensional space into a plurality of subspaces. The number of subspaces can be expressed as
K1 × K2 × ... × KM,
and the M-dimensional vectors are distributed evenly over the subspaces, so that the number of M-dimensional vectors in each subspace can be expressed as
N / (K1 × K2 × ... × KM),
where K1, K2, ..., KM are integers greater than 1.
It should be noted that there are many ways of dividing the M-dimensional space into subspaces. Steps 104 to 108 merely express one example and do not limit the present invention.
In step 104, the value "1" is assigned to a variable i.
In step 105, within each current subspace, all the M-dimensional vectors distributed in that subspace are arranged along the i-th axis according to the values of their i-th components.
In step 106, the i-th axis in each subspace is divided into Ki intervals; in this way the M-dimensional space is correspondingly divided into
K1 × K2 × ... × Ki
subspaces, each containing
N / (K1 × K2 × ... × Ki)
of the M-dimensional vectors.
In step 107, the variable i is increased by 1.
In step 108, it is judged whether the variable i is greater than M. If the judgment in step 108 is negative, the flow returns to step 105; otherwise, it proceeds to step 109.
In step 109, the probability of each subspace is calculated. Within one subspace, the number of M-dimensional vectors corresponding to real human faces is counted first. This count is then divided by the total number of M-dimensional vectors distributed in that subspace, namely N / (K1 × K2 × ... × KM). The quotient is taken as the probability of the subspace. The probability of a subspace means the probability that a vector distributed in that subspace corresponds to a real human face.
In an optional step 110, the positions and probabilities of all the subspaces are stored, for example in the internal memory or an external memory of the image processing apparatus.
In step 111, the training flow ends.
To make the flow shown in Fig. 1 easier to understand, two examples are given below.
Example one
Referring to Fig. 9, it shows a training sequence composed of 1000 training samples, namely image regions A1, A2, A3, ..., A1000. Thus the value of N in Fig. 1 is 1000.
In Fig. 9, it is known in advance which image regions represent real human faces and which do not. For example, image regions A1 and A5 represent real human faces, while image regions A2, A3, A4 and A6 do not.
The predetermined algorithm used in Example One generates a scalar, that is, an M-dimensional vector with M = 1. As an example, the algorithm used here computes the ratio of the skin-color area to the whole image region.
Take image region A1 as an example. The total number of pixels in image region A1 is 10000, of which 8000 belong to the skin color. Therefore, the ratio of the skin-color area to the whole image region is 8000/10000 = 0.8.
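As a concrete illustration of the skin-color ratio, the following sketch counts skin-color pixels in an RGB region. The patent does not specify how a pixel is judged to belong to the skin color, so the simple RGB rule used here is only an assumption for illustration, as is the toy region constructed to reproduce the 0.8 ratio of image region A1:

```python
import numpy as np

def skin_ratio(region):
    """Ratio of skin-color pixels to all pixels in an RGB image region.

    The skin classifier (a simple RGB rule) is an assumption; the patent
    leaves the choice of skin-color test open.
    """
    r = region[..., 0].astype(int)
    g = region[..., 1].astype(int)
    b = region[..., 2].astype(int)
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)
    return skin.sum() / skin.size

# A toy 100x100 region in which exactly 8000 pixels match the rule:
region = np.zeros((100, 100, 3), dtype=np.uint8)
region[:80] = [200, 120, 90]   # 8000 skin-like pixels
region[80:] = [30, 30, 30]     # 2000 non-skin pixels
print(skin_ratio(region))       # -> 0.8
```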
When this predetermined algorithm is applied to image regions A1, A2, ..., A1000 respectively, 1000 scalars, called training scalars, are obtained, as follows:
0.8,0.2,0.3,0.5,0.7,0.1...
Then the training scalars are arranged in ascending order along the real axis, giving the sequence:
...,0.1,...,0.2,...,0.3,...,0.5,...,0.7,...,0.8,...
Then the real axis is divided into K1 intervals (K1 as in Fig. 1; since M = 1, only one axis is divided), such that each interval contains the same number of training scalars, namely N/K1.
Suppose K1 = 10. The real axis is then divided into 10 intervals (that is, 10 subspaces, each one-dimensional), for example:
(-∞,0.11],
(0.11,0.2],
(0.2,0.32],
(0.32,0.39],
(0.39,0.45],
(0.45,0.56],
(0.56,0.66],
(0.66,0.73],
(0.73,0.85],
(0.85,+∞)
Each interval is open at its left boundary and closed at its right boundary, or closed at its left boundary and open at its right boundary. Each interval, that is, each one-dimensional subspace, contains N/K1 = 1000/10 = 100 training scalars.
Then the probability of each interval is calculated. For the 10 intervals divided as above, suppose the numbers of training scalars corresponding to real human faces in the 10 intervals are:
5, 11, 16, 28, 32, 44, 52, 61, 77, 43.
The total number of training scalars in each interval is N/K1 = 1000/10 = 100.
The probabilities of the 10 intervals are therefore:
0.05, 0.11, 0.16, 0.28, 0.32, 0.44, 0.52, 0.61, 0.77, 0.43.
In the last step, the positions and probabilities of the 10 intervals are stored.
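The flow of Example One can be sketched in a few lines: sort the training scalars, cut the axis into equal-count intervals, and record each interval's fraction of true-face samples. This is only a sketch under invented toy data (the patent's actual training scalars are not reproduced here), and the interval boundaries are placed at sample midpoints as an assumption:

```python
import numpy as np

def train_1d(scalars, is_face, k):
    """Divide the real axis into k intervals holding equal numbers of
    training scalars and compute each interval's face probability
    (a sketch of the Fig. 1 flow for M = 1)."""
    order = np.argsort(scalars)
    scalars = np.asarray(scalars, dtype=float)[order]
    is_face = np.asarray(is_face)[order]
    n = len(scalars)
    assert n % k == 0, "each interval must hold the same number of samples"
    per = n // k
    boundaries, probs = [], []
    for i in range(k):
        chunk = is_face[i * per:(i + 1) * per]
        probs.append(float(chunk.sum()) / per)
        if i < k - 1:  # right edge: midpoint between adjacent samples
            boundaries.append(float(scalars[(i + 1) * per - 1] + scalars[(i + 1) * per]) / 2)
    return boundaries, probs

# Toy data: 100 scalars, faces are exactly those with value >= 50.
scalars = list(range(100))
labels = [1 if s >= 50 else 0 for s in scalars]
bounds, probs = train_1d(scalars, labels, 10)
print(probs)  # -> [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```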
Example two
Fig. 9 shows a training sequence comprising 1000 training samples, namely image regions A1, A2, A3, ..., A1000. In this example, only A1, A2, A3, ..., A900 are used. Thus the value of N in Fig. 1 is 900.
As mentioned above, it is known in advance which image regions represent real human faces and which do not. For example, image regions A1 and A5 represent real human faces, while image regions A2, A3, A4 and A6 do not.
The predetermined algorithm used in Example Two generates a two-dimensional vector, that is, an M-dimensional vector with M = 2. As an example, the algorithm used here generates, within an annular region, the weighted mean and the weighted angle between the intensity distribution gradient and a reference distribution gradient. For a detailed explanation of this algorithm, refer to Chinese patent application No. 01132807.x.
When this algorithm is applied to image regions A1, A2, A3, ..., A900 respectively, the following two-dimensional vectors, called training vectors, are obtained:
(0.23,0.14),(-0.6,-0.71),(0.44,0.51),(0.52,0.74),(-0.16,-0.22),(0.58,0.46),...
Then the 900 two-dimensional vectors are arranged in ascending order along the 1st axis, i.e. the real axis, by the values of their 1st components, giving the following sequence:
...,(-0.6,-0.71),...,(-0.16,-0.22),...,(0.23,0.14),...,(0.44,0.51),...,(0.52,0.74),...,(0.58,0.46),...
Then the real axis is divided into P intervals, so that the two-dimensional space is correspondingly divided into P subspaces, each of which contains N/P two-dimensional vectors.
Suppose P = 10. The 10 intervals are:
(-∞,-0.6],
(-0.6,-0.33],
(-0.33,-0.12],
(-0.12,0.09],
(0.09,0.15],
(0.15,0.26],
(0.26,0.44],
(0.44,0.57],
(0.57,0.73],
(0.73,+∞).
Each interval is open at its left boundary and closed at its right boundary, or closed at its left boundary and open at its right boundary. Each subspace contains N/P = 90 training vectors.
Then, within each subspace, the training vectors are arranged in ascending order along the 2nd axis by the values of their 2nd components.
For example, the following training vectors are distributed in the subspace corresponding to the interval (-0.12, 0.09]:
...,(-0.1,0.2),...,(-0.05,0.01),...,(-0.03,0.3),...,(0.01,-0.1),...,(0.03,-0.22),...,(-0.06,-0.5),...
Arranging these vectors by the values of their 2nd components gives the following sequence:
...,(-0.06,-0.5),...,(0.03,-0.22),...,(0.01,-0.1),...,(-0.05,0.01),...,(-0.1,0.2),...,(-0.03,0.3),...
In each subspace, the 2nd axis is divided into Q intervals, so that each subspace is divided into Q smaller subspaces, each of which finally contains the same number of two-dimensional vectors, namely N/(P*Q).
Suppose Q = 9. For each subspace obtained by dividing the real axis, the 2nd axis is divided into 9 intervals.
Taking the subspace corresponding to the interval (-0.12, 0.09] as an example, the 9 intervals obtained are:
(-∞,-0.5],
(-0.5,-0.35],
(-0.35,-0.18],
(-0.18,0.04],
(0.04,0.17],
(0.17,0.31],
(0.31,0.54],
(0.54,0.77],
(0.77,+∞).
Each interval is open at its left boundary and closed at its right boundary, or closed at its left boundary and open at its right boundary. Each subspace contains N/(P*Q) = 10 training vectors.
In the manner described above, the two-dimensional space is finally divided into P*Q = 90 subspaces:
((-∞,-0.6],(-∞,-0.53]),...,((-∞,-0.6],(0.71,+∞)),
((-0.6,-0.33],(-∞,-0.58]),...,((-0.6,-0.33],(0.56,+∞)),
...
((-0.12,0.09],(-∞,-0.5]),...,((-0.12,0.09],(0.04,0.17]),...,((-0.12,0.09],(0.77,+∞))
...
((0.73,+∞),(-∞,-0.65]),...,((0.73,+∞),(0.61,+∞))
N/(P*Q) = 10 training vectors are distributed in each subspace.
In the next step, the probability of each subspace is calculated.
Suppose the numbers of training vectors corresponding to real human faces in the subspaces are respectively:
1, ..., 2, 0, ..., 3, ..., 3, ..., 8, ..., 2, ..., 0, ..., 1.
Since the total number of training vectors distributed in each subspace is N/(P*Q) = 900/(10*9) = 10, the probabilities of the 90 subspaces are:
0.1, ..., 0.2, 0, ..., 0.3, ..., 0.3, ..., 0.8, ..., 0.2, ..., 0, ..., 0.1.
In the last step, the positions and probabilities of the 90 subspaces are stored.
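The nested division of Example Two (first along the 1st axis into P equal-count slabs, then each slab along the 2nd axis into Q equal-count cells) can be sketched as follows. The function name and the synthetic usage data are chosen for illustration only; the patent's actual training vectors are not reproduced:

```python
import numpy as np

def train_2d(vectors, is_face, p, q):
    """Nested quantile partition of a 2-D feature space: split the 1st
    axis into p intervals with equal sample counts, split each slab
    along the 2nd axis into q intervals, and record the face
    probability of each of the p*q resulting subspaces."""
    vectors = np.asarray(vectors, dtype=float)
    is_face = np.asarray(is_face)
    n = len(vectors)
    assert n % (p * q) == 0
    per = n // (p * q)
    cells = []
    order1 = np.argsort(vectors[:, 0], kind="stable")
    for i in range(p):
        slab = order1[i * (n // p):(i + 1) * (n // p)]
        order2 = slab[np.argsort(vectors[slab, 1], kind="stable")]
        for j in range(q):
            cell = order2[j * per:(j + 1) * per]
            cells.append(float(is_face[cell].sum()) / per)
    return cells  # p*q probabilities, one per subspace

# Synthetic data: 900 vectors; a vector is a "face" iff its 1st
# component is at least 45, so the 45 left cells get probability 0.0
# and the 45 right cells get probability 1.0.
cells = train_2d([(i % 90, i) for i in range(900)],
                 [1 if i % 90 >= 45 else 0 for i in range(900)], 10, 9)
print(len(cells))  # -> 90
```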
Fig. 2 is a flowchart of an image processing method according to the present invention, which uses an image processing apparatus trained with the method shown in Fig. 1. The flow starts at step 201. In step 202, an image is input. In order to detect human faces in the image, a candidate face region in the image is recognized in step 203. In step 204, the data of the candidate face region are input into the image processing apparatus trained with the method shown in Fig. 1.
In step 205, the image processing apparatus applies to the data of the candidate face region the same predetermined algorithm that was used in its training process, and generates the M-dimensional vector of the candidate face region.
In step 206, the subspace into which the M-dimensional vector falls is identified among the
K1 × K2 × ... × KM
subspaces. These subspaces were formed in the training process of the image processing apparatus, and their information (for example, positions and probabilities) is stored in the image processing apparatus.
In step 207, the probability value of the identified subspace is assigned to the candidate face region recognized in step 203.
In this way, the probability of each candidate face region can easily be obtained with an image processing apparatus trained by the method shown in Fig. 1. Moreover, since the probabilities of the
K1 × K2 × ... × KM
subspaces are stored in the image processing apparatus, the amount of computation needed when detecting human faces is greatly reduced.
It should be appreciated that steps 204 to 207 merely constitute one embodiment of the present invention and do not limit it. Any conventional method can be adopted, as long as the probability that the candidate face region recognized in step 203 represents a human face can be calculated by that method.
In step 208, the probability of the candidate face region is compared with a threshold to judge whether the candidate face region represents a human face.
In step 209, the result of the judgment is stored in the image as additional information, for example in the header file or footer file of the image in a predetermined format. In step 209, identification information of the candidate face region can also be stored in the image as supplementary additional information, for example in the header file or footer file of the image in a predetermined format.
In step 210, the probability of the candidate face region is stored in the image as additional information, for example in the header file or footer file of the image in a predetermined format. In step 210, identification information of the candidate face region can also be stored in the image as supplementary additional information, for example in the header file or footer file of the image in a predetermined format.
In steps 209 and 210, the predetermined format in which the additional information and the supplementary additional information are stored is not essential and does not limit the present invention. Any conventional format or data structure for storing data can be used.
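Since the patent leaves the storage format open, one possible sketch of steps 208-210 represents the image header as a dictionary serialized to JSON. The field names and the JSON encoding are assumptions chosen for illustration, not a format defined by the patent:

```python
import json

def attach_face_info(image_header, region_id, bbox, probability, threshold=0.5):
    """Store a candidate face region's probability and its threshold
    judgment as additional information in an image 'header' (a dict
    here; the real format could equally be an Exif tag, a PNG text
    chunk, or a footer record)."""
    faces = image_header.setdefault("candidate_faces", [])
    faces.append({
        "id": region_id,
        "bbox": bbox,                       # (x, y, width, height)
        "probability": probability,         # step 210: the probability itself
        "is_face": probability > threshold  # steps 208-209: the judgment
    })
    return json.dumps(image_header)

header = {"width": 640, "height": 480}
print(attach_face_info(header, 0, (120, 80, 64, 64), 0.86))
```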
Images in which the judgment result or the probability has been stored can be used in many ways. Figs. 4 and 5 illustrate some applications of such images.
The above flow ends at step 211.
Example three
Referring now to Fig. 10, it shows two image regions B1 and B2 to be detected. As shown in Fig. 10, image region B1 represents a human face, while image region B2 does not. The following explanation shows the excellent results of the detection method of the present invention.
Take the algorithm used in Example One as an example.
If image region B1 is recognized as a candidate face region, the algorithm generates the scalar 0.75, which falls into the interval (0.73, 0.85]. Since the probability of this interval is 0.77, the probability of image region B1 is also taken as 0.77.
If image region B2 is recognized as a candidate face region, the algorithm generates the scalar 0.31, which falls into the interval (0.2, 0.32]. Since the probability of this interval is 0.16, the probability of image region B2 is also taken as 0.16.
Clearly, the probability of the candidate face region that actually represents a human face has increased (from 0.75 to 0.77), while the probability of the candidate face region that does not actually represent a human face has decreased (from 0.31 to 0.16). That is, the accuracy of the face detection of the present invention has improved.
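The interval lookup of steps 206-207, as used in Examples One and Three, amounts to a binary search over the stored interval boundaries. The boundary and probability tables below are copied from Example One; only the lookup mechanism (Python's `bisect`) is an implementation choice:

```python
import bisect

# Right boundaries and probabilities of the 10 intervals of Example One
# (left-open, right-closed intervals on the real axis).
BOUNDS = [0.11, 0.2, 0.32, 0.39, 0.45, 0.56, 0.66, 0.73, 0.85]
PROBS  = [0.05, 0.11, 0.16, 0.28, 0.32, 0.44, 0.52, 0.61, 0.77, 0.43]

def face_probability(scalar):
    """Map the scalar produced by the predetermined algorithm to the
    stored probability of the interval it falls into."""
    return PROBS[bisect.bisect_left(BOUNDS, scalar)]

print(face_probability(0.75))  # B1 -> 0.77
print(face_probability(0.31))  # B2 -> 0.16
```

`bisect_left` returns the index of the first boundary not smaller than the scalar, which matches the left-open, right-closed convention: a scalar exactly on a boundary belongs to the interval that closes there.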
Example four
Referring again to Fig. 10, it shows two image regions B1 and B2 to be detected.
Take the algorithm used in Example Two as an example.
If image region B1 is recognized as a candidate face region, the algorithm generates the two-dimensional vector (0.05, 0.11), which falls into the subspace ((-0.12, 0.09], (0.04, 0.17]). Since the probability of this subspace is 0.8, the probability of image region B1 is also taken as 0.8.
If image region B2 is recognized as a candidate face region, the algorithm generates the two-dimensional vector (-0.71, -0.66), which falls into the subspace ((-∞, -0.6], (-∞, -0.53]). Since the probability of this subspace is 0.1, the probability of image region B2 is also taken as 0.1.
A different algorithm is used in this example; compared with Example Three, the accuracy of face detection is further improved.
Fig. 3 is a flowchart of another image processing method according to the present invention, which uses a plurality of image processing apparatuses trained with the method shown in Fig. 1.
The flow starts at step 301. Then, in step 302, the data of an image are input. In step 303, a candidate face region is recognized in the input image.
In steps 304 to 306, a plurality of image processing apparatuses trained with the method shown in Fig. 1 are used to obtain a plurality of probabilities of the candidate face region, called intermediate probabilities. The number of image processing apparatuses is, for example, K, where K is an integer equal to or greater than 1. The detailed process of obtaining a probability with a single image processing apparatus is similar to the method shown in Fig. 2.
Different algorithms can be used in the different image processing apparatuses. Of course, for each image processing apparatus, the algorithm used in its training process should be the same as the algorithm used in the process of obtaining the probability.
After steps 304 to 306, K intermediate probabilities p1, p2, ..., pK are obtained.
In step 307, the probability of the candidate face region is calculated from the intermediate probabilities p1, p2, ..., pK with the following equation:
p = α (1 - (1 - p1)(1 - p2) ⋯ (1 - pK))
where α is a factor less than, but very close to, 1.
It should be appreciated that steps 304 to 307 merely constitute one embodiment of the present invention and do not limit it. Any conventional method can be adopted, as long as the probability that the candidate face region recognized in step 303 represents a human face can be calculated by that method.
In step 308, the probability of the candidate face region is compared with a threshold to judge whether the candidate face region represents a human face.
In step 309, the result of the judgment is stored in the image as additional information, for example in the header file or footer file of the image in a predetermined format. In step 309, identification information of the candidate face region can also be stored in the image as supplementary additional information, for example in the header file or footer file of the image in a predetermined format.
In step 310, the probability of the candidate face region is stored in the image as additional information, for example in the header file or footer file of the image in a predetermined format. In step 310, identification information of the candidate face region can also be stored in the image as supplementary additional information, for example in the header file or footer file of the image in a predetermined format.
In steps 309 and 310, the predetermined format in which the additional information and the supplementary additional information are stored is not essential and does not limit the present invention. Any conventional format or data structure for storing data can be used.
Images in which the judgment result or the probability has been stored can be used in many ways. Figs. 4 and 5 illustrate some applications of such images.
The above flow ends at step 311.
Example five
Refer again to Figure 10, it expresses two for the image-region B1, the B2 that detect.
As described in Examples Three and Four above, the intermediate probabilities of image region B1 are 0.77 and 0.8.
Let α be 0.9.
The probability of image region B1 is calculated as 0.9 × (1 − (1 − 0.77) × (1 − 0.8)) ≈ 0.86.
As described in Examples Three and Four above, the intermediate probabilities of image region B2 are 0.16 and 0.1.
Again let α be 0.9.
The probability of image region B2 is calculated as 0.9 × (1 − (1 − 0.16) × (1 − 0.1)) ≈ 0.22.
It can be seen from Fig. 3 and its explanation that if both K and α take the value 1, the method shown in Fig. 3 is identical to the method shown in Fig. 2.
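The probability-combination rule used in Example Five can be sketched as a small function. This is a minimal illustration; the function name and list-based interface are assumptions, but the formula p = α · (1 − Π(1 − pᵢ)) is the one applied to regions B1 and B2 above.

```python
def combine_probabilities(intermediate_probs, alpha=0.9):
    """Combine intermediate probabilities from several detectors.

    Implements p = alpha * (1 - prod(1 - p_i)), the rule of Example
    Five: the region is a face unless every detector "misses", scaled
    by a factor alpha slightly less than 1.
    """
    miss = 1.0
    for p in intermediate_probs:
        miss *= (1.0 - p)
    return alpha * (1.0 - miss)

# Image region B1: intermediate probabilities 0.77 and 0.8
print(round(combine_probabilities([0.77, 0.8]), 2))   # → 0.86
# Image region B2: intermediate probabilities 0.16 and 0.1
print(round(combine_probabilities([0.16, 0.1]), 2))   # → 0.22
```

With a single detector and α = 1 the function returns the intermediate probability unchanged, which matches the observation that the Fig. 3 method reduces to the Fig. 2 method when K = α = 1.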
Fig. 4 is a flowchart of another image processing method, applied to an image in which the probability of at least one candidate face region has been stored. The flow starts at step 401. In step 402, an image in which the probability of at least one candidate face region has been stored is received.
As mentioned above, the probability information may be stored as additional information in a predetermined format in the header file or footer file of the image.
In step 403, the probability of a candidate face region is retrieved from the image (for example, from the header file or footer file of the image).
In step 404, the retrieved probability is compared with a threshold. In step 405, based on the result of the comparison in step 404, it is judged whether the current candidate face region represents a human face. The threshold may be chosen such that any candidate face region whose probability exceeds the threshold is deemed to represent a human face.
If the judgment in step 405 is affirmative, i.e., the candidate face region represents a human face, the flow proceeds to step 406; otherwise, it proceeds to step 407.
In step 406, a dedicated processing method optimized for human faces is applied to the candidate face region. This dedicated processing may be performed by a printer, for example by a printer driver or application program for the printer 1113 in Figure 11, so that the human face is printed with improved quality. It may also be performed by a display, for example by an application program for the display 1114 in Figure 11, so that the human face is displayed with high quality.
In step 407, an ordinary processing method is applied to the candidate face region.
In step 408, the parts of the image that do not contain candidate face regions are processed. If there are other candidate face regions whose probabilities have been stored in the image, the flow returns to step 403.
The flow ends at step 409.
Fig. 5 is a flowchart of a method of recognizing a person in an image in which the probability of at least one candidate face region has been stored. The flow starts at step 501. In step 502, an image in which the probability of at least one candidate face region has been stored is received. As mentioned above, the probability information may be stored as additional information in a predetermined format in the header file or footer file of the image. In step 503, the probability of a candidate face region is retrieved from the image (for example, from the header file or footer file of the image).
In step 504, the retrieved probability is compared with a threshold. In step 505, based on the result of the comparison in step 504, it is judged whether the current candidate face region represents a human face. The threshold may be chosen such that any candidate face region whose probability exceeds the threshold is deemed to represent a human face.
If the judgment in step 505 is affirmative, i.e., the candidate face region represents a human face, the flow proceeds to step 506; otherwise, it proceeds to step 507.
In step 506, the person is recognized based on the candidate face region only, whereas in step 507 the person is recognized based on the entire image, as usual. It is easy to see that recognizing a person from a human face region alone, rather than from the entire image, greatly speeds up the recognition process and improves its accuracy.
The flow ends at step 508.
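The speed-up of step 506 comes from shrinking the recognizer's input to the face region alone. A minimal sketch, where the box layout and the recognizer callback are illustrative assumptions:

```python
import numpy as np

def recognize_person(image, face_box, recognizer):
    """Run a recognizer over the face region only (step 506) or, when
    face_box is None, over the entire image (step 507).

    face_box: (top, left, bottom, right) in pixel coordinates, or None.
    recognizer: any callable taking an image array.
    """
    if face_box is not None:
        top, left, bottom, right = face_box
        target = image[top:bottom, left:right]   # a much smaller input
    else:
        target = image                           # fall back to full image
    return recognizer(target)

image = np.zeros((480, 640), dtype=np.uint8)
# Using the input pixel count as a stand-in for recognition cost:
pixels = recognize_person(image, (100, 200, 220, 300), lambda a: a.size)
print(pixels)   # → 12000, versus 307200 for the whole image
```

Here a 120×100 face crop is processed instead of the full 480×640 frame, a roughly 25-fold reduction in pixels examined.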
Fig. 6 is a structural diagram of an image processing apparatus according to the present invention. Reference numeral 601 denotes an image input unit, 602 a candidate face region selector, 603 a vector generator, 604 a probability selector, 605 a probability memory, 606 a probability recorder, and 607 an image output unit. The key components of the apparatus shown in this figure are the vector generator 603, the probability selector 604, and the probability memory 605.
As shown in Figure 6, the components enclosed by the dashed-line box form a probability calculator. Although this probability calculator is shown as composed of the vector generator 603, the probability selector 604, and the probability memory 605, it should be appreciated that any conventional components may be used to form the probability calculator. That is, the vector generator 603, the probability selector 604, and the probability memory 605 do not limit the probability calculator. What matters is that the probability calculator calculates the probability that a candidate face region represents a human face.
The apparatus shown in Fig. 6 has been trained with the method shown in Figure 1, and the positions and probabilities of all subspaces have been stored in the probability memory 605. The probability memory 605 may take any form, such as ROM, EPROM, RAM, a hard disk, and so on. The particular storage media and storage schemes for the subspace positions and probabilities do not limit the invention.
The image input unit 601 receives an image and inputs its data into the apparatus for processing. The candidate face region selector 602 selects a portion of the input image and identifies that portion as a candidate face region. The vector generator 603 applies, to the data of the candidate face region, the predetermined algorithm used in the training process of the image processing apparatus, generating an M-dimensional vector for the candidate face region.
Because the algorithm used by the vector generator 603 is identical to the algorithm used during the training of the image processing apparatus, the M-dimensional vector must belong to a subspace whose position and probability have been stored in the probability memory 605.
The probability selector 604 retrieves a probability from the probability memory 605 according to the M-dimensional vector generated by the vector generator 603.
The probability recorder 606 writes the probability retrieved by the probability selector 604 into the processed image as additional information, for example, into its header file or footer file in a predetermined format. The probability recorder 606 may also write identification information of the candidate face region into the image as supplementary additional information, for example, into the header file or footer file of the image in a predetermined format.
The predetermined format for storing the additional information and the supplementary additional information is not critical and does not limit the invention. Any conventional format or data structure may be used to store the data.
The image output unit 607 outputs the image for further processing.
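The cooperation of the vector generator 603, probability selector 604, and probability memory 605 can be sketched as follows. The quantization used here is a stand-in for the patent's unnamed training algorithm, and the dict-based probability memory is an illustrative assumption; the point is only the shape of the lookup.

```python
import numpy as np

class ProbabilityCalculator:
    """Sketch of the dashed-box components of Fig. 6: a vector
    generator plus a probability lookup keyed by subspace position."""

    def __init__(self, subspace_probs, bins_per_dim=4):
        # Probability memory 605: maps a quantized M-dimensional
        # vector (a subspace position learned in training) to the
        # probability stored for that subspace.
        self.subspace_probs = subspace_probs
        self.bins = bins_per_dim

    def generate_vector(self, region):
        # Vector generator 603: must apply the same algorithm as in
        # training. Here: per-row mean intensities, quantized.
        means = region.mean(axis=1) / 256.0
        idx = np.minimum((means * self.bins).astype(int), self.bins - 1)
        return tuple(int(i) for i in idx)

    def probability(self, region):
        # Probability selector 604: look the subspace up in the
        # probability memory; unseen subspaces default to 0.
        return self.subspace_probs.get(self.generate_vector(region), 0.0)

calc = ProbabilityCalculator({(2, 2): 0.8})
region = np.full((2, 3), 128, dtype=np.uint8)   # quantizes to subspace (2, 2)
print(calc.probability(region))   # → 0.8
```

Because both training and inference route a region through the same `generate_vector`, every query lands in a subspace the memory already knows about, mirroring the argument made for the vector generator 603 above.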
Fig. 7 is a structural diagram of another image processing apparatus according to the present invention. Reference numeral 701 denotes an image input unit, 702 a candidate face region selector, 703 a vector generator, 704 a probability selector, and 705 a probability memory. The functions of these components are identical to those of the corresponding components shown in Fig. 6.
As shown in Figure 7, the components enclosed by the dashed-line box form a probability calculator. Although this probability calculator is shown as composed of the vector generator 703, the probability selector 704, and the probability memory 705, it should be appreciated that any conventional components may be used to form the probability calculator. That is, the vector generator 703, the probability selector 704, and the probability memory 705 do not limit the probability calculator. What matters is that the probability calculator calculates the probability that a candidate face region represents a human face.
Reference numeral 706 denotes a judging unit, 707 a judgment result recorder, and 708 an image output unit. The judging unit 706 compares the probability retrieved by the probability selector 704 with a threshold and judges whether the candidate face region represents a human face. The judgment result recorder 707 writes the judgment result output by the judging unit 706 into the image as additional information, for example, into the header file or footer file of the image in a predetermined format. The judgment result recorder 707 may also write identification information of the candidate face region into the image as supplementary additional information, for example, into the header file or footer file of the image in a predetermined format.
The predetermined format for storing the additional information and the supplementary additional information is not critical and does not limit the invention. Any conventional format or data structure may be used to store the data.
The image output unit 708 outputs the image for further processing.
Fig. 8 is a schematic block diagram of an apparatus for processing an image in which the probability of at least one candidate face region has been stored. Reference numeral 801 denotes an image input unit, 802 a probability extractor, 803 a judging and control unit, 804 an image processing unit, 805 an algorithm for human faces, 806 an algorithm for ordinary images, and 807 an image output unit.
The image input unit 801 receives an image and inputs its data into the apparatus for processing. The probability of at least one candidate face region has been stored in the image, for example in step 210 of Fig. 2 or step 310 of Fig. 3, or by the probability recorder 606 of Fig. 6.
The probability extractor 802 retrieves the probability of a candidate face region from the image, for example from the header file or footer file of the image. If identification information of the candidate face region has been stored in the image, the probability extractor 802 also retrieves that identification information from the image, and it will be used by the image processing unit 804.
The judging and control unit 803 compares the retrieved probability with a threshold, determines from the comparison result whether the current candidate face region represents a human face, and controls the image processing unit 804 accordingly.
Under the control of the judging and control unit 803, the image processing unit 804 applies different algorithms, such as the algorithm 805 for human faces and the algorithm 806 for ordinary images, to process the image input by the image input unit 801. If the judging and control unit 803 judges that the candidate face region represents a human face, the image processing unit 804 applies the algorithm 805 for human faces to the candidate face region identified by the identification information retrieved from the image; otherwise it applies the algorithm 806 for ordinary images. The image processing unit 804 is, for example, a component in a printer or display for processing data to be printed or displayed, or a piece of equipment for recognizing objects or people.
The image output unit 807 outputs the image for further processing.
Figure 11 shows an image processing system in which each of the methods shown in Figs. 1 to 5 can be implemented. The image processing system shown in Figure 11 comprises a CPU (central processing unit) 1101, a RAM (random access memory) 1102, a ROM (read-only memory) 1103, a system bus 1104, an HD (hard disk) controller 1105, a keyboard controller 1106, a serial interface controller 1107, a parallel interface controller 1108, a display controller 1109, a hard disk 1110, a keyboard 1111, a camera 1112, a printer 1113, and a display 1114. Among these components, the CPU 1101, RAM 1102, ROM 1103, HD controller 1105, keyboard controller 1106, serial interface controller 1107, parallel interface controller 1108, and display controller 1109 are connected to the system bus 1104. The hard disk 1110 is connected to the HD controller 1105, the keyboard 1111 to the keyboard controller 1106, the camera 1112 to the serial interface controller 1107, the printer 1113 to the parallel interface controller 1108, and the display 1114 to the display controller 1109.
The function of each component in Figure 11 is well known in the art, and the architecture shown in Figure 11 is also conventional. This architecture applies not only to personal computers but also to handheld devices, such as palmtop PCs, PDAs (personal digital assistants), digital cameras, and so on. In different applications, some of the components shown in Figure 11 may be omitted. For example, if the whole system is a digital camera, the parallel interface controller 1108 and the printer 1113 may be omitted, and the system may be implemented as a single-chip microcomputer. If the application software is stored in EPROM or another nonvolatile memory, the HD controller 1105 and the hard disk 1110 may be omitted.
The whole system shown in Figure 11 is usually controlled by computer-readable instructions stored as software in the hard disk 1110 (or, as mentioned above, in EPROM or another nonvolatile memory). The software may also be downloaded from a network (not shown). The software, whether stored in the hard disk 1110 or downloaded from a network, may be loaded into the RAM 1102 and executed by the CPU 1101 to perform the functions determined by the software.
A person skilled in the art can, without creative effort, develop one or more pieces of software based on one or more of the flowcharts shown in Figs. 1 to 5. Software so developed will carry out the method of training an image processing apparatus shown in Fig. 1, the methods of processing an image shown in Fig. 2, Fig. 3, or Fig. 4, or the method of recognizing a person in an image shown in Fig. 5.
In a sense, the image processing system shown in Figure 11, when supported by software developed according to the flowcharts of Figs. 1 to 5, can achieve the same functions as the image processing apparatus shown in Figs. 6 to 8.
Although the foregoing refers to specific embodiments of the present invention, those skilled in the art should appreciate that these are described only by way of example; many changes can be made to these embodiments without departing from the principles of the invention, the scope of which is determined by the appended claims.

Claims (4)

1. An image processing method, characterized by comprising the following steps:
identifying a candidate face region in said image;
calculating the probability that said candidate face region represents a human face; and
storing said probability and position information of the candidate face region, as additional information in a predetermined format, in a header file or footer file of said image.
2. The image processing method according to claim 1, wherein said calculating step comprises:
a step of generating a multi-dimensional vector based on image data of said candidate face region; and
a step of calculating said probability by using said multi-dimensional vector.
3. The image processing method according to claim 1, wherein said calculating step comprises:
a step of calculating a plurality of intermediate probabilities of said candidate face region by using a plurality of image processing apparatuses; and
a step of calculating said stored probability based on the plurality of intermediate probabilities.
4. An image processing apparatus, characterized by comprising:
a candidate face region selector for identifying a candidate face region in said image;
a probability calculator for calculating the probability that said candidate face region represents a human face; and
a probability recorder for storing said probability and position information of the candidate face region, as additional information in a predetermined format, in a header file or footer file of said image.
CNB021554684A 2002-12-13 2002-12-13 Image processing method and apparatus Expired - Fee Related CN1306456C (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CNB021554684A CN1306456C (en) 2002-12-13 2002-12-13 Image processing method and apparatus
US10/716,671 US7415137B2 (en) 2002-12-13 2003-11-20 Image processing method, apparatus and storage medium
JP2003412157A JP2004199673A (en) 2002-12-13 2003-12-10 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB021554684A CN1306456C (en) 2002-12-13 2002-12-13 Image processing method and apparatus

Publications (2)

Publication Number Publication Date
CN1508752A CN1508752A (en) 2004-06-30
CN1306456C true CN1306456C (en) 2007-03-21

Family

ID=34235921

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB021554684A Expired - Fee Related CN1306456C (en) 2002-12-13 2002-12-13 Image processing method and apparatus

Country Status (1)

Country Link
CN (1) CN1306456C (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060274949A1 (en) * 2005-06-02 2006-12-07 Eastman Kodak Company Using photographer identity to classify images
CN100336070C (en) * 2005-08-19 2007-09-05 清华大学 Method of robust human face detection in complicated background image
ITMO20060093A1 (en) * 2006-03-21 2007-09-22 System Spa METHOD FOR IDENTIFYING DISUNIFORM AREAS ON A SURFACE
US7796163B2 (en) 2006-07-25 2010-09-14 Fujifilm Corporation System for and method of taking image based on objective body in a taken image
CN101398832A (en) 2007-09-30 2009-04-01 国际商业机器公司 Image searching method and system by utilizing human face detection
CN103875018B (en) * 2012-09-12 2017-02-15 株式会社东芝 Information processing device and information processing method
CN108537165A (en) * 2018-04-08 2018-09-14 百度在线网络技术(北京)有限公司 Method and apparatus for determining information
CN111767760A (en) * 2019-04-01 2020-10-13 北京市商汤科技开发有限公司 Living body detection method and apparatus, electronic device, and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1128316A1 (en) * 2000-02-28 2001-08-29 Eastman Kodak Company Face detecting and recognition camera and method
CN1352436A (en) * 2000-11-15 2002-06-05 星创科技股份有限公司 Real-time face identification system

Also Published As

Publication number Publication date
CN1508752A (en) 2004-06-30

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20070321

Termination date: 20161213