JP4158153B2 - Face image detection method - Google Patents

Face image detection method

Info

Publication number
JP4158153B2
Authority
JP
Japan
Prior art keywords
face
image
pixel density
candidate
processing step
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP2003398984A
Other languages
Japanese (ja)
Other versions
JP2005157964A (en)
Inventor
Kenji Tanaka (田中 賢二)
Current Assignee
Victor Company of Japan Ltd
Original Assignee
Victor Company of Japan Ltd
Priority date
Filing date
Publication date
Application filed by Victor Company of Japan Ltd filed Critical Victor Company of Japan Ltd
Priority to JP2003398984A priority Critical patent/JP4158153B2/en
Publication of JP2005157964A publication Critical patent/JP2005157964A/en
Application granted granted Critical
Publication of JP4158153B2 publication Critical patent/JP4158153B2/en

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Description

The present invention relates to a face image detection method for detecting a human face image in a color image obtained from an imaging device such as a video camera or a still camera.

Conventionally, when detecting a human face image in a digital image, one method uses color to detect a skin-colored region and to check whether there is hair above it (see, for example, Patent Document 1 below).
JP 2000-187721 A (FIG. 1)

In the conventional face image detection method described above, a skin color region is extracted from the digital image, and the method then checks, for example, whether there is black (hair) above the region or whether facial features are present within it.

Other methods likewise require substantial processing, because templates of many sizes must be prepared and pattern matching repeated in order to extract facial features.
Moreover, the natural world contains many colors resembling skin color; once brightness is also taken into account, deciding how much of the color space to treat as skin color is itself a problem.

The present invention was made in view of the above points. Its object is to provide a face image detection method that drastically reduces the number of templates needed to extract facial features and that is robust to changes in brightness, face orientation, and face size.

To achieve this object, a face image detection method according to the present invention comprises: a pixel density conversion processing step of converting the pixel density of a color image captured by an imaging device to a predetermined, unified pixel density; a binarization processing step of binarizing the converted image using a threshold set on the basis of the standard deviation of skin color information; a feature point extraction processing step of detecting one candidate eye in the binarized image, comparing it with a plurality of templates corresponding to predetermined eye sizes, detecting the other candidate eye within a search range associated with the matching template, identifying a face region candidate on the basis of the distance between the detected eyes and the angle between the line through both eyes and a reference direction, and extracting feature points from the identified face region candidate; and a recognition processing step of judging, from information contained in the face region candidate after feature point extraction, whether the candidate is a face.

With this configuration, images containing a face are unified to a predetermined size, and binarization based on the standard deviation of skin color information makes the facial features stand out. The number of templates needed to extract facial features can therefore be drastically reduced, and faces can be detected quickly and stably even when the illumination brightness, color temperature, face size, or face orientation changes during imaging.

In the recognition processing step, the left-right symmetry of the face region may be used in judging whether the image is a face.

With this configuration, face recognition accuracy can be increased.

According to the face image detection method of the present invention, images containing a face are unified to a predetermined size, binarization based on the standard deviation of skin color information makes the facial features stand out, the number of templates needed to extract facial features is drastically reduced, and face images can be detected robustly against changes in brightness, face orientation, and size.

Embodiments of the present invention will now be described with reference to the drawings. FIG. 1 is a flowchart showing the overall processing of the face image detection method according to an embodiment of the present invention. As shown in FIG. 1, the method comprises: a pixel density conversion processing step S1 that converts the pixel density of a color image captured by an imaging device (not shown) to a predetermined, unified pixel density; a binarization processing step S2 that binarizes the converted image using a threshold set on the basis of the standard deviation of skin color information; a feature point extraction processing step S3 that detects both eyes in the binarized image, identifies a face region candidate on the basis of the detected eye information, and extracts feature points from the identified candidate; and a recognition processing step S4 that judges, from information contained in the candidate after feature point extraction, whether it is a face.

Each processing step is described in detail below. First, pixel density conversion is applied to a color image obtained from an imaging device such as a digital still camera (step S1). The pixel density of such images varies greatly with camera performance and shooting conditions, which would otherwise require a large number of templates for template matching, so the pixel density must be normalized first. In step S1, landscape and portrait images are treated alike: considering processing time and accuracy, the number of pixels along the long side is unified to a predetermined value, for example 380.
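The size unification in step S1 can be sketched as follows. The function name and the rounding of the short side are illustrative assumptions; the patent fixes only the long side (for example, at 380 pixels):

```python
def normalized_size(width, height, long_side=380):
    """Scale an image's dimensions so that its long side equals
    `long_side`, preserving the aspect ratio (step S1)."""
    scale = long_side / max(width, height)
    return round(width * scale), round(height * scale)
```

Both a 1600x1200 landscape image and a 1200x1600 portrait image end up with a 380-pixel long side, so a single set of eye templates can cover both.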

Next, the pixel-density-converted image is binarized (step S2). Color information is not strictly necessary for detecting a human face image; detection is possible even from a black-and-white binary image. Binarization is therefore performed so as to extract the features needed for face detection while reducing memory cost and processing load, contributing to speed. However, binarization also discards much information depending on how the threshold is set, so the key point is how to retain only the necessary information. In other words, setting the binarization threshold is critical.

To bring out the features of a human face, the skin-colored parts should be removed while the eyes, nose, and mouth are retained; to this end, the skin color is extracted and its tone is used as the threshold. The binarization process is described below following the flowchart in FIG. 2.

Step S21: Skin color extraction
Depending on shooting conditions the white balance may be far off, so the skin color range in the color space must be made quite wide to extract the facial skin color reliably. Widening the range, however, also captures the colors of other objects in the image; to reduce this effect, skin color is extracted only from around the center of the image, where a face is most likely to be. For the skin color range, hue H, saturation S, and brightness V are used to simplify the calculation.

Hue H, saturation S, and brightness V are obtained as follows.
With each pixel's RGB value expressed in 8 bits (0-255), let
vRed: the red value,
vGreen: the green value,
vBlue: the blue value,
and let Mx and Mn be the maximum and minimum of these three values. Then H, S, and V are computed as follows.

Saturation S:
S = (Mx − Mn) / Mx   (when Mx ≠ 0)
S = 0                (when Mx = 0)

Hue H:
rc = (Mx − vRed) / (Mx − Mn)
gc = (Mx − vGreen) / (Mx − Mn)
bc = (Mx − vBlue) / (Mx − Mn)
When Mx = vRed:   H = bc − gc
When Mx = vGreen: H = 2 + rc − bc
When Mx = vBlue:  H = 4 + gc − rc
H = H × 60
If H < 0: H = H + 360
The above applies when S ≠ 0; when S = 0, H = 0 (the hue is undefined).

Brightness V:
V = Mx
The skin color range is determined taking into account variation due to lighting conditions.
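The H, S, V computation of step S21 can be written directly in Python; the function name is illustrative:

```python
def rgb_to_hsv(v_red, v_green, v_blue):
    """Compute hue H (degrees), saturation S, and brightness V from
    8-bit RGB values, following the formulas of step S21."""
    mx = max(v_red, v_green, v_blue)
    mn = min(v_red, v_green, v_blue)
    v = mx                                 # brightness V = Mx
    s = (mx - mn) / mx if mx != 0 else 0.0  # saturation
    if s == 0:
        return 0.0, s, v                   # hue undefined; treated as 0
    rc = (mx - v_red) / (mx - mn)
    gc = (mx - v_green) / (mx - mn)
    bc = (mx - v_blue) / (mx - mn)
    if mx == v_red:
        h = bc - gc
    elif mx == v_green:
        h = 2 + rc - bc
    else:
        h = 4 + gc - rc
    h *= 60
    if h < 0:
        h += 360
    return h, s, v
```

Pure red, green, and blue map to hues 0, 120, and 240 degrees respectively, and any gray pixel has S = 0 with the hue conventionally set to 0.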

Step S22: Calculation of the skin color standard deviation
The average Vm of the brightness V of the skin color pixels extracted in step S21 is computed; with N the number of extracted values, the standard deviation σ of equation (1) is obtained.

(1)  σ = √( Σ (Vi − Vm)² / N )

Step S23: Determination of the threshold
The threshold Vs is set using the skin color standard deviation σ from equation (1) as a guide, and is adjusted by a constant K, as in equation (2), so that the features are slightly emphasized.

(Equation (2): the threshold Vs, derived from σ and adjusted by the constant K; shown only as an image in the original.)

Step S24: Binarization
The pixel-density-converted image is binarized using the threshold determined above.
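Steps S22-S24 can be sketched together. The exact threshold formula of equation (2) appears only as an image in the patent, so the form Vs = Vm − K·σ used below is an assumed placeholder; only the standard deviation of equation (1) is taken directly from the text:

```python
import math

def binarize_by_skin_stats(image_v, skin_v, k=0.5):
    """Sketch of steps S22-S24: compute the mean Vm and standard
    deviation sigma of the extracted skin pixels' brightness, derive a
    threshold, and binarize a brightness image (1 = dark feature such
    as eyes or hair). Vs = Vm - k*sigma is an assumption; k plays the
    role of the feature-emphasis constant K."""
    n = len(skin_v)
    vm = sum(skin_v) / n
    sigma = math.sqrt(sum((v - vm) ** 2 for v in skin_v) / n)  # eq. (1)
    vs = vm - k * sigma                                        # assumed eq. (2)
    return [[1 if v < vs else 0 for v in row] for row in image_v]
```

Because the threshold tracks the statistics of the skin pixels actually found in the image, the same code adapts to bright and dark shots without per-image tuning.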

After the binarization of FIG. 2, processing returns to FIG. 1 and feature point extraction is performed (step S3), following the flowchart in FIG. 3. In this embodiment, the eyes, nose, mouth, and hair are considered as facial feature points; among these the eyes are the most important, so the eyes are extracted first and used to determine the location, size, and angle of the face.

Step S31: First, eye template matching is performed. The difficulty in extracting eyes is that, unlike ID photographs such as those on licenses and passports, images taken with a digital still camera do not have a unified face size, so the eye size is not uniform either. In this embodiment, therefore, three eye templates of different sizes (large, medium, and small) are prepared, as shown in FIGS. 4(a)-(c). Each template is compared against parts of the target image to compute a matching degree, and locations exceeding a predetermined threshold are stored in memory. A search range is then set for each template size, and only the locations with a high matching degree are kept as eye locations.
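The sliding-window matching of step S31 can be sketched as follows. The matching measure (fraction of equal pixels) and all names are illustrative assumptions; the patent does not give the exact formula:

```python
def match_template(image, template, threshold=0.8):
    """Slide a small binary eye template over a binary image and record
    (x, y, score) for positions whose matching degree - here the
    fraction of equal pixels, an assumed measure - meets the threshold."""
    th, tw = len(template), len(template[0])
    total = th * tw
    hits = []
    for y in range(len(image) - th + 1):
        for x in range(len(image[0]) - tw + 1):
            equal = sum(
                image[y + j][x + i] == template[j][i]
                for j in range(th) for i in range(tw)
            )
            if equal / total >= threshold:
                hits.append((x, y, equal / total))
    return hits
```

Running the same loop with the large, medium, and small templates yields the candidate eye locations that step S32 pairs into left/right eyes.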

Step S32: Next, both eyes are detected. The eye locations remaining from the previous step are searched: each found location is first assumed to be the right eye, and the left eye is searched for based on the size of the matching template. To allow for face orientation, tilt, and individual differences, a search range is defined for each template size as shown in FIG. 5; this tolerates a certain amount of face rotation and tilt. The candidate at the shortest distance within the search range is selected, giving the coordinates of the right eye candidate (the starting point of the detection) and of the detected left eye candidate. A face region candidate is then determined from this result.

Step S33: Next, the face region candidate is identified. For the face region shown in FIG. 6, the size, position, and angle of the face that should be present are estimated from the detected inter-eye distance W and angle θ, and the face region is obtained by an affine transformation. Many non-face structures in a real image also produce feature responses, and examining all of them in detail would take too long, so the face region candidates are first screened coarsely. Specifically, (a) the skin color content rate, (b) the hair presence rate, and (c) the glabella (between-the-eyes) presence rate are each computed as described below, and a candidate is identified by checking whether these exceed certain thresholds.

However, since the hair presence rate may be low when, for example, a hat is worn, the threshold on the total points used for identification is set somewhat low so that as many candidates as possible are retained.

(a) Skin color content rate
Within the rectangular face region shown in FIG. 6, the skin color pixels Ds falling inside the skin color region predefined in HSV color space are counted, and the skin color content rate Fsr is obtained as their ratio to the total number of pixels Nf in the face region, according to equation (3).

(3)  Fsr = Ds / Nf

(b) Hair presence rate
Next, in FIG. 7, which shows the face region cut out on the basis of the inter-eye distance W of FIG. 6, the hair presence rate Fhr is obtained according to equation (4) as the proportion of black (hair) pixels Dh among the total pixel count Nba of the rectangular head region slightly above the line connecting the eyes (width: W + 2·(W/2), height: W).

(4)  Fhr = Dh / Nba

(c) Glabella presence rate
Similarly, the glabella presence rate Fbr, the proportion of white pixels Db among the total pixel count Nba of the glabella region between the eyes, is examined according to equation (5).

(5)  Fbr = Db / Nba
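The three screening rates of step S33 are simple ratios and can be sketched as follows. The pixel lists and predicates are illustrative; in the patent the regions are defined geometrically from the inter-eye distance W:

```python
def region_rates(face_pixels, head_pixels, glabella_pixels, is_skin, is_black):
    """Sketch of equations (3)-(5): skin color content rate Fsr over the
    face region, hair presence rate Fhr over the head region, and
    glabella presence rate Fbr (white, i.e. non-black, pixels) over the
    region between the eyes."""
    fsr = sum(map(is_skin, face_pixels)) / len(face_pixels)    # eq. (3)
    fhr = sum(map(is_black, head_pixels)) / len(head_pixels)   # eq. (4)
    fbr = (sum(1 for p in glabella_pixels if not is_black(p))
           / len(glabella_pixels))                             # eq. (5)
    return fsr, fhr, fbr
```

A candidate passes this coarse screening when each rate exceeds its threshold, with the total kept deliberately permissive so that, for example, a face under a hat is not discarded.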

Step S34: Next, features are extracted. Even after face region candidates are determined, many parts of a real image satisfy the conditions above, so further features are extracted and converted into points, as follows, to raise the confidence that a candidate is a face.

(d) Degree of matching with the face pattern
Since faces to be detected come in many sizes, face patterns are not prepared for all of them. Instead, the scaling factor and angle are obtained from the inter-eye distances of the face region and the template, coordinates are computed by the affine transformation of equation (6), and fast matching is performed by taking the exclusive OR with the binarized real image, yielding the face pattern matching degree Fm of equation (7).

(Equation (6): the affine transformation mapping template coordinates to image coordinates using the scaling factor and angle; shown only as an image in the original.)

(Equation (7): the face pattern matching degree Fm, computed via the exclusive OR of the face pattern and the binarized image; shown only as an image in the original.)
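One plausible reading of the XOR-based matching degree can be sketched as follows. The patent shows equation (7) only as an image, so taking Fm as the fraction of agreeing pixels is an assumption:

```python
def pattern_match_degree(image_patch, pattern):
    """Assumed form of Fm: the fraction of pixels where the binary face
    pattern and the binarized image patch agree, i.e. where their
    exclusive OR is 0."""
    total = sum(len(row) for row in pattern)
    differ = sum(
        a ^ b
        for row_img, row_pat in zip(image_patch, pattern)
        for a, b in zip(row_img, row_pat)
    )
    return 1 - differ / total
```

XOR on binary pixels is a single bit operation per pixel, which is what makes this matching fast compared with gray-level correlation.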

(e) Symmetry
The left-right symmetry of the face region is defined as the degree to which the region matches itself when folded horizontally about its center; the symmetry Fsym is obtained according to equation (8).

(Equation (8): the symmetry Fsym, computed by comparing the face region with its horizontal mirror image; shown only as an image in the original.)
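A plausible form of the symmetry measure can be sketched as follows. Since equation (8) appears only as an image, defining Fsym as the fraction of pixels equal to their horizontal mirror counterpart is an assumption:

```python
def symmetry_degree(region):
    """Assumed form of Fsym: fold the binary face region about its
    vertical center line and return the fraction of pixels that equal
    their mirror counterpart (1.0 = perfectly symmetric)."""
    total = sum(len(row) for row in region)
    same = sum(
        1
        for row in region
        for a, b in zip(row, reversed(row))
        if a == b
    )
    return same / total
```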

(f) Total points
As shown in equation (9), the values of equations (7) and (8) and the result of equation (3) are summed to give the total points Ft, which is used to select among overlapping detected face regions.

(9)  Ft = Fsr + Fm + Fsym

Next, returning to FIG. 1, recognition is performed (step S4). For recognition, the feature points of each face region candidate are computed and screened against thresholds: a candidate is accepted as a face region when the results of equations (3) and (7)-(9) satisfy equation (10). In equation (10), I denotes a tautology (a condition that is always true). As equation (10) shows, the recognition processing step judges a candidate to be a face when the skin color content rate Fsr, the symmetry Fsym, the face pattern matching degree Fm, and the total points Ft each satisfy their comparison condition against a threshold.

(Equation (10): the conjunction of the threshold conditions on Fsr, Fsym, Fm, and Ft; shown only as an image in the original.)
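The final decision of step S4 can be sketched as a conjunction of threshold tests. All threshold values below are illustrative assumptions; the patent does not publish concrete numbers:

```python
def is_face(fsr, fm, fsym, t_fsr=0.5, t_fm=0.7, t_fsym=0.6, t_ft=2.0):
    """Sketch of step S4 / equation (10): accept a candidate when the
    skin color content rate, the pattern matching degree, the symmetry,
    and the total points Ft = Fsr + Fm + Fsym (equation (9)) each meet
    their threshold. Threshold values are illustrative."""
    ft = fsr + fm + fsym  # equation (9)
    return fsr >= t_fsr and fm >= t_fm and fsym >= t_fsym and ft >= t_ft
```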

In some cases many face regions remain around the face; among these, the one with the highest total points is selected as the correct face region.

According to the present invention, images containing a face are unified to a predetermined size, binarization based on the standard deviation of skin color information makes the facial features stand out, and the number of templates needed to extract facial features is drastically reduced; faces can be detected quickly and stably even when the illumination brightness, color temperature, face size, or face orientation changes during imaging. The present invention is therefore applicable to face image detection apparatuses.

FIG. 1 is a flowchart showing the overall processing of the face image detection method according to an embodiment of the present invention.
FIG. 2 is a flowchart showing the binarization process of FIG. 1.
FIG. 3 is a flowchart showing the feature point extraction process of FIG. 1.
FIG. 4 shows the eye detection templates used in the embodiment.
FIG. 5 shows the both-eye detection search range in the embodiment.
FIG. 6 shows the face region in the embodiment.
FIG. 7 shows the cutting out of the face region in the embodiment.

Explanation of symbols

S1: pixel density conversion processing step
S2: binarization processing step
S3: feature point extraction processing step
S4: recognition processing step

Claims (2)

1. A face image detection method comprising:
a pixel density conversion processing step of performing pixel density conversion so as to unify the pixel density of a color image captured by an imaging device to a predetermined pixel density;
a binarization processing step of binarizing the pixel-density-converted image on the basis of a threshold set from the standard deviation of skin color information;
a feature point extraction processing step of detecting one candidate eye in the binarized image, comparing it with a plurality of templates corresponding to predetermined eye sizes, detecting the other candidate eye within a search range corresponding to the matching template, identifying a face region candidate on the basis of the distance between the detected eyes and the angle between the direction through both eyes and a reference direction, and extracting feature points from the identified face region candidate; and
a recognition processing step of judging, from information contained in the face region candidate after the feature point extraction processing, whether the face region candidate is a face.
2. The face image detection method according to claim 1, wherein the recognition processing step uses the left-right symmetry of the face region in judging whether the image is a face.
JP2003398984A 2003-11-28 2003-11-28 Face image detection method Expired - Fee Related JP4158153B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2003398984A JP4158153B2 (en) 2003-11-28 2003-11-28 Face image detection method


Publications (2)

Publication Number Publication Date
JP2005157964A JP2005157964A (en) 2005-06-16
JP4158153B2 true JP4158153B2 (en) 2008-10-01

Family

ID=34723668

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2003398984A Expired - Fee Related JP4158153B2 (en) 2003-11-28 2003-11-28 Face image detection method

Country Status (1)

Country Link
JP (1) JP4158153B2 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4630749B2 (en) * 2005-07-26 2011-02-09 キヤノン株式会社 Image output apparatus and control method thereof
JP5129457B2 (en) * 2006-03-03 2013-01-30 富士通株式会社 Manufacturer determination program and manufacturer determination apparatus
JP2009110048A (en) * 2007-10-26 2009-05-21 Seiko Epson Corp Setting of face area
JP2009237618A (en) * 2008-03-25 2009-10-15 Seiko Epson Corp Detection of face area in image
JP4985510B2 (en) * 2008-03-25 2012-07-25 セイコーエプソン株式会社 Set the face area corresponding to the face image in the target image
WO2013053111A1 (en) * 2011-10-12 2013-04-18 Qualcomm Incorporated Detecting counterfeit print material with camera-equipped computing device

Also Published As

Publication number Publication date
JP2005157964A (en) 2005-06-16


Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20060331

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20071219

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20080111

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20080305

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20080404

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20080530

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20080620


A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20080703

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110725

Year of fee payment: 3

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120725

Year of fee payment: 4


S111 Request for change of ownership or part of ownership

Free format text: JAPANESE INTERMEDIATE CODE: R313111


R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350


FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130725

Year of fee payment: 5

LAPS Cancellation because of no payment of annual fees