JP2011154717A - Facial image processing device - Google Patents

Facial image processing device

Info

Publication number
JP2011154717A
Authority
JP
Japan
Prior art keywords
image
face
pupil
facial expression
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2011092307A
Other languages
Japanese (ja)
Other versions
JP5017476B2 (en)
Inventor
Hiroshi Sukegawa
寛 助川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Original Assignee
Toshiba Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Corp filed Critical Toshiba Corp
Priority to JP2011092307A priority Critical patent/JP5017476B2/en
Publication of JP2011154717A publication Critical patent/JP2011154717A/en
Application granted granted Critical
Publication of JP5017476B2 publication Critical patent/JP5017476B2/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Abstract

PROBLEM TO BE SOLVED: To provide a facial image processing device that automatically determines the expression of a face and obtains a desired image.

SOLUTION: A plurality of images of a person, including a face image, are input. The face area of the person is extracted from each input image. An evaluation value of the expression in the face area is calculated for each extracted image. Using the evaluation values, it is determined for each image whether the facial expression of the person shown in the input image is one that satisfies the photographer. Using the determination results, the image whose facial expression satisfies the photographer is selected from among the plurality of images and output.

COPYRIGHT: (C)2011,JPO&INPIT

Description

Embodiments described herein relate generally to a face image processing apparatus.

Recently, digital imaging devices such as electronic still cameras have spread remarkably and are widely used in a variety of fields.
For example, when photographing people with an electronic still camera, videophone, or surveillance camera, and one wants to capture the faces of one or more persons at the moment when the face orientation and the state of the eyes, mouth, and so on are as desired, the usual methods are either to have the subjects adjust their faces to the desired state or, with surveillance cameras and the like, to record continuously at all times onto videotape or similar media and to pick the optimal images later by visual inspection.

However, when photographing one or more persons, obtaining the image the photographer wants requires telling the subjects the desired facial state in advance and having them assume it, and when photographing a group, if even one person turns out to be in an unsuitable state, the picture must be retaken. Consequently, photography is very difficult when the subjects should not know they are being photographed, as in surveillance, or when there are several people whose facial states never all line up at the same time.

The problem to be solved by the present invention is to provide a face image processing apparatus capable of automatically judging facial expressions and acquiring a desired image.

The face image processing apparatus according to the embodiment comprises: image input means for inputting a plurality of images of a person including a face image; face area extraction means for extracting the face area of the person from each image input by the image input means; expression evaluation means for obtaining an evaluation value of the expression in the face area of each image extracted by the face area extraction means; expression determination means for determining for each image, using the evaluation values obtained by the expression evaluation means, whether the facial expression of the person shown in the image input from the image input means is the expression desired by the photographer; and image selection means for selecting and outputting, using the determination results of the expression determination means, the image among the plurality of images in which the facial expression of the person is determined to be the expression desired by the photographer.

FIG. 1 is a configuration diagram showing an example of the system according to the embodiment.
FIG. 2 is a block diagram along the processing flow of the system according to the embodiment.
FIG. 3 is an explanatory diagram of the processing of the face area extraction unit according to the embodiment.
FIG. 4 is an explanatory diagram of the circular separability filter processing of the pupil detection unit according to the embodiment.
FIG. 5 is an explanatory diagram of the positional relationship between the pupils, nostrils, and mouth in the pupil detection unit and nostril detection unit according to the embodiment.
FIG. 6 is an explanatory diagram of the detection processing of the pupil detection unit according to the embodiment.
FIG. 7 is an explanatory diagram of the detection processing of the mouth detection unit according to the embodiment.
FIG. 8 is an explanatory diagram of the determination processing of the pupil state determination unit according to the embodiment.
FIG. 9 is a flowchart showing the determination processing of the pupil state determination unit according to the embodiment.
FIG. 10 is an explanatory diagram of the determination processing of the pupil state determination unit according to the embodiment.
FIG. 11 is a flowchart explaining the determination processing of the mouth state determination unit according to the embodiment.
FIG. 12 is an explanatory diagram of the determination processing of the face state determination unit according to the embodiment.
FIG. 13 is an explanatory diagram of the size correction processing of the face size correction unit according to the embodiment.
FIG. 14 is a diagram showing the captured-image selection screen and interface according to the embodiment.

Hereinafter, embodiments will be described with reference to the drawings.
First, an embodiment is described of an apparatus that uses the present method to recognize the facial state (expression) of one or more persons contained in a sequence of images input from a TV camera or electronic still camera, and to photograph faces in the state the photographer desires.

(1) Overview of the overall processing of the embodiment
FIG. 1 is a configuration diagram showing an example of the system according to the embodiment. In FIG. 1, the present embodiment consists of a TV camera and monitor 1, apparatuses 2 and 3 comprising a PC (or workstation), or an apparatus 4 that houses computing and storage hardware equivalent to a PC inside a portable housing such as an electronic still camera and is equipped with a small display such as an LCD or plasma panel.

FIG. 2 is a block diagram along the processing flow of the system according to the embodiment. In FIG. 2, the system comprises an image input unit 11, an image storage unit 12, a face area extraction unit 13, a pupil detection unit 14, a nostril detection unit 15, a mouth detection unit 16, a pupil state determination unit 17, a mouth state determination unit 18, a face state determination unit 19, an attribute-specific counting unit 20, an optimal image capturing unit 21, an optimal image composition unit 22, a face size correction unit 23, and an output unit 24.

In such a system, the image processing of this embodiment proceeds as follows. A digitized image is input from the image input unit 11, and its contents are stored consecutively in the image storage unit 12. Applying the face area extraction unit 13 to the input image extracts the faces of the one or more persons present in it, and in each extracted face area the pupil detection unit 14, the nostril detection unit 15, and the mouth detection unit 16 detect the eyes, nose, and mouth within the face. Once each facial part has been detected, the pupil state determination unit 17 and the mouth state determination unit 18 determine the open/closed state of the pupils, the gaze direction, the open/closed state of the mouth, and so on, and the face state determination unit 19 uses these results to judge what state each subject's face is in.

The attribute-specific counting unit 20 determines attributes such as gender and adult/child for each person in the captured area, and counts the number of persons per attribute and in the captured area as a whole. The optimal image capturing unit 21 judges, image by image, whether an obtained image is in the state the photographer desires and outputs the one closest to the optimal state among the images obtained; when multiple people are being photographed, the optimal image composition unit 22 saves the optimal image for each subject and composites them into the final output image.
The obtained results and candidate images are displayed by the output unit 24, with their size corrected to the input image size or by the face size correction unit 23, to inform the photographer of the results.

Next, the operation of each of the processing units 11 to 23 is described in detail with reference to the drawings.

(2) Processing of the image input unit 11
Images are digitized and input, in color or monochrome, using a TV camera for moving images, an electronic still camera for still images, or the like, installed so that one or more persons are captured. The gradation and size of the input image are not particularly limited and follow the input gradation and resolution of the camera.

(3) Processing of the image storage unit 12
The image captured from the image input unit 11 is stored in memory as-is, and the several immediately preceding images (up to N frames back) are stored in a separate area.
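To make the buffering concrete, here is a minimal sketch (not from the patent itself): the class name FrameStore and the parameter n_frames are illustrative assumptions, and a bounded deque stands in for the "separate area" that holds the last N frames.

```python
from collections import deque


class FrameStore:
    """Holds the current frame plus up to N previous frames (hypothetical helper)."""

    def __init__(self, n_frames):
        self.current = None
        self.history = deque(maxlen=n_frames)  # oldest frames drop out automatically

    def push(self, frame):
        if self.current is not None:
            self.history.append(self.current)  # move the previous frame into history
        self.current = frame
```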

(4) Processing of the face area extraction unit 13
Within the human face, a face search region is defined whose upper and lower ends run from the eyebrows to around the lips and whose left and right ends lie outside the outer corners of both eyes. A face dictionary for face search is created in advance from images of multiple persons, for example by averaging the images or by applying KL expansion and using the eigenvectors of the top components.

In addition, various images are evaluated in advance with the face search dictionary, and whenever a region that is highly similar to the face dictionary turns out not to be a face, the image is collected into a non-face dictionary. To eliminate the influence of face size in the input image, enlarged and reduced images are created at multiple scales, and the face area is searched in each of them using the composite similarity method or template matching. The scanning procedure is shown in FIG. 3. Ideally, a face area has high similarity to the face dictionary and low similarity to the non-face dictionary, so the location with the highest evaluation value given by

evaluation value = (similarity to the face dictionary) − (similarity to the non-face dictionary)

is taken as the first face detection area. By also treating as face detection areas those regions that do not overlap the highest-scoring region, lie at least a predetermined distance away, and give an evaluation value at or above a predetermined threshold, every person can be detected even when several appear in the input image, and the number of persons in the captured area can be measured.
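The search loop described above can be sketched as follows; this is an illustrative reconstruction rather than the patent's implementation. Normalized correlation stands in for the composite similarity method, the dictionaries are assumed to be mean-image templates, and the names and numeric defaults (face_candidates, thresh, min_dist, the scale set) are all assumptions.

```python
import cv2
import numpy as np


def similarity(patch, template):
    """Normalized correlation, standing in for the composite similarity method."""
    p = (patch - patch.mean()) / (patch.std() + 1e-8)
    t = (template - template.mean()) / (template.std() + 1e-8)
    return float((p * t).mean())


def face_candidates(image, face_dict, nonface_dict,
                    scales=(0.5, 0.75, 1.0, 1.5), step=4,
                    thresh=0.1, min_dist=40.0):
    """Multi-scale scan; score = sim(face dict) - sim(non-face dict)."""
    h, w = face_dict.shape
    hits = []
    for s in scales:                                    # handle unknown face size
        img = cv2.resize(image, None, fx=s, fy=s)
        for y in range(0, img.shape[0] - h + 1, step):
            for x in range(0, img.shape[1] - w + 1, step):
                patch = img[y:y + h, x:x + w].astype(np.float32)
                score = similarity(patch, face_dict) - similarity(patch, nonface_dict)
                if score >= thresh:
                    hits.append((score, x / s, y / s))  # back to input coordinates
    hits.sort(key=lambda t: t[0], reverse=True)
    kept = []                                           # best hit first, then any hit
    for score, x, y in hits:                            # far enough from kept ones
        if all((x - kx) ** 2 + (y - ky) ** 2 >= min_dist ** 2 for _, kx, ky in kept):
            kept.append((score, x, y))
    return kept
```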

(5) Processing of the pupil detection unit 14
For each face area extracted by the face area extraction unit 13, a circular separability filter (see "Face recognition system using moving images", Osamu Yamaguchi et al., IEICE Technical Report PRMU97-50, pp. 17-23) is applied at multiple radii, and locations that are circular and darker than their surroundings are enumerated as pupil candidate points. Since the pupil region can be assumed to lie in the upper part of the face, the search need not cover the entire face.

Processing can also be accelerated by computing the circular separability, i.e. the ratio of luminance variances between the outer and inner regions shown in FIG. 4, only at locations judged dark after binarization. Next, the obtained candidate points are narrowed down to combinations (one pair of left and right points) using geometric arrangement conditions that depend on the application: for example, upper and lower thresholds on the inter-pupil distance determined by the distance from the camera, or, when only frontal stationary faces occur, a threshold on the angle so that the line connecting the two pupils is nearly horizontal. The following evaluation value is then computed for each eye, and the sum of the left and right values is the evaluation value of the combination:

evaluation value = (similarity to the pupil dictionary) − (similarity to the non-pupil dictionary)

Each dictionary is created in advance from data of multiple subjects, in the same way as in the face area extraction unit 13. The pupil dictionary here holds all the various pupil states, such as wearing glasses, closed eyes, sideways glances, and half-open eyes, as separate dictionaries, so the pupil region can be detected stably even in states such as closed eyes or sideways glances.
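A minimal sketch of the circular separability measure referenced above, computed as the ratio of between-region variance to total variance for an inner disc versus a surrounding ring; the choice of 2r for the outer radius is an assumption, and the filter in the cited paper may differ in detail.

```python
import numpy as np


def circular_separability(gray, cy, cx, r):
    """Separability between an inner disc (radius r) and a surrounding ring (radius 2r).

    Returns the ratio of between-region variance to total variance (0..1);
    values near 1 occur at dark circular blobs such as pupils.
    """
    yy, xx = np.ogrid[:gray.shape[0], :gray.shape[1]]
    d2 = (yy - cy) ** 2 + (xx - cx) ** 2
    inner = gray[d2 <= r * r].astype(np.float64)
    outer = gray[(d2 > r * r) & (d2 <= 4 * r * r)].astype(np.float64)
    if inner.size == 0 or outer.size == 0:
        return 0.0
    both = np.concatenate([inner, outer])
    total = ((both - both.mean()) ** 2).sum()
    if total == 0:
        return 0.0
    between = (inner.size * (inner.mean() - both.mean()) ** 2
               + outer.size * (outer.mean() - both.mean()) ** 2)
    return float(between / total)
```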

The non-pupil dictionary is likewise divided into classes easily mistaken for pupils, such as nostrils, eye corners, and eyebrows, and consists of multiple dictionaries; when computing the similarity to the non-pupil dictionary, the class giving the highest similarity is selected, which copes with a variety of extraction failures. This is illustrated in FIG. 6.
Pupil detection accuracy can also be raised by combining with the nostril detection unit 15 and imposing the geometric constraints shown in FIG. 5.

(6) Processing of the nostril detection unit 15
The nose region is limited using the positional relationships obtained by the face area extraction unit 13 and the pupil detection unit 14. In the central part of the face area, below both pupils, binarization and circular separability filtering are applied as in the pupil detection unit 14, so that dark, round regions are enumerated as nostril candidate points; for each point, similarity to a nostril dictionary and a non-nostril dictionary is computed as in face detection, giving the evaluation value:

evaluation value = (similarity to the nostril dictionary) − (similarity to the non-nostril dictionary)

Among all pairs of candidate points, the pair (two points, left and right) that matches the pre-specified geometric arrangement conditions relative to the pupils and gives the highest evaluation value is detected as the positions of the two nostrils. As noted for the pupil detection unit 14, accuracy can also be raised by applying the geometric arrangement conditions to the four points of the pupils and nostrils together.
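The pair selection under geometric constraints might look like the following sketch; geom_ok is a hypothetical caller-supplied predicate standing in for the patent's pre-specified arrangement conditions (nostrils below and between the pupils, plausible spacing, and so on).

```python
from itertools import combinations


def best_nostril_pair(candidates, pupil_l, pupil_r, geom_ok):
    """Pick the candidate pair with the highest summed evaluation value among
    those satisfying the geometric conditions relative to the pupils.

    candidates: list of (score, (x, y)) nostril candidates.
    geom_ok: hypothetical predicate encoding the arrangement constraints.
    """
    best, best_score = None, float("-inf")
    for (s1, p1), (s2, p2) in combinations(candidates, 2):
        if not geom_ok(p1, p2, pupil_l, pupil_r):
            continue
        if s1 + s2 > best_score:
            best, best_score = (p1, p2), s1 + s2
    return best  # None if no pair satisfies the constraints
```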

(7) Processing of the mouth detection unit 16
Since the arrangement of the face, eyes, and nose has been obtained by the face area extraction unit 13, the pupil detection unit 14, and the nostril detection unit 15, the centers of the two pupils and of the two nostrils are computed, and the position where the mouth is expected to be is calculated from an average geometric arrangement. FIG. 5 is an explanatory diagram of the positional relationship between the pupils, nostrils, and mouth used in the pupil detection unit 14 and the nostril detection unit 15; please refer to it.

FIG. 7 is an explanatory diagram of the detection processing of the mouth detection unit 16.
In FIG. 7, binarization is first performed with a threshold low enough that only the darkest pixels in the region become black and all other pixels become white, and the result is taken as the reference image. Regions extracted even at this threshold are dark or black, and so correspond to a beard or an open mouth. The threshold is then raised gradually, binarization is repeated, and labeling is applied to the difference image from the reference image; when a horizontally long region (label) appears and grows beyond predetermined sizes in both height and width, it is taken as the mouth region. Truly black regions such as beards, whose size hardly changes from the binarization result at the initial threshold, are eliminated by the difference processing and can thus be distinguished from the mouth region.
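A sketch of this progressive-threshold search, under assumed threshold and size values; OpenCV connected-component labeling is used for the labeling step described above.

```python
import cv2
import numpy as np


def find_mouth(region, t0=30, t_step=10, t_max=120, min_w=20, min_h=6):
    """Progressive-threshold sketch of the mouth search; all constants are
    illustrative assumptions, and region is an 8-bit grayscale face crop."""
    base = (region <= t0).astype(np.uint8)          # reference image: darkest pixels
    for t in range(t0 + t_step, t_max, t_step):
        cur = (region <= t).astype(np.uint8)
        diff = cv2.subtract(cur, base)              # growth relative to the reference
        n, labels, stats, _ = cv2.connectedComponentsWithStats(diff)
        for i in range(1, n):                       # label 0 is the background
            x, y, w, h, area = stats[i]
            if w >= min_w and h >= min_h and w > h: # horizontally long, big enough
                return (x, y, w, h)
    return None
```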

(8) Processing of the pupil state determination unit 17
For each of the left and right pupil regions found by the pupil detection unit 14, dictionaries are prepared for the various eye states such as "closed", "half-open", "looking sideways", and "looking up", and the state whose dictionary gives the highest similarity to the obtained pupil image is judged to be the current pupil state.

When the photographer has selected in advance which state is desired, as also described for the face state determination unit 19 below, the optimal image is selected by the following method.

FIG. 9 is a flowchart showing the determination processing of the pupil state determination unit 17. This processing makes it possible to select an optimal image even when the pupil state changes continually, as with blinking or gaze movement, or for subjects whose narrow eyes make it hard to judge whether the pupils are open or closed.

The evaluation value is the difference between the similarity to the dictionary representing the desired state and the highest similarity among the other dictionaries. A high value means the state is close to the ideal and clearly distinguishable from the other states. Judging this evaluation value from a single image cannot distinguish a narrow-eyed person with open eyes from a wide-eyed person with half-open eyes, so images are accumulated consecutively for a number of frames N sufficient to cover more than the time from the start to the end of a blink, and the variance and mean of the evaluation values are computed.

In FIG. 9, when the variance of the evaluation values is small (S31), the eye state is considered to change little: if the time spent above the mean is longer (S32), the state giving the evaluation value closest to the mean from above is taken as the optimal image (S35); if the time spent below the mean is longer, the state giving the evaluation value closest to the mean from below is selected (S33). Conversely, when the variance is large, the eye state is considered to fluctuate greatly, and the image giving the highest evaluation value is taken as the optimal image (S34).

FIG. 10 is an explanatory diagram of this determination processing. In examples (a) and (b), there is little movement, the variance is small, and the time above the mean is long, so the image giving the evaluation value closest to the mean from above is selected. In (c), the fluctuation and hence the variance are large, so the image giving the highest value is selected. In (d), the variance is small and the time below the mean is long, so the image closest to the mean from below is selected.
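The frame-selection rule of FIG. 9 translates almost directly into code; the variance threshold below is an assumed tuning value.

```python
import numpy as np


def pick_optimal_frame(scores, var_thresh=0.01):
    """Frame selection following the FIG. 9 logic.

    scores: per-frame evaluation values (desired-state similarity minus the
    best competing similarity) over N consecutive frames.
    """
    s = np.asarray(scores, dtype=float)
    mean = s.mean()
    if s.var() >= var_thresh:                # S34: large fluctuation -> take the peak
        return int(s.argmax())
    above = s >= mean
    if above.sum() * 2 >= len(s):            # S32 -> S35: mostly above the mean
        pool = np.where(above)[0]
    else:                                    # S33: mostly below the mean
        pool = np.where(~above)[0]
    return int(pool[np.abs(s[pool] - mean).argmin()])  # closest to the mean in pool
```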

(9) Processing of the mouth state determination unit 18
Next, FIG. 11 shows a flowchart of the processing of the mouth state determination unit 18.
In FIG. 11, whether the mouth is open or closed is determined by comparing the vertical and horizontal widths of the mouth with thresholds set for each. If the vertical width of the mouth is at or above a predetermined threshold (S41), the mouth is judged to be open (S44); if it is below that threshold and the horizontal width is at or above a predetermined threshold (S42), the mouth is judged to be closed (S45). If neither applies, the mouth image, normalized so that its vertical and horizontal widths have a fixed size, is compared (S43) with dictionaries of multiple states (dictionaries prepared for a normal mouth, a pursed mouth, clenched teeth, a stuck-out tongue, and so on) to determine the mouth state (S46, S47).
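The threshold logic of FIG. 11 reduces to a short decision function; the pixel thresholds are illustrative, and classify stands in for the dictionary comparison of S43.

```python
def mouth_state(width, height, open_h=12, closed_w=25, classify=None):
    """Decision logic of FIG. 11 (assumed threshold values).

    classify: optional fallback comparing the size-normalized mouth image
    against the state dictionaries (normal, pursed, clenched, tongue out, ...).
    """
    if height >= open_h:     # S41 -> S44
        return "open"
    if width >= closed_w:    # S42 -> S45
        return "closed"
    return classify() if classify else "unknown"  # S43 -> S46/S47
```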

(10) Processing of the face state determination unit 19
Using the outputs of the pupil state determination unit 17 and the mouth state determination unit 18, it is determined whether the face is in the state the photographer desires. For an ID photo, for example, the desired state would be "pupils open and facing front, mouth closed"; for a snapshot it might be "pupils open, mouth in either state" or "pupils open and mouth laughing".

For the actual determination, a matrix is prepared with the pupil states on one axis and the mouth states on the other, as shown in FIG. 12, and each cell is filled in with whether that combination is a desired state.
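One plausible encoding of such a matrix is a lookup table keyed by (pupil state, mouth state); the state names and cell values below are illustrative (here, an ID-photo-style preference), not the patent's actual table.

```python
# Hypothetical encoding of the FIG. 12 matrix: keys are (pupil state, mouth state)
# and each cell records whether that combination is desired.
DESIRED = {
    ("open-front", "closed"): True,
    ("open-front", "open"):   False,
    ("closed",     "closed"): False,
    ("closed",     "open"):   False,
}


def face_state_ok(pupil_state, mouth_state):
    return DESIRED.get((pupil_state, mouth_state), False)
```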

(11) Processing of the attribute-specific counting unit 20
For each face area extracted by the face area extraction unit 13, average-face dictionaries are held per attribute, such as male/female, adult/child, and nationality; similarity is computed to decide which the face is closer to, the number of persons is counted for each attribute, and the face area is labeled with its attributes based on the result. The total number of persons can also be measured by summing up everyone present in the captured area regardless of attribute.
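A sketch of the nearest-average-face labeling; the dictionary layout and attribute names are assumptions, and sim can be the same similarity measure used earlier.

```python
from collections import Counter


def classify_attributes(face, dictionaries, sim):
    """Label a face with each attribute's closest average-face template.

    dictionaries: e.g. {"gender": {"male": tmpl, "female": tmpl},
                        "age":    {"adult": tmpl, "child": tmpl}}
    sim: similarity function (e.g. normalized correlation).
    """
    return {attr: max(options, key=lambda name: sim(face, options[name]))
            for attr, options in dictionaries.items()}


def count_by_attribute(faces, dictionaries, sim, attr):
    """Per-attribute head count over all detected faces; total = len(faces)."""
    return Counter(classify_attributes(f, dictionaries, sim)[attr] for f in faces)
```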

(12) Processing of the optimal image capturing unit 21
Among the time-series images accumulated within a predetermined time, the matrix described for the face state determination unit 19 is used to judge, image by image, whether each image is in the state the photographer desires; the scores are accumulated over each person and each part to give an evaluation value:

evaluation value = Σ over all faces Σ over all parts [ (similarity to the desired dictionary) − (highest similarity among the non-desired dictionaries) ]

Here, "faces" means all the faces contained in the captured area, and "parts" means the eyes and mouth within each face area. Among the images obtained, the one with the highest evaluation value is selected as the optimal image.
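A direct transcription of this accumulation might look like the following; the data layout (per-face dicts of per-part score pairs) is an assumption.

```python
def image_score(faces):
    """Accumulated evaluation value of one image, following the formula above.

    faces: for each detected face, a dict mapping part name ("eyes", "mouth")
    to (desired_sim, best_other_sim) as produced by the state determiners.
    """
    return sum(desired - other
               for parts in faces
               for desired, other in parts.values())


# The optimal image is then the accumulated frame with the highest score, e.g.:
#   best_frame = max(frames, key=lambda fr: image_score(fr["faces"]))
```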

(13) Processing of the optimal image composition unit 22
When multiple people are being photographed and the desired shot is, for example, one in which everyone in the frame has their eyes open and is laughing (mouth open), the processing up to the face state determination unit 19 is repeated for a predetermined time. Among the accumulated images, the optimal image for each subject is saved, covering the face area plus a surrounding margin of predetermined extent, and these optimal images are fitted into the final output image by composition; the subjects thus obtain an optimal image without having to adjust their timing or surroundings. Composition presupposes that the subjects move as little as possible, but if a subject has moved, anti-aliasing is applied along the border of the saved region, which is taken somewhat larger than the face area, so that the composite image does not look unnatural.
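A feathered alpha mask is one way to realize the softened border; this sketch (for 3-channel images) substitutes a Gaussian-blurred mask for the patent's anti-aliasing along the edge of the slightly enlarged saved region, and the kernel size is assumed.

```python
import cv2
import numpy as np


def paste_face(dst, patch, x, y, feather=7):
    """Blend a saved face patch into the output image with a softened border.

    Assumes 3-channel images and that the patch fits entirely inside dst.
    """
    h, w = patch.shape[:2]
    mask = np.zeros((h, w), np.float32)
    mask[feather:-feather, feather:-feather] = 1.0       # opaque core
    k = 2 * feather + 1
    mask = cv2.GaussianBlur(mask, (k, k), 0)[..., None]  # soft edge
    roi = dst[y:y + h, x:x + w].astype(np.float32)
    blended = mask * patch.astype(np.float32) + (1.0 - mask) * roi
    dst[y:y + h, x:x + w] = blended.astype(dst.dtype)
```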

(14) Processing of the face size correction unit 23
The image can be output to the output unit 24 as input, but the output image can also be enlarged or reduced according to the size of the face areas of the one or more extracted persons. The face size could be obtained from the sizes of the multi-resolution face dictionaries used in the face area extraction unit 13, but that yields only as many size steps as there are dictionary resolutions, so a different method is used here.

Using only the luminance distribution within the region extracted as the face area, binarization is performed by a method such as the P-tile method (which keeps the ratio of white to black pixels constant), a fixed threshold, or discriminant analysis, and the region including the periphery of the face is then binarized with the threshold obtained for the face area. Labeling the binarized image extracts a connected region containing the center of the face; the left and right ends of that region are taken as the left and right edges of the face, and its width gives the face size. However, since the ears may be either visible or hidden by hair, classification is performed using the pupil positions obtained by the pupil detection unit 14 and the positions of the left and right edges of the face.

FIG. 13 is an explanatory diagram of this processing; the left side as seen by the viewer, relative to the center D of the two pupils, is taken as the example. When the ear is visible, the left edge of the face is at position A, and a threshold is set in advance such that (length AD)/(length CD) is at or above a predetermined value. If the ear is hidden by hair, the left edge is at position B, and since (length BD)/(length CD) is then smaller than when the ear is visible, this ratio is used to judge whether the ear is visible. The same judgment is made for the ear on the opposite side.

When the ear is not visible, the positions extracted as the left and right edges are used directly for the face area; when the ear is visible, position B is computed, unaffected by the ear position, using the average value of (A−D)/(B−D) calculated in advance from data of multiple persons. If the photographer has specified a desired size, the image is output at that size by scaling based on the face size obtained above.
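A sketch of the ear test for one side of the face, following FIG. 13; both numeric constants are placeholders for the pre-set threshold and the multi-person average (A−D)/(B−D) ratio that the patent assumes are computed in advance.

```python
def corrected_left_edge(a_x, c_x, d_x, ratio_thresh=1.35, ad_over_bd=1.2):
    """One-sided ear test (hypothetical constants).

    a_x: extracted left edge of the face (A if the ear is visible, else B),
    c_x: reference point C, d_x: center D of the two pupils.
    """
    edge = abs(d_x - a_x)                  # AD (ear visible) or BD (ear hidden)
    ref = abs(d_x - c_x) or 1e-8           # CD
    if edge / ref >= ratio_thresh:         # large ratio -> the ear is visible
        return d_x - edge / ad_over_bd     # estimate B inward of A
    return a_x                             # edge already excludes the ear
```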

(15) Processing of the output unit 24
Finally, the processing of the output unit 24 is described below.
For a stationary TV-camera apparatus, the optimal image and the candidate optimal images are arranged and output on a monitor; for a portable apparatus, on the built-in monitor. As shown in FIG. 14, the image judged optimal is displayed large, and images with high evaluation values are lined up beside it in time order. If the desired image is among the candidates, it can be selected with the up/down/left/right buttons to change the final output image; in addition, the face area of each image can be marked, as in the rectangular region H enclosed by the dotted square in FIG. 14, and the optimal faces can be composed manually from among the multiple images.

According to at least one embodiment described above, when photographing the faces of one or more persons with an electronic still camera, videophone, or surveillance camera, it is possible to judge whether the face is turned to the front, whether the pupils are open or closed, whether the mouth is open or closed, and so on, without informing the subjects of the desired state or even that they are being photographed, and without being affected by narrow eyes or by movement; the apparatus can then automatically select the optimal shot while confirming that the face state suits what the photography requires.
Furthermore, when photographing multiple people, as in a group photo, automatically compositing each subject's optimal-state image makes it easy to obtain an image that is optimal for all of them.

While certain embodiments of the present invention have been described, these embodiments have been presented by way of example only and are not intended to limit the scope of the invention. They may be embodied in a variety of other forms, and various omissions, substitutions, and changes may be made without departing from the spirit of the invention. These embodiments and their modifications fall within the scope and spirit of the invention, and within the invention described in the claims and its equivalents.

1 ... camera; 2 ... display; 3 ... personal computer or workstation; 4 ... digital camera containing PC-equivalent computing and storage and an internal display; 11 ... image input unit; 12 ... image storage unit; 13 ... face area extraction unit; 14 ... pupil detection unit; 15 ... nostril detection unit; 16 ... mouth detection unit; 17 ... pupil state determination unit; 18 ... mouth state determination unit; 19 ... face state determination unit; 20 ... attribute-specific counting unit; 21 ... optimal image capturing unit; 22 ... optimal image composition unit; 23 ... face size correction unit; 24 ... output unit.

Claims (2)

1. A face image processing apparatus comprising:
image input means for inputting a plurality of images of a person including a face image;
face area extraction means for extracting the face area of the person from each image input by the image input means;
expression evaluation means for obtaining an evaluation value of the expression in the face area of each image extracted by the face area extraction means;
expression determination means for determining for each image, using the evaluation values obtained by the expression evaluation means, whether the facial expression of the person shown in the image input from the image input means is the expression desired by the photographer; and
image selection means for selecting and outputting, using the determination results of the expression determination means, the image among the plurality of images in which the facial expression of the person is determined to be the expression desired by the photographer.
2. The face image processing apparatus according to claim 1, wherein the evaluation value in the expression evaluation means is determined by the difference between the similarity of an evaluated part to the dictionary corresponding to that part and its similarity to a dictionary not corresponding to that part.
JP2011092307A 2011-04-18 2011-04-18 Facial image processing apparatus, facial image processing method, and electronic still camera Expired - Lifetime JP5017476B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2011092307A JP5017476B2 (en) 2011-04-18 2011-04-18 Facial image processing apparatus, facial image processing method, and electronic still camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2011092307A JP5017476B2 (en) 2011-04-18 2011-04-18 Facial image processing apparatus, facial image processing method, and electronic still camera

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
JP2009106612A Division JP4762329B2 (en) 2009-04-24 2009-04-24 Face image processing apparatus and face image processing method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
JP2012094753A Division JP5242827B2 (en) 2012-04-18 2012-04-18 Face image processing apparatus, face image processing method, electronic still camera, digital image processing apparatus, and digital image processing method

Publications (2)

Publication Number Publication Date
JP2011154717A true JP2011154717A (en) 2011-08-11
JP5017476B2 JP5017476B2 (en) 2012-09-05

Family

ID=44540580

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2011092307A Expired - Lifetime JP5017476B2 (en) 2011-04-18 2011-04-18 Facial image processing apparatus, facial image processing method, and electronic still camera

Country Status (1)

Country Link
JP (1) JP5017476B2 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0667601A (en) * 1992-08-24 1994-03-11 Hitachi Ltd Device and system for finger language interpretation
JPH09161062A (en) * 1995-12-13 1997-06-20 Nissan Motor Co Ltd Method for recognizing pattern
JPH09212620A (en) * 1996-01-31 1997-08-15 Nissha Printing Co Ltd Manufacture of face image
JPH10232934A (en) * 1997-02-18 1998-09-02 Toshiba Corp Face image registering device and its method


Also Published As

Publication number Publication date
JP5017476B2 (en) 2012-09-05

Similar Documents

Publication Publication Date Title
JP4377472B2 (en) Face image processing device
WO2017198040A1 (en) Facial image processing apparatus, facial image processing method, and non-transitory computer-readable storage medium
US8819015B2 (en) Object identification apparatus and method for identifying object
US8411171B2 (en) Apparatus and method for generating image including multiple people
US11232586B2 (en) Line-of-sight estimation device, line-of-sight estimation method, and program recording medium
JP2006228199A (en) Face extraction device and semiconductor integrated circuit
JP2007265125A (en) Content display
US7460705B2 (en) Head-top detecting method, head-top detecting system and a head-top detecting program for a human face
JP2013065119A (en) Face authentication device and face authentication method
JP2001067459A (en) Method and device for face image processing
JP5771647B2 (en) Skin analysis device, skin analysis system, skin analysis method, and skin analysis program
JP5460793B2 (en) Display device, display method, television receiver, and display control device
KR102364929B1 (en) Electronic device, sever, and system for tracking skin changes
JP5971712B2 (en) Monitoring device and method
JP5242827B2 (en) Face image processing apparatus, face image processing method, electronic still camera, digital image processing apparatus, and digital image processing method
JP3970573B2 (en) Facial image recognition apparatus and method
CN109328355A (en) Method and system for intelligent group portrait
JP6098133B2 (en) Face component extraction device, face component extraction method and program
CN112183200A (en) Eye movement tracking method and system based on video image
JP5272797B2 (en) Digital camera
JP4762329B2 (en) Face image processing apparatus and face image processing method
JP2013118574A (en) Imaging apparatus
JP5017476B2 (en) Facial image processing apparatus, facial image processing method, and electronic still camera
JP2009003842A (en) Image processor and image processing method
WO2024090218A1 (en) Diagnosis system, diagnosis device, program, diagnosis method, method for diagnosing skin, and method for diagnosing stresses

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20110418

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20120224

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20120228

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20120418

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20120515

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20120611

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20150615

Year of fee payment: 3

EXPY Cancellation because of completion of term