JP2006318151A - Digital image display device and digital image display method - Google Patents


Info

Publication number
JP2006318151A
JP2006318151A (application JP2005139286A)
Authority
JP
Japan
Prior art keywords
subject element
face area
subject
digital image
column
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2005139286A
Other languages
Japanese (ja)
Inventor
Sukehito Ozeki
祐仁 尾関
Yachiyo Itou
八千代 伊藤
Current Assignee
Sanyo Electric Co Ltd
Original Assignee
Sanyo Electric Co Ltd
Priority date
Filing date
Publication date
Application filed by Sanyo Electric Co Ltd filed Critical Sanyo Electric Co Ltd
Priority to JP2005139286A
Publication of JP2006318151A
Legal status: Pending

Landscapes

  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To automatically correct the size of the upper or lower part of a digital image and create a natural-looking image.

SOLUTION: This digital image display device comprises: face area determination means 3 that identifies each face area in a digital image in which several side-by-side rows of people stand one behind another; row determination means 30 that detects, from the positions of the face areas, which face areas belong to the same row and computes the average face-area size of each row; arithmetic means 31 that sets, based on the per-row average size found by the row determination means 30, the magnification by which the differently sized face areas of each row are to be enlarged or reduced; and a CPU 1 that enlarges or reduces the subject-element areas of each row according to that magnification. The face areas of each row can be scaled not only vertically but also horizontally.

COPYRIGHT: (C)2007,JPO&INPIT

Description

The present invention relates to a digital image display device and a digital image display method capable of identifying a human face area as a subject element and enlarging or reducing it, and more particularly to a digital camera.

Display devices with a function for identifying human faces in a captured digital image have been proposed (see, for example, Patent Document 1). As shown in FIG. 18, such a device comprises an image input unit (8) that receives one frame of a face image, an image storage unit (80) that converts the face image into digital data and stores it, a face area extraction unit (81) that detects the digitized face image, a pupil detection unit (82), a nostril detection unit (83), and an image development memory (56). In the image storage unit (80) the face image is binarized and noise is removed by well-known labeling and erosion/dilation processing, yielding the face image shown in FIG. 19. In that binarized image the face outline appears as a ring of connected black pixels, and the pupils and nostrils appear in black (see, for example, Patent Document 2). The face area extraction unit (81) determines left and right threshold lines B1, B2 and upper and lower threshold lines C1, C2 for the outline. To confirm that the binarized image within the threshold lines B1, B2, C1, C2 is a face, the pupil detection unit (82) raster-scans the image from top to bottom to detect the pupils.

As shown in FIG. 20, the pupil detection unit (82) stores a pupil dictionary, built in advance from photographs of many subjects' pupils, and a non-pupil dictionary built from photographs of facial parts that are not pupils. The pupil detection unit (82) judges whether each black region found by scanning is a pupil. Specifically, if the region resembles the images in the pupil dictionary a high similarity score is assigned, otherwise a low one; likewise, if it resembles the images in the non-pupil dictionary a high dissimilarity score is assigned, otherwise a low one.
Next, the pupil detection unit (82) computes an evaluation value by subtracting the similarity score against the non-pupil dictionary from the similarity score against the pupil dictionary. For example, the region at location A1 in FIG. 19 is an eyebrow, so its similarity to the pupil dictionary is low and its dissimilarity is high; the evaluation value is therefore low, indicating that the scanned region is not a pupil.
The region at location A2 is a pupil, so its similarity score is high and its dissimilarity score is low; the evaluation value is therefore high, indicating a pupil. The pupil dictionary covers various pupil states, such as eyes looking sideways and half-closed eyes.
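The dictionary-based scoring above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: `similarity` is a hypothetical matcher (a real system would use template matching or a learned classifier), and the toy 1-D "patches" stand in for image regions.

```python
def evaluate_pupil(patch, pupil_dict, non_pupil_dict, similarity):
    """Evaluation value = best pupil-dictionary match minus best
    non-pupil-dictionary match; high values indicate a pupil."""
    sim_pupil = max(similarity(patch, ref) for ref in pupil_dict)
    sim_non_pupil = max(similarity(patch, ref) for ref in non_pupil_dict)
    return sim_pupil - sim_non_pupil

# Toy demonstration: 1-D binary "patches" and an overlap-based similarity.
def similarity(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

pupil_dict = [[1, 1, 1, 0], [1, 1, 0, 0]]        # assumed reference shapes
non_pupil_dict = [[0, 0, 0, 1], [0, 1, 0, 1]]

print(evaluate_pupil([1, 1, 1, 0], pupil_dict, non_pupil_dict, similarity))  # positive -> pupil
```

A pupil-like patch scores positive (close match in the pupil dictionary, poor match in the non-pupil dictionary), mirroring the A1/A2 example in the text.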

Next, the nostril detection unit (83) detects the nostrils. It stores a nostril dictionary and a non-nostril dictionary obtained from the nostrils of many subjects, and computes an evaluation value in the same way as the pupil detection unit (82). Since the nostrils lie below the pupils, they are detected at a location A3 below location A2. This confirms that the image within the threshold lines B1, B2, C1, C2 is a face, and the image with this area identified is stored in the image development memory (56). The area of the face region and its upper-left coordinate K can also be read from the image in the image development memory (56). Face areas are identified even when several faces appear in the image.
As shown in FIG. 21, when several people (7) standing in rows one behind another are photographed, the face of a person (7) in front appears large and the face of a person (7) behind appears small, as shown in FIG. 22; this is the so-called tilt (perspective) phenomenon. Digital cameras with a correction function that compensates for this phenomenon, specifically by enlarging the parts of the image that appear small, have been proposed (see, for example, Patent Document 3). A digital image is enlarged by creating interpolated pixels between the existing pixels in software.
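Enlargement by interpolated pixels can be sketched as below. The patent only says that interpolated pixels are created in software; bilinear interpolation is one common way to do this and is assumed here for illustration.

```python
def enlarge(image, factor):
    """Enlarge a 2-D list of gray values by `factor` using bilinear
    interpolation: each output pixel is blended from its four nearest
    source pixels."""
    h, w = len(image), len(image[0])
    new_h, new_w = int(h * factor), int(w * factor)
    out = []
    for ny in range(new_h):
        y = ny / factor
        y0 = min(int(y), h - 1)
        y1 = min(y0 + 1, h - 1)
        fy = y - y0
        row = []
        for nx in range(new_w):
            x = nx / factor
            x0 = min(int(x), w - 1)
            x1 = min(x0 + 1, w - 1)
            fx = x - x0
            top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
            bot = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

big = enlarge([[0, 10], [10, 20]], 2)
print(len(big), len(big[0]))  # 4 4
```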

JP 2000-259833 A
JP 2003-338952 A
JP 2004-72181 A

In a group photograph in which several people stand side by side in rows one behind another, the tilt phenomenon described above makes the faces in the back row appear small and the faces in the front row appear large. Manually selecting the faces to enlarge or reduce and correcting their size each time is tedious. Moreover, a discontinuity appears at the boundary between a scaled image region and an adjacent unscaled region, which looks unnatural.
An object of the present invention is to automatically correct the size of the upper or lower part of a digital image and create a natural-looking image.

The digital image display device comprises:
first determination means for identifying the area of each subject element in a digital image of a subject in which several side-by-side rows of subject elements are arranged one behind another;
second determination means for detecting, from the positions of the subject-element areas, which areas should belong to the same row;
means for obtaining the average or an intermediate size of the subject-element areas belonging to each row;
arithmetic means for setting, based on the average or intermediate size obtained for each row, the magnification by which the differently sized subject-element areas of each row are to be enlarged or reduced; and
means for enlarging or reducing the subject-element areas of each row according to that magnification.

The second determination means detects the subject-element areas, that is, the face areas, that should belong to the same row, and the face areas of each row are automatically enlarged or reduced based on an intermediate size of that row's face areas, specifically their average. As a result, a natural image is obtained even though only part of the image has been scaled.

(First Embodiment)
An embodiment of the present invention is described in detail below with reference to the drawings. The digital image to be processed here is a group photograph, shown in FIG. 3, in which several side-by-side rows of people (7) stand one behind another; as shot, the tilt phenomenon makes the faces in the front row appear large and the faces in the back row appear small. In this example the subject elements are human faces, but the invention is not limited to faces.
FIG. 1 is an internal block diagram of a digital camera according to the present invention. Light from the objective lens (5) is focused onto the CCD (50). The CCD (50) converts the received optical signal into an electrical image signal and outputs it; noise is reduced by the CDS (correlated double sampling) circuit (51), and the level is adjusted by the AGC (automatic gain control) circuit (52). The image signal is converted to digital by the A/D converter (53), its black level and white balance are adjusted by the digital signal processing unit (54), and it is then input to the CPU (1), which serves as the control means. The CPU (1) is connected to a memory card (9); the image signal is compressed by the compression/decompression circuit (55) and stored on the memory card (9). An image on the memory card (9) is decompressed by the compression/decompression circuit (55) and shown on the display (2), a liquid crystal panel.

The CPU (1) is connected to a ROM (10) storing image-processing software and to face area determination means (3). The face area determination means (3) comprises the components from the image storage unit (80) through the nostril detection unit (83) shown in FIG. 18 and, as described above, determines the area and coordinates of each face region from the binarized image. It is connected to row determination means (30), which judges which row, front to back, each face region belongs to and computes the average area of the face regions in each row. The row determination means (30) is connected to arithmetic means (31), which computes the factor by which the back-row image is to be enlarged and feeds it back to the CPU (1). The front-row image could of course be reduced instead, but for convenience the following description deals with enlarging the back row.

FIG. 2 is a flowchart outlining the image processing of this camera. The captured image is digitized and temporarily stored in the image development memory (56) (S1-S3). The face area determination means (3) determines the area of each face region and the coordinates of its upper-left corner (S4). Next, the row to which each face region belongs is found, the average size of the face regions in that row is computed, and from the per-row averages a correction coefficient, the magnification by which each row's face regions are to be enlarged, is obtained (S5). Based on this coefficient the face regions of each row are enlarged (S6), the image is compressed (S7), and it is saved to the memory card (9) (S8).
To process an image that has already been shot and saved to the memory card (9), the image is played back and then steps S3 through S8 are performed.

FIG. 3 shows the digital image of the group photograph to be processed; the dotted rectangle around each face is the face region found by the face area determination means (3). The digital image takes its upper-left corner as the origin (0, 0), with the X axis extending to the right and the Y axis extending downward; the opposite corner has coordinates (X, Y). FIG. 4 is a flowchart detailing the operations of steps S4 through S6 of FIG. 2.
First, in STEP 1, face recognition is started: the face area determination means (3) determines the position of each face region, including its area and the coordinates of its upper-left corner. Next, in STEP 2, it computes the area of each face region, specifically the area of each dotted rectangle in FIG. 3.
The areas and positions of the face regions are then passed to the row determination means (30), which in STEP 3 judges which row each face region belongs to. This first requires identifying the face regions that should belong to the rearmost row, and the first step of that is to find, among them, the face region that will serve as the reference. The procedure is shown in the flowchart of FIG. 5.

The image in FIG. 3 contains 15 face regions. Let the region at the upper left be face region 1, the regions continuing to its right be face regions 2 through 5, and the region at the lower right be face region 15. The procedure checks whether face region 1 belongs to the rearmost row. First the upper-left Y coordinate of face region 1 is taken as Y1, and the number N of the face region being checked is set to 1 (S10).
Y1 is compared with the upper-left Y coordinate of face region 2 (S12). If, as in FIG. 6(a), the upper-left Y coordinate of face region 1 is smaller than that of face region 2, N is incremented (S14) and face region 3 is checked next.
If, as in FIG. 6(b), the upper-left Y coordinate of face region 1 is larger than that of face region 2, the upper-left Y coordinate of face region 2 becomes the new Y1 (S13) and the procedure continues. When all face regions have been checked (S11), the procedure ends with the reference face region of the rearmost row identified. In the following, face region 1 is assumed to have the smallest Y coordinate, i.e., to be the reference face region of the rearmost row.
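The FIG. 5 scan reduces to keeping the region with the smallest upper-left Y coordinate. A minimal sketch, assuming face regions are represented as dicts with an `xy` upper-left corner (a format chosen here for illustration):

```python
def find_reference_region(regions):
    """Return the face region whose upper-left Y coordinate is smallest,
    i.e. the topmost region, taken as the rearmost row's reference."""
    reference = regions[0]
    for region in regions[1:]:
        if region["xy"][1] < reference["xy"][1]:  # smaller Y = higher in image
            reference = region
    return reference

faces = [{"id": 1, "xy": (10, 40)},
         {"id": 2, "xy": (60, 12)},
         {"id": 3, "xy": (110, 45)}]
print(find_reference_region(faces)["id"])  # region 2 is topmost
```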

Next, in STEP 3 of FIG. 4, the face regions that should be in the same row as face region 1 are detected. As noted above, the upper-left Y coordinate of every face region is known. As shown in FIG. 7, if the Y coordinate of the upper-left corner K2 of face region 2 lies within H/2 of the Y coordinate of the upper-left corner K1 of face region 1, where H is the height of face region 1, then face region 2 is taken to belong to the same row as face region 1; otherwise it is taken to belong to a row in front of face region 1. The threshold H/2 can be changed as appropriate, for example to H/3.
Once no further face regions belong to the same row as face region 1, the face region with the smallest Y coordinate among the remaining regions is found, and the face regions belonging to its row are detected in the same way.
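The grouping rule above can be sketched as follows. This is an illustrative reading of the procedure, assuming regions are dicts with an upper-left `y` and a height `h`, and using the H/2 threshold from the text:

```python
def group_into_rows(regions, threshold_ratio=0.5):
    """Group face regions into rows: a region joins the current row if its
    upper-left Y lies within threshold_ratio * H of the row's reference
    region (the remaining region with the smallest Y)."""
    remaining = sorted(regions, key=lambda r: r["y"])  # topmost first
    rows = []
    while remaining:
        ref = remaining[0]                       # reference of this row
        limit = ref["h"] * threshold_ratio       # H/2 by default
        row = [r for r in remaining if r["y"] - ref["y"] <= limit]
        rows.append(row)
        remaining = [r for r in remaining if r not in row]
    return rows

faces = [{"y": 10, "h": 20}, {"y": 15, "h": 20},
         {"y": 50, "h": 30}, {"y": 60, "h": 30}]
print([len(row) for row in group_into_rows(faces)])  # two rows of two faces
```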

Finally, as shown in FIG. 8, five face regions belonging to the back row, five belonging to the middle row, and five belonging to the front row are obtained. As shown in STEP 4 of FIG. 4, the row determination means (30) computes the average face-region area of each row, Ave(Size R1) ... Ave(Size Rn). An intermediate value of the face-region areas in each row may be used instead of the average.
Here n indicates the row's position counted from the back row; in FIG. 8 there are three rows front to back, so n is at most 3. The row determination means (30) also computes SumR1 ... SumRn, the total number of face regions in each row, and Ave(Dist R1) ... Ave(Dist Rn), the average distance of each row's face regions from the X axis, and passes these to the arithmetic means (31).

As shown in STEP 5 of FIG. 4, the arithmetic means (31) computes a correction coefficient from Ave(Dist R1) ... Ave(Dist Rn) and Ave(Size R1) ... Ave(Size Rn), using the formula shown in FIG. 9.
Multiplying the correction coefficient by a face region's distance from the X axis gives the magnification by which that face region is to be enlarged.
Specifically, to match the size of the back-row face regions in FIG. 8 to that of the front-row face regions, n = 3: Ave(Size R3) is larger than Ave(Size R1) and Ave(Dist R3) is larger than Ave(Dist R1), so the correction coefficient of FIG. 9 is greater than 1. Multiplying this coefficient by the distance Ave(Dist R1) gives the magnification by which the back-row face regions are enlarged.
The arithmetic means (31) passes this magnification to the CPU (1), which automatically enlarges the image in the image development memory (56) accordingly. The image is enlarged in the conventional way, by forming interpolated pixels between existing pixels. The description above enlarges the back-row face regions, but the front-row face regions may be reduced instead.
With this method the size of the upper or lower part of the digital image is corrected automatically, so a natural image is obtained even though only part of the image has been scaled.
Alternatively, the sizes of the back-row and front-row face regions may be determined first and the sizes of the face regions in the rows between them corrected from that result; that is, in FIG. 8 the size of the second-row face regions may be corrected automatically from the size of the first-row and/or third-row face regions.
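The exact formula of FIG. 9 is not reproduced in the text, so the following is a hypothetical reconstruction consistent with the description: the coefficient, multiplied by a row's distance from the X axis, should yield the linear magnification that brings the back row's average face size up to the front row's. The square root of the area ratio is assumed because Ave(Size) is an area while the magnification is a linear scale.

```python
import math

def correction_coefficient(ave_size_back, ave_size_front, ave_dist_back):
    """Hypothetical FIG. 9 coefficient: chosen so that
    coefficient * Ave(Dist R1) equals the linear scale factor that matches
    the back row's average face area to the front row's."""
    linear_scale = math.sqrt(ave_size_front / ave_size_back)  # area -> linear
    return linear_scale / ave_dist_back

def magnification(coeff, dist):
    """Magnification for a row at distance `dist` from the X axis."""
    return coeff * dist

coeff = correction_coefficient(ave_size_back=400, ave_size_front=900,
                               ave_dist_back=50)
print(magnification(coeff, 50))  # about 1.5, i.e. sqrt(900/400)
```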

(Second Embodiment)
As shown in FIG. 10, when a group photograph is taken with the camera held vertically, the actual image shows the faces of the people (7) sideways, as in FIG. 11. Comparing the back-row and front-row face-region sizes in FIG. 11, the difference is greater in the horizontal direction than in the vertical direction.
That is, in the image of FIG. 11 the tilt phenomenon makes face size shrink as one moves rightward from the origin, so it is more reasonable to correct size horizontally than vertically.
In this example the face area determination means (3) obtains the vertical and horizontal widths of the face regions in each row from their areas. The arithmetic means (31) computes the ratio of the back-row face-region width to the front-row face-region width, both vertically and horizontally, compares the two ratios, and enlarges the back-row face regions along the direction whose ratio is smaller.

Specifically, for the image of FIG. 11, the face area determination means (3) first obtains the vertical widths H1 and H2 of the face regions in each row, and the arithmetic means (31) divides the back-row width H2 by the front-row width H1. In FIG. 11, H2 is roughly equal to H1, so this ratio is close to 1.
Next, the face area determination means (3) obtains the horizontal widths HY1 and HY2 of the face regions in each row, and the arithmetic means (31) divides the back-row width HY2 by the front-row width HY1. In FIG. 11, HY2 is clearly smaller than HY1, so this ratio is less than 1. In other words, the back-to-front ratio is smaller in the horizontal direction, i.e., the face regions differ more horizontally. From this result the CPU (1) enlarges the back-row face regions in the image development memory (56) horizontally, producing an even more natural corrected image.
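The direction choice reduces to comparing the two ratios. A minimal sketch, with the (vertical, horizontal) width pairs as assumed inputs:

```python
def choose_scaling_axis(back, front):
    """back/front: (vertical_width, horizontal_width) of the average face
    region in the back and front rows. Scale along the axis whose
    back/front ratio is smaller, i.e. where the size gap is larger."""
    ratio_v = back[0] / front[0]   # H2 / H1 in the text
    ratio_h = back[1] / front[1]   # HY2 / HY1 in the text
    return "horizontal" if ratio_h < ratio_v else "vertical"

# Vertically shot image of FIG. 11: heights nearly equal, widths differ.
print(choose_scaling_axis(back=(30, 18), front=(31, 30)))  # horizontal
```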

(Third Embodiment)
As described above, face regions are recognized from the binarized image, but the recognition result is easily affected by hairstyle and face angle. A person judged to have a large face region may therefore be in the back row, or a person judged to have a small face region may be in the front row. In that case, correcting face-region size row by row as above could make a region judged large even larger, or conversely make a region judged small even smaller. To avoid this risk, the following method can be used.
First, as shown in FIG. 12, the screen (6) is divided into a grid of small screens (60). In FIG. 12, four small screens (60) across are arranged in three tiers, numbered 1 through 12 from upper left to lower right, though the layout is not limited to this. The face area determination means (3) stores in advance each small screen's number, the X and Y coordinates of its upper-left corner, and its vertical and horizontal widths. The operation of this example is described below with reference to the flowchart of FIG. 13.

The row determination means (30) first detects which small screen (60) contains each face region detected by the face area determination means (3) (S20). It then aggregates information on the face regions in each small screen (60) (S21): their number, their vertical and horizontal widths, and, as described later, their inclination angle. Next, the average vertical width, horizontal width, and angle of the face regions are computed for each small screen (60) (S22), and the vertical-width and angle values are aggregated for each row of small screens (60), i.e. along the vertical direction (S23). The arithmetic means (31) derives the coefficient for vertical correction from the vertical-width information (S24) and sends it to the CPU (1). The face area determination means (3) likewise aggregates the horizontal-width values for each column of small screens (60), i.e. along the horizontal direction (S25), and the arithmetic means (31) derives the coefficient for horizontal correction from them (S26) and sends it to the CPU (1). The digital image is thus enlarged or reduced per small screen (60), which eliminates problems such as a face region judged large being enlarged further.
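The per-grid aggregation (S20-S23) can be sketched as below. The cell geometry, the region format, and the choice of face height as the aggregated quantity are assumptions for illustration:

```python
def average_height_per_grid_row(regions, cell_h, grid_rows):
    """Assign each face region to a grid row of small screens by its
    upper-left Y coordinate, then average face heights per grid row.
    Returns None for grid rows containing no faces."""
    sums = [0.0] * grid_rows
    counts = [0] * grid_rows
    for r in regions:
        grid_row = r["y"] // cell_h   # which tier of small screens
        sums[grid_row] += r["h"]
        counts[grid_row] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

faces = [{"x": 10, "y": 5, "h": 20},
         {"x": 130, "y": 8, "h": 22},
         {"x": 50, "y": 120, "h": 30}]
print(average_height_per_grid_row(faces, cell_h=100, grid_rows=3))  # [21.0, 30.0, None]
```

Comparing these per-tier averages is what lets the method scale each band of the image independently instead of trusting a single possibly misjudged face.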

(Fourth embodiment)
As shown in FIG. 14, when a group photo in which all the persons (7) are standing upright is taken with the camera tilted, the image actually stored in the memory card (9) is tilted, as shown in FIG. 15. If, for this tilted image, it is determined which column each face area belongs to as in STEP 3 of FIG. 4, face areas that should belong to the same column are no longer regarded as being in the same column. As a result, the enlargement magnification may differ between a face area at the left edge of the screen and a face area at the right edge of the screen even though they should belong to the same column.
Since the face area determination means (3) recognizes the coordinates of the left corner of each face area, the applicant therefore conceived of first correcting the tilt of the image from this coordinate data when the captured image is tilted, and only then determining which column each face area belongs to.

FIG. 16 is an enlarged view showing a state in which each face area is tilted. Since the face area determination means (3) knows the coordinates of the left corners K1, K2, and K3 of face areas 1, 2, and 3, it obtains a tilt angle from the coordinates of K1 and K2, and then another tilt angle from the coordinates of K2 and K3. If these two tilt angles, shown as Φ in FIG. 16, are substantially the same, face areas 1, 2, and 3 are regarded as tilted by the angle Φ in the same direction, and a signal is issued to the CPU (1) indicating that the image should be tilt-corrected by the angle Φ. The CPU (1) corrects the tilt of the image in the image development memory (56) by a known matrix transformation. The tilt angle may also be obtained from a regression line fitted to the coordinates of K1, K2, and K3 by the least squares method. The calculation of the angle Φ may be performed by the column determination means (30) instead of the face area determination means (3).
However, as shown in FIG. 17, if the face area determination means (3) determines from the coordinates K1, K2, and K3 that face areas 1, 2, and 3 are tilted in different directions, no tilt correction is performed. Correcting the tilt in such a case would only make the image look more unnatural.
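The tilt estimation described for FIG. 16 and FIG. 17 can be sketched as follows. This is a hypothetical Python illustration: the pairwise-angle consistency check (the FIG. 17 refusal case), the least-squares regression through the corner coordinates, and the 5-degree tolerance are modeled on the description above, but the function name, data layout, and threshold value are assumptions.

```python
import math

def estimate_tilt(corners, tolerance_deg=5.0):
    """Estimate a tilt angle (degrees) from the upper-left corners K1, K2,
    K3, ... of the face regions, as described for FIG. 16.  Returns None
    when the pairwise angles disagree (the FIG. 17 case), meaning no tilt
    correction should be applied."""
    # Pairwise angles between consecutive corners (K1->K2, K2->K3, ...).
    angles = [
        math.degrees(math.atan2(y1 - y0, x1 - x0))
        for (x0, y0), (x1, y1) in zip(corners, corners[1:])
    ]
    # Faces leaning in inconsistent directions: refuse to correct.
    if max(angles) - min(angles) > tolerance_deg:
        return None
    # Least-squares regression line through all corners, as the text
    # suggests, is more robust than any single pair of corners.
    n = len(corners)
    sx = sum(x for x, _ in corners)
    sy = sum(y for _, y in corners)
    sxx = sum(x * x for x, _ in corners)
    sxy = sum(x * y for x, y in corners)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return math.degrees(math.atan(slope))
```

Rotating the image by the negative of the returned angle (by a matrix transformation, as the text notes) would then straighten the column before the column-membership test of STEP 3 is applied.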

In the above examples, a group photo of people was used as an example of the subject image, but the subject elements are not limited to human faces: any group photo in which the tilt phenomenon occurs may be processed. Likewise, the device that displays the digital image is not limited to a digital camera.

The above description of the embodiments is intended to explain the present invention and should not be construed as limiting the invention described in the claims or reducing its scope. The configuration of each part of the present invention is, of course, not limited to the above embodiments, and various modifications are possible within the technical scope described in the claims.

An internal block diagram of a digital camera.
A flowchart showing an outline of the image processing of the digital camera.
A front view of a group photo.
A flowchart showing in detail the operation from steps S4 to S6 of the flowchart of FIG. 2.
A flowchart showing the procedure for specifying the reference face area among the plurality of face areas that should belong to the rearmost column.
(a) and (b) are diagrams showing the arrangement of face areas.
A diagram showing the arrangement of face areas.
A front view showing the rearmost column and the frontmost column.
An equation for obtaining the correction coefficient.
A front view of another group photo.
A diagram showing an actual image.
A front view showing the arrangement of the small screens.
A flowchart showing the procedure for enlarging/reducing a digital image for each small screen.
A front view of another group photo.
A diagram showing an image taken with the camera tilted.
A diagram showing the arrangement of face areas.
A diagram showing the arrangement of face areas.
A diagram showing a conventional configuration for obtaining a binarized image.
A diagram showing a binarized image.
A diagram showing a pupil dictionary and a non-pupil dictionary.
A diagram showing a state in which an image of people lined up front to back is taken.
A diagram showing an image in which the tilt phenomenon has occurred.

Explanation of symbols

(1) CPU
(3) Face area determination means
(30) Column determination means
(31) Calculation means
(60) Small screen

Claims (7)

1. A digital image display device comprising:
a first determination means for determining the area of each subject element from a digital image of subjects in which a plurality of columns of subject elements lined up side by side are arranged front to back;
a second determination means for detecting, from the positions of the subject element areas, the subject element areas that should be included in the same column;
means for obtaining an average value or intermediate size of the subject element areas belonging to each column;
calculation means for setting, based on the average value or intermediate size of the subject element areas of each column thus obtained, a magnification by which the subject element areas of each column, which differ in size, are to be enlarged/reduced; and
means for enlarging/reducing the subject element areas of each column based on the magnification.

2. The digital image display device according to claim 1, wherein the subject elements are human faces and the digital image of the subjects is a group photo of people.

3. The digital image display device according to claim 1, wherein the magnification by which the subject element areas are to be enlarged/reduced is obtained for each of the vertical and horizontal directions.

4. The digital image display device according to claim 1 or 3, wherein the first or second determination means calculates, from the coordinates of the subject element areas, the angle by which the subject element areas are tilted, and issues a signal indicating that the image is to be tilt-corrected by that angle.

5. A digital image display device comprising:
a first determination means for determining the area of each subject element from a digital image of subjects in which a plurality of columns of subject elements lined up side by side are arranged front to back;
a second determination means for dividing the screen on which the digital image is displayed into a plurality of small screens arranged vertically and horizontally, and detecting, from the positions of the subject element areas, the subject element areas that should be included in each small screen;
means for obtaining an average value or intermediate size of the subject element areas belonging to each small screen;
calculation means for setting, based on the average value or intermediate size of the subject element areas of each small screen thus obtained, a magnification by which the subject element areas of each small screen, which differ in size, are to be enlarged/reduced; and
means for enlarging/reducing the subject element areas of each small screen based on the magnification.

6. The digital image display device according to claim 5, wherein the magnification by which the subject element areas are to be enlarged/reduced is obtained for each of the vertical and horizontal directions.

7. A digital image display method comprising the steps of:
determining the area of each subject element from a digital image of subjects in which a plurality of columns of subject elements lined up side by side are arranged front to back;
detecting, from the positions of the areas of the subject elements, the subject elements that should be included in the same column;
obtaining an average value or intermediate size of the areas of the subject elements belonging to each column;
setting, based on the average value or intermediate size of the subject element areas of each column thus obtained, a magnification by which the subject element areas of each column, which differ in size, are to be enlarged/reduced; and
enlarging/reducing the areas of the subject elements of each column based on the magnification.
JP2005139286A 2005-05-12 2005-05-12 Digital image display device and digital image display method Pending JP2006318151A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2005139286A JP2006318151A (en) 2005-05-12 2005-05-12 Digital image display device and digital image display method


Publications (1)

Publication Number Publication Date
JP2006318151A true JP2006318151A (en) 2006-11-24

Family

ID=37538788

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2005139286A Pending JP2006318151A (en) 2005-05-12 2005-05-12 Digital image display device and digital image display method

Country Status (1)

Country Link
JP (1) JP2006318151A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0528255A (en) * 1991-07-18 1993-02-05 Canon Inc Geometry transformation system for picture
JPH11161779A (en) * 1997-11-28 1999-06-18 Konica Corp Method for processing picture and device for inputting picture
JP2002057879A (en) * 2000-08-10 2002-02-22 Ricoh Co Ltd Apparatus and method for image processing, and computer readable recording medium


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2009001512A1 * 2007-06-27 2010-08-26 Panasonic Corporation Imaging apparatus, method, system integrated circuit, and program
US8466970B2 (en) 2007-06-27 2013-06-18 Panasonic Corporation Imaging apparatus, method, system integrated circuit, and program for correcting inclination of subjects in an image
JP2009139688A (en) * 2007-12-07 2009-06-25 Nikon Corp Focus adjustment device and camera
JP2009217506A (en) * 2008-03-10 2009-09-24 Seiko Epson Corp Image processor and image processing method
JP2010262326A (en) * 2009-04-30 2010-11-18 Casio Computer Co Ltd Photographed image processing apparatus, photographed image processing program, and photographed image processing method
US9159118B2 (en) 2013-02-21 2015-10-13 Ricoh Company, Limited Image processing apparatus, image processing system, and non-transitory computer-readable medium
JP2014081941A (en) * 2013-11-13 2014-05-08 Casio Comput Co Ltd Photographic image processor, photographic image processing program and photographic image processing method

Similar Documents

Publication Publication Date Title
TWI637230B (en) Projector system and display image calibration method thereof
US7734098B2 (en) Face detecting apparatus and method
US7834907B2 (en) Image-taking apparatus and image processing method
JP3684017B2 (en) Image processing apparatus and method
JP2004062565A (en) Image processor and image processing method, and program storage medium
US20050196044A1 (en) Method of extracting candidate human region within image, system for extracting candidate human region, program for extracting candidate human region, method of discerning top and bottom of human image, system for discerning top and bottom, and program for discerning top and bottom
JP4739870B2 (en) Sunglasses detection device and face center position detection device
JPWO2008012905A1 (en) Authentication apparatus and authentication image display method
JP2007089183A (en) Image-capturing apparatus equipped with image compensating function, and operation method thereof
US9600871B2 (en) Image correcting apparatus, image correcting method and computer readable recording medium recording program thereon
US7415140B2 (en) Method of correcting deviation of detection position for human face, correction system, and correction program
JP2006318151A (en) Digital image display device and digital image display method
JP2005006255A (en) Image pickup device
JP2003288588A (en) Apparatus and method for image processing
KR101302601B1 (en) Image processing apparatus for iris authentication and method thereof
WO2018196854A1 (en) Photographing method, photographing apparatus and mobile terminal
JP6098133B2 (en) Face component extraction device, face component extraction method and program
JP4496005B2 (en) Image processing method and image processing apparatus
JP2005316958A (en) Red eye detection device, method, and program
JP4222013B2 (en) Image correction apparatus, character recognition method, and image correction program
WO2005055144A1 (en) Person face jaw detection method, jaw detection system, and jaw detection program
JP2010193154A (en) Image processor and method
EP2541469A2 (en) Image recognition device, image recognition method and image recognition program
JP2006047077A (en) Method and device for detecting line defects of screen
JP2008181439A (en) Face detection device and method, and imaging apparatus

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20080428

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20100408

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20100413

A02 Decision of refusal

Free format text: JAPANESE INTERMEDIATE CODE: A02

Effective date: 20100803