JP2001116985A - Camera with subject recognizing function and subject recognizing method - Google Patents

Camera with subject recognizing function and subject recognizing method

Info

Publication number
JP2001116985A
JP2001116985A
Authority
JP
Japan
Prior art keywords
subject
image
camera
recognizing
sight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
JP29974499A
Other languages
Japanese (ja)
Inventor
Mitsuru Owada (満 大和田)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc
Priority to JP29974499A
Publication of JP2001116985A
Legal status: Withdrawn


Abstract

PROBLEM TO BE SOLVED: Conventionally, recognizing a subject in a captured image requires image processing over the entire captured image, which demands a large amount of computation. The object is to recognize the subject quickly and with high precision by processing image information of only the necessary area.

SOLUTION: The camera is equipped with subject recognition means for recognizing a subject, line-of-sight detection means, and image cropping means that sets the recognition position and range of the subject recognition means according to the detection result of the line-of-sight detection means, whereby the position of the target subject is specified. Distance measurement means is further provided to measure the subject distance at or near the position detected by the line-of-sight detection means, and the image cropping means can vary its cropping range according to the measurement result.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a camera that recognizes a subject in a captured image and performs required processing.

[0002]

2. Description of the Related Art

Subject recognition techniques that recognize a specific object or person in a scene are generally known. As a method for extracting a human face from a captured image, one known approach extracts skin-color data from the original image and treats a cluster of photometric points judged to be within the skin-color range as a face, as in JP-A-52-156624, JP-A-53-14521, and JP-A-53-145622. Further, JP-A-4-346333 discloses a method of converting photometric data into hue and saturation, then creating and analyzing a two-dimensional histogram of them to determine the face area.

[0003] JP-A-8-063597 extracts a face candidate area corresponding to the shape of a human face and determines the face area from feature quantities within that area. Another method extracts the outline of a human face from the image to determine the face area. In yet another method, a plurality of face-shaped templates are prepared, the correlation between each template and the image is computed, and face candidate areas are selected according to the correlation values. Still another approach repeats learning with a neural network so that objects and faces can be recognized; a well-known example is a model using the Neocognitron (Fukushima, "A neural network model for a mechanism of pattern recognition unaffected by shift in position: Neocognitron," Transactions of the IECE of Japan A, J62-A(10), pp. 658-665, Oct. 1979). For neural-network training methods, see "A study of face image identification using a neural network" (ITE Technical Report, Vol. 14, No. 50, pp. 7-12, Sept. 1990), among others.

[0004] In addition, techniques and reference materials concerning face recognition, face detection, and eye detection are introduced in detail in JP-A-9-251534 and JP-A-10-232934.

[0005]

PROBLEMS TO BE SOLVED BY THE INVENTION

In the prior art described above, recognizing a subject in a captured image requires image processing over the entire captured image. The enormous amount of computation involved takes considerable time even with a high-performance CPU or arithmetic unit.

[0006] The present invention has been made in view of this situation, and an object thereof is to provide a camera having a subject recognition function, and a subject recognition method, capable of recognizing a subject at high speed and with high accuracy.

[0007]

MEANS FOR SOLVING THE PROBLEMS

To achieve the above object, the present invention configures a camera having a subject recognition function as in (1) to (4) below, and a subject recognition method as in (5) and (6) below.

[0008] (1) A camera having a subject recognition function, comprising: subject recognition means for recognizing a subject in a captured image; line-of-sight detection means for detecting the photographer's line of sight; and image cropping means for setting the recognition position and range of the subject recognition means based on the detection result of the line-of-sight detection means.

[0009] (2) The camera according to (1), further comprising distance measurement means for measuring the subject distance at or near the position detected by the line-of-sight detection means, wherein the image cropping means changes the cropping range based on the distance measurement result.

[0010] (3) A camera having a subject recognition function, comprising subject recognition means for recognizing a subject in a captured image and line-of-sight detection means for detecting the photographer's line of sight, wherein, when the subject recognition means requires learning, the line-of-sight detection means is used for its learning input.

[0011] (4) The camera according to any one of (1) to (3), wherein the subject recognition function is used for at least one of the camera's AF, AE, zoom, and tracking functions.

[0012] (5) A subject recognition method for a camera, comprising: a step A of detecting the photographer's line of sight; a step B of cropping a subject recognition range based on the detection result of step A; and a step C of performing subject recognition processing within the range cropped in step B.

[0013] (6) A subject recognition method for a camera, comprising: a step A of detecting the photographer's line of sight; and a step B of performing learning in subject recognition processing using the detection result of step A.

[0014]

DESCRIPTION OF THE PREFERRED EMBODIMENTS

Embodiments of the present invention will now be described in detail through examples using a single-lens reflex camera. The invention is not limited to a film camera as in the examples; it can equally be implemented as a still video camera (digital camera) or a video camera. It can also be implemented in the form of a subject recognition method, as supported by the description of the examples.

[0015]

EMBODIMENTS

(Embodiment 1) FIG. 1 is a block diagram showing the configuration of a single-lens reflex camera according to Embodiment 1.

[0016] In FIG. 1, reference numeral 101 denotes a photographing lens; 102, a quick-return mirror; 103, a focusing screen; 104, a pentaprism; 105, a light-splitting prism; 106, an eyepiece; 107, an area sensor for distance measurement; 108, an area sensor for line-of-sight detection; 109, the film plane; 110, a shutter curtain; 111, an image-capture processing unit for the area sensor; 112, an image cropping unit; 113, a subject recognition unit; 114, an output unit; 115, a distance detection unit; and 116, a line-of-sight detection unit.

[0017] The subject image passes through the photographing lens 101, has its optical axis bent 90 degrees by the quick-return mirror 102, and forms a primary image on the focusing screen 103. The focusing screen 103 consists of a matte surface located at the primary imaging plane of the photographing lens 101 and a field lens. The image on the focusing screen 103 has its optical path redirected for the viewfinder by the pentaprism 104, is split by the light-splitting prism 105 toward the distance-measurement area sensor 107 and the eyepiece 106, and is observed by the photographer 117. An image of the photographer's (117) eye is formed on the line-of-sight area sensor 108 via the eyepiece 106 and the light-splitting prism 105.

[0018] Two captured images with different pupil positions (described later), formed on the distance-measurement area sensor 107, are stored by the image-capture processing unit 111, which supplies the required image data to the image cropping unit 112 and the distance detection unit 115 as needed. The image cropping unit 112 selects and crops an image of a predetermined position and size from the captured image, based on the distance information from the distance detection unit 115 and the gaze-position information from the line-of-sight detection unit 116, and outputs the image data to the subject recognition unit 113. The crop position is determined from the photographer's gaze position in the scene as detected by the line-of-sight detection unit 116. The crop size is determined from the distance information of the distance detection unit 115, by estimating the actual size of the subject and using preset size information appropriate to the target subject to be recognized. The position at which the distance detection unit 115 measures the subject is set, based on information from the line-of-sight detection unit 116, to the gaze position in the captured image or its vicinity. The subject recognition unit 113 recognizes the subject in the image supplied by the image cropping unit 112, and the result is output by the output unit 114. Because the subject recognition unit 113 only needs to process the image area designated by the image cropping unit 112, computation is kept to a minimum and high-speed processing becomes possible. Meanwhile, the distance detection unit 115 measures the subject distance at or near the gaze position supplied by the line-of-sight detection unit 116 and outputs the distance information to the image cropping unit 112. The line-of-sight detection unit 116 detects the gaze position from the eye image captured by the line-of-sight area sensor 108 and outputs it as gaze-position information to the distance detection unit 115 and the image cropping unit 112.

[0019] When exposing the film, the quick-return mirror 102 and the shutter 110 are retracted from the photographing light path, and the film 109 is exposed.

[0020] With the series of configurations and operations above, subject recognition processing operates only on the image information needed, based on the area selected by the line of sight, so it can run at high speed. The time saved also makes it possible to run more sophisticated and complex recognition processing. Furthermore, because the area is selected by the photographer's gaze, recognition can be performed with higher accuracy.

[0021] The surroundings of the distance-measurement area sensor 107 and the line-of-sight area sensor 108 will now be described in more detail with reference to FIG. 2.

[0022] Reference numeral 105 denotes a light-splitting prism having a semi-transmissive mirror portion 105a. The prism 105 transmits part of the viewfinder light flux so that the photographer 117 can see the image, guides the remaining flux to the distance-measurement area sensor 107, and also guides the image of the photographer's eyeball to the line-of-sight area sensor 108. Reference numeral 202 denotes a mirror for redirecting the optical path; 203 is a two-dimensional imaging lens for phase-difference distance measurement, which forms an image of the scene on the distance-measurement area sensor 107. Reference numerals 204a and 204b denote illuminators for the photographer's eyeball 117, arranged near the eyepiece 106; 201 is an imaging lens that forms an image of the photographer's eyeball on the line-of-sight area sensor 108. The eye image of the photographer 117, together with the light of the illuminators 204a and 204b, passes through the eyepiece 106, is reflected by the semi-transmissive mirror 105a inside the light-splitting prism 105, and is focused by the imaging lens 201 onto the line-of-sight area sensor 108.

[0023] The distance-measurement operation will be further described with reference to FIG. 3, which corresponds to a summary of the optical system involved in the distance detection of FIGS. 1 and 2. The image primarily formed on the matte surface of the focusing screen 103 is re-imaged by the field lens of the focusing screen 103 and the secondary imaging lens 203 onto regions 107a and 107b of the distance-measurement area sensor 107. At this time, the aperture plate 301 guides light flux from different pupil positions to 107a and 107b, so captured images 107a and 107b with a predetermined parallax are obtained. Each of these parallax images is divided into m x n blocks (m may equal n), and a known correlation operation is performed on the signals of each pair of corresponding blocks; by the principle of triangulation, the distance or defocus amount of the object in each block can then be measured. The relationship between distance and defocus is nonlinear, depending on the focal length, focus position, and characteristics specific to the lens; means for correcting and converting this relationship as necessary are well known.
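The per-block correlation and triangulation described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the sum of absolute differences stands in for the "known correlation operation," and the sensor geometry values (focal length, baseline, pixel pitch) are made-up examples.

```python
import numpy as np

def block_disparity(img_a, img_b, block=8, max_shift=4):
    """Divide two parallax images into blocks and, for each block of
    img_a, find the horizontal shift of img_b that minimizes the sum
    of absolute differences."""
    h, w = img_a.shape
    disp = np.zeros((h // block, w // block))
    for bi in range(h // block):
        for bj in range(w // block):
            ys = slice(bi * block, (bi + 1) * block)
            ref = img_a[ys, bj * block:(bj + 1) * block]
            best, best_err = 0, np.inf
            for s in range(-max_shift, max_shift + 1):
                x0 = bj * block + s
                if x0 < 0 or x0 + block > w:
                    continue  # candidate window falls off the sensor
                err = np.abs(ref - img_b[ys, x0:x0 + block]).sum()
                if err < best_err:
                    best, best_err = s, err
            disp[bi, bj] = best
    return disp

def distance_mm(disp_px, focal_mm=50.0, baseline_mm=8.0, pixel_mm=0.01):
    """Triangulation: subject distance = f * B / (disparity on sensor)."""
    return focal_mm * baseline_mm / (disp_px * pixel_mm)
```

With a 2-pixel disparity, these illustrative numbers give 50 x 8 / 0.02 = 20,000 mm (20 m); the nonlinear, lens-specific distance/defocus correction mentioned in the text is omitted.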

[0024] A camera having this configuration is disclosed in detail in Japanese Patent Application No. 5-278433 and elsewhere.

[0025] Next, the line-of-sight detection means by which the camera recognizes which position on the viewfinder screen, that is, on the focusing screen, the photographer looking into the viewfinder is viewing will be described.

[0026] Various devices (for example, eye cameras) that detect a so-called line of sight (visual axis), i.e., which position on an observation surface the observer is viewing, have long been provided. As one method of detecting the line of sight, JP-A-1-274736, for example, projects a parallel light beam from a light source onto the anterior segment of the observer's eyeball and obtains the visual axis from the corneal reflection image formed by light reflected from the cornea and the imaging position of the pupil.
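A toy sketch of this pupil/corneal-reflection principle: estimate the pupil center as the centroid of dark pixels, estimate the glint (the corneal reflection of illuminators such as 204a and 204b) as the centroid of bright pixels, and take their scaled offset as the gaze direction. The thresholds and calibration gain below are hypothetical; real implementations calibrate per user.

```python
import numpy as np

def gaze_offset(eye_img, dark_thresh=0.2, bright_thresh=0.9, gain=(1.0, 1.0)):
    """Return an (x, y) gaze offset from a normalized grayscale eye image.
    Pupil center = centroid of pixels darker than dark_thresh; glint
    center = centroid of pixels brighter than bright_thresh."""
    pupil = np.argwhere(eye_img < dark_thresh)   # (row, col) pairs
    glint = np.argwhere(eye_img > bright_thresh)
    py, px = pupil.mean(axis=0)
    gy, gx = glint.mean(axis=0)
    return (gain[0] * (px - gx), gain[1] * (py - gy))
```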

[0027] The operation will now be described further with reference to flowcharts.

[0028] FIG. 4 shows the flowchart. First, in step 401 (abbreviated S401 in FIG. 4; likewise below), the photographer's line of sight is detected to obtain the gaze position in the scene, and the flow proceeds to step 402. In step 402, the scene image around that position is captured based on the gaze-position information obtained in step 401, and the flow proceeds to step 403. In step 403, subject recognition processing is performed on the scene image captured in step 402 to detect the subject, and the flow proceeds to step 404. In step 404, the subject detection result obtained in step 403 is output.
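The four steps can be sketched as a simple pipeline. The three callables are hypothetical stand-ins for the line-of-sight detection unit 116, the image cropping unit 112, and the subject recognition unit 113.

```python
def recognize_at_gaze(frame, detect_gaze, crop_around, recognize):
    """S401-S404: detect the gaze position, capture only the scene
    around it, run subject recognition on that crop, output the result."""
    gaze_xy = detect_gaze()               # S401: gaze position in the scene
    region = crop_around(frame, gaze_xy)  # S402: capture only that area
    return recognize(region)              # S403/S404: recognize and output
```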

[0029] This series of steps makes it possible to recognize the subject at the gaze position efficiently.

[0030] FIG. 5 describes the processing of step 402 in FIG. 4 in more detail. In step 501, distance measurement is performed at the gaze position, based on the photographer's gaze-position information obtained in step 401 of FIG. 4, and the flow proceeds to step 502; the distance-measurement method may be the conventional technique described above. In step 502, the photographing magnification at that distance is obtained from the distance information of the gaze position by a known method, from conditions such as the photographing lens and the area sensor, and the flow proceeds to step 503. In step 503, an image range matching the actual size of the object to be recognized is obtained from the magnification information of step 502, and the flow proceeds to step 504. It is better still to prepare the actual-size information in advance and also to preset the relationship between that size information and the recognition processing range; by setting the processing range somewhat larger, cropping misses caused by gaze-detection error can be prevented. In step 504, the scene image to be recognized is captured based on the processing-range information set in step 503, and subject recognition is performed in step 403 of FIG. 4. With this flow, subject recognition from the scene image at the gaze position can be realized successfully.
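Steps S501 to S503 amount to a small calculation: magnification from the measured distance, then the pixel extent matching the target's preset real size, padded a little to absorb gaze-detection error. A hedged sketch with illustrative numbers (a 50 mm lens, a 400 mm-wide target such as head and shoulders, a 0.01 mm pixel pitch; none of these come from the patent):

```python
def crop_range_px(subject_dist_mm, real_size_mm=400.0, focal_mm=50.0,
                  pixel_mm=0.01, margin=1.2):
    """S502: photographing magnification at the measured distance
    (thin-lens approximation); S503: image range matching the target's
    real size, enlarged by `margin` to tolerate gaze-position error."""
    magnification = focal_mm / (subject_dist_mm - focal_mm)
    size_on_sensor_mm = real_size_mm * magnification
    return size_on_sensor_mm / pixel_mm * margin
```

With the distances of FIG. 6 (5 m, 4 m, 3 m), the crop shrinks as the subject gets farther, matching the behavior the cases describe.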

[0031] FIG. 6 further illustrates the operation. In the three figures on the left, the position at which the line of sight was detected in the captured image is shown by a frame, and the measured distance at that position by a number. The three figures on the right show, by frames, the subject recognition processing ranges obtained from the respective gaze positions and distance information. Case 1 shows a distance-measurement result of 5 m when the farthest person at the center is selected; Case 2, 4 m when the person on the left is selected; and Case 3, 3 m when the person on the right is selected. Because the actual subject distance differs in each case, the image range for recognition processing also differs: naturally, the closer the subject, the larger the selected range, and the farther, the smaller.

[0032] The output of the subject recognition can be applied to various functions such as AF (automatic focusing), AE (automatic exposure), zoom, and tracking, making higher-quality photography easier.

[0033] (Embodiment 2) The operation of a camera according to Embodiment 2 will be described with reference to a flowchart.

[0034] This embodiment describes an example in which, when the subject recognition algorithm requires learning (for example, a neural network), the line of sight is used as the learning input.

[0035] FIG. 7 shows the flowchart. In step 701, the scene in which a subject is to be recognized is captured, and the flow proceeds to step 702. In step 702, subject recognition processing is performed on the image captured in step 701, and the flow proceeds to step 703. In step 703, the result of the recognition processing of step 702 is judged: if OK, the flow ends; if NG, it proceeds to step 704. OK or NG may be determined automatically by the recognition processing itself, or selected by an operation of the photographer or the like. In step 704, the subject recognition processing is set to learning mode, and the flow proceeds to step 705. In step 705, the correct answer for the subject recognition result is input using the line of sight, and the flow proceeds to step 706. The gaze input may be given while looking at the captured image, or by entering a symbol, the correct position, or the like. In step 706, learning of the subject recognition processing is performed using the correct answer input in step 705, improving recognition accuracy, and the flow ends.
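The S701 to S706 loop can be sketched as follows. The callables `capture`, `recognize`, `is_ok`, `gaze_label`, and `train` are hypothetical stand-ins for the camera's units, with `gaze_label` playing the role of the gaze-based correct-answer input of step 705.

```python
def gaze_supervised_loop(capture, recognize, is_ok, gaze_label, train,
                         max_rounds=5):
    """Repeat recognition; on NG, enter learning mode and retrain using
    the correct answer supplied through the photographer's gaze."""
    for _ in range(max_rounds):
        frame = capture()           # S701: capture the scene
        result = recognize(frame)   # S702: subject recognition
        if is_ok(result):           # S703: OK -> done
            return result
        label = gaze_label(frame)   # S704/S705: learning mode, gaze input
        train(frame, label)         # S706: learning step
    return None
```

Repeating the loop corresponds to paragraph [0036]: each gaze-labeled failure becomes a training example, so accuracy improves over time.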

[0036] With the above flow, learning input for subject recognition processing can be given easily, and by repeating it, recognition can be performed with ever higher accuracy.

[0037]

EFFECTS OF THE INVENTION

As described above, according to the present invention, using gaze input allows subject recognition to be performed at high speed and with high accuracy, and advanced processing using the subject recognition results becomes easy. As a result, high-quality images can be captured automatically and easily.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram showing the configuration of Embodiment 1.

FIG. 2 is a detailed view of the area around the distance-measurement area sensor and the line-of-sight area sensor.

FIG. 3 is an explanatory diagram of the distance-measurement principle.

FIG. 4 is a flowchart showing the operation of Embodiment 1.

FIG. 5 is a flowchart showing the details of S402.

FIG. 6 is a diagram illustrating the operation.

FIG. 7 is a flowchart showing the operation of Embodiment 2.

EXPLANATION OF REFERENCE NUMERALS

111: image-capture processing unit; 112: image cropping unit; 113: subject recognition unit; 115: distance detection unit; 116: line-of-sight detection unit


Claims (6)

【特許請求の範囲】[Claims] 【請求項1】 撮影画像から被写体を認識する被写体認
識手段と、撮影者の視線を検出する視線検出手段と、該
視線検出手段の検出結果に基づいて該被写体認識手段の
認識位置と範囲を設定する画像切り出し手段とを備えた
ことを特徴とする被写体認識機能を有するカメラ。
1. A subject recognizing means for recognizing a subject from a photographed image, a gaze detecting means for detecting a gaze of a photographer, and a recognition position and a range of the subject recognizing means are set based on a detection result of the gaze detecting means. A camera having a subject recognition function, comprising:
【請求項2】 請求項1記載のカメラにおいて、該視線
検出手段の検出位置又はその近傍の被写体距離を測定す
る測距手段を備え、該画像切り出し手段は、該測距手段
の測距結果に基づいて切り出し範囲を変えることを特徴
とする被写体認識機能を有するカメラ。
2. A camera according to claim 1, further comprising a distance measuring means for measuring a distance of a subject at or near a position detected by said line-of-sight detecting means, wherein said image clipping means outputs the distance measurement result of said distance measuring means. A camera having a subject recognizing function characterized by changing a clipping range based on the subject.
【請求項3】 撮影画像から被写体を認識する被写体認
識手段と、撮影者の視線を検出する視線検出手段とを備
え、該被写体認識手段が学習を必要とする時に、その学
習入力に該視線検出手段を用いることを特徴とする被写
体認識機能を有するカメラ。
3. A subject recognizing means for recognizing a subject from a photographed image, and a gaze detecting means for detecting a gaze of a photographer. When the subject recognizing means requires learning, the gaze detection is applied to the learning input. A camera having a subject recognizing function, characterized by using means.
【請求項4】 請求項1〜3のいずれかに記載のカメラ
において、被写体認識機能を当該カメラのAF,AE,
ズーム,追尾機能の少なくとも一機能に利用することを
特徴とするカメラ。
4. The camera according to claim 1, wherein an object recognition function is set to AF, AE,
A camera characterized in that it is used for at least one of zoom and tracking functions.
【請求項5】 カメラにおける被写体認識方法であっ
て、撮影者の視線を検出するステップAと、このステッ
プAでの検出結果に基づいて被写体の認識範囲を切り出
すステップBと、このステップBにより切り出した範囲
において被写体認識処理を行うステップCとを備えたこ
とを特徴とする被写体認識方法。
5. A method for recognizing a subject in a camera, comprising: a step A for detecting a line of sight of a photographer; a step B for cutting out a recognition range of the subject based on the detection result in the step A; And a step C of performing a subject recognition process in the range.
6. A subject recognizing method for a camera, comprising: a step A of detecting a photographer's line of sight; and a step B of performing learning for subject recognition processing using the detection result of step A.
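Claim 6 uses the detected gaze point as the learning input for the recognizer. One illustrative reading is that patches around each gaze point serve as positive examples and are averaged into a template for later matching; the helper below is a hypothetical sketch of that idea, not the patent's actual learning scheme:

```python
def learn_subject_template(frames, gaze_points, half=2):
    """Sketch of claim 6: gaze points supply the learning input.

    A (2*half+1)-square patch around each gaze point is taken as a
    positive example and all patches are averaged into a template.
    `frames` are row-major lists of pixel rows; purely illustrative.
    """
    acc, n = None, 0
    for frame, (gx, gy) in zip(frames, gaze_points):
        # Cut the patch centred on the gaze point (step A's detection result).
        patch = [row[gx - half:gx + half + 1]
                 for row in frame[gy - half:gy + half + 1]]
        if acc is None:
            acc = [[0.0] * len(patch[0]) for _ in patch]
        for i, row in enumerate(patch):
            for j, v in enumerate(row):
                acc[i][j] += v
        n += 1
    # Average the accumulated patches into a single template (step B).
    return [[v / n for v in row] for row in acc]
```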
JP29974499A 1999-10-21 1999-10-21 Camera with subject recognizing function and subject recognizing method Withdrawn JP2001116985A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP29974499A JP2001116985A (en) 1999-10-21 1999-10-21 Camera with subject recognizing function and subject recognizing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP29974499A JP2001116985A (en) 1999-10-21 1999-10-21 Camera with subject recognizing function and subject recognizing method

Publications (1)

Publication Number Publication Date
JP2001116985A true JP2001116985A (en) 2001-04-27

Family

ID=17876456

Family Applications (1)

Application Number Title Priority Date Filing Date
JP29974499A Withdrawn JP2001116985A (en) 1999-10-21 1999-10-21 Camera with subject recognizing function and subject recognizing method

Country Status (1)

Country Link
JP (1) JP2001116985A (en)

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7327890B2 (en) 2002-12-20 2008-02-05 Eastman Kodak Company Imaging method and system for determining an area of importance in an archival image
US7705908B2 (en) 2003-12-16 2010-04-27 Eastman Kodak Company Imaging method and system for determining camera operating parameter
US8659619B2 (en) 2004-03-26 2014-02-25 Intellectual Ventures Fund 83 Llc Display device and method for determining an area of importance in an original image
JP4600149B2 (en) * 2005-05-23 2010-12-15 ソニー株式会社 Image information processing system, image information processing apparatus and method, recording medium, and program
JP2006332752A (en) * 2005-05-23 2006-12-07 Sony Corp Image information processing system, image information processing apparatus and method, and recording medium and program
US9063290B2 (en) 2006-06-28 2015-06-23 Nikon Corporation Subject tracking apparatus, camera having the subject tracking apparatus, and method for tracking subject
US9639754B2 (en) 2006-06-28 2017-05-02 Nikon Corporation Subject tracking apparatus, camera having the subject tracking apparatus, and method for tracking subject
US8253800B2 (en) 2006-06-28 2012-08-28 Nikon Corporation Tracking device, automatic focusing device, and camera
US8411159B2 (en) 2006-10-25 2013-04-02 Fujifilm Corporation Method of detecting specific object region and digital camera
US8698937B2 (en) 2007-06-01 2014-04-15 Samsung Electronics Co., Ltd. Terminal and image capturing method thereof
US8243142B2 (en) 2007-11-06 2012-08-14 Kabushiki Kaisha Toshiba Mobile object image tracking apparatus and method
JP2011013683A (en) * 2010-08-13 2011-01-20 Ricoh Co Ltd Imaging apparatus, auto focus method, and program for allowing computer to perform the method
JP2015022207A (en) * 2013-07-22 2015-02-02 キヤノン株式会社 Optical device, control method therefor, and control program
JP2014074923A (en) * 2013-11-29 2014-04-24 Nikon Corp Imaging device
JP2015146500A (en) * 2014-02-03 2015-08-13 キヤノン株式会社 Imaging apparatus and control method of the same
US10664991B2 (en) 2016-07-20 2020-05-26 Fujifilm Corporation Attention position recognizing apparatus, image pickup apparatus, display apparatus, attention position recognizing method and program
JP2018205648A (en) * 2017-06-09 2018-12-27 キヤノン株式会社 Imaging device
JP6991746B2 (en) 2017-06-09 2022-01-13 キヤノン株式会社 Imaging device
WO2019078338A1 (en) * 2017-10-19 2019-04-25 ソニー株式会社 Electronic apparatus
CN111201770A (en) * 2017-10-19 2020-05-26 索尼公司 Electronic instrument
JPWO2019078338A1 (en) * 2017-10-19 2020-11-19 ソニー株式会社 Electronics
CN111201770B (en) * 2017-10-19 2022-08-09 索尼公司 Electronic instrument
US11483481B2 (en) 2017-10-19 2022-10-25 Sony Corporation Electronic instrument
JP7160044B2 (en) 2017-10-19 2022-10-25 ソニーグループ株式会社 Electronics
JP7433788B2 (en) 2019-07-03 2024-02-20 キヤノン株式会社 Control device, imaging device, control method, program

Similar Documents

Publication Publication Date Title
JP3158643B2 (en) Camera having focus detecting means and line-of-sight detecting means
US8538252B2 (en) Camera
JP2001116985A (en) Camera with subject recognizing function and subject recognizing method
JP5950664B2 (en) Imaging apparatus and control method thereof
JP2014202875A (en) Subject tracking device
JP3102825B2 (en) camera
JP6812387B2 (en) Image processing equipment and image processing methods, programs, storage media
JP6602081B2 (en) Imaging apparatus and control method thereof
US20070002463A1 (en) Image capturing apparatus
JP3653739B2 (en) Camera with subject tracking function
US6636699B2 (en) Focus detection device and distance measurement device
JP2974383B2 (en) Gaze detection device and device having gaze detection device
JP3097201B2 (en) Exposure calculation device
JP3192483B2 (en) Optical equipment
JP5018932B2 (en) Imaging device
JPH05173242A (en) Display device for camera
JPH03107932A (en) Photographing controller for camera
JP3180458B2 (en) Camera having line-of-sight detection means
JP2952072B2 (en) Optical line-of-sight detecting device and optical apparatus having line-of-sight detecting means
JP3363492B2 (en) Eye gaze detection apparatus and eye gaze detection method
JP2019008075A (en) Imaging device
JP3171698B2 (en) Camera focal length control device
JP3184542B2 (en) camera
JP2952071B2 (en) Optical line-of-sight detecting device and optical apparatus having line-of-sight detecting means
JP2024003432A (en) Electronic device

Legal Events

Date Code Title Description
A300 Application deemed to be withdrawn because no request for examination was validly filed

Free format text: JAPANESE INTERMEDIATE CODE: A300

Effective date: 20070109