JPH05313068A - Image input device - Google Patents

Image input device

Info

Publication number
JPH05313068A
JPH05313068A JP4119183A JP11918392A
Authority
JP
Japan
Prior art keywords
image
focus
lens
focusing
plane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP4119183A
Other languages
Japanese (ja)
Other versions
JP3084130B2 (en)
Inventor
Susumu Kikuchi
奨 菊地
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Olympus Corp
Original Assignee
Olympus Optical Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olympus Optical Co Ltd filed Critical Olympus Optical Co Ltd
Priority to JP04119183A priority Critical patent/JP3084130B2/en
Publication of JPH05313068A publication Critical patent/JPH05313068A/en
Application granted granted Critical
Publication of JP3084130B2 publication Critical patent/JP3084130B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • G02B7/36Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals
    • G02B7/38Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals measured at different points on the optical axis, e.g. focussing on two or more planes and comparing image data

Abstract

PURPOSE: To provide an easily operable image input device that yields an in-focus image of similar quality over all object surfaces in the field of view and that can process an arbitrary target object appropriately.
CONSTITUTION: A lens driving device 102 drives a lens 101 to move the in-focus position over the object. An adder 240 and an image memory 250 cumulatively add the image signals captured by a TV camera 103 while the lens driving device 102 moves the in-focus position. A CPU 230 controls the lens driving device 102 using the characteristic of a driving speed function v(a), determined by the lens 101, and the characteristic of a correction function w(a), computed from the distances to the respective objects measured by a multi-area distance measuring device 220, so that the spatial-frequency characteristics of the images of the plural object planes in the field of view become nearly uniform in the cumulative addition image.

Description

Detailed Description of the Invention

[0001]

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to an image input device suitable for capturing object images with optical instruments such as optical microscopes.

[0002]

2. Description of the Related Art

As is well known, in an image input device having an optical imaging system, the properties of the input image depend on the characteristics of the optical imaging elements, such as lenses, and of the image sensor. In general, enlarging the aperture of the optical system improves resolution but tends to make the depth of focus shallower. Conventional image input devices therefore increase the depth of focus by controlling the aperture size with a diaphragm or the like. With this approach, however, the characteristics of the optical system make it difficult to obtain an image matched to the desired range of object planes, and the operation is hard to control.

[0003] As a way of obtaining an image with an enlarged range of in-focus object planes (hereinafter abbreviated as focusing planes) when the target object extends beyond the depth of focus of the optical imaging system, the method disclosed in JP-A-60-68312 has been proposed. In an optical microscope, at least one of the sample and the lens barrel is scanned along the optical axis of the microscope, within the photographic exposure time, between the sample-to-barrel distance at which the most convex part of the sample in the field of view is in focus and the distance at which its most concave part is in focus. Since this method successively integrates the light on the film, only the portions that come into focus are correspondingly exposed, and as a result a photograph that is in focus as a whole is obtained.

[0004] The method disclosed in that publication, however, considers only the convex and concave parts of the sample and assumes that the out-of-focus image reduces to a background of constant intensity, so it cannot produce an image that is equally in focus for every target object present in the field of view.

[0005]

Problems to be Solved by the Invention

As described above, conventional image input devices with an optical imaging system adjust the depth of focus by aperture control with a diaphragm or the like, but achieving a desired depth of focus this way is difficult and the operation is hard to control. Moreover, the previously proposed image input method for increasing the depth of focus cannot produce an image of uniform quality for all object planes within the field of view.

[0006] The present invention has been made in view of the above points, and its object is to provide an image input device that produces an in-focus image of uniform quality for all object planes in the field of view, that can process an arbitrary target object adaptively, and that is easy to operate.

[0007]

Means for Solving the Problems

To achieve the above object, the image input device of the present invention comprises: an optical imaging system; imaging means for converting the image of an object formed by the optical imaging system into an electrical signal; focusing-plane moving means for moving the in-focus position on the object side; cumulative addition means, consisting of an adder and an image memory, for cumulatively adding the image signals obtained by the imaging means while the focusing-plane moving means moves the in-focus position; and focusing-plane control means for driving and controlling the focusing-plane moving means so that the spatial-frequency characteristics of the images of the plural object planes in the field of view become as uniform as possible in the cumulative addition image obtained by the cumulative addition means.

[0008]

Operation

That is, in the image input device according to the present invention, the focusing-plane control means drives and controls the focusing-plane moving means, which moves the in-focus position on the object side, so that the spatial-frequency characteristics of the images of the plural object planes in the field of view become as uniform as possible in the cumulative addition image produced by the cumulative addition means, i.e., the adder and image memory that cumulatively add the image signals obtained by the imaging means.

[0009] In other words, when input images are cumulatively added while the in-focus position (focusing plane) is moved over a predetermined range, image information focused on any target object plane within that range is contained in the cumulative addition image. Therefore, even when plural target object planes lie at different positions, the in-focus information of all of them can be captured by setting the movement range of the focusing plane appropriately. In that case the image of a given target object plane in the cumulative addition image contains not only in-focus information but also out-of-focus information, i.e., blurred images. In practice, however, the in-focus information dominates, so the degradation caused by adding the out-of-focus information is small. Furthermore, the image input device according to the present invention drives and controls the focusing plane so that the spatial-frequency characteristics of the images of the respective target object planes become as uniform as possible within the cumulative addition image, thereby correcting variations in image quality, i.e., in the degree of focus, among the target object planes in that image.

[0010]

Embodiments

Embodiments of the present invention will now be described with reference to the drawings. FIG. 1 shows the configuration of the first embodiment. The image input device of this embodiment is broadly divided into an image pickup device 100, an image processor 200, a TV monitor 300, and a control unit 400.

[0011] The image pickup device 100 consists of a lens 101, a lens driving device 102, and a TV camera 103. The lens driving device 102 sets the focusing plane to a predetermined position in response to command signals from a CPU 230 in the image processor 200; it also contains an encoder (not shown) that detects the position of a focusing lens (not shown) within the lens 101, so that information on the position of the focusing plane is sent to the CPU 230. The image signal formed by the lens 101 and captured by the TV camera 103 is converted into a digital signal by an A/D converter 210 in the image processor 200.

[0012] In this configuration, operation proceeds in two stages: preprocessing, which measures the distance from the TV camera 103 to each object plane in the field of view, and main processing, which synthesizes an image focused on all object planes in the field of view on the basis of the distance information obtained in the preprocessing.

[0013] First, the configuration for the preprocessing will be described. The image signal input from the image pickup device 100 and digitized by the A/D converter 210 is fed to a multi-area distance measuring device 220. This device, configured as shown in FIG. 2(A), measures the distance to each object plane in the field of view by measuring the contrast of each small region into which the image is divided. That is, when the digital image signal enters the multi-area distance measuring device 220, a band-pass filter (BPF) 221 first extracts a predetermined frequency band, a squarer 222 squares the result, and an adder 223, a latch 224, and a memory 225 cumulatively add the squared signal.

[0014] The above operation is address-controlled, as shown in FIG. 2(B), so that the accumulation runs over all pixels in each divided region; the memory 225 finally holds, for each divided region, the accumulated power (= contrast) in the predetermined frequency band.

[0015] This operation is repeated as the lens driving device 102 moves the focusing plane little by little, so the memory 225 comes to hold the contrast as a function of the in-focus position (= the contrast curve), as shown in FIG. 2(C).

[0016] The CPU 230 then measures the distance of the object plane imaged in each divided region from the position of the peak of that region's contrast curve and from the data value, corresponding to the in-focus position, sent from the encoder (not shown) in the lens driving device 102.
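The preprocessing pipeline of paragraphs [0013] to [0016] (band-pass filter, squarer, per-region accumulation, then peak picking on the contrast curve) can be sketched in Python/NumPy. This is an illustrative reconstruction rather than the patented circuit: the discrete Laplacian used as the band-pass filter and the box-blur defocus model are assumptions.

```python
import numpy as np

def region_contrast(region: np.ndarray) -> float:
    """Focus measure for one sub-region: band-pass (a discrete Laplacian,
    an assumed stand-in for BPF 221), square (squarer 222), and
    accumulate over all pixels (adder 223 / memory 225)."""
    lap = (-4 * region[1:-1, 1:-1]
           + region[:-2, 1:-1] + region[2:, 1:-1]
           + region[1:-1, :-2] + region[1:-1, 2:])
    return float(np.sum(lap ** 2))

def estimate_focus_index(stack: list) -> int:
    """Index of the frame with peak contrast: the peak of the contrast
    curve of FIG. 2(C), i.e. the in-focus encoder position for this region."""
    curve = [region_contrast(frame) for frame in stack]
    return int(np.argmax(curve))

# Synthetic demo: one textured patch blurred by different amounts,
# standing in for one sub-region seen at five focus settings.
rng = np.random.default_rng(0)
sharp = rng.random((32, 32))

def blur(img, n):  # n box-blur passes as a crude defocus model
    out = img.copy()
    for _ in range(n):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
                   + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5
    return out

stack = [blur(sharp, n) for n in (6, 3, 0, 3, 6)]  # focus sweep
print(estimate_focus_index(stack))  # the least-blurred frame should win
```

Mapping the winning index back through the encoder's address-to-distance table gives the per-region object distance described above.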

[0017] The main processing then operates as follows. On the basis of the distance information to each object plane obtained by the preprocessing described above, the lens driving device 102 focuses on a predetermined object plane, and the image signal input from the image pickup device 100 is converted into a digital signal by the A/D converter 210 and then cumulatively added, frame by frame, by the adder 240 and the image memory 250. After images focused on the predetermined plural object planes have been input and added, the image signal recorded in the image memory 250 is filtered by a spatial filter 260 that emphasizes predetermined middle-to-high spatial frequencies; its output is converted into an analog video signal by a D/A converter 270 and displayed on the TV monitor 300.
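The main processing, i.e. cumulative addition of the focus-swept frames followed by a mid-to-high-frequency boost, can be sketched as below. The unsharp-mask filter is an assumed stand-in for spatial filter 260; the patent only states that middle-to-high spatial frequencies are emphasized.

```python
import numpy as np

def accumulate(frames):
    """Adder 240 + image memory 250: frame-by-frame cumulative addition."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for f in frames:
        acc += f
    return acc

def sharpen(img, amount=1.0):
    """Assumed stand-in for spatial filter 260: unsharp masking with a
    five-point box blur, which boosts mid-to-high spatial frequencies."""
    low = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5
    return img + amount * (img - low)

# Stand-ins for frames captured at the focus positions chosen from the
# preprocessing distance map.
rng = np.random.default_rng(1)
frames = [rng.random((16, 16)) for _ in range(4)]
stacked = sharpen(accumulate(frames))
print(stacked.shape)
```

A real implementation would normalize by the number of frames (or by exposure) before display; the accumulation itself is the only step the claims require.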

[0018] The CPU 230 performs the above operation control, and the control unit 400, connected to the CPU 230, serves as the man-machine interface that conveys the operator's commands to the device.

[0019] Next, the method of setting the focusing plane in the main processing will be described. The lens 101 usually consists of plural lens groups, and the in-focus position is set by driving one of them, called the focusing lens, along the optical axis. FIG. 3(A) shows an example of the relation between the drive amount of the focusing lens and the in-focus position. In the figure, the horizontal axis is the encoder address value a representing the lens drive amount, and the vertical axis is the in-focus position h(a) for the encoder address value a.

[0020] The depth of focus (depth of field) d(a) at the in-focus position h(a) is approximately given by the following equation (1):

d(a) = d2(a) − d1(a)   …(1)

where

[0021]

[Equation 1] (equations (2) and (3), defining d1(a) and d2(a) in terms of h(a), f, p, and k)

[0022] f: focal length; p: a coefficient determined by the resolution of the image sensor; k: a coefficient determined by the optical system. Here d1(a) of equation (2) is the near point of the depth of focus, and d2(a) of equation (3) is its far point.

[0023] Using these relations, the lens driving speed function v(a) is defined by the following equation (4):

v(a) = {c / h(a)} d(a)   …(4)

[0024] In other words, equation (4) takes the reciprocal of the in-focus position h(a) so that, as the lens is driven, the in-focus position moves at a constant velocity on the object side, and it further applies a correction proportional to the depth of focus d(a) at each focusing plane so that images are input accordingly. Driving the lens according to this driving speed function v(a) therefore keeps the spacing of the focusing planes from becoming uneven on the object side, and keeps images from being input at unnecessarily fine intervals on object planes where the depth of focus is large. FIG. 3(B) shows the relation among the in-focus position h(a), the depth of focus d(a), and the lens driving speed function v(a).
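Equation (4) is easy to evaluate once h(a) and d(a) are tabulated against the encoder address. In the sketch below, the hyperbolic h(a) and the distance-squared d(a) are placeholder curves chosen only for illustration; the text supplies only the relation v(a) = c·d(a)/h(a).

```python
import numpy as np

def drive_speed(h, d, c=1.0):
    """Equation (4): v(a) = (c / h(a)) * d(a), evaluated elementwise.
    h : in-focus position h(a) for each encoder address a
    d : depth of focus d(a) at each address
    c : proportionality constant from the text
    """
    return c * d / h

a = np.linspace(0.0, 1.0, 101)   # encoder address values
h = 1.0 / (1.0 - 0.9 * a)        # placeholder: focus position recedes with a
d = 0.05 * h ** 2                # placeholder: depth of focus grows with distance
v = drive_speed(h, d)
print(v[0], v[-1])
```

With a depth of focus growing roughly quadratically in distance, v(a) increases toward far object planes, which is exactly the behavior described: the lens sweeps quickly through deep-focus regions instead of sampling them at unnecessarily fine intervals.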

[0025] Next, a lens driving method that takes the arrangement of the objects in the field of view into account will be described. Suppose, as in the example of FIG. 4(A), that the objects in the field of view lie at positions A1 to A3. If images are input and cumulatively added while the in-focus position is moved from A1 to A3 according to the driving speed function v(a), the image quality of the respective objects will not be uniform. The reason is that, as equations (1) to (3) show, the depth of focus tends to become deeper as the in-focus position is set on object planes farther from the TV camera 103, so the degradation suffered by the other objects when the focusing plane is set on one target object differs from object to object. That is, the image of the object at the farthest position A3 degrades little even when an object plane relatively close to the camera 103 is in focus, whereas the image of the object at the nearest position A1 degrades rapidly as the focusing plane recedes, and when A3 is in focus it is blurred so badly that its structure is almost invisible. In the cumulatively added image, therefore, the image of the object at position A1 is expected to be degraded more than that of the object at position A3.

[0026] Therefore, a correction function w(a) that makes the degradation of each object's image in the cumulative addition image uniform is set as follows. First, the average diameters δj (j = 1, 2, 3) of the circles of confusion obtained when weighting coefficients C1, C2, C3 are applied at the object planes A1, A2, A3 are computed, and the weighting coefficients are determined so that the average values δ1 and δ2, for A1, which degrades the most in the cumulative addition image, and for A2, which degrades the least, become as close as possible. In practice, the calculation of the following equation (5) is performed:
[0027]

[Equation 2] (equation (5))

where tij is the distance between Ai and Aj (i, j = 1, 2, 3), and δj(tij) is the diameter of the circle of confusion at the position of Aj when the focusing plane is at the position of Ai. Then, taking as the constraint the quantity f expressed by the following equation (6),

[0029]

[Equation 3] (equation (6), the constraint f)

Lagrange's method of undetermined multipliers is applied so that the quantity of equation (5) is minimized under this constraint.

[0030]

[Equation 4] (equation (7), defining ψ)

where

[0031]

[Equation 5]

Here, ψ as defined by equation (7) is partially differentiated with respect to each weighting coefficient Cj (j = 1, 2, 3), and the derivatives are set to 0.

[0032]

[Equation 6] (equations (8) and (9))

Equations (8) and (9) form a system of four simultaneous linear equations, and solving it yields the weighting coefficients Cj (j = 1, 2, 3).
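The four-variable linear system above can be assembled and solved numerically. Because equations (5) to (9) survive here only as figure placeholders, the sketch below assumes a generic form: minimize a quadratic blur penalty CᵀQC, with Q built from the blur-diameter matrix δj(tij), subject to the normalization C1 + C2 + C3 = 1. The stationarity conditions plus the constraint then give a 4×4 linear system in (C1, C2, C3, λ), matching the structure described in the text even if the exact coefficients differ from the patent's.

```python
import numpy as np

def solve_weights(delta: np.ndarray):
    """Solve the KKT system of: minimize C^T Q C subject to sum(C) = 1,
    with Q = delta^T delta (an ASSUMED quadratic blur penalty).
    Rows 1-3: 2*Q*C + lam = 0 (stationarity, the equation-(8) analogue);
    row 4:    sum(C) = 1     (the constraint, the equation-(9) analogue)."""
    Q = delta.T @ delta
    n = Q.shape[0]
    kkt = np.zeros((n + 1, n + 1))
    kkt[:n, :n] = 2 * Q
    kkt[:n, n] = 1.0          # Lagrange multiplier column
    kkt[n, :n] = 1.0          # normalization row
    rhs = np.zeros(n + 1)
    rhs[n] = 1.0
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n], sol[n]    # weights C, multiplier lam

# delta[i, j] ~ circle-of-confusion diameter of plane j when focused on
# plane i (illustrative values, symmetric in this toy case).
delta = np.array([[0.0, 1.0, 2.0],
                  [1.0, 0.0, 1.0],
                  [2.0, 1.0, 0.0]])
C, lam = solve_weights(delta)
print(C, C.sum())
```

The point of the sketch is the shape of the computation: a dense 4×4 solve per field of view is trivially cheap, so the weights can be recomputed whenever the distance map changes.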

[0033] Next, the correction function w(a) is obtained by interpolating these weighting coefficients Cj with a higher-order curve. FIG. 4(B) shows an example of the weighting coefficients C1, C2, C3 and the correction function w(a).

[0034] Finally, the lens driving speed function vw(a) is obtained as the product of the driving speed function v(a) of equation (4) and the correction function w(a):

vw(a) = v(a)·w(a) = {c / h(a)} d(a) w(a)   …(10)
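The final drive profile of equation (10) is a pointwise product of v(a) with the interpolated correction. Below, np.polyfit stands in for the "higher-order curve" interpolation of the weights Cj; the anchor addresses, weight values, and the h(a)/d(a) curves are all illustrative assumptions.

```python
import numpy as np

# Encoder addresses at which planes A1..A3 come into focus (illustrative),
# and the weighting coefficients C1..C3 found for them (assumed values,
# with the near plane weighted up as paragraph [0025] suggests).
a_anchor = np.array([0.2, 0.5, 0.8])
C = np.array([1.6, 0.9, 0.5])

a = np.linspace(0.0, 1.0, 101)
# Degree-2 fit through three anchors: an exact "higher-order curve"
# interpolation giving the correction function w(a).
w = np.polyval(np.polyfit(a_anchor, C, 2), a)

h = 1.0 + 4.0 * a                 # placeholder focus-position curve h(a)
d = 0.05 * h ** 2                 # placeholder depth-of-focus curve d(a)
v = d / h                         # equation (4) with c = 1
vw = v * w                        # equation (10): vw(a) = v(a) * w(a)
print(vw.shape)
```

Since three points determine the quadratic exactly, w(a) reproduces each Cj at its anchor address; with more planes a higher degree or a spline would play the same role.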

[0035] In terms of the device configuration, the driving speed function v(a) is determined by the lens 101, so its characteristic is recorded in advance in a ROM or the like within the CPU 230, while the correction function w(a) is computed in the CPU 230 from the distance information to each object measured by the multi-area distance measuring device 220.

[0036] Alternatively, the lens driving device 102 may be configured from the outset to drive the lens according to the driving speed function v(a). It is also possible to store several correction functions w(a) in a ROM provided in the image processor 200 and, taking the distance information to each object as input, call up the correction function w(a) of the closest matching condition.

[0037] The spatial filter 260 is used to emphasize middle-to-high spatial frequencies in order to sharpen the cumulatively added image, into which degraded images have also been added.

[0038] As described above, this embodiment provides an image input device using a camera lens that can synthesize an image focused with equal quality on objects at different distances within the field of view, and that can perform processing adapted optimally to the conditions of an arbitrary target object.

[0039] Next, a second embodiment of the present invention will be described. The figure shows its configuration, which is broadly divided into a microscope device 500, a TV camera 600, a stage driver 700, an image processor 200, a TV monitor 300, and a control unit 400. Of these, the image processor 200, the TV monitor 300, and the control unit 400 are the same as in the first embodiment, so their description is omitted.

[0040] The microscope device 500 is provided with a stage driving device 501 and a stage position encoder 502, and the stage driver 700, which operates on commands from the CPU 230 in the image processor 200, performs predetermined motion control of the stage of the microscope device 500 along the optical axis. The TV camera 600 is mounted on the lens barrel of the microscope device 500 and captures the microscope image.

[0041] In this second embodiment as well, operation proceeds in the two stages of preprocessing and main processing. The preprocessing is the same as in the first embodiment: the distances to the respective target object planes in the field of view are measured by measuring the multi-area contrast while the stage driving device 501 moves the stage along the optical axis.

[0042] In the main processing, as in the first embodiment, the focusing plane is set to a predetermined plurality of positions on the basis of the distances to the target object planes obtained in the preprocessing, and the input images are cumulatively added.

[0043] The method of setting the focusing plane in this second embodiment is described below. For simplicity of explanation, assume an object with a layered structure composed of plural object planes, as shown in FIG. 6.

[0044] In the method of this second embodiment, the focusing plane is set so that a range extending a distance dr both above and below each object plane is covered. That is, as shown in FIG. 6, the range from z1u = z1 − dr to z1h = z1 + dr is covered for object plane 1 at position z1; similarly, the range from z2u = z2 − dr to z2h = z2 + dr for object plane 2 at position z2, and the range from z3u = z3 − dr to z3h = z3 + dr for object plane 3 at position z3. Accordingly, to cover all the object planes in FIG. 6, it suffices to input and cumulatively add images while moving the focusing plane from z2u to z3h and from z1u to z1h.
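The per-plane ranges [zj − dr, zj + dr] overlap when planes lie close together, so the sweep list collapses to a union of intervals (in FIG. 6, one sweep z2u→z3h and one sweep z1u→z1h). A minimal sketch of that merge, with illustrative plane positions:

```python
def scan_ranges(z_planes, dr):
    """Union of the intervals [z - dr, z + dr]: the focusing-plane sweeps
    needed to cover every object plane with margin dr on both sides."""
    intervals = sorted((z - dr, z + dr) for z in z_planes)
    merged = [list(intervals[0])]
    for lo, hi in intervals[1:]:
        if lo <= merged[-1][1]:            # overlaps previous sweep: extend it
            merged[-1][1] = max(merged[-1][1], hi)
        else:                              # gap: start a new sweep
            merged.append([lo, hi])
    return [tuple(iv) for iv in merged]

# Example: two planes close together and one isolated plane (cf. FIG. 6).
print(scan_ranges([10.0, 3.0, 4.5], dr=1.0))
```

Skipping the gaps between merged sweeps is what keeps the cumulative addition from collecting frames in which no plane is anywhere near focus.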

[0045] The operation of the method described above is explained below. First, the input image of the target object plane t when some object plane s is in focus is expressed by the following equation (11):

gt(x,y;t,s) = h(x,y;t,s) * ft(x,y) + n(x,y)   …(11)

where
h(x,y;t,s): the transfer function (PSF) for the target plane t when the plane s is in focus,
ft(x,y): the original image of the target plane t,
n(x,y): additive noise,
gt(x,y;t,s): the observed image of the target plane t,
*: the convolution operator.

In the case of a microscope optical system, the PSF may be regarded as depending on

[0046]

[Equation 7]

so in the following, (t, s) is rewritten as z.

[0047] Now consider inputting and adding images while changing the focusing plane discretely. The image of the target plane t within the added image can be expressed by the following equation (12), derived from equation (11):

[0048]

[Equation 8] (equation (12))

where

[0049]

[Equation 9]

[0050] Here, ha(x,y) is regarded, as shown in the following equation (13), as the sum of a function component hr(x,y) that contributes to the imaging of the original image ft(x,y) and a heavily degraded function component hn(x,y) that transmits almost nothing but noise:

ha(x,y) = hr(x,y) + hn(x,y)   …(13)

where

[0051]

[Equation 10]

r: a number representing the defocus range that contributes to imaging.

[0052]

[Equation 11]

In this case, gta(x,y) is expressed by the following equation (16).

[0053]

【数12】 [Equation 12]

[0054] where
M = N − (2b + 1): the number of input images in which noise is dominant,
Ct: the spatial average value of ft(x,y).
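The decomposition of equation (13) can be sketched as a partition of the per-plane PSFs into those inside the defocus range that contributes to image formation (index r) and the heavily degraded remainder. The blur widths and the threshold below are toy assumptions:

```python
def box_psf(width):
    """Toy defocus PSF: a normalized box of the given width."""
    return [1.0 / width] * width

def split_psfs(widths, max_width):
    """Partition per-plane PSFs: contributing component h_r vs noise-like h_n."""
    h_r = [box_psf(w) for w in widths if w <= max_width]
    h_n = [box_psf(w) for w in widths if w > max_width]
    return h_r, h_n

widths = [1, 3, 5, 9, 15]          # blur width per focal plane (toy values)
h_r, h_n = split_psfs(widths, 5)   # equation (13): h_a = h_r + h_n
```

Beyond the threshold width, a box PSF flattens the image toward its spatial average, which is why those terms appear only as the bias term in equation (16).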

[0055] In equation (16) above, the first term is interpreted as the in-focus image, the third term as a bias term in which the image structure has been lost through heavy degradation, and the fourth term as noise. Therefore, if the focal plane is swept over a range that completely contains the function component hr(x,y) contributing to image formation for every object plane, equation (16) holds in the same way for all object planes. That is, the degradation state of each target surface becomes uniform in the summed image. Furthermore, if the spatial average value Ct does not vary from one target surface to another, this uniformity holds for the S/N as well. The range dr contributing to image formation may be determined appropriately from the characteristics of the optical imaging system; for example, it may be based on the depth of focus of the optical imaging system, or on the defocus amount at which a zero first appears in the OTF.
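As a rough sketch of the first criterion above, the focal planes could be spaced by one depth of focus so that the contributing range is covered for every object plane. The λ/NA² estimate and all names and values below are assumptions for illustration, not values from the patent:

```python
def depth_of_focus(wavelength_um, na):
    """Classical estimate of the axial depth of focus: lambda / NA^2."""
    return wavelength_um / (na * na)

def focal_planes(z_min_um, z_max_um, wavelength_um, na):
    """Focal positions spaced by one depth of focus, covering [z_min, z_max]."""
    step = depth_of_focus(wavelength_um, na)
    planes = []
    z = z_min_um
    while z <= z_max_um:
        planes.append(round(z, 3))
        z += step
    return planes

# Example: 0.55 um light, NA 0.95 objective, 10 um thick specimen.
planes = focal_planes(0.0, 10.0, 0.55, 0.95)
```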

【0056】顕微鏡に於ける対物レンズのようにNAの
大きな光学結像系の場合は、焦点深度は比較的浅く、焦
点はずれによるぼけ方は合焦面の前後でほとんど違いが
なく、且つ合焦面の位置に対しても焦点深度やOTFと
いった光学的特性が変わらないという性質がある。この
ような光学系を有する画像入力装置に対して、本第2の
実施例は、任意の構造を有する対象物体に対し適応的に
処理を行い、各対象物体面の像の周波数特性及びS/N
が均一になるような画像を入力する装置を提供すること
ができる。
In the case of an optical image-forming system having a large NA such as an objective lens in a microscope, the depth of focus is relatively shallow, and the blurring due to defocusing is almost the same before and after the in-focus surface, and in-focus. The optical characteristics such as the depth of focus and the OTF do not change with respect to the position of the surface. In the image input device having such an optical system, the second embodiment adaptively processes a target object having an arbitrary structure to obtain the frequency characteristics and S / S of the image of each target object surface. N
It is possible to provide a device for inputting an image such that the image becomes uniform.
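To put the shallow depth of focus of a high-NA objective in perspective, the classical λ/NA² estimate (an assumption here; the patent states no formula) can be compared for a low-NA lens and a microscope objective:

```python
def depth_of_focus(wavelength_um, na):
    """lambda / NA^2: a standard estimate (an assumption, not from the patent)."""
    return wavelength_um / (na * na)

dof_camera = depth_of_focus(0.55, 0.05)   # low-NA photographic lens
dof_micro  = depth_of_focus(0.55, 0.95)   # high-NA microscope objective
# The high-NA objective's depth of focus is several hundred times smaller,
# which is why the focal plane must be swept to capture a thick specimen.
```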

[0057]

[Effects of the Invention] As described in detail above, the present invention provides a practically useful image input device that yields a focused image of similar quality for all object planes within the field of view, that can process an arbitrary target object adaptively, and that is easy to operate.

[Brief Description of the Drawings]

[Fig. 1] A block diagram of an image input device according to the first embodiment of the present invention.

[Fig. 2] (A) is a block diagram showing details of the multi-area distance measuring device in Fig. 1, (B) is a diagram showing the divided images, and (C) is a graph of the contrast curves recorded in the memory shown in (A).

[Fig. 3] (A) is a graph showing the relationship between the drive amount of the focusing lens and the in-focus position, and (B) is a graph showing the relationship among the in-focus position h(a), the depth of focus d(a), and the lens drive speed function v(a).

[Fig. 4] (A) is a diagram showing the arrangement of target objects within the field of view, and (B) is a graph showing an example of the weighting coefficients C1, C2, C3 and the correction function w(a).

[Fig. 5] A block diagram of an image input device according to the second embodiment of the present invention.

[Fig. 6] A diagram showing a layered object composed of a plurality of object planes.

[Explanation of Symbols]

100 ... Imaging device, 200 ... Image processor, 220 ... Multi-area distance measuring device, 230 ... CPU, 240 ... Adder, 250 ... Image memory, 300 ... TV monitor, 400 ... Control unit, 500 ... Microscope device, 600 ... TV camera, 700 ... Stage drive device driver.

Claims (1)

[Claims]

[Claim 1] An image input device comprising:
an optical imaging system;
imaging means for converting an image of an object formed by the optical imaging system into an electrical signal;
focal-plane moving means for moving the in-focus position on the object plane;
cumulative addition means, comprising an adder and an image memory, for cumulatively adding the image signals obtained by the imaging means while the in-focus position is moved by the focal-plane moving means; and
focal-plane control means for drive-controlling the focal-plane moving means so that the spatial-frequency characteristics of the images of a plurality of object planes within the field of view become closest to uniform in the cumulative addition image obtained by the cumulative addition means.
JP04119183A 1992-05-12 1992-05-12 Image input device Expired - Fee Related JP3084130B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP04119183A JP3084130B2 (en) 1992-05-12 1992-05-12 Image input device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP04119183A JP3084130B2 (en) 1992-05-12 1992-05-12 Image input device

Publications (2)

Publication Number Publication Date
JPH05313068A true JPH05313068A (en) 1993-11-26
JP3084130B2 JP3084130B2 (en) 2000-09-04

Family

ID=14754968

Family Applications (1)

Application Number Title Priority Date Filing Date
JP04119183A Expired - Fee Related JP3084130B2 (en) 1992-05-12 1992-05-12 Image input device

Country Status (1)

Country Link
JP (1) JP3084130B2 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11264937A (en) * 1998-03-18 1999-09-28 Olympus Optical Co Ltd Microscope
JP2001516108A (en) * 1997-09-11 2001-09-25 ミュラー・マルクス・エル Method for collecting and storing optically detectable data
WO2007037439A1 (en) * 2005-09-29 2007-04-05 Olympus Corporation Focal point position deciding method, focal point position deciding device, weak light detecting device, and weak light detecting method
JP2009206831A (en) * 2008-02-27 2009-09-10 Kyocera Corp Imaging apparatus, method of generating image, and electronic equipment
JP2011107669A (en) * 2009-06-23 2011-06-02 Sony Corp Biological sample image acquiring apparatus, biological sample image acquiring method, and biological sample image acquiring program
EP2688284A1 (en) * 2011-03-14 2014-01-22 Panasonic Corporation Imaging device, imaging method, integrated circuit, and computer program
US8767092B2 (en) 2011-01-31 2014-07-01 Panasonic Corporation Image restoration device, imaging apparatus, and image restoration method
US8890996B2 (en) 2012-05-17 2014-11-18 Panasonic Corporation Imaging device, semiconductor integrated circuit and imaging method
US8994298B2 (en) 2011-02-24 2015-03-31 Panasonic Intellectual Property Management Co., Ltd. Movement control apparatus, movement control method, and movement control circuit
US9076204B2 (en) 2010-11-08 2015-07-07 Panasonic Intellectual Property Management Co., Ltd. Image capturing device, image capturing method, program, and integrated circuit
US9083880B2 (en) 2011-03-02 2015-07-14 Panasonic Corporation Imaging device, semiconductor integrated circuit, and imaging method

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5367094B2 (en) 2009-12-07 2013-12-11 パナソニック株式会社 Imaging apparatus and control method thereof
EP2511747B1 (en) 2009-12-07 2017-11-08 Panasonic Corporation Imaging device and imaging method
US8890995B2 (en) 2011-04-15 2014-11-18 Panasonic Corporation Image pickup apparatus, semiconductor integrated circuit and image pickup method
JP5914834B2 (en) 2011-10-12 2016-05-11 パナソニックIpマネジメント株式会社 Imaging apparatus, semiconductor integrated circuit, and imaging method

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001516108A (en) * 1997-09-11 2001-09-25 ミュラー・マルクス・エル Method for collecting and storing optically detectable data
JPH11264937A (en) * 1998-03-18 1999-09-28 Olympus Optical Co Ltd Microscope
WO2007037439A1 (en) * 2005-09-29 2007-04-05 Olympus Corporation Focal point position deciding method, focal point position deciding device, weak light detecting device, and weak light detecting method
US8174686B2 (en) 2005-09-29 2012-05-08 Olympus Corporation Focal position determining method, focal position determining apparatus, feeble light detecting apparatus and feeble light detecting method
JP2009206831A (en) * 2008-02-27 2009-09-10 Kyocera Corp Imaging apparatus, method of generating image, and electronic equipment
JP2011107669A (en) * 2009-06-23 2011-06-02 Sony Corp Biological sample image acquiring apparatus, biological sample image acquiring method, and biological sample image acquiring program
US9235040B2 (en) 2009-06-23 2016-01-12 Sony Corporation Biological sample image acquiring apparatus, biological sample image acquiring method, and biological sample image acquiring program
US9076204B2 (en) 2010-11-08 2015-07-07 Panasonic Intellectual Property Management Co., Ltd. Image capturing device, image capturing method, program, and integrated circuit
US8767092B2 (en) 2011-01-31 2014-07-01 Panasonic Corporation Image restoration device, imaging apparatus, and image restoration method
US8994298B2 (en) 2011-02-24 2015-03-31 Panasonic Intellectual Property Management Co., Ltd. Movement control apparatus, movement control method, and movement control circuit
US9083880B2 (en) 2011-03-02 2015-07-14 Panasonic Corporation Imaging device, semiconductor integrated circuit, and imaging method
EP2688284A1 (en) * 2011-03-14 2014-01-22 Panasonic Corporation Imaging device, imaging method, integrated circuit, and computer program
EP2688284A4 (en) * 2011-03-14 2014-03-12 Panasonic Corp Imaging device, imaging method, integrated circuit, and computer program
US9300855B2 (en) 2011-03-14 2016-03-29 Panasonic Corporation Imaging apparatus, imaging method, integrated circuit, and computer program
US8890996B2 (en) 2012-05-17 2014-11-18 Panasonic Corporation Imaging device, semiconductor integrated circuit and imaging method

Also Published As

Publication number Publication date
JP3084130B2 (en) 2000-09-04

Similar Documents

Publication Publication Date Title
US8537225B2 (en) Image pickup apparatus and image conversion method
US8023000B2 (en) Image pickup apparatus, image processing apparatus, image pickup method, and image processing method
JP4582423B2 (en) Imaging apparatus, image processing apparatus, imaging method, and image processing method
JP4152398B2 (en) Image stabilizer
JP5036599B2 (en) Imaging device
JP3084130B2 (en) Image input device
JP5237978B2 (en) Imaging apparatus and imaging method, and image processing method for the imaging apparatus
CN108737726B (en) Image processing apparatus and method, image capturing apparatus, and computer-readable storage medium
JP4145308B2 (en) Image stabilizer
JP2011188481A (en) Imaging device
JP5144724B2 (en) Imaging apparatus, image processing apparatus, imaging method, and image processing method
JP5419403B2 (en) Image processing device
JP3412713B2 (en) Focus adjustment method
EP1881451A2 (en) Edge-driven image interpolation
JPH1042184A (en) Automatic focus adjustment device for film scanner
JP3109819B2 (en) Automatic focusing device
JP2505835B2 (en) Focus adjustment method and apparatus for television camera
JP2004310504A (en) Picture processing method
JP2925172B2 (en) Automatic tracking device
JP2021071516A (en) Imaging apparatus and method for controlling imaging apparatus
JP3725606B2 (en) Imaging device
EP1522961A2 (en) Deconvolution of a digital image
JP2883648B2 (en) Image input / output device
JP2002277730A (en) Method, device and program for automatic focusing control of electronic camera
JP7087052B2 (en) Lens control device, control method

Legal Events

Date Code Title Description
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20000613

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20080630

Year of fee payment: 8

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090630

Year of fee payment: 9

LAPS Cancellation because of no payment of annual fees