JPH01237610A - Auto focus device - Google Patents

Auto focus device

Info

Publication number
JPH01237610A
Authority
JP
Japan
Prior art keywords
amount
focus
lens
coefficient
deviation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP6479388A
Other languages
Japanese (ja)
Inventor
Yasunobu Otsuka
大塚 康信
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Olympus Corp
Original Assignee
Olympus Optical Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olympus Optical Co Ltd filed Critical Olympus Optical Co Ltd
Priority to JP6479388A priority Critical patent/JPH01237610A/en
Publication of JPH01237610A publication Critical patent/JPH01237610A/en
Pending legal-status Critical Current

Landscapes

  • Focusing (AREA)
  • Automatic Focus Adjustment (AREA)

Abstract

PURPOSE: To enable self-correction of a coefficient by providing correction means which corrects the coefficient based on the result of the focusing operation of lens drive means and calculation means.
CONSTITUTION: The distance-measurement output from a sensor unit 8 is input to a CPU 7. The CPU 7 calculates the amount of deviation of an image pickup lens 1 from its in-focus position using the relative positional relationship of two images and a coefficient, sends an output signal to drive means that moves the image pickup lens 1 based on this deviation amount, and corrects the coefficient based on the result of the focusing operation performed by the lens drive and the calculation. That is, each time distance measurement is carried out, the image shift amount and the amount by which the lens was moved are stored; after the focusing operation, the coefficient is recalculated from the stored data with the final lens position taken as the target focus position, and the coefficient is corrected. In this way, self-correction of the coefficient can be carried out.

Description

DETAILED DESCRIPTION OF THE INVENTION

[Field of Industrial Application] The present invention relates to an autofocus device, and more particularly to an automatic focus adjustment device used in cameras and the like.

[Prior Art] As is well known, autofocus devices (hereinafter referred to as AF devices) for single-lens reflex cameras and the like use a correlation method in which an AF sensor unit 8 having separator lenses, as shown in FIG. 5, determines the defocus amount from the shift between two images. That is, light that has passed through the photographic lens 1 and formed an image on a plane 2 conjugate with the film is guided by a collimator lens 3 and a field mask 4 to two separator lenses 5a and 5b, which form images on a first light-receiving element row 6a and a second light-receiving element row 6b, each consisting of a plurality of photoelectric conversion pixels, of an image sensor 6. This AF device exploits the fact that the shift between the two images corresponds almost linearly to the defocus of the photographic lens 1: the two images are read out from the image sensor 6 as a video signal, the image shift amount is calculated from that data, and the photographic lens 1 is moved to the in-focus position according to the calculated value. In other words, the shift amount Δ of the subject image on the image sensor 6 from its in-focus position is calculated from the output signals of the first light-receiving element row 6a and the second light-receiving element row 6b, the defocus amount D of the photographic lens is obtained from Δ using a linear approximation of the relationship between the shift amount and the defocus amount, and the photographic lens 1 is driven to the in-focus position.
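For orientation only (the symbol k below is our shorthand; the patent itself simply speaks of "a coefficient"), this linear approximation amounts to

    D ≈ k · Δ

so that a single distance measurement is converted directly into a lens drive amount.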

However, in this AF device the relationship between the shift amount Δ and the defocus amount D is well approximated by a straight line only within a range of roughly ±3 to 5 mm about the in-focus position; the relationship is fundamentally hyperbolic, so that, as shown in FIG. 6, the relationship between the image shift amount Δ and the defocus amount D can no longer be approximated by a straight line. The photographic lens is therefore moved too far, and focus cannot be achieved in a single focusing operation. That is, when the photographic lens 1 is driven according to the first distance-measurement result, it stops at a position that overshoots the in-focus position by the ranging error corresponding to the difference between the linear Δ-D characteristic and the hyperbolic Δ-D characteristic.

To solve this problem, Japanese Patent Laid-Open No. 62-100718 proposes obtaining the defocus amount not from a linear equation but from a non-linear expression [A1, A2: constants; Δ: image shift amount].

These constants, however, vary from product to product.

This solution therefore has the inconvenience that the constants must be adjusted individually at the time of manufacture, and the adjustment method is complicated and time-consuming.

Moreover, there is the further drawback that the optimum constants change with temperature.

An object of the present invention is to eliminate these problems by providing an AF device in which the AF device itself has a self-correcting function for the constants.

[Means for Solving the Problems] To achieve the above object, the AF device of the present invention comprises: calculation means for calculating the amount of deviation of the photographic lens from the in-focus position, using a coefficient and the relative positional relationship of two images formed on an AF sensor unit having a first light-receiving element row and a second light-receiving element row; drive means for moving the photographic lens based on the deviation amount; and correction means for correcting the coefficient based on the result of the focusing operation performed by the drive means and the calculation means.
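Viewed as a software decomposition (purely our illustration of the three means named above, not an implementation taken from the patent), the division of responsibilities could be expressed as:

    # Illustrative decomposition of the three "means"; the interfaces are assumptions.
    from typing import Protocol

    class CalculationMeans(Protocol):
        def deviation(self, image_a, image_b, k: float) -> float:
            """Deviation of the photographic lens from the in-focus position,
            computed from the two images and the coefficient k."""

    class DriveMeans(Protocol):
        def move(self, deviation: float) -> None:
            """Move the photographic lens by the given deviation amount."""

    class CorrectionMeans(Protocol):
        def correct(self, k: float, focusing_result) -> float:
            """Return a corrected coefficient based on the focusing result."""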

[Function] The conversion between the image shift amount and the defocus amount departs from the linear equation when the shift is large, as shown in FIG. 2 referred to above. However, as long as the sign of the slope is not reversed, repeated distance measurement and lens movement will converge on the in-focus point for any slope (excluding 0 and infinity).

Therefore, in the present invention, the image shift amount and the amount by which the lens was moved are stored at every distance measurement; after the focusing operation, the coefficient is recalculated from the stored data with the final lens position taken as the target focus position, and the coefficient is corrected with a weight of 1/m. This is done at every AF operation, so that the device corrects itself.
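Read as a formula (our notation; the exact update rule is an inference from this description rather than something the patent states explicitly): if Δ1 is the image shift measured at the first distance measurement and ΣD is the total lens travel accumulated by the time focus is reached, the coefficient implied by this AF run is k_est = ΣD / Δ1, and the stored coefficient k is nudged toward it with weight 1/m:

    k ← k + (k_est − k) / m,   i.e.   k ← (1 − 1/m)·k + (1/m)·k_est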

[Embodiments] The present invention will now be described with reference to the illustrated embodiments.

FIG. 1 is a schematic configuration diagram of an AF device to which the present invention is applied. Light reflected from the subject and passing through the photographic lens 1 is split by a half mirror 10; one beam is guided through a shutter (not shown) to the film plane 9, and the other beam is guided to an AF sensor unit 8 arranged at a position conjugate with the film plane 9. The AF sensor unit 8 has the same configuration as that described with reference to FIG. 5, and its distance-measurement output is input to the CPU 7. The CPU 7 calculates the amount of deviation of the photographic lens 1 from the in-focus position using the relative positional relationship of the two images and a coefficient, sends an output signal to drive means (not shown) that moves the photographic lens 1 according to this deviation amount, and corrects the coefficient based on the result of the focusing operation performed by the lens drive and the above calculation.

FIG. 2 is a flowchart of the coefficient self-correction algorithm of the first embodiment of the present invention. The algorithm consists of an image shift detection subroutine that detects the shift between the two images using the distance-measurement signal from the AF sensor unit 8; a focusing portion that repeats the defocus calculation and the lens movement, storing the defocus amount and the image shift amount each time; the shutter sequence performed after focusing; and a coefficient correction portion that thereafter self-corrects the coefficient based on the stored defocus amounts and image shift amounts.

Next, the operation will be described. At the start of AF, the image shift amount Δ is calculated and stored as Δ1. The coefficient used to obtain the defocus amount D from the image shift amount Δ is the coefficient that is self-corrected by the present invention.

The defocus amount D is likewise stored as D1, and the lens 1 is moved by D. Next, to confirm focus, the image shift amount Δ is measured again and stored in the next memory location. If focus has not been reached, the defocus amount is calculated, stored in the next memory location, and the lens is moved by the calculated value. This is repeated until the deviation is within the allowable focus range. Once focus is within the allowable range, the sequence proceeds to the shutter sequence. After the operations necessary for photographing are completed, the coefficient is self-corrected. If the number of repetitions is two or fewer, correction is considered unnecessary and is not performed. The correction is made using the first and second distance-measurement data: the true defocus amount is obtained from the sum of the defocus amounts D stored up to that point, the coefficient for this AF operation is recalculated from that value and the image shift amount, and the stored coefficient is corrected toward the recalculated value with a weight of 1/m.
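As an illustration only, the measure → move → store loop of this first embodiment and the post-shutter 1/m correction might be organized as in the following Python sketch. The helper names, the focus tolerance, and the exact form of the weighted update are assumptions made for the example; the patent specifies only the behaviour described above.

    # Minimal sketch of the first embodiment's AF cycle (illustrative only).
    # measure_image_shift, move_lens and do_shutter_sequence are assumed camera-side
    # callables; FOCUS_TOLERANCE and WEIGHT_M are placeholder values.

    FOCUS_TOLERANCE = 0.05   # image shift regarded as "in focus" (assumed units)
    WEIGHT_M = 4             # m in the 1/m-weighted coefficient correction

    def af_cycle(k, measure_image_shift, move_lens, do_shutter_sequence):
        """Run one AF cycle and return the (possibly self-corrected) coefficient k."""
        shifts, defocuses = [], []           # Δ and D stored at every distance measurement
        while True:
            delta = measure_image_shift()    # image shift Δ from the AF sensor unit 8
            shifts.append(delta)
            if abs(delta) <= FOCUS_TOLERANCE:
                break                        # within the allowable focus range
            d = k * delta                    # defocus amount D by the linear approximation
            defocuses.append(d)
            move_lens(d)                     # drive the photographic lens 1 by D
        do_shutter_sequence()                # expose the film, etc.
        # Self-correction, skipped when two or fewer repetitions were needed.
        if len(defocuses) > 2:
            true_defocus = sum(defocuses)    # total travel to the final (in-focus) position
            k_est = true_defocus / shifts[0] # coefficient implied by this AF run
            k += (k_est - k) / WEIGHT_M      # 1/m-weighted correction
        return k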

If focusing proves impossible, no correction is made. When the number of repetitions is abnormally large, it is also effective to make the correction using data from several repetitions before focus was reached. Furthermore, the weighting constant m may be made smaller as the number of repetitions increases.

Thus, according to this embodiment, the coefficient is self-corrected after the shutter sequence, when it has no effect on shooting operability. The focusing time therefore becomes shorter with use, the influence of changes in the temperature characteristic can also be compensated, and fast, highly reliable and accurate AF can be performed. Since no adjustment is required at the time of manufacture, the device is inexpensive and free of product-to-product variation. Furthermore, by changing the weighting constant m according to the number of repetitions, a rapid correction is possible even when the coefficient deviates greatly.

FIG. 3 and FIGS. 4A, 4B and 4C show a second embodiment of the present invention. The optical system and the units of the AF device of this embodiment are configured in the same way as in the first embodiment. In this embodiment, as shown in FIG. 3, the calculation of the defocus amount D is divided into areas, and a linear approximation is performed within each area to obtain the defocus amount D from the image shift amount Δ. Flowcharts of the algorithm for this case are shown in FIGS. 4A to 4C.

As shown in the flowchart of FIG. 4C, the defocus calculation algorithm detects the image shift amount Δ as in the first embodiment, obtains the lens movement amount by calculating the defocus amount D with the above-described piecewise linear approximation, and moves the lens accordingly. After the lens movement is completed, the image shift amount Δ is detected again, the defocus amount D is calculated, and the lens movement is repeated until focus is reached. The image shift amount Δ and the defocus amount D are recorded at each repetition. The configuration and operation up to this point are almost the same as in the first embodiment.

After focusing, and after the shutter sequence has been completed, the sequence ends there if the in-focus point was reached in a single focusing operation (K = 1). When focus was reached only after a plurality of focusing operations, the coefficients of the defocus calculation are self-corrected, using as correction data the image shift amount Δ and the defocus amount D recorded at each repetition.

The operation of the AF device of this second embodiment is as follows. At the start of AF, the image shift amount Δ is calculated and stored as Δ1, and the defocus amount D is calculated from the image shift amount Δ. In the defocus calculation, as shown in FIG. 4A, it is first determined whether the image shift amount Δ is positive or negative; the sign determines whether the lens is extended or retracted. As is clear from FIG. 6, the slope also differs between positive and negative image shift (extension or retraction). The calculation is therefore divided into a positive-direction area and a negative-direction area, but since only the coefficients differ, the positive direction is taken here as an example.

First, it is determined in which area the image shift lies.

For example, suppose the image shift lies in area 3 as shown in FIG. 3. The image shift amount Δ then does not become negative until e1, e2 and e3 have been subtracted from it in turn. In the routine for area 3, the over-subtracted e3 is added back, the resulting amount within area 3 is multiplied by the slope a3 of area 3, and the defocus amount B2 accumulated up to area 2 is added, giving the defocus amount D. When the image shift amount Δ is negative, these signs are reversed. The defocus amount D is further multiplied by a lens-dependent coefficient to obtain the defocus amount by which the lens is actually moved.
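The area-by-area conversion just described (positive direction only) can be sketched in Python roughly as follows; the area widths e, slopes a, and the lens-dependent factor are illustrative placeholders, and only the structure of the routine mirrors the description above.

    # Illustrative sketch of the second embodiment's piecewise linear conversion
    # from image shift Δ to defocus D (positive direction). e[i] is the width of
    # area i+1 in image-shift units, a[i] its slope; lens_factor stands in for the
    # lens-dependent coefficient mentioned in the text.

    def defocus_from_shift(delta, e, a, lens_factor=1.0):
        remaining = delta
        offset = 0.0                      # defocus accumulated over inner areas (B1, B2, ...)
        for width, slope in zip(e, a):
            if remaining <= width:        # Δ lies inside this area
                return (offset + slope * remaining) * lens_factor
            remaining -= width            # subtract e1, e2, ... as in the area search
            offset += slope * width       # defocus up to the end of this area
        # beyond the last defined area: extend the last slope (assumption)
        return (offset + a[-1] * remaining) * lens_factor

    # Example with three areas: Δ = 2.4 falls in area 3 after e1 and e2 are subtracted,
    # so D = B2 + a3 * 0.4 with B2 = a1*e1 + a2*e2.
    # defocus_from_shift(2.4, e=[1.0, 1.0, 1.0], a=[1.0, 1.3, 1.8])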

Following the calculation of the defocus amount D, the amount D is recorded and the lens is moved, as in the first embodiment, and this is repeated until focus is reached. After focusing, the shutter sequence is carried out and the CPU completes the operations it requires, such as exposing the film. Thereafter, a final image shift detection and defocus calculation are performed to confirm whether focus was actually reached (see FIG. 4C). Since the focus decision accepts positions that are not necessarily exactly in focus, this final defocus amount D is obtained so that the residual deviation can be reflected in the correction.

If focus was achieved here in a single AF operation (K = 1), the entire sequence ends, since there is no need to correct the coefficients. Otherwise, the slope of the area containing each Δ is corrected in order, starting from the distance-measurement point closest to the in-focus point (the one with the larger K). As shown in FIG. 4B, this correction takes the slopes of the areas with smaller image shift than the area being corrected to be correct, obtains the value of the slope a that would have been needed this time, and corrects the coefficient a with a weight of 1/m. Moreover, by making the weighting coefficient m depend on the number of repetitions, the weighting differs from area to area and an appropriate correction can be made. In addition, as in the first embodiment, the correction may be omitted when focusing is impossible or unreliable, for example when K exceeds a certain value.
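A rough Python sketch of this per-area correction follows. It only illustrates the iteration order (starting from the measurement closest to focus) and the 1/m weighting; the "slope needed this time" is computed here in a simplified way that ignores the offsets contributed by the inner areas, which the text assumes are already correct, and all names are our own.

    # Simplified sketch of the second embodiment's per-area slope correction.
    # records: (Δ, D) pairs stored at each repetition, oldest first.
    # a: mutable list of per-area slopes; area_of(Δ) returns the area index for Δ;
    # m: weighting constant (it may be varied with the repetition count).

    def correct_area_slopes(records, a, area_of, m):
        if len(records) <= 1:
            return a                           # focused in one operation (K = 1): no correction
        remaining_travel = 0.0                 # lens travel still needed after this measurement
        for delta, d in reversed(records):     # closest to focus first (largest K)
            remaining_travel += d
            if delta == 0:
                continue
            i = area_of(delta)
            slope_needed = remaining_travel / delta   # simplified "slope needed this time"
            a[i] += (slope_needed - a[i]) / m         # 1/m-weighted correction of this area's slope
        return a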

[Effects of the Invention] As described above, according to the present invention:

■ It provides a correction means for the defocus calculation whose formula is simple and whose calculation becomes faster the smaller the lens movement is, and self-correction becomes possible.

■ Since the correction can be carried out area by area, learning finishes quickly if the frequently used areas are corrected first.

■ Because the defocus calculation uses a linear approximation, other factors attributable to the AF device, such as its temperature characteristic, misalignment and aging, are also corrected each time the device is used, and the device settles into its best state without any adjustment at the time of manufacture.

■ Although a straight-line approximation is used here, carrying out the same processing with a curved approximation yields a device with fewer areas and smaller errors.

By exhibiting such remarkable effects, the invention can provide an AF device that eliminates the conventional drawbacks.

[Brief Description of the Drawings]

FIG. 1 is a schematic configuration diagram of an AF device to which the present invention is applied;
FIG. 2 is a flowchart of the defocus amount calculation means and coefficient correction means in an AF device according to the first embodiment of the present invention;
FIG. 3 is a diagram showing the areas of the defocus amount versus image shift amount characteristic curve;
FIGS. 4A to 4C are flowcharts of the defocus amount calculation means and coefficient correction means in an AF device according to the second embodiment of the present invention;
FIG. 5 is a configuration diagram of the AF sensor unit in the AF device; and
FIG. 6 is a characteristic diagram of the defocus amount versus the image shift amount.

1: photographic lens; 7: CPU (calculation means, correction means); 8: AF sensor unit; 6a: first light-receiving element row; 6b: second light-receiving element row.

Claims (1)

[Claims]

(1) An autofocus device comprising: a first light-receiving element row and a second light-receiving element row, each consisting of a plurality of pixels, which receive two images that have passed through different optical paths on either side of the optical axis of a photographic lens; calculation means for calculating the amount of deviation of the photographic lens from the in-focus position, using a coefficient and the relative positional relationship of the two images formed on these light-receiving element rows; drive means for moving the photographic lens based on the deviation amount; and correction means for correcting the coefficient based on the result of the focusing operation of the calculation means and the drive means.
JP6479388A 1988-03-18 1988-03-18 Auto focus device Pending JPH01237610A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP6479388A JPH01237610A (en) 1988-03-18 1988-03-18 Auto focus device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP6479388A JPH01237610A (en) 1988-03-18 1988-03-18 Auto focus device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
JP6305774A Division JP2740749B2 (en) 1994-12-09 1994-12-09 Auto focus device

Publications (1)

Publication Number Publication Date
JPH01237610A true JPH01237610A (en) 1989-09-22

Family

ID=13268469

Family Applications (1)

Application Number Title Priority Date Filing Date
JP6479388A Pending JPH01237610A (en) 1988-03-18 1988-03-18 Auto focus device

Country Status (1)

Country Link
JP (1) JPH01237610A (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5359444A (en) * 1992-12-24 1994-10-25 Motorola, Inc. Auto-focusing optical apparatus
US6491391B1 (en) 1999-07-02 2002-12-10 E-Vision Llc System, apparatus, and method for reducing birefringence
US6491394B1 (en) 1999-07-02 2002-12-10 E-Vision, Llc Method for refracting and dispensing electro-active spectacles
US6517203B1 (en) 1999-07-02 2003-02-11 E-Vision, Llc System, apparatus, and method for correcting vision using electro-active spectacles
US6619799B1 (en) 1999-07-02 2003-09-16 E-Vision, Llc Optical lens system with electro-active lens having alterably different focal lengths
US6733130B2 (en) 1999-07-02 2004-05-11 E-Vision, Llc Method for refracting and dispensing electro-active spectacles
US7863550B2 (en) 2007-04-18 2011-01-04 Nikon Corporation Focus detection device and focus detection method based upon center position of gravity information of a pair of light fluxes
US7926940B2 (en) 2007-02-23 2011-04-19 Pixeloptics, Inc. Advanced electro-active optic device
US8915588B2 (en) 2004-11-02 2014-12-23 E-Vision Smart Optics, Inc. Eyewear including a heads up display
US9028062B2 (en) 2007-05-04 2015-05-12 Mitsui Chemicals, Inc. Electronic eyeglass frame
US9122083B2 (en) 2005-10-28 2015-09-01 E-Vision Smart Optics, Inc. Eyewear docking station and electronic module
US9124796B2 (en) 2004-11-02 2015-09-01 E-Vision Smart Optics, Inc. Eyewear including a remote control camera
US9155614B2 (en) 2007-01-22 2015-10-13 E-Vision Smart Optics, Inc. Flexible dynamic electro-active lens
JP2016085465A (en) * 2015-12-17 2016-05-19 株式会社ニコン Focus detection device and imaging device
US9411172B2 (en) 2007-07-03 2016-08-09 Mitsui Chemicals, Inc. Multifocal lens with a diffractive optical power region
US9801709B2 (en) 2004-11-02 2017-10-31 E-Vision Smart Optics, Inc. Electro-active intraocular lenses
US10599006B2 (en) 2016-04-12 2020-03-24 E-Vision Smart Optics, Inc. Electro-active lenses with raised resistive bridges
US10613355B2 (en) 2007-05-04 2020-04-07 E-Vision, Llc Moisture-resistant eye wear
US11061252B2 (en) 2007-05-04 2021-07-13 E-Vision, Llc Hinge for electronic spectacles
US11397367B2 (en) 2016-04-12 2022-07-26 E-Vision Smart Optics, Inc. Electro-active lenses with raised resistive bridges

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5359444A (en) * 1992-12-24 1994-10-25 Motorola, Inc. Auto-focusing optical apparatus
US6491391B1 (en) 1999-07-02 2002-12-10 E-Vision Llc System, apparatus, and method for reducing birefringence
US6491394B1 (en) 1999-07-02 2002-12-10 E-Vision, Llc Method for refracting and dispensing electro-active spectacles
US6517203B1 (en) 1999-07-02 2003-02-11 E-Vision, Llc System, apparatus, and method for correcting vision using electro-active spectacles
US6619799B1 (en) 1999-07-02 2003-09-16 E-Vision, Llc Optical lens system with electro-active lens having alterably different focal lengths
US6733130B2 (en) 1999-07-02 2004-05-11 E-Vision, Llc Method for refracting and dispensing electro-active spectacles
US9500883B2 (en) 1999-07-02 2016-11-22 E-Vision Smart Optics, Inc. Electro-active opthalmic lens having an optical power blending region
US9411173B1 (en) 1999-07-02 2016-08-09 E-Vision Smart Optics, Inc. Electro-active opthalmic lens having an optical power blending region
US9323101B2 (en) 1999-07-02 2016-04-26 E-Vision Smart Optics, Inc. Electro-active opthalmic lens having an optical power blending region
US11822155B2 (en) 2004-11-02 2023-11-21 E-Vision Smart Optics, Inc. Eyewear including a remote control camera
US10795411B2 (en) 2004-11-02 2020-10-06 E-Vision Smart Optics, Inc. Eyewear including a remote control camera and a docking station
US8931896B2 (en) 2004-11-02 2015-01-13 E-Vision Smart Optics Inc. Eyewear including a docking station
US9124796B2 (en) 2004-11-02 2015-09-01 E-Vision Smart Optics, Inc. Eyewear including a remote control camera
US11144090B2 (en) 2004-11-02 2021-10-12 E-Vision Smart Optics, Inc. Eyewear including a camera or display
US8915588B2 (en) 2004-11-02 2014-12-23 E-Vision Smart Optics, Inc. Eyewear including a heads up display
US10852766B2 (en) 2004-11-02 2020-12-01 E-Vision Smart Optics, Inc. Electro-active elements with crossed linear electrodes
US11262796B2 (en) 2004-11-02 2022-03-01 E-Vision Smart Optics, Inc. Eyewear including a detachable power supply and display
US11422389B2 (en) 2004-11-02 2022-08-23 E-Vision Smart Optics, Inc. Eyewear including a remote control camera
US10379575B2 (en) 2004-11-02 2019-08-13 E-Vision Smart Optics, Inc. Eyewear including a remote control camera and a docking station
US9801709B2 (en) 2004-11-02 2017-10-31 E-Vision Smart Optics, Inc. Electro-active intraocular lenses
US10092395B2 (en) 2004-11-02 2018-10-09 E-Vision Smart Optics, Inc. Electro-active lens with crossed linear electrodes
US10729539B2 (en) 2004-11-02 2020-08-04 E-Vision Smart Optics, Inc. Electro-chromic ophthalmic devices
US10126569B2 (en) 2004-11-02 2018-11-13 E-Vision Smart Optics Inc. Flexible electro-active lens
US10159563B2 (en) 2004-11-02 2018-12-25 E-Vision Smart Optics, Inc. Eyewear including a detachable power supply and a display
US10172704B2 (en) 2004-11-02 2019-01-08 E-Vision Smart Optics, Inc. Methods and apparatus for actuating an ophthalmic lens in response to ciliary muscle motion
US9122083B2 (en) 2005-10-28 2015-09-01 E-Vision Smart Optics, Inc. Eyewear docking station and electronic module
US10114235B2 (en) 2005-10-28 2018-10-30 E-Vision Smart Optics, Inc. Eyewear docking station and electronic module
US11474380B2 (en) 2007-01-22 2022-10-18 E-Vision Smart Optics, Inc. Flexible electro-active lens
US9155614B2 (en) 2007-01-22 2015-10-13 E-Vision Smart Optics, Inc. Flexible dynamic electro-active lens
US7926940B2 (en) 2007-02-23 2011-04-19 Pixeloptics, Inc. Advanced electro-active optic device
US7863550B2 (en) 2007-04-18 2011-01-04 Nikon Corporation Focus detection device and focus detection method based upon center position of gravity information of a pair of light fluxes
US10613355B2 (en) 2007-05-04 2020-04-07 E-Vision, Llc Moisture-resistant eye wear
US9028062B2 (en) 2007-05-04 2015-05-12 Mitsui Chemicals, Inc. Electronic eyeglass frame
US11586057B2 (en) 2007-05-04 2023-02-21 E-Vision, Llc Moisture-resistant eye wear
US11061252B2 (en) 2007-05-04 2021-07-13 E-Vision, Llc Hinge for electronic spectacles
US9411172B2 (en) 2007-07-03 2016-08-09 Mitsui Chemicals, Inc. Multifocal lens with a diffractive optical power region
US10598960B2 (en) 2012-01-06 2020-03-24 E-Vision Smart Optics, Inc. Eyewear docking station and electronic module
US11487138B2 (en) 2012-01-06 2022-11-01 E-Vision Smart Optics, Inc. Eyewear docking station and electronic module
US11971612B2 (en) 2012-01-06 2024-04-30 E-Vision Smart Optics, Inc. Eyewear docking station and electronic module
JP2016085465A (en) * 2015-12-17 2016-05-19 株式会社ニコン Focus detection device and imaging device
US11994784B2 (en) 2016-04-12 2024-05-28 E-Vision Smart Optics, Inc. Electro-active lenses with raised resistive bridges
US10599006B2 (en) 2016-04-12 2020-03-24 E-Vision Smart Optics, Inc. Electro-active lenses with raised resistive bridges
US11054714B2 (en) 2016-04-12 2021-07-06 E-Vision Smart Optics, Inc. Electro-active lenses with raised resistive bridges
US11662642B2 (en) 2016-04-12 2023-05-30 E-Vision Smart Optics, Inc. Electro-active lenses with raised resistive bridges
US11397367B2 (en) 2016-04-12 2022-07-26 E-Vision Smart Optics, Inc. Electro-active lenses with raised resistive bridges

Similar Documents

Publication Publication Date Title
JPH01237610A (en) Auto focus device
US8634015B2 (en) Image capturing apparatus and method and program for controlling same
JP2008209761A (en) Focus detecting device and imaging apparatus
US4670645A (en) Focus detecting device with shading reduction optical filter
US5995144A (en) Automatic focusing device using phase difference detection
US6208811B1 (en) Automatic focussing system
JPH0921943A (en) Optical device provided with focal point detector
JPS60262004A (en) Apparatus for photoelectrical detection of image displacement
US4511232A (en) Auto-focus camera
CN110320650A (en) Lens devices and photographic device
JPH07181369A (en) Automatic focusing camera, and automatic focusing method of camera
JP4810160B2 (en) Focus detection apparatus and control method thereof
JP3230759B2 (en) Distance measuring device
JP2850336B2 (en) Focus detection device
JP2006322970A (en) Focus detector
JP2740749B2 (en) Auto focus device
JP4938922B2 (en) Camera system
JP2005195793A (en) Focus detector
JPH0762732B2 (en) Focus detection device
JP2768459B2 (en) Focus detection device
JP2008299048A (en) Accumulation control device, focal point detecting device and imaging device
JP4900134B2 (en) Focus adjustment device, camera
JP6639208B2 (en) Imaging device and control method thereof
JP4536893B2 (en) camera
JP2002072071A (en) Camera