WO2013047111A1 - Imaging apparatus and focusing parameter value calculation method - Google Patents
- Publication number
- WO2013047111A1 (PCT application PCT/JP2012/072481)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- phase difference
- sensitivity
- incident angle
- value
- imaging
- Prior art date
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B7/00—Mountings, adjusting means, or light-tight connections, for optical elements
- G02B7/28—Systems for automatic generation of focusing signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/672—Focus control based on electronic image sensor signals based on the phase difference signals
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B7/00—Mountings, adjusting means, or light-tight connections, for optical elements
- G02B7/28—Systems for automatic generation of focusing signals
- G02B7/34—Systems for automatic generation of focusing signals using different areas in a pupil plane
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/70—SSIS architectures; Circuits associated therewith
- H04N25/703—SSIS architectures incorporating pixels for producing signals other than image signals
- H04N25/704—Pixels specially adapted for focusing, e.g. phase difference pixel sets
Definitions
- the present invention relates to an imaging apparatus including an imaging element in which phase difference pixels are formed, and a focusing parameter value calculation method.
- in an imaging element (image sensor) mounted on an imaging apparatus (camera), some of the many pixels arranged in a two-dimensional array in the light receiving area serve as phase difference pixels (also called focus detection pixels).
- a phase difference AF method is employed as an AF method for focusing the photographing lens on the subject.
- the phase difference pixels are structured as pairs that perform pupil division: one pixel of a pair receives one of two light beams passing through different optical paths of the photographing lens, the other pixel receives the other beam, and the relative positional shift between the two detection signals in the pupil division direction is detected.
- the phase difference AF method adjusts the focus shift amount of the photographic lens in accordance with the positional shift amount.
- among imaging apparatuses equipped with an imaging element, there are interchangeable-lens types in addition to types in which a single photographing lens is fixed to the apparatus. When the photographing lens is replaced, its full-aperture F value, focal length, spherical aberration, and so on differ.
- conventionally, correction amounts are prepared as table data, and appropriate table data is selected when the photographing lens is replaced.
- An object of the present invention is to provide an imaging apparatus and a focusing parameter value calculation method that can execute favorable phase difference AF control with only a small-capacity memory, regardless of which photographing lens is attached.
- An imaging apparatus and a focusing parameter value calculation method include: an imaging element in which a plurality of pixels are arranged in a two-dimensional array and phase difference pixels are formed in a focus detection area within the effective pixel region; a photographing lens provided in front of the imaging element; a phase difference amount detection unit that analyzes the captured image signal from the imaging element and obtains a phase difference amount from the detection signals of two paired phase difference pixels; and a control unit that obtains, from the phase difference amount detected by the phase difference amount detection unit, the defocus amount of the subject image captured by the imaging element through the photographing lens and performs focusing control of the photographing lens. The control unit obtains a parameter value relating to the ratio between the defocus amount and the phase difference amount based on the photographing lens information of the photographing lens and the light receiving sensitivity distribution, which is the sensitivity for each incident angle of incident light of the two paired phase difference pixels, and obtains the defocus amount from the parameter value and the detected phase difference amount.
- according to the present invention, it is possible to obtain focusing parameter values that enable highly accurate focusing control even when the photographing lens is replaced and the F value changes.
- FIG. 3 is a partially enlarged view of a focus detection area of the solid-state imaging device shown in FIG. 2.
- FIG. 1 is a functional block configuration diagram of a digital camera (imaging device) according to an embodiment of the present invention.
- the digital camera 10 includes a photographic optical system 21 that includes a photographic lens 21a, a diaphragm 21b, and the like, and an imaging element chip 22 that is disposed at a subsequent stage of the photographic optical system 21.
- the photographing optical system 21 is provided in a replaceable manner, and a user can select a desired photographing optical system (a wide-angle lens system, a telephoto lens system, etc.).
- the image pickup device chip 22 includes: a single-plate color solid-state image pickup device 22a of the CCD or CMOS type with signal reading means; an analog signal processing unit (AFE) 22b that performs analog processing such as correlated double sampling on the analog image data output from the solid-state image pickup device 22a; and an analog/digital conversion unit (A/D) 22c that converts the analog image data output from the analog signal processing unit 22b into digital image data.
- AFE analog signal processing unit
- A/D analog/digital conversion unit
- the digital camera 10 further includes a drive unit 23 (including a timing generator TG) that, in accordance with instructions from a system control unit (CPU) 29 described later, performs focus position control and zoom position control of the photographing optical system 21 and drive control of the solid-state image sensor 22a, the analog signal processing unit 22b, and the A/D 22c, and a flash 25 that emits light in response to an instruction from the CPU 29.
- the drive unit 23 may be mounted together in the image sensor chip 22 in some cases.
- the digital camera 10 of this embodiment further includes: a digital signal processing unit 26 that takes in the digital image data output from the A/D 22c and performs known image processing such as interpolation, white balance correction, and RGB/YC conversion; a compression/expansion processing unit 27 that compresses the image data into a format such as JPEG and expands it back again; a display unit 28 that displays menus, through images (live view images), and captured images; a system control unit (CPU) 29 that performs overall control of the entire digital camera; an internal memory 30 such as a frame memory; a media interface (I/F) unit 31 that performs interface processing with a recording medium 32 storing JPEG image data and the like; and a bus 34 interconnecting them. An operation unit 33 for inputting user instructions is connected to the system control unit 29.
- CPU system control unit
- the system control unit 29 uses the subordinate digital signal processing unit 26 and the like to obtain the phase difference amount from the detection signals of the phase difference pixels and to calculate the focusing parameter values described later, thereby performing focusing control of the photographing optical system 21.
- FIG. 2 is a schematic diagram of the surface of the solid-state imaging device 22a.
- the solid-state imaging element 22a is formed on a horizontally long semiconductor substrate, and a large number of pixels (photoelectric conversion elements: photodiodes) are formed in a two-dimensional array in its light receiving area (effective pixel area) 41.
- a central area of the light receiving area 41 is a focus detection area 42, and a phase difference pixel described later is provided in the focus detection area 42.
- FIG. 3 is an enlarged view of a part of the focus detection area 42 shown in FIG. 2, and shows a pixel array and a color filter array.
- each pixel is indicated by a square frame tilted 45 degrees.
- the R (red), G (green), or B (blue) marked on each pixel indicates the color of its color filter.
- the array is a so-called honeycomb pixel array, in which the even-numbered (or odd-numbered) pixel rows are shifted by 1/2 pixel pitch with respect to the odd-numbered (or even-numbered) pixel rows.
- when only the pixels in the even rows are viewed, the pixel arrangement is a square lattice and the three primary color filters RGB form a Bayer arrangement; likewise, when only the pixels in the odd rows are viewed, the pixel arrangement is a square lattice and the three primary color filters rgb form a Bayer arrangement.
- the light-receiving area of each pixel is the same, and the size of each light-shielding film opening is also the same (the light-shielding film opening is different only in a phase difference pixel described later).
- microlenses having the same shape in all pixels are mounted on the color filter of each pixel (these illustrations are omitted).
- in the pixel row of pixels carrying G filters in the solid-state imaging device 22a shown in FIG. 3 (hereinafter referred to as G pixels; R, B, r, g, and b pixels are named in the same way) and the adjacent pixel row of g pixels, one pixel out of every four is used as a phase difference pixel 2.
- each phase difference pixel pair consists of a G pixel and a g pixel.
- the light shielding film opening 2 a is smaller than the light shielding film opening 3 (only one place is shown) of other normal pixels and is shifted to the right with respect to the pixel center of the G pixel 2.
- pupil division is performed by providing the light shielding film opening 2b, formed in the same manner as the light shielding film opening 2a but decentered to the left with respect to the pixel center of the g pixel 2.
- the pixel array is a so-called honeycomb pixel array, but the following embodiments can be applied even to an image sensor having a square lattice array.
- since the phase difference pixel pair is preferably made up of pixels of the same color, a color filter array in which two pixels of the same color are arranged adjacently may also be used.
- FIG. 4 is an explanatory diagram of phase difference detection using a phase difference pixel pair (one pixel is referred to as a first pixel, and the other pixel is referred to as a second pixel).
- FIG. 4A is a graph showing the output distribution L of the first pixel and the output distribution R of the second pixel against the coordinate position on the imaging surface when the subject is at a position far from the in-focus position.
- each of the output distributions L and R has a mountain shape (represented by a rectangular wave in FIG. 4), and the two are separated by an interval α.
- FIG. 4B is a graph showing the output distributions L and R of the first and second pixels when the subject is closer to the in-focus position than in FIG. 4A. Compared with FIG. 4A, the output distributions L and R are closer together; that is, the interval α between them is narrower.
- FIG. 4C is a graph showing the output distributions L and R of the first pixel and the second pixel when the subject is present at the in-focus position.
- the phase difference amount of the detection signals of the first pixel and the second pixel can be obtained, for example, based on the value of the interval α.
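As a concrete illustration, the interval α can be estimated by searching for the shift that best aligns the two one-dimensional output distributions. This is a minimal sketch under assumed array inputs, not an implementation from the patent; a sum-of-absolute-differences search stands in for whatever known correlation method the apparatus actually uses:

```python
import numpy as np

def phase_difference(left, right, max_shift=32):
    """Estimate the separation between the first-pixel output profile
    `left` and the second-pixel profile `right` (1-D arrays sampled
    along the pupil-division direction) as the integer shift that
    minimizes the sum of absolute differences between them."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    best_shift, best_sad = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        sad = np.abs(left - np.roll(right, s)).sum()
        if sad < best_sad:
            best_shift, best_sad = s, sad
    return best_shift  # the interval α in pixels (sign gives direction)
```

A real AF implementation would normally refine this to sub-pixel precision, e.g. by parabolic interpolation around the SAD minimum.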
- the incident angles θ1 and θ2 of the incident light, the respective separation amounts a1 and a2 (the total separation amount is a1 + a2), and the defocus amount b are in a fixed functional relationship:
- tan θ1 = a1/b, that is, θ1 = tan⁻¹(a1/b)
- tan θ2 = a2/b, that is, θ2 = tan⁻¹(a2/b)
- accordingly, the parameters θ1 and θ2 relating to the ratio between the defocus amount and the separation amount (phase difference amount) are used as the focusing parameters, and the values of these parameters are calculated.
- of course, "tan θ" may be used as the parameter instead.
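Numerically, once θ1 and θ2 are fixed, the defocus amount follows directly from the detected separation, since a1 + a2 = b·(tan θ1 + tan θ2). A minimal sketch (the function name and unit conventions are illustrative, not from the patent):

```python
import math

def defocus_from_separation(separation, theta1, theta2):
    """Defocus amount b from the total separation a1 + a2 and the
    focusing parameters theta1, theta2 (radians), using
    tan(theta1) = a1/b and tan(theta2) = a2/b, hence
    separation = b * (tan(theta1) + tan(theta2))."""
    return separation / (math.tan(theta1) + math.tan(theta2))
```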
- the problem here is that the incident angle of incident light differs when the photographing lens 51 shown in FIG. 5A is replaced with a photographing lens 52 having a different F value as shown in FIG. 5B. For this reason, the defocus amount varies depending on the photographing lens.
- therefore, focusing parameter values from which the defocus amount can be obtained with high accuracy even when the F value of the photographing lens differs are calculated as follows.
- FIG. 6 is an explanatory diagram of a method for calculating a parameter value for focusing according to the first embodiment of the present invention.
- L and R respectively denote the light receiving sensitivity distribution characteristic L, the sensitivity of the first pixel for each incident angle of incident light, and the light receiving sensitivity distribution characteristic R, the corresponding sensitivity of the second pixel.
- the horizontal axis in FIG. 4 is the coordinate position of the imaging surface, but the horizontal axis in FIG. 6 is the incident angle of the incident light.
- the focusing parameter values are calculated from the light receiving sensitivity distribution characteristics L and R; however, only the partial regions of L and R within the incident angle range corresponding to the F value of the photographing lens (range X) are used.
- data representing the relationship between the light receiving sensitivity distribution characteristics L and R and the incident angle in FIG. 6 may be acquired in advance, for example at the time of inspection after manufacturing the imaging device.
- within the incident angle range X corresponding to the F value, the product of the incident angle θ and the light receiving sensitivity I(θ) is integrated over θ, and this integral is divided by the integral of I(θ); the result is the sensitivity center of gravity θG = ∫θ·I(θ)dθ / ∫I(θ)dθ.
- the incident angle corresponding to the sensitivity centroid position A1 of the L characteristic is taken as the focusing parameter value θ1, and
- the incident angle corresponding to the sensitivity centroid position B1 of the R characteristic is taken as the focusing parameter value θ2.
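The centroid computation above can be sketched on a sampled sensitivity curve as follows; the angle grid, the range X endpoints, and the use of a discrete weighted mean in place of the integrals are assumptions for illustration:

```python
import numpy as np

def sensitivity_centroid(angles, sensitivity, x_min, x_max):
    """Sensitivity center of gravity within the incident-angle range
    [x_min, x_max] (the range X set by the lens F value):
    theta_G = integral(theta * I(theta)) / integral(I(theta)),
    approximated by a discrete weighted mean on a uniform grid."""
    angles = np.asarray(angles, dtype=float)
    sens = np.asarray(sensitivity, dtype=float)
    mask = (angles >= x_min) & (angles <= x_max)
    th, weight = angles[mask], sens[mask]
    return float((th * weight).sum() / weight.sum())
```

Applying this to the L curve yields θ1 (position A1) and to the R curve yields θ2 (position B1).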
- the parameter values θ1 and θ2 obtained as described above do not change once the F value of the lens is determined, that is, once the photographing lens to be used is determined and the range X in FIG. 6 is fixed.
- once the defocus amount is obtained from the phase difference given by the output difference between the first pixel and the second pixel and the photographing lens is focus-controlled accordingly, the subject can be focused with high accuracy no matter what kind of photographing lens is attached to the imaging apparatus.
- in other words, the focusing control of the lens can be performed with high accuracy regardless of the type of the photographing lens.
- furthermore, since the calculation is performed within this range X, the parameter values can be determined without being affected by variations in the optical characteristics of the photographing lens.
- in addition, by performing the calculation based on the light receiving sensitivity curves L and R of the phase difference pixels of each individual image sensor, parameter values unaffected by individual variation among image sensors can be calculated.
- FIG. 7 is a flowchart showing an imaging process procedure executed by the CPU 29 of the imaging apparatus shown in FIG. 1 via the subordinate drive unit 24, digital signal processing unit 26, and the like.
- the CPU 29 acquires lens data (step S1). That is, F value data set for the photographing lens (a stop of the photographing optical system) is acquired.
- next, the captured image signal output from the solid-state image pickup device 22a in the moving image state and processed by the digital signal processing unit 26 is analyzed, and the focusing parameter values θ1 and θ2 are calculated by the calculation method described with reference to FIG. 6 (step S2).
- in step S3, it is determined whether the lens has been replaced (or whether the F value has been changed by adjusting the diaphragm 21b of the photographing optical system). If the lens has not been replaced (and the F value has not been changed), the process jumps to step S6 and waits for a half-press (S1) of the two-stage shutter button.
- when the shutter button is half-pressed, the defocus amount is obtained by calculation based on the above focusing parameter values θ1 and θ2 and the phase difference amount obtained by a known method similar to the conventional one (step S7), and the focusing operation is executed (step S8).
- thereafter, the process proceeds to well-known shooting processing after waiting for a full press (S2) of the two-stage shutter button; a description thereof is omitted.
- if it is determined in step S3 that the lens has been replaced (or the F value changed), the process proceeds to step S4 and acquires the F value data set for the photographing lens after the replacement (or change). Then, in the next step S5, the focusing parameter values θ1 and θ2 are calculated by the calculation method described with reference to FIG. 6, and the process proceeds to step S6 described above.
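In code, the branch at steps S3-S5 amounts to caching the focusing parameter values keyed on the current F value, so the per-frame AF path (steps S6-S8) never recomputes them. A minimal sketch; the dict-based state and the `compute_params` callback are illustrative assumptions, not the patent's structures:

```python
def update_focus_params(state, current_f_value, compute_params):
    """Recompute the focusing parameter values only when the attached
    lens or its F value has changed (steps S3-S5 of FIG. 7); otherwise
    reuse the cached values on the half-press AF path."""
    if state.get("f_value") != current_f_value:
        # Lens replaced or diaphragm adjusted: acquire the new lens
        # data (S4) and recalculate the focusing parameters (S5).
        state["f_value"] = current_f_value
        state["params"] = compute_params(current_f_value)
    return state["params"]
```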
- an appropriate in-focus parameter value is calculated even if the lens is exchanged, so that an image focused on the subject can be captured.
- the focusing parameter values calculated in this embodiment are preferably the averages of the parameter values obtained from each of the plurality of phase difference pixel pairs formed discretely in the focus detection area 42 (whose center coincides with the center of the image sensor light receiving region 41) shown in FIG. 2.
- the information on the photographing lens, that is, the incident angle range for each F value and image height, may be obtained in any of the following forms: (1) acquired from the lens; (2) obtained from setting information on the imaging apparatus body side; (3) obtained by acquiring from the lens a lens ID representing the lens type and reading out the lens information (F value and incident angle range for each image height) stored in advance on the imaging apparatus body side for each lens ID.
- FIG. 8 is an explanatory diagram for calculating a parameter value for focusing according to another embodiment of the present invention.
- in the first embodiment, the focusing parameter values are calculated from the sensitivity centroid positions A1 and B1; in this embodiment, the sensitivity area center position A2 of the sensitivity distribution L (and B2 of the distribution R) within the partial region of range X is used instead, and the focusing parameter values θ1 and θ2 are calculated from the corresponding angles.
- the sensitivity area center position A2 is the position at which the areas of the right-hatched region and the left-hatched region under the curve are equal. Even when the focusing parameter values θ1 and θ2 are calculated from the sensitivity area center positions A2 and B2 instead of the sensitivity centroid positions A1 and B1, focusing control that takes the lens F value into account can likewise be performed with high accuracy.
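On the same sampled curve, the sensitivity area center is the angle that splits the area under the curve in half; a minimal sketch with the same assumed inputs as the centroid example:

```python
import numpy as np

def sensitivity_area_center(angles, sensitivity, x_min, x_max):
    """Incident angle at which the area under the sensitivity curve to
    the left equals the area to the right, within the range X given by
    the lens F value (i.e. the median of the sensitivity viewed as a
    distribution over incident angle)."""
    angles = np.asarray(angles, dtype=float)
    sens = np.asarray(sensitivity, dtype=float)
    mask = (angles >= x_min) & (angles <= x_max)
    th, weight = angles[mask], sens[mask]
    cum = np.cumsum(weight)
    idx = int(np.searchsorted(cum, cum[-1] / 2.0))
    return float(th[idx])
```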
- FIG. 9 is an explanatory diagram of another embodiment of the focus detection area (phase difference area) 42 provided in the solid-state imaging device 22a.
- in the above embodiments, the focusing parameter values are calculated for the single focus detection area 42.
- in this embodiment, the focus detection area 42 is divided into a plurality of divided areas 43, the average of the focusing parameter values is calculated for each divided area 43, and this average is used as that divided area's parameter value.
- the image of the main subject formed on the light receiving surface of the solid-state image pickup device 22a is not necessarily at the center of the device; it may lie at an arbitrary coordinate position, for example near the top or close to the left side. For this reason, it is more accurate to divide the focus detection area 42 into a plurality of divided areas 43 and calculate the focusing parameter values for each divided area 43.
- FIG. 10 shows a case where the image height of the main subject image is high.
- the incident angle range, that is, the range of incident angles corresponding to the F value, changes according to the image height (relative to image height 0).
- in divided areas whose incident position is higher or lower, or which are displaced to the left or right, the incident angle range becomes narrower than the on-axis range; calculating the parameter values for each such divided area therefore enables higher accuracy.
- in the above embodiments, a pair of a G pixel and a g pixel forms the phase difference pixel pair, but a pair of an R pixel and an r pixel, or of a B pixel and a b pixel, can also serve as phase difference pixels.
- the wavelengths of the R light, the G light, and the B light are different, it is necessary to consider that the incident angle characteristics are different.
- FIG. 11 shows the incident angle characteristics of R light, G light, and B light in a normal pixel.
- as shown, the incident angle characteristic also has wavelength dependency.
- although the range X of incident angles corresponding to a given F value does not change between R, G, and B, the integrated sensitivity within this range does change between them, and the sensitivity ratio changes accordingly; the focusing parameter values must be calculated with this taken into account.
- FIG. 12 is a flowchart showing a detailed processing procedure of step S2 or step S5 of FIG.
- when the processing for calculating the focusing parameter values is entered, first, in step S11, the RGB incident angle characteristics are referred to (in the case where phase difference pixels are also provided at R pixels and B pixels).
- in the next step S12, the sensitivity center is calculated from the F value, the image height, and the incident angle characteristics.
- the sensitivity center may be either the sensitivity centroid position (FIG. 6) or the sensitivity area center (FIG. 8).
- in step S13 following step S12, it is determined whether there is one phase difference area or a plurality. If there is only one area, the process proceeds to step S14, calculates the focusing parameter values for image height 0, and ends. If there are a plurality of areas, the process proceeds to step S15, calculates the focusing parameter values corresponding to each area, and ends.
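Steps S13-S15 then reduce to iterating the chosen sensitivity-center computation over one incident-angle range per phase difference area. The sketch below assumes each area's range (already combining the F value with that area's image height) is supplied as a (min, max) pair; this data layout is an illustration, not the patent's:

```python
import numpy as np

def focusing_params_per_area(angle_ranges, angles, sens_L, sens_R):
    """For each divided area's incident-angle range, return the pair
    (theta1, theta2): the sensitivity centroids of the L and R
    distributions restricted to that range. With a single range
    (image height 0) this corresponds to step S14; with several
    ranges, to step S15."""
    angles = np.asarray(angles, dtype=float)
    sens_L = np.asarray(sens_L, dtype=float)
    sens_R = np.asarray(sens_R, dtype=float)

    def centroid(weight, lo, hi):
        m = (angles >= lo) & (angles <= hi)
        return float((angles[m] * weight[m]).sum() / weight[m].sum())

    return [(centroid(sens_L, lo, hi), centroid(sens_R, lo, hi))
            for lo, hi in angle_ranges]
```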
- as described above, the focusing parameter values are calculated from the incident angle range corresponding to the F value of the photographing lens and the light receiving sensitivity distribution characteristic, which is the sensitivity of the phase difference pixel pair for each incident angle of incident light. Appropriate focusing parameter values can therefore be calculated even when the photographing lens is replaced, and the photographing lens can be focused on the subject with high accuracy.
- although the present invention has been described based on the above embodiments, the present invention is not limited to these embodiments.
- in each of the above embodiments, the sensitivity center is obtained using only the partial region of the light receiving sensitivity distributions L and R within the incident angle range corresponding to the F value, and the focusing parameter values are calculated based on it.
- the light receiving sensitivity distribution in each of the above embodiments need not be the sensitivity of the phase difference pixels alone; it may instead be the ratio between the sensitivity for each incident angle of incident light of the two paired phase difference pixels and the sensitivity for each incident angle of incident light of pixels other than the phase difference pixels.
- FIG. 13 is a diagram showing the incident angle characteristic of the ratio of the sensitivity of the phase difference pixel / the sensitivity of the normal pixel.
- the ratio of the sensitivity of the phase difference pixel / the sensitivity of the normal pixel can be obtained from the output value of the phase difference pixel / the output value of the normal pixel under the same conditions.
- the combination of the phase difference pixel for obtaining the sensitivity ratio and the normal pixel is preferably a combination of neighboring pixels.
- an advantage of this approach is that the sensitivity ratio does not depend on the absolute value of the light amount when the light receiving sensitivity distribution is acquired, so the distribution can be acquired relatively easily and accurately.
- the focusing parameter values in this case can be obtained, as described above, by calculating the sensitivity centroid or the sensitivity area center from the light receiving sensitivity distribution shown in FIG. 13, so the relationship between the defocus amount and the separation amount can be calculated accurately.
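The light-amount independence of the ratio distribution is easy to see numerically; below is a minimal sketch in which the per-angle output arrays are assumed measurement data, not values from the patent:

```python
import numpy as np

def sensitivity_ratio(pd_output, normal_output):
    """Light receiving sensitivity distribution expressed as the ratio
    (phase difference pixel output) / (neighboring normal pixel output)
    at each incident angle, measured under identical conditions; any
    common scaling of the light amount cancels out of the ratio."""
    return np.asarray(pd_output, dtype=float) / np.asarray(normal_output, dtype=float)
```

The θ1 and θ2 parameters can then be taken as the sensitivity centroid (or area center) of this ratio curve within range X, exactly as for the raw sensitivity curves.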
- An imaging apparatus and a focusing parameter value calculation method according to the embodiments include: an imaging element in which a plurality of pixels are arranged in a two-dimensional array and phase difference pixels are formed in a focus detection area within the effective pixel region; a photographing lens provided in front of the imaging element; a phase difference amount detection unit that analyzes the captured image signal from the imaging element and obtains a phase difference amount from the detection signals of two paired phase difference pixels; and a control unit that obtains, from the phase difference amount detected by the phase difference amount detection unit, the defocus amount of the subject image captured by the imaging element through the photographing lens and performs focusing control of the photographing lens. The control unit obtains a parameter value relating to the ratio between the defocus amount and the phase difference amount based on the photographing lens information of the photographing lens and the light receiving sensitivity distribution, which is the sensitivity for each incident angle of incident light of the two paired phase difference pixels, and obtains the defocus amount from the parameter value and the detected phase difference amount.
- the photographing lens information in the imaging apparatus of the embodiment includes the F value of the photographing lens, and the control unit calculates, as the parameter value, the value of the incident angle corresponding to the sensitivity center of the partial region of the light receiving sensitivity distribution lying within the incident angle range corresponding to the F value of the photographing lens.
- in the imaging apparatus of the embodiment, the value of the incident angle corresponding to the sensitivity center may be the value of the incident angle corresponding to the sensitivity centroid position of the partial region of the light receiving sensitivity distribution within the incident angle range corresponding to the F value of the photographing lens.
- alternatively, it may be the value of the incident angle corresponding to the sensitivity area center position of that partial region.
- the photographing lens information of the imaging apparatus of the embodiment includes the F value of the photographing lens and information on the incident angle range corresponding to image height positions at least within the focus detection area on the imaging surface of the imaging element.
- the control unit calculates the parameter value using an incident angle range corresponding to an image height in the focus detection area and an incident angle range corresponding to the F value.
- the control unit of the imaging apparatus divides the focus detection area into a plurality of divided areas and obtains the parameter value for each divided area in accordance with its image height.
- the light receiving sensitivity distribution of the imaging apparatus may consist of the sensitivity ratio between the sensitivity for each incident angle of incident light of the two paired phase difference pixels and the sensitivity for each incident angle of incident light of pixels other than the phase difference pixels.
- the control unit of the imaging apparatus obtains the parameter value for each color light from the light receiving sensitivity distribution for each of red light, green light, and blue light.
- the photographing lens of the imaging apparatus is a replaceable photographing lens, and the parameter value is obtained after the photographing lens is replaced.
- the defocus amount can be obtained with high accuracy, and the focusing control of the photographing lens can be performed with high accuracy.
- the imaging device and the focusing parameter value calculation method according to the present invention can accurately perform the focusing operation on the subject of the photographing lens even when the lens is replaced, and can capture a focused subject image. This is useful when applied to a digital camera or the like.
Description
tan θ1 = a1/b, i.e. θ1 = tan⁻¹(a1/b)
tan θ2 = a2/b, i.e. θ2 = tan⁻¹(a2/b)
Therefore, if the phase difference amount between the first and second pixels (separation amount = a1 + a2) and θ1, θ2 are known, the defocus amount b can be obtained. In this embodiment, the parameters θ1 and θ2 relating to the ratio between the defocus amount and the separation amount (phase difference amount) are therefore used as the focusing parameters, and their values are calculated. Of course, "tan θ" may be used as the parameter instead.
(1) acquired from the lens;
(2) obtained from setting information on the imaging apparatus body side;
(3) obtained by acquiring from the lens a lens ID representing the lens type and reading out the lens information (F value and incident angle range for each image height) stored in advance on the imaging apparatus body side for each lens ID;
the information may be acquired in any of these forms.
The imaging apparatus and the focusing parameter value calculation method according to the embodiments include: an imaging element in which a plurality of pixels are arranged in a two-dimensional array and phase difference pixels are formed in a focus detection area within an effective pixel region; a photographing lens provided in front of the imaging element; a phase difference amount detection unit that analyzes a captured image signal from the imaging element and obtains a phase difference amount from detection signals of the two paired phase difference pixels; and a control unit that obtains, from the phase difference amount detected by the phase difference amount detection unit, a defocus amount of the subject image captured by the imaging element through the photographing lens and performs focusing control of the photographing lens, wherein the control unit obtains a parameter value relating to the ratio between the defocus amount and the phase difference amount based on photographing lens information of the photographing lens and a light receiving sensitivity distribution that is the sensitivity for each incident angle of incident light of the two paired phase difference pixels, and obtains the defocus amount from the parameter value and the detected phase difference amount.
This application is based on Japanese Patent Application No. 2011-218532 filed on September 30, 2011, and Japanese Patent Application No. 2012-189504 filed on August 30, 2012, the contents of which are incorporated herein by reference.
2a, 2b Decentered light shielding film openings
10 Imaging apparatus (digital camera)
21 Photographing optical system
21a Photographing lens
21b Diaphragm
22a Solid-state imaging element
24 Drive unit
26 Digital signal processing unit
29 System control unit (CPU)
41 Light receiving region
42 Focus detection area
43 Divided areas
L Light receiving sensitivity distribution characteristic of the first pixel
R Light receiving sensitivity distribution characteristic of the second pixel
A1, B1 Sensitivity centroid positions
A2, B2 Sensitivity area center positions
Claims (10)
- An imaging apparatus comprising:
an imaging element in which a plurality of pixels are arrayed in a two-dimensional array, with phase difference pixels formed in a focus detection area within an effective pixel region;
a photographing lens provided in front of the imaging element;
a phase difference amount detection unit that analyzes a captured image signal from the imaging element and obtains a phase difference amount from detection signals of two phase difference pixels forming a pair; and
a control unit that obtains, from the phase difference amount detected by the phase difference amount detection unit, a defocus amount of a subject image captured by the imaging element through the photographing lens, and performs focusing control of the photographing lens,
wherein the control unit obtains a parameter value relating to a ratio between the defocus amount and the phase difference amount on the basis of photographing lens information of the photographing lens and a light receiving sensitivity distribution, which is a sensitivity for each incident angle of light incident on the two paired phase difference pixels, and obtains the defocus amount from the parameter value and the detected phase difference amount.
- The imaging apparatus according to claim 1, wherein the photographing lens information includes an F-number of the photographing lens, and the control unit calculates, as the parameter value, the value of the incident angle corresponding to a sensitivity center of the partial region of the light receiving sensitivity distribution lying within the incident angle range corresponding to the F-number of the photographing lens.
- The imaging apparatus according to claim 2, wherein the value of the incident angle corresponding to the sensitivity center is the value of the incident angle corresponding to the sensitivity centroid position of the partial region of the light receiving sensitivity distribution lying within the incident angle range corresponding to the F-number of the photographing lens.
- The imaging apparatus according to claim 2, wherein the value of the incident angle corresponding to the sensitivity center is the value of the incident angle corresponding to the sensitivity area center position of the partial region of the light receiving sensitivity distribution lying within the incident angle range corresponding to the F-number of the photographing lens.
- The imaging apparatus according to any one of claims 1 to 4, wherein the photographing lens information includes the F-number of the photographing lens and information on the incident angle range corresponding to image height positions at least within the focus detection area on the imaging surface of the imaging element, and the control unit calculates the parameter value using the incident angle range corresponding to the image height within the focus detection area and the incident angle range corresponding to the F-number.
- The imaging apparatus according to claim 5, wherein the control unit divides the focus detection area into a plurality of divided areas and obtains the parameter value for each divided area in accordance with its image height.
- The imaging apparatus according to any one of claims 1 to 6, wherein the light receiving sensitivity distribution consists of, for each incident angle, the sensitivity ratio between the sensitivity of the two paired phase difference pixels and the sensitivity of pixels other than the phase difference pixels.
- The imaging apparatus according to any one of claims 1 to 7, wherein the control unit obtains the parameter value for each color of light from light receiving sensitivity distributions for red light, green light, and blue light.
- The imaging apparatus according to any one of claims 1 to 8, wherein the photographing lens is a replaceable photographing lens, and the parameter value is obtained after the photographing lens is replaced.
- A focusing parameter value calculation method for an imaging apparatus that comprises: an imaging element in which a plurality of pixels are arrayed in a two-dimensional array, with phase difference pixels formed in a focus detection area within an effective pixel region; a photographing lens provided in front of the imaging element; a phase difference amount detection unit that analyzes a captured image signal from the imaging element and obtains a phase difference amount from detection signals of two phase difference pixels forming a pair; and a control unit that obtains, from the phase difference amount detected by the phase difference amount detection unit, a defocus amount of a subject image captured by the imaging element through the photographing lens, and performs focusing control of the photographing lens,
the method comprising obtaining a parameter value relating to a ratio between the defocus amount and the phase difference amount on the basis of photographing lens information of the photographing lens and a light receiving sensitivity distribution, which is a sensitivity for each incident angle of light incident on the two paired phase difference pixels, and obtaining the defocus amount from the parameter value and the detected phase difference amount.
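Claims 3 and 4 offer two alternative definitions of the "sensitivity center": the sensitivity centroid (weighted mean angle) and the sensitivity area center (the angle splitting the area under the distribution in half). A sketch of both on a discretely sampled distribution, with the function name and the piecewise-linear treatment of the samples assumed for illustration:

```python
import numpy as np

def sensitivity_centers(angles, sensitivity):
    """Two candidate 'sensitivity center' angles for a sampled distribution.

    Centroid position: sensitivity-weighted mean incident angle.
    Area center position: angle at which the cumulative sensitivity
    reaches half the total (the median of the distribution), taking
    each sample's weight as concentrated at its angle.
    """
    angles = np.asarray(angles, dtype=float)
    s = np.asarray(sensitivity, dtype=float)
    centroid = float(np.sum(angles * s) / np.sum(s))
    # Cumulative area with each sample's half-weight assigned before
    # its own angle, so a symmetric distribution yields its middle angle.
    cum = np.cumsum(s) - 0.5 * s
    area_center = float(np.interp(np.sum(s) / 2.0, cum, angles))
    return centroid, area_center
```

For a symmetric distribution the two definitions coincide; for a skewed one they differ, which is why the claims keep them as separate alternatives.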
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201280048301.0A CN103842877B (zh) | 2011-09-30 | 2012-09-04 | 成像装置和合焦参数值计算方法 |
EP12836515.2A EP2762941B1 (en) | 2011-09-30 | 2012-09-04 | Imaging device and focus parameter value calculation method |
US14/228,989 US9167152B2 (en) | 2011-09-30 | 2014-03-28 | Image capturing apparatus and method for calculating focusing parameter value |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011-218532 | 2011-09-30 | ||
JP2011218532 | 2011-09-30 | ||
JP2012189504 | 2012-08-30 | ||
JP2012-189504 | 2012-08-30 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/228,989 Continuation US9167152B2 (en) | 2011-09-30 | 2014-03-28 | Image capturing apparatus and method for calculating focusing parameter value |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013047111A1 true WO2013047111A1 (ja) | 2013-04-04 |
Family
ID=47995158
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/072481 WO2013047111A1 (ja) | 2011-09-30 | 2012-09-04 | 撮像装置及び合焦用パラメータ値算出方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US9167152B2 (ja) |
EP (1) | EP2762941B1 (ja) |
JP (1) | JP5619294B2 (ja) |
CN (1) | CN103842877B (ja) |
WO (1) | WO2013047111A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015050047A1 (ja) * | 2013-10-02 | 2015-04-09 | オリンパス株式会社 | 焦点調節装置、撮影装置、および焦点調節方法 |
WO2016038936A1 (ja) * | 2014-09-11 | 2016-03-17 | 富士フイルム株式会社 | 撮像装置及び合焦制御方法 |
JP2018013624A (ja) * | 2016-07-21 | 2018-01-25 | リコーイメージング株式会社 | 焦点検出装置、焦点検出方法及び撮影装置 |
US20190285401A1 (en) * | 2016-11-30 | 2019-09-19 | Carl Zeiss Microscopy Gmbh | Determining the arrangement of a sample object by means of angle-selective illumination |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE112013005594T5 (de) * | 2012-11-22 | 2015-10-22 | Fujifilm Corporation | Abbildungsvorrichtung, Unschärfebetrag-Berechnungsverfahren und Objektivvorrichtung |
CN104919352B (zh) * | 2013-01-10 | 2017-12-19 | 奥林巴斯株式会社 | 摄像装置和图像校正方法以及图像处理装置和图像处理方法 |
JP6426890B2 (ja) * | 2013-06-11 | 2018-11-21 | キヤノン株式会社 | 焦点検出装置及び方法、及び撮像装置 |
JP6372983B2 (ja) * | 2013-09-02 | 2018-08-15 | キヤノン株式会社 | 焦点検出装置およびその制御方法、撮像装置 |
JP5775918B2 (ja) * | 2013-09-27 | 2015-09-09 | オリンパス株式会社 | 撮像装置、画像処理方法及び画像処理プログラム |
JP6355348B2 (ja) * | 2014-01-31 | 2018-07-11 | キヤノン株式会社 | 撮像装置、撮像システム、撮像装置の制御方法、プログラム、および、記憶媒体 |
CN105259729B (zh) * | 2014-07-17 | 2019-10-18 | 宁波舜宇光电信息有限公司 | 一种af快速对焦方法 |
JP6931268B2 (ja) * | 2015-06-08 | 2021-09-01 | キヤノン株式会社 | 画像処理装置および画像処理方法 |
CN106937107B (zh) * | 2015-12-29 | 2019-03-12 | 宁波舜宇光电信息有限公司 | 基于色差的摄像模组调焦方法 |
KR102018984B1 (ko) * | 2018-05-15 | 2019-09-05 | 재단법인 다차원 스마트 아이티 융합시스템 연구단 | 베이스라인을 증가시키기 위한 카메라 시스템 |
US11496728B2 (en) | 2020-12-15 | 2022-11-08 | Waymo Llc | Aperture health monitoring mode |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009075407A (ja) * | 2007-09-21 | 2009-04-09 | Nikon Corp | 撮像装置 |
JP2009077143A (ja) * | 2007-09-20 | 2009-04-09 | Fujifilm Corp | 自動撮影装置 |
JP2010107771A (ja) | 2008-10-30 | 2010-05-13 | Canon Inc | カメラ及びカメラシステム |
JP2010140013A (ja) * | 2008-11-11 | 2010-06-24 | Canon Inc | 焦点検出装置及びその制御方法 |
JP2011176714A (ja) * | 2010-02-25 | 2011-09-08 | Nikon Corp | カメラおよび画像処理プログラム |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4500434B2 (ja) * | 2000-11-28 | 2010-07-14 | キヤノン株式会社 | 撮像装置及び撮像システム、並びに撮像方法 |
JP5028930B2 (ja) * | 2006-09-28 | 2012-09-19 | 株式会社ニコン | 焦点検出装置および撮像装置 |
JP4931225B2 (ja) | 2007-04-26 | 2012-05-16 | キヤノン株式会社 | 撮像装置 |
JP4978449B2 (ja) * | 2007-12-10 | 2012-07-18 | ソニー株式会社 | 撮像装置 |
JP5371331B2 (ja) * | 2008-09-01 | 2013-12-18 | キヤノン株式会社 | 撮像装置、撮像装置の制御方法及びプログラム |
JP5230388B2 (ja) * | 2008-12-10 | 2013-07-10 | キヤノン株式会社 | 焦点検出装置及びその制御方法 |
JP5173954B2 (ja) * | 2009-07-13 | 2013-04-03 | キヤノン株式会社 | 画像処理装置及び画像処理方法 |
JP2011002848A (ja) | 2010-08-20 | 2011-01-06 | Canon Inc | 撮像装置 |
JP5589857B2 (ja) * | 2011-01-11 | 2014-09-17 | ソニー株式会社 | 画像処理装置、撮像装置、画像処理方法およびプログラム。 |
2012
- 2012-09-04 EP EP12836515.2A patent/EP2762941B1/en active Active
- 2012-09-04 WO PCT/JP2012/072481 patent/WO2013047111A1/ja active Application Filing
- 2012-09-04 JP JP2013536116A patent/JP5619294B2/ja active Active
- 2012-09-04 CN CN201280048301.0A patent/CN103842877B/zh active Active
2014
- 2014-03-28 US US14/228,989 patent/US9167152B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009077143A (ja) * | 2007-09-20 | 2009-04-09 | Fujifilm Corp | 自動撮影装置 |
JP2009075407A (ja) * | 2007-09-21 | 2009-04-09 | Nikon Corp | 撮像装置 |
JP2010107771A (ja) | 2008-10-30 | 2010-05-13 | Canon Inc | カメラ及びカメラシステム |
JP2010140013A (ja) * | 2008-11-11 | 2010-06-24 | Canon Inc | 焦点検出装置及びその制御方法 |
JP2011176714A (ja) * | 2010-02-25 | 2011-09-08 | Nikon Corp | カメラおよび画像処理プログラム |
Non-Patent Citations (1)
Title |
---|
See also references of EP2762941A4 |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015050047A1 (ja) * | 2013-10-02 | 2015-04-09 | オリンパス株式会社 | 焦点調節装置、撮影装置、および焦点調節方法 |
JP2015072357A (ja) * | 2013-10-02 | 2015-04-16 | オリンパス株式会社 | 焦点調節装置 |
CN105593738A (zh) * | 2013-10-02 | 2016-05-18 | 奥林巴斯株式会社 | 焦点调节装置、摄影装置以及焦点调节方法 |
EP3054335A4 (en) * | 2013-10-02 | 2017-03-22 | Olympus Corporation | Focus adjustment device, photography device, and focus adjustment method |
WO2016038936A1 (ja) * | 2014-09-11 | 2016-03-17 | 富士フイルム株式会社 | 撮像装置及び合焦制御方法 |
JPWO2016038936A1 (ja) * | 2014-09-11 | 2017-06-01 | 富士フイルム株式会社 | 撮像装置及び合焦制御方法 |
US9781333B2 (en) | 2014-09-11 | 2017-10-03 | Fujifilm Corporation | Imaging device and focus control method |
JP2018013624A (ja) * | 2016-07-21 | 2018-01-25 | リコーイメージング株式会社 | 焦点検出装置、焦点検出方法及び撮影装置 |
US20190285401A1 (en) * | 2016-11-30 | 2019-09-19 | Carl Zeiss Microscopy Gmbh | Determining the arrangement of a sample object by means of angle-selective illumination |
US10921573B2 (en) * | 2016-11-30 | 2021-02-16 | Carl Zeiss Microscopy Gmbh | Determining the arrangement of a sample object by means of angle-selective illumination |
Also Published As
Publication number | Publication date |
---|---|
EP2762941A4 (en) | 2015-05-06 |
EP2762941A1 (en) | 2014-08-06 |
US9167152B2 (en) | 2015-10-20 |
CN103842877B (zh) | 2016-01-27 |
JP5619294B2 (ja) | 2014-11-05 |
JPWO2013047111A1 (ja) | 2015-03-26 |
CN103842877A (zh) | 2014-06-04 |
EP2762941B1 (en) | 2016-12-28 |
US20140211075A1 (en) | 2014-07-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5619294B2 (ja) | 撮像装置及び合焦用パラメータ値算出方法 | |
JP5629832B2 (ja) | 撮像装置及び位相差画素の感度比算出方法 | |
JP4952060B2 (ja) | 撮像装置 | |
KR101492624B1 (ko) | 촬상 소자, 초점 검출 장치 및 촬상 장치 | |
JP5176959B2 (ja) | 撮像素子および撮像装置 | |
JP5012495B2 (ja) | 撮像素子、焦点検出装置、焦点調節装置および撮像装置 | |
JP5572765B2 (ja) | 固体撮像素子、撮像装置、及び合焦制御方法 | |
US8218017B2 (en) | Image pickup apparatus and camera | |
JP4983271B2 (ja) | 撮像装置 | |
US8902349B2 (en) | Image pickup apparatus | |
JP5490312B2 (ja) | カラー撮像素子、撮像装置、及び撮像装置の制御プログラム | |
JP2012027390A (ja) | 撮像装置 | |
JP5211590B2 (ja) | 撮像素子および焦点検出装置 | |
JP4858179B2 (ja) | 焦点検出装置および撮像装置 | |
JP6854619B2 (ja) | 焦点検出装置及び方法、撮像装置、レンズユニット及び撮像システム | |
JP5338113B2 (ja) | 相関演算装置、焦点検出装置および撮像装置 | |
JP2009162845A (ja) | 撮像素子、焦点検出装置および撮像装置 | |
JP6442824B2 (ja) | 焦点検出装置 | |
JP2012128101A (ja) | 撮像装置 | |
JP6349624B2 (ja) | 撮像素子および焦点検出装置 | |
JP5978570B2 (ja) | 撮像装置 | |
JP5338118B2 (ja) | 相関演算装置、焦点検出装置および撮像装置 | |
JP5691440B2 (ja) | 撮像装置 | |
JP2012063456A (ja) | 撮像装置 | |
JP2012042863A (ja) | 撮像装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12836515 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2013536116 Country of ref document: JP Kind code of ref document: A |
REEP | Request for entry into the european phase |
Ref document number: 2012836515 Country of ref document: EP |
WWE | Wipo information: entry into national phase |
Ref document number: 2012836515 Country of ref document: EP |
NENP | Non-entry into the national phase |
Ref country code: DE |