WO2011158507A1 - Image processing apparatus and image processing method - Google Patents
Image processing apparatus and image processing method
- Publication number
- WO2011158507A1 PCT/JP2011/003437 JP2011003437W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- optical system
- distance
- size
- subject
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C3/00—Measuring distances in line of sight; Optical rangefinders
- G01C3/32—Measuring distances in line of sight; Optical rangefinders by focusing the object, e.g. on a ground glass screen
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B7/00—Mountings, adjusting means, or light-tight connections, for optical elements
- G02B7/28—Systems for automatic generation of focusing signals
- G02B7/36—Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals
- G02B7/38—Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals measured at different points on the optical axis, e.g. focussing on two or more planes and comparing image data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/571—Depth or shape recovery from multiple images from focus
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/958—Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging
- H04N23/959—Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10141—Special mode during image acquisition
- G06T2207/10148—Varying focus
Definitions
- the present invention relates to an image processing apparatus that measures the depth of a scene based on a plurality of images taken from a single viewpoint.
- the active method is a method of irradiating a subject with infrared rays, ultrasonic waves, laser, or the like, and calculating the distance to the subject based on the time until the reflected wave returns or the angle of the reflected wave.
- the passive method is a method for calculating a distance based on an image of a subject. In particular, when measuring the distance to a subject using a camera, a passive method that does not require a device for irradiating infrared rays or the like is widely used.
- DFD (Depth from Defocus)
- Equation (1)
- h represents a point spread function (Point Spread Function, hereinafter referred to as PSF) representing the blur state of the optical system
- d (x, y) is the subject distance from the lens principal point to the subject at the position (x, y).
- * in the expression represents a convolution operation.
- Equation (1) includes S (x, y) and d (x, y) as unknowns.
- an image I 2 (x, y) of the same scene is photographed with the focus position changed. Changing the focus position is equivalent to changing the PSF for the same subject distance; that is, equation (2) holds.
- h ′ represents a PSF at a focus position different from h.
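The two-image relationship of equations (1) and (2) — the same scene S convolved with two different PSFs h and h′ — can be sketched numerically. A minimal illustration in Python, assuming 1-D signals and box-shaped PSFs; the PSF form and all values here are illustrative, not taken from the patent:

```python
# Sketch of the two-image DFD imaging model: the same scene S is
# observed through two PSFs h and h' (focus changed), giving
# I1 = S * h and I2 = S * h'.  Box PSFs are an illustrative assumption.

def convolve(signal, kernel):
    """Full 1-D discrete convolution (the '*' of equation (1))."""
    n, m = len(signal), len(kernel)
    out = [0.0] * (n + m - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

def box_psf(width):
    """Uniform (pillbox-like) blur kernel of the given width."""
    return [1.0 / width] * width

scene = [0.0, 0.0, 1.0, 0.0, 0.0]   # a point-like subject S(x)
i1 = convolve(scene, box_psf(1))     # in focus: PSF h is an impulse
i2 = convolve(scene, box_psf(3))     # refocused: broader PSF h'
# The pair (i1, i2) is what a DFD algorithm compares to infer distance.
```

A DFD algorithm such as the one in Non-Patent Document 1 estimates the subject distance from how the blur differs between the two captures.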
- Patent Document 1 and Patent Document 2 disclose a technique using a telecentric optical system so that a change in image magnification does not occur.
- Patent Document 1
- the second issue is the uniformity of blur.
- if the PSF differs greatly across the image, processing that assumes a uniform PSF cannot be applied to the entire image in any DFD algorithm, which complicates the distance calculation. It is therefore desirable that the blur be uniform throughout the image.
- the present invention has been made in view of the above, and its purpose is to provide an image processing apparatus and an image processing method including an optical system capable of capturing images whose change in image magnification is sufficiently small and whose blur is uniform throughout the image.
- An image processing apparatus according to an aspect of the present invention includes an imaging element that captures an image, an optical system that forms an image of a subject on the imaging element, and a distance measurement unit that measures a subject distance, which is the distance to the subject, based on the size of the blur occurring in the image. The optical system has characteristics such that, when the focus position is set to the side closest to and the side farthest from the optical system within the subject distance range measured by the distance measurement unit, the resulting change in image magnification and the variation of the point spread function with image height remain small enough not to affect the distance measurement.
- When the focus position of the optical system is changed by moving the image sensor, the optical system may be configured such that the incident angle θ of the principal ray on the image sensor satisfies the following expression (3).
- F is the F number of the optical system
- f is the focal length of the optical system
- minD is the number of pixels indicating the minimum blur that can be detected by the distance measuring unit
- B is the number of steps of the subject distance measured by the distance measuring unit, and u min is the distance on the side closest to the optical system in the subject distance range measured by the distance measuring unit.
- with such a configuration, the amount of change in image magnification caused by the change in focus is guaranteed to be less than the size that can be distinguished in the distance measurement process. That is, the same coordinates on a plurality of captured images with different focus positions correspond to a single coordinate of the original image S (x, y).
- the difference Δy between the image size at the image sensor when the focus position is set to the side of the subject distance range, measured by the distance measuring unit, closest to the optical system and the image size at the image sensor when the focus position is set to the farthest side satisfies the following expression (4).
- d is the pixel size of the image sensor. According to such a configuration, it is ensured that the amount of change in the image magnification caused by the focus change is less than the size that can be distinguished in the distance measurement processing. That is, the same coordinates on a plurality of captured images with different focus positions are associated with a single coordinate of the original image S (x, y).
- the optical system may be characterized in that the amount of field curvature ⁇ q at each image height satisfies the following expression (5).
- F is the F number of the optical system
- f is the focal length of the optical system
- minD is the number of pixels indicating the minimum blur that can be detected by the distance measuring unit
- u max is the distance on the side farthest from the optical system in the subject distance range measured by the distance measuring unit, and d is the size of a pixel of the image sensor.
- Such a configuration ensures that the amount of off-axis blur caused by field curvature is less than the size that can be distinguished in the distance measurement process. As a result, the same blur is obtained for the same subject distance, and the error in the distance measurement process can be reduced.
- the optical system may be characterized in that the sagittal field curvature ⁇ qs and the tangential field curvature ⁇ qt at each image height satisfy the following expressions (6) and (7), respectively.
- F is the F number of the optical system
- f is the focal length of the optical system
- minD is the number of pixels indicating the minimum blur that can be detected by the distance measuring unit
- u max is the distance on the side farthest from the optical system in the subject distance range measured by the distance measuring unit, and d is the size of a pixel of the image sensor.
- This configuration ensures that the amount of off-axis blur caused by astigmatism falls below the size that can be distinguished in the distance measurement process. As a result, the blur shape becomes more uniform in the entire image, and an error in performing distance measurement processing by applying a uniform PSF to the entire image can be reduced.
- the optical system may further be characterized in that the coma aberration amount ⁇ c at each image height satisfies the following expression (8).
- minD is the number of pixels indicating the minimum size of blur that can be detected by the distance measuring unit
- d is the size of the pixels of the image sensor.
- This configuration ensures that the amount of off-axis blur caused by coma is less than the discriminable magnitude in the distance measurement process. As a result, the blur shape becomes more uniform throughout the image, and errors in performing distance measurement processing by applying uniform PSF to the entire image can be reduced.
- the amount of change in image magnification and the non-uniformity of blur on and off the axis are kept within a range that does not affect the image processing for distance measurement. Therefore, no error due to lens performance occurs even when uniform processing is applied to the entire image without compensating for image magnification during image processing.
- FIG. 1 is a diagram showing a configuration of an image processing apparatus according to Embodiment 1 of the present invention.
- FIG. 2A is a diagram schematically illustrating the imaging relationship of the lenses.
- FIG. 2B is a diagram schematically showing how the image magnification changes with the position of the image plane.
- FIG. 3 is a diagram schematically showing the relationship between the amount of movement of the image plane and the amount of change in image magnification.
- FIG. 4 is a diagram schematically showing the relationship between the position of the image plane and the size of the circle of confusion.
- FIG. 5A is a diagram schematically illustrating curvature of field.
- FIG. 5B is a diagram schematically showing astigmatism.
- FIG. 5C is a diagram schematically illustrating coma aberration.
- FIG. 6 is a diagram schematically showing the relationship between the curvature of field and the size of the circle of confusion.
- FIG. 7 is a diagram schematically showing the magnitude of coma aberration.
- FIG. 8 is a diagram showing the lens shape of the optical system in the first embodiment.
- FIG. 9 is a diagram showing a minimum configuration of an image processing apparatus for realizing the present invention.
- FIG. 1 is a block diagram showing a functional configuration of the image processing apparatus according to the first embodiment of the present invention.
- the image processing apparatus 10 includes an imaging unit 12, a frame memory 14, a distance measurement unit 16, and a control unit 18.
- the imaging unit 12 includes an imaging element 20 such as a CCD (Charge Coupled Device) sensor or a CMOS (Complementary Metal Oxide Semiconductor) sensor and an optical system 22 for forming a subject image on the imaging element 20; it captures an image and outputs it.
- the frame memory 14 is a memory for storing images in units of frames, and stores the image output by the imaging unit 12 and the like.
- the distance measuring unit 16 measures the subject distance based on the image captured by the imaging unit 12.
- any generally used, well-known DFD algorithm can be used, including the method described in Non-Patent Document 1.
- the control unit 18 includes a CPU, a ROM and a RAM that store a control program, and controls each functional block included in the image processing apparatus 10.
- FIG. 2A is a diagram schematically illustrating the imaging relationship of the lens. According to the lens formula, the relationship shown in the following equation (9) holds between the distance u from the principal point of the lens 32 (hereinafter simply referred to as the "principal point") to the subject surface and the distance v from the principal point to the image plane.
- f is the focal length of the lens 32.
- the size of the subject is y
- the size of the subject image is y ′. From equation (9), when the distance u from the principal point to the subject surface changes under the condition that the focal length f is constant, the distance v from the principal point to the image plane also changes. When the distance v changes, the image size changes from y ′ to y ′′, as shown in FIG. 2B. This is because the principal ray 34 passing through the center of the aperture is inclined with respect to the optical axis 36 when it strikes the image plane. The relationship between the incident angle of the principal ray 34 and the amount of change in image size is described in more detail below.
- if the amount of movement of the image plane from the reference position is Δv and the incident angle of the principal ray 34 is θ, the amount of change Δy from the image size y at the reference position can be obtained by the following equation (10).
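Equation (9) can be checked numerically. A small sketch, assuming the Gaussian lens form 1/f = 1/u + 1/v for the relationship described above; the subject distances are illustrative, and only the focal length (15.78 mm) comes from the embodiment:

```python
# Numeric check of the lens formula: 1/f = 1/u + 1/v, with u the
# subject distance and v the image distance, both from the principal
# point.  The exact form of equation (9) is assumed here.

def image_distance(f, u):
    """Distance v to the image plane for a subject at distance u,
    from 1/v = 1/f - 1/u."""
    return 1.0 / (1.0 / f - 1.0 / u)

f = 15.78                            # focal length in mm
for u in (1000.0, 2000.0, 4000.0):   # illustrative subject distances, mm
    v = image_distance(f, u)
    # As u grows, v falls toward f: a farther subject focuses closer
    # to the focal point, which is why changing focus moves the image
    # plane and, via the inclined chief ray, changes the image size.
    print(u, round(v, 4))
```

The monotone fall of v toward f as u increases is exactly the image-plane movement Δv that the following discussion quantifies.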
- to satisfy equation (11), at least one of Δv and θ needs to be small.
- ⁇ v In order to perform distance measurement by DFD, it is necessary to change the blur with a change in focus, and therefore ⁇ v cannot be made very small.
- ⁇ v In order to perform distance measurement by DFD, it is necessary to change the blur with a change in focus, and therefore ⁇ v cannot be made very small.
- the relationship between ⁇ v and the size of blur will be described with reference to FIG.
- the focal length of the lens 32 is f
- suppose the distance from the principal point to the subject surface is u and the image plane is at a position displaced from the in-focus position by Δv.
- the diameter D of the circle of confusion representing the size of the blur generated at this time is expressed by the following formula (12).
- F is the F number of the optical system 22.
- ⁇ v is determined so that the diameter D is large enough for distance measurement by DFD.
- D is expressed by the following equation (13).
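The bodies of formulas (12) and (13) are not reproduced on this page. As a hedged stand-in, the sketch below uses the standard thin-lens approximation D ≈ Δv / F (aperture diameter f/F, image distance taken ≈ f); the pixel size and minimum-blur threshold are assumed values, not the embodiment's:

```python
# Standard thin-lens defocus geometry, assumed in place of the elided
# formula (12): an image-plane defocus dv at F-number F produces a
# circle of confusion of diameter D ≈ dv / F.

def blur_diameter(dv, F):
    """Approximate circle-of-confusion diameter for defocus dv."""
    return dv / F

def defocus_for_blur(D, F):
    """Inverse: the image-plane shift needed to obtain blur diameter D,
    used when choosing dv large enough for DFD to detect the blur."""
    return D * F

F = 2.8        # F-number of the embodiment
d = 0.006      # assumed pixel size in mm (6 um)
minD = 2       # assumed minimum detectable blur, in pixels
dv = defocus_for_blur(minD * d, F)
print(dv)      # image-plane shift giving a just-detectable blur
```

This is the sense in which "Δv is determined so that the diameter D is large enough for distance measurement by DFD": the focus bracket must move the image plane at least this far.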
- the optical system 22 may be designed so as to satisfy Expression (15).
- the change of the focus position in the optical system 22 is performed by moving the imaging element 20, but the lens included in the optical system 22 is moved to change the focus position. May be performed.
- let y ′ be the image size on the image sensor 20 before the focus position is changed (when the focus position is set to the side of the subject distance range, measured by the distance measuring unit 16, closest to the optical system 22), and let y ′′ be the image size on the image sensor 20 after the change (when the focus position is set to the side farthest from the optical system 22). If the absolute value of the difference Δy between them is smaller than the size d of one pixel of the image sensor, the image size can be regarded as substantially unchanged; therefore, the following expression (16) needs to be satisfied.
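Condition (16), |Δy| < d, can be combined with the geometry of equation (10) — reconstructed here, as an assumption, as Δy = Δv · tan θ — into a quick numeric check. All values are illustrative:

```python
# Sub-pixel magnification-change check: the image-size change dy from
# an image-plane move dv and chief-ray angle theta must stay below one
# pixel d.  The form dy = dv * tan(theta) is an assumed reconstruction
# of equation (10); the numbers are illustrative.
import math

def magnification_shift(dv_mm, theta_deg):
    """Image-size change at the sensor for an image-plane move dv_mm
    when the chief ray arrives theta_deg off the sensor normal."""
    return dv_mm * math.tan(math.radians(theta_deg))

d = 0.006    # assumed pixel pitch, mm
dv = 0.05    # assumed focus-bracket image-plane move, mm
for theta in (1.0, 3.0, 8.0):
    dy = magnification_shift(dv, theta)
    print(theta, round(dy, 6), dy < d)  # True -> the shift is sub-pixel
```

The last case exceeds one pixel, illustrating why the condition bounds the chief-ray incident angle: for a fixed Δv, only a sufficiently telecentric-like (small-θ) design keeps Δy below d.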
- Off-axis aberrations: curvature of field
- among the so-called five Seidel aberrations, blur uniformity is affected by three: the field curvature, astigmatism, and coma aberration shown in FIGS. 5A, 5B, and 5C, respectively.
- the allowable amount of these aberrations will be described.
- the curvature of field is a phenomenon in which the focal point of off-axis light does not exist on a plane that includes the focal point of the on-axis light beam and is perpendicular to the optical axis, and moves back and forth in the optical axis direction.
- since distance measurement by DFD is performed on the assumption that the blur is uniform over the entire image, if the blur of subjects at an equal distance differs with the angle of view, those subjects will be measured as if they were at different distances.
- the permissible amount of field curvature will be described below.
- F is the F number of the optical system 22. If the circle of confusion D q caused by the field curvature is smaller than the minimum discriminable blur size d * minD of the DFD algorithm, its influence can be substantially ignored. To satisfy this condition, the field curvature Δq at each image height needs to satisfy the following equation (18). Therefore, the optical system 22 may be designed so as to satisfy expression (18).
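Condition (18) can be sketched as a tolerance check. The blur diameter D_q produced by a field-curvature displacement Δq is assumed, by analogy with the defocus geometry above, to be roughly Δq / F; the threshold is the minimum discriminable blur d·minD. All numbers are illustrative:

```python
# Field-curvature tolerance check in the spirit of equation (18):
# the off-axis blur from a field-curvature displacement dq must stay
# below the smallest blur the DFD step can resolve.  D_q ≈ dq / F is
# an assumed defocus approximation; values are illustrative.

def field_curvature_ok(dq_mm, F, d_mm, minD):
    """True if the blur caused by field curvature dq_mm is below the
    minimum detectable blur d_mm * minD of the distance-measurement step."""
    return (dq_mm / F) < (d_mm * minD)

F, d, minD = 2.8, 0.006, 2
print(field_curvature_ok(0.02, F, d, minD))   # 20 um of curvature
print(field_curvature_ok(0.10, F, d, minD))   # 100 um of curvature
```

The same check, applied with Δqs and Δqt in place of Δq, corresponds to the astigmatism conditions (19) and (20) discussed next.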
- Astigmatism is a phenomenon in which the in-focus position differs between a light beam in a concentric direction (sagittal direction) and a light beam in a radial direction (tangential direction) with respect to the optical axis.
- if astigmatism is present, off-axis blur does not spread into a uniform circle but becomes, for example, a vertically or horizontally elongated ellipse, impairing the uniformity of the blur shape throughout the image.
- the amount of astigmatism is defined, for each of the sagittal and tangential directions, as a field curvature: the distance between the plane perpendicular to the optical axis containing the focal point in that direction and the plane perpendicular to the optical axis containing the focal point of the on-axis light beam. Therefore, when the sagittal field curvature at each image height is Δqs and the tangential field curvature is Δqt, designing the optical system 22 so that these satisfy inequality (18) in place of Δq makes the effect of astigmatism substantially negligible. That is, the optical system 22 may be designed so as to satisfy expressions (19) and (20).
- the coma aberration is a phenomenon in which the size of the image formed by the principal ray of off-axis light and the outer ray differs.
- the blur does not spread evenly off-axis, but has a tail-like shape, which also impairs the uniformity of the blur shape throughout the image.
- the allowable amount of coma will be described below.
- the coma aberration amount ⁇ c is defined as a difference in size between an image formed by the principal ray 71 passing through the center of the aperture and an image formed by the ray 72 passing through the outermost side of the aperture.
- ⁇ c is positive when the image becomes larger. Therefore, if the size of ⁇ c at each image height is smaller than the minimum blur size dminD that can be discriminated by the DFD algorithm, the influence can be substantially ignored. For that purpose, it is necessary to satisfy the following formula (21). Therefore, the optical system 22 may be designed so as to satisfy Expression (21).
- in each of the above conditions, the focal length and F number to be used are the combined focal length and combined F number of the entire optical system 22.
- the upper limit is set for the image magnification change and the field curvature based on a finite subject distance. However, in order to simplify the calculation, these calculations may be performed with the subject distance set to infinity.
- Specific numerical examples of the optical system 22 according to the present embodiment are shown in Table 1, and the shape is shown in FIG. Note that R, d, nd, and ⁇ d represent the radius of curvature (unit: mm), the surface interval (unit: mm), the refractive index of the d-line, and the Abbe number, respectively.
- the surface number * represents an aspherical surface. In FIG. 8, the surface numbers are indicated by numbers.
- the aspherical shape is expressed by the following formula (22).
- c 1 / R
- k is a conical coefficient
- A 4 , A 6 , A 8 , A 10 , and A 12 are the fourth-, sixth-, eighth-, tenth-, and twelfth-order aspheric coefficients, respectively.
- Table 2 shows the conic coefficient k and the aspheric coefficients A 4 , A 6 , A 8 , A 10 , and A 12 of each aspheric surface.
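Formula (22), with c = 1/R, conic coefficient k, and even coefficients A4 through A12, matches the standard even-asphere sag equation, which is assumed and implemented below. Table 2's actual coefficients are not reproduced on this page, so the sanity check uses a plain sphere:

```python
# Assumed form of formula (22), the standard even-asphere sag:
#   z(r) = c r^2 / (1 + sqrt(1 - (1+k) c^2 r^2))
#          + A4 r^4 + A6 r^6 + A8 r^8 + A10 r^10 + A12 r^12
# with c = 1/R.  Coefficient values here are placeholders.
import math

def asphere_sag(r, R, k, A=(0.0, 0.0, 0.0, 0.0, 0.0)):
    """Surface sag z(r): conic base term plus even polynomial terms."""
    c = 1.0 / R
    z = c * r**2 / (1.0 + math.sqrt(1.0 - (1.0 + k) * c**2 * r**2))
    for i, a in enumerate(A):          # powers 4, 6, 8, 10, 12
        z += a * r ** (4 + 2 * i)
    return z

# With k = 0 and no polynomial terms, the formula reduces to a sphere,
# whose exact sag is R - sqrt(R^2 - r^2):
print(round(asphere_sag(1.0, 10.0, 0.0), 6))
```

Plugging each surface's R, k, and A4–A12 from Tables 1 and 2 into such a function reproduces the lens shapes drawn in FIG. 8.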
- the surface numbers in Table 2 are the same as the surface numbers in Table 1.
- the focal length is 15.78 mm
- the F number is 2.8
- the half angle of view is 14.97 °.
- the incident angle ⁇ of the principal ray is
- the curvature of field ⁇ q is
- the coma aberration amount ⁇ c needs to satisfy
- Table 3 shows incident angles ⁇ F, ⁇ d, and ⁇ C (units: degrees) of chief rays at respective image heights with respect to the F-line, d-line, and C-line of the optical system 22 in Table 1.
- Table 4 shows sagittal field curvature amounts ⁇ qsF, ⁇ qsd, ⁇ qsC, and tangential field curvature amounts ⁇ qtF, ⁇ qtd, and ⁇ qtC with respect to the F-line, d-line, and C-line of the optical system 22 in Table 1.
- the curvature of field shown here indicates the amount of displacement from the axial image plane position at each wavelength, and its unit is ⁇ m.
- Table 5 shows coma aberration amounts ⁇ cF, ⁇ cd, and ⁇ cC (unit: ⁇ m) with respect to the F-line, d-line, and C-line of the optical system 22 in Table 1.
- the specified conditions are satisfied for any image height and any wavelength. For this reason, the amount of change in image magnification and the non-uniformity of blurring on and off the axis are kept in a range that does not affect the image processing for distance measurement. Even if uniform processing is applied to the entire image, no error due to lens performance occurs.
- FIG. 9 is a diagram showing the minimum configuration of the image processing apparatus for realizing the present invention, and the image processing apparatus 10 only needs to include the imaging unit 12 and the distance measurement unit 16.
- the present invention can also be realized as a method including, as steps, processing executed by a distance measuring unit provided in the image processing apparatus.
- the present invention can also be realized as an integrated circuit in which the distance measuring unit provided in the image processing apparatus is integrated.
- the present invention can also be realized as a program for causing a computer to execute processing executed by a distance measuring unit included in the image processing apparatus.
- since the present invention can perform distance measurement based on images photographed from a single viewpoint, it can be applied to any imaging device. In particular, because the blur characteristics are uniform over the entire image, there is no need to switch processing for each location in the image, making the invention well suited to applications that require high distance measurement accuracy or a small amount of processing.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Optics & Photonics (AREA)
- Electromagnetism (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computing Systems (AREA)
- Studio Devices (AREA)
- Lenses (AREA)
- Automatic Focus Adjustment (AREA)
- Measurement Of Optical Distance (AREA)
- Focusing (AREA)
Abstract
Description
12 Imaging unit
14 Frame memory
16 Distance measurement unit
18 Control unit
20 Imaging element
22 Optical system
Claims (6)
- An image processing apparatus comprising:
an imaging element that captures an image;
an optical system for forming an image of a subject on the imaging element; and
a distance measurement unit that measures a subject distance, which is the distance to the subject, based on the size of the blur occurring in the image,
wherein the characteristics of the optical system satisfy the condition that, within the subject distance range measured by the distance measurement unit, the change in image magnification between focusing on the side closest to the optical system and focusing on the side farthest from it is at most a predetermined number of pixels, and at the same time the difference in the point spread function due to the image height of the optical system is at most a predetermined magnitude chosen so as not to affect the measurement of the subject distance by the distance measurement unit.
- An image processing method performed by an image processing apparatus comprising an imaging element that captures an image, an optical system for forming an image of a subject on the imaging element, and a distance measurement unit that measures a subject distance, which is the distance to the subject, based on the size of the blur occurring in the image, the method comprising:
a step in which the distance measurement unit measures the subject distance based on the size of the blur occurring in the image,
wherein the characteristics of the optical system satisfy the condition that, within the subject distance range measured by the distance measurement unit, the change in image magnification between focusing on the side closest to the optical system and focusing on the side farthest from it is at most a predetermined number of pixels, and at the same time the difference in the point spread function due to the image height of the optical system is at most a predetermined magnitude chosen so as not to affect the measurement of the subject distance by the distance measurement unit.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201180003333.4A CN102472621B (zh) | 2010-06-17 | 2011-06-16 | 图像处理装置及图像处理方法 |
EP11795415.6A EP2584310A4 (en) | 2010-06-17 | 2011-06-16 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD |
US13/390,536 US9134126B2 (en) | 2010-06-17 | 2011-06-16 | Image processing device, and image processing method |
JP2011544709A JP5841844B2 (ja) | 2010-06-17 | 2011-06-16 | 画像処理装置及び画像処理方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010-138786 | 2010-06-17 | ||
JP2010138786 | 2010-06-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011158507A1 true WO2011158507A1 (ja) | 2011-12-22 |
Family
ID=45347922
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/003437 WO2011158507A1 (ja) | 2010-06-17 | 2011-06-16 | 画像処理装置及び画像処理方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US9134126B2 (ja) |
EP (1) | EP2584310A4 (ja) |
JP (1) | JP5841844B2 (ja) |
CN (1) | CN102472621B (ja) |
WO (1) | WO2011158507A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2011158508A1 (ja) * | 2010-06-17 | 2013-08-19 | パナソニック株式会社 | 画像処理装置および画像処理方法 |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013103410A1 (en) | 2012-01-05 | 2013-07-11 | California Institute Of Technology | Imaging surround systems for touch-free display control |
WO2014107434A1 (en) | 2013-01-02 | 2014-07-10 | California Institute Of Technology | Single-sensor system for extracting depth information from image blur |
JP6091228B2 (ja) * | 2013-01-30 | 2017-03-08 | キヤノン株式会社 | 画像処理装置、撮像装置 |
JP5851650B2 (ja) * | 2013-03-04 | 2016-02-03 | 富士フイルム株式会社 | 復元フィルタ生成装置及び方法、画像処理装置、撮像装置、復元フィルタ生成プログラム並びに記録媒体 |
WO2015053113A1 (ja) * | 2013-10-08 | 2015-04-16 | オリンパス株式会社 | 撮像装置及び電子機器 |
US10031328B2 (en) * | 2015-07-24 | 2018-07-24 | General Electric Company | Systems and methods for image processing in optical microscopes |
CN108141530B (zh) * | 2015-09-29 | 2020-06-30 | 富士胶片株式会社 | 图像处理装置、图像处理方法及介质 |
KR102382871B1 (ko) | 2017-07-18 | 2022-04-05 | 삼성전자주식회사 | 렌즈의 포커스를 제어하기 위한 전자 장치 및 전자 장치 제어 방법 |
CN109660791B (zh) * | 2018-12-28 | 2020-05-15 | 中国科学院长春光学精密机械与物理研究所 | 一种在轨航天相机系统像散辨别方法 |
US20230160778A1 (en) * | 2021-11-19 | 2023-05-25 | Motional Ad Llc | Systems and methods for measurement of optical vignetting |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06505096A (ja) * | 1991-02-12 | 1994-06-09 | オックスフォード・センサ・テクノロジイ・リミテッド | 光センサ |
JP2963990B1 (ja) | 1998-05-25 | 1999-10-18 | 京都大学長 | 距離計測装置及び方法並びに画像復元装置及び方法 |
JP3481631B2 (ja) | 1995-06-07 | 2003-12-22 | ザ トラスティース オブ コロンビア ユニヴァーシティー イン ザ シティー オブ ニューヨーク | 能動型照明及びデフォーカスに起因する画像中の相対的なぼけを用いる物体の3次元形状を決定する装置及び方法 |
JP2009288042A (ja) * | 2008-05-29 | 2009-12-10 | Nikon Corp | 距離測定装置 |
JP2010016743A (ja) * | 2008-07-07 | 2010-01-21 | Olympus Corp | 測距装置、測距方法、測距プログラム又は撮像装置 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1256831A1 (en) * | 2001-05-11 | 2002-11-13 | MVTec Software GmbH | Method and system for calibrating an image range device |
JP2003215442A (ja) | 2002-01-25 | 2003-07-30 | Canon Inc | 多点測距装置 |
GB0301775D0 (en) | 2003-01-25 | 2003-02-26 | Wilson John E | Device and method for 3Dimaging |
JP5173665B2 (ja) * | 2008-08-08 | 2013-04-03 | キヤノン株式会社 | 画像撮影装置およびその距離演算方法と合焦画像取得方法 |
US8432479B2 (en) * | 2010-04-30 | 2013-04-30 | Apple Inc. | Range measurement using a zoom camera |
-
2011
- 2011-06-16 JP JP2011544709A patent/JP5841844B2/ja active Active
- 2011-06-16 EP EP11795415.6A patent/EP2584310A4/en not_active Ceased
- 2011-06-16 CN CN201180003333.4A patent/CN102472621B/zh active Active
- 2011-06-16 US US13/390,536 patent/US9134126B2/en active Active
- 2011-06-16 WO PCT/JP2011/003437 patent/WO2011158507A1/ja active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06505096A (ja) * | 1991-02-12 | 1994-06-09 | オックスフォード・センサ・テクノロジイ・リミテッド | 光センサ |
JP3481631B2 (ja) | 1995-06-07 | 2003-12-22 | ザ トラスティース オブ コロンビア ユニヴァーシティー イン ザ シティー オブ ニューヨーク | 能動型照明及びデフォーカスに起因する画像中の相対的なぼけを用いる物体の3次元形状を決定する装置及び方法 |
JP2963990B1 (ja) | 1998-05-25 | 1999-10-18 | 京都大学長 | 距離計測装置及び方法並びに画像復元装置及び方法 |
JP2009288042A (ja) * | 2008-05-29 | 2009-12-10 | Nikon Corp | 距離測定装置 |
JP2010016743A (ja) * | 2008-07-07 | 2010-01-21 | Olympus Corp | 測距装置、測距方法、測距プログラム又は撮像装置 |
Non-Patent Citations (2)
Title |
---|
M. SUBBARAO; G. SURYA: "Depth from Defocus: a spatial domain approach", INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 13, no. 3, 1994, pages 271 - 294 |
See also references of EP2584310A4 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2011158508A1 (ja) * | 2010-06-17 | 2013-08-19 | パナソニック株式会社 | 画像処理装置および画像処理方法 |
JP5869883B2 (ja) * | 2010-06-17 | 2016-02-24 | パナソニック株式会社 | 画像処理装置 |
Also Published As
Publication number | Publication date |
---|---|
CN102472621B (zh) | 2015-08-05 |
JPWO2011158507A1 (ja) | 2013-08-19 |
JP5841844B2 (ja) | 2016-01-13 |
CN102472621A (zh) | 2012-05-23 |
US20120140064A1 (en) | 2012-06-07 |
US9134126B2 (en) | 2015-09-15 |
EP2584310A1 (en) | 2013-04-24 |
EP2584310A4 (en) | 2014-07-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5841844B2 (ja) | 画像処理装置及び画像処理方法 | |
JP5777592B2 (ja) | ズームレンズ及びそれを有する撮像装置 | |
JP6726183B2 (ja) | 写真またはフィルムカメラ用の対物レンズ、およびそのような対物レンズの変調伝達関数の特定の空間周波数範囲の選択的減衰方法 | |
JP5868275B2 (ja) | ズームレンズ及びそれを有する撮像装置 | |
KR20110033816A (ko) | 단일-렌즈 확장피사계심도 촬상시스템 | |
JP2007158825A (ja) | 画像入力装置 | |
US10462363B2 (en) | Optical apparatus | |
US7676147B2 (en) | Focus detection apparatus and optical apparatus | |
JP2014010286A5 (ja) | ||
JP2019066701A (ja) | ズームレンズおよびそれを有する撮像装置 | |
US11800226B2 (en) | Control apparatus, image pickup apparatus, lens apparatus, control method, and storage medium, that provide an image-stabilization driving amount for a predetermined image point position | |
JP6376809B2 (ja) | 三次元形状計測システムに用いられる投影装置および撮像装置 | |
TWI467262B (zh) | 透鏡調芯裝置及攝像透鏡 | |
JP4527203B2 (ja) | 測距装置 | |
US10225450B2 (en) | Image capturing apparatus and image capturing unit | |
JP6628678B2 (ja) | 距離測定装置、撮像装置、および距離測定方法 | |
JP5264847B2 (ja) | 測距装置、レンズシステムおよび撮像装置 | |
JP2006094469A (ja) | 撮像装置および撮像方法 | |
JP2016066995A (ja) | 像ズレ量算出装置、撮像装置、および像ズレ量算出方法 | |
JP6604144B2 (ja) | 結像レンズ系および撮像装置および検査装置 | |
JP7035839B2 (ja) | 結像レンズ系および撮像装置 | |
US20230090825A1 (en) | Optical element assembly, optical apparatus, estimation method, and non-transitory storage medium storing estimation program | |
US20240114244A1 (en) | Control apparatus, image stabilization apparatus, optical apparatus, control method, and storage medium | |
US20220321786A1 (en) | Control apparatus, image pickup apparatus, lens apparatus, camera system, control method, and memory medium | |
JP2009041968A (ja) | 復元処理を前提としたレンズの評価方法および装置、評価用補正光学系 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180003333.4 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011544709 Country of ref document: JP |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11795415 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13390536 Country of ref document: US Ref document number: 2011795415 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |