WO2007058100A1 - Focus detection device - Google Patents
Focus detection device
- Publication number
- WO2007058100A1 (PCT/JP2006/322265; application JP2006322265W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- distance
- luminance information
- focus detection
- optical system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B13/00—Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
- G03B13/32—Means for focusing
- G03B13/34—Power focusing
- G03B13/36—Autofocus systems
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B7/00—Mountings, adjusting means, or light-tight connections, for optical elements
- G02B7/28—Systems for automatic generation of focusing signals
- G02B7/36—Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals
- G02B7/38—Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals measured at different points on the optical axis, e.g. focussing on two or more planes and comparing image data
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B3/00—Focusing arrangements of general interest for cameras, projectors or printers
- G03B3/02—Focusing arrangements of general interest for cameras, projectors or printers moving lens along baseboard
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B3/00—Focusing arrangements of general interest for cameras, projectors or printers
- G03B3/04—Focusing arrangements of general interest for cameras, projectors or printers adjusting position of image plane without moving lens
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/673—Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B2217/00—Details of cameras or camera bodies; Accessories therefor
- G03B2217/002—Details of arrangement of components in or on camera body
Definitions
- the present invention relates to a focus detection device that determines a focus position using a light beam that has passed through an optical system that forms an image of light from a subject at a predetermined position.
- in this focus detection method, a plurality of images are photographed while the focus lens is driven in the optical axis direction, and a blur evaluation value is calculated for each of the photographed images.
- the evaluation value uses the contrast of the image or the sum of its high-frequency components; the larger the value, the better the focus.
- the magnitude of the blur itself may also be evaluated. For example, if the integrated value of the low-frequency components of the image's spatial frequency is used as the evaluation value, the smaller the value, the better the focus.
- the focus lens is first driven a minute distance in either the near-point or far-point direction.
- if the evaluation value decreases, the peak of the evaluation value exists in the direction opposite to the driving direction (the near-point side). In this case, the focus lens is driven in the reverse direction.
- if the evaluation value first increases in the driving direction, a peak exists in that direction, so driving is continued in the same direction.
- because this focus detection method estimates the focus information or distance information of the subject from the estimated focus position, it is called the "Depth From Focus" (hereinafter abbreviated DFF) method. It is also called the "hill-climbing method" because control is performed so that the evaluation value increases and the peak of the evaluation value is estimated.
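- as a concrete illustration (not part of the patent), the following Python sketch implements this direction decision; `capture_image` and `move_lens` are hypothetical camera-control helpers, and the Laplacian-based evaluation stands in for any high-frequency-sum evaluation value:

```python
import numpy as np

def evaluate(img: np.ndarray) -> float:
    """Blur evaluation value: sum of absolute high-frequency components
    (4-neighbor Laplacian); the larger the value, the better the focus."""
    img = img.astype(np.float64)
    lap = (img[1:-1, 2:] + img[1:-1, :-2] + img[2:, 1:-1]
           + img[:-2, 1:-1] - 4.0 * img[1:-1, 1:-1])
    return float(np.abs(lap).sum())

def choose_direction(capture_image, move_lens, step=1.0):
    """Drive a minute distance toward the far point and compare evaluation
    values; if the value drops, the peak is on the near side, so reverse."""
    h_before = evaluate(capture_image())
    move_lens(+step)                      # minute trial drive
    h_after = evaluate(capture_image())
    return +1 if h_after > h_before else -1
```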
- DFF Depth From Focus
- DFD Depth From Defocus
- luminance information is acquired at two locations with different optical path lengths.
- a blur parameter is calculated by performing arithmetic processing on a plurality of images having different blurs, and in-focus determination is performed.
- the blur parameter is a representative value indicating the blur state of the luminance information, and is a value correlated with the variance of the point spread function (PSF) of the optical system.
- PSF is a function that expresses how light rays spread when an ideal point image passes through the optical system.
- in the DFD method, at least two pieces of luminance information for focus determination are acquired from the same subject, the same part, and the same line-of-sight direction, while at least one shooting parameter that affects the blur state of the captured image is changed between acquisitions.
- the shooting parameters include the focus lens position, aperture amount, focal length, and the like. In this description, only the case where the position of the focus lens is changed is considered.
- the focus lens is moved to the predetermined first position and second position.
- the first luminance information is acquired at the first position
- the second luminance information is acquired at the second position.
- the acquired luminance information is subjected to low-pass filtering to remove electrical noise, image magnification correction to compensate for the different magnifications of the first and second images, and normalization processing such as luminance distribution normalization. If necessary, the area on which to perform focus determination is selected in the acquired luminance information.
- the selection is performed on one piece of luminance information, and the corresponding area is selected on the other. Then, in the region where focus determination is to be performed, the difference between the first luminance information and the second luminance information is calculated from the two normalization results. In addition, the second derivative of each of the first and second luminance information is calculated, and their average is taken. By dividing the difference between the first and second luminance information by this average of the second derivatives, a blur parameter correlated with the variance of the PSF corresponding to the first or second luminance information is obtained.
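- a minimal NumPy sketch of this blur-parameter computation (our illustration, not the patent's implementation; the 4-neighbor Laplacian standing in for the second derivative, and the zero guard for flat regions, are assumptions):

```python
import numpy as np

def laplacian(img: np.ndarray) -> np.ndarray:
    """Discrete second derivative (4-neighbor Laplacian) of the luminance."""
    img = img.astype(np.float64)
    out = np.zeros_like(img)
    out[1:-1, 1:-1] = (img[1:-1, 2:] + img[1:-1, :-2]
                       + img[2:, 1:-1] + img[:-2, 1:-1]
                       - 4.0 * img[1:-1, 1:-1])
    return out

def blur_parameter(i1: np.ndarray, i2: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Per-pixel blur parameter: the difference between the two pieces of
    luminance information divided by the average of their second derivatives."""
    diff = i1.astype(np.float64) - i2.astype(np.float64)
    mean_d2 = 0.5 * (laplacian(i1) + laplacian(i2))
    # guard against division by zero in flat (edge-free) regions
    return diff / np.where(np.abs(mean_d2) < eps, eps, mean_d2)
```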
- the subject distance is obtained based on the relational expression between the dispersion of the PSF and the subject distance described in US Pat. No. 4,965,840.
- the relationship between the blur parameter and the subject distance differs depending on the lens configuration and state (zoom, aperture). Further, the relationship between a given subject distance and the focus lens position that focuses at that distance, that is, the in-focus lens position, is given in advance by the lens system's design data. Therefore, the relationship between the blur parameter and the focus lens position to be controlled can be obtained from an individual relational expression or a calculation table that depends on the lens system and lens state.
- the present invention has been made in view of the above points, and an object of the present invention is to provide a focus detection apparatus capable of performing focus detection without requiring correction of a characteristic table for each individual unit.
- One aspect of the focus detection apparatus of the present invention is a focus detection apparatus that determines a focus position using a light beam that has passed through an optical system that forms an image of light from a subject at a predetermined position.
- luminance information acquisition means for acquiring the luminance information of the light imaged by the optical system,
- blur parameter calculation means for calculating, based on two pieces of luminance information obtained using the luminance information acquisition means, a blur parameter corresponding to the distance from the optical system to the subject,
- Distance estimation means for estimating distance information corresponding to the distance from the optical system to the subject based on the blur parameter calculated using the blur parameter calculation means
- a focus detection means for calculating an evaluation value and determining a focus position based on the evaluation value
- FIG. 1 is a diagram showing the configuration of a compact camera to which a focus detection apparatus according to a first embodiment of the present invention is applied.
- FIG. 2 is a block diagram of the focus detection apparatus according to the first embodiment.
- FIG. 3 is a flowchart for explaining the processing of the focus detection apparatus according to the first embodiment.
- FIG. 4 is a diagram showing a relationship between a blur parameter and a focus lens position.
- FIG. 5 is a diagram showing the relationship between the focus evaluation value and the lens position for explaining the hill-climbing method from the DFD estimation result.
- FIG. 6 is a diagram showing a configuration of a single-lens reflex camera to which a focus detection apparatus according to a modification of the first embodiment of the present invention is applied.
- FIG. 7 is a block configuration diagram of a focus detection apparatus according to a second embodiment of the present invention.
- FIG. 8 is a flowchart for explaining the process of the focus detection apparatus according to the second embodiment.
- FIG. 9 is a diagram showing an example of a distance image.
- FIG. 10 is a diagram showing an example of a mask used for mask processing in the DFF region extraction means.
- FIG. 11 is a diagram showing another example of a mask used for mask processing in the DFF region extraction means.
- FIG. 12 is a block configuration diagram of a focus detection apparatus according to a second modification of the second embodiment.
- FIG. 13 is a diagram showing an example of a calculation result of a second order differential calculation unit.
- FIG. 14 is a flowchart for explaining the process of the focus detection apparatus according to the second modification of the second embodiment.
- FIG. 15 is a block diagram of a focus detection apparatus according to a third modification of the second embodiment.
- FIG. 16 is a block configuration diagram of a focus detection apparatus according to a third embodiment of the present invention.
- FIG. 17 is a diagram showing a configuration of a compact camera to which a focus detection apparatus according to a fourth embodiment of the present invention is applied.
- in the first embodiment, the focus detection apparatus is applied to a compact camera 10 as shown in FIG. 1.
- the focus detection apparatus includes an optical system 12, an image sensor 14 and a luminance signal control unit 16 that function as luminance information acquisition means, a DFF/DFD switching unit 18, a distance estimation unit 20, a hill-climbing method calculation unit 22, and an optical system control unit 24.
- the optical system 12 is composed of a plurality of lens groups (the taking lens) for capturing an image, and some of the lens groups can be driven in the optical axis direction to adjust the focus.
- this lens group is called the focus lens.
- the subject image formed by the optical system 12 is converted into an electric signal by the photoelectric conversion element of the imaging element 14.
- the converted electrical signal is converted into a digital signal by the luminance signal control unit 16.
- this converted digital signal is called luminance information. The luminance information is input to the distance estimation unit 20 and the hill-climbing method calculation unit 22.
- the distance estimation unit 20 first estimates the subject distance, that is, the distance from the optical system 12 to the subject to be photographed; the hill-climbing method calculation unit 22, which functions as focus detection means, then performs processing to obtain a more accurate focusing result. Switching between the hill-climbing method calculation unit 22 and the distance estimation unit 20 is performed by the DFF/DFD switching unit 18.
- the optical system control unit 24 functions as arrangement control means for controlling the optical system 12 to an arbitrary position. Although not specifically illustrated, it consists of an actuator for driving the optical system and a drive circuit for driving that actuator.
- the drive circuit inputs a signal to the actuator so as to place the focus lens of the optical system 12 at the desired lens position.
- the distance estimation unit 20 estimates the subject distance by the DFD method.
- the distance estimation unit 20 includes a blur parameter calculation unit 26, a control parameter calculation unit 28, and an LUT storage unit 30.
- the blur parameter calculation unit 26 includes a difference calculation unit 32, a second order differential calculation unit 34, a blur parameter calculation unit 36, and a buffer 38.
- the difference calculation unit 32 calculates the difference between images necessary for calculating the blur parameter.
- the second-order derivative calculation unit 34 calculates the second derivative of an image, and computes the average of the second-derivative results obtained from the two pieces of luminance information with different blurs.
- the blur parameter calculation unit 36 calculates the blur parameter by dividing the image difference calculated by the difference calculation unit 32 by the average of the second derivatives calculated by the second-order derivative calculation unit 34.
- the buffer 38 holds the luminance information of the first image and the result of its second derivative, since the multiple pieces of luminance information are acquired at different times by placing the focus lens at different positions.
- the LUT storage unit 30 stores the relationship between the blur parameter and the in-focus lens position, in the form of a look-up table (LUT), as the relationship between the blur parameter and the in-focus position for light from the subject.
- the arrangement of the optical system 12 is determined according to the focus lens position.
- the control parameter calculation unit 28 refers to the LUT in the LUT storage unit 30 to obtain a focus lens position corresponding to the blur parameter calculated by the blur parameter calculation unit 36.
- the hill climbing method computing unit 22 includes a high-pass filter (HPF) 40, a DFF control parameter calculation unit 42, and an evaluation value storage unit 44.
- HPF 40 extracts high frequency components of luminance information.
- the DFF control parameter calculation unit 42 sums the outputs of the HPF 40 to calculate the evaluation value h(t).
- the evaluation value storage unit 44 stores the lens position when the luminance information is acquired and the evaluation value calculated by the DFF control parameter calculation unit 42.
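- a minimal sketch of this evaluation-value computation, assuming a simple 8-neighbor high-pass kernel for the HPF 40 (the kernel choice is ours, not specified by the patent):

```python
import numpy as np

def evaluation_value(img: np.ndarray) -> float:
    """h(t): sum of the absolute high-frequency components of the luminance
    information, as extracted by HPF 40 and summed by the DFF control
    parameter calculation unit 42."""
    img = img.astype(np.float64)
    # 8-neighbor high-pass response, computed by slicing
    hp = (8.0 * img[1:-1, 1:-1]
          - img[:-2, :-2] - img[:-2, 1:-1] - img[:-2, 2:]
          - img[1:-1, :-2] - img[1:-1, 2:]
          - img[2:, :-2] - img[2:, 1:-1] - img[2:, 2:])
    return float(np.abs(hp).sum())
```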
- first, the optical system control unit 24 drives the focus lens of the optical system 12 to the predetermined first lens position L1, and the imaging element 14 and the luminance signal control unit 16 acquire the luminance information of the first image of the subject (step S10).
- the acquired luminance information of the first image is supplied to the distance estimation unit 20 by the DFF/DFD switching unit 18 and stored in the buffer 38 of the blur parameter calculation unit 26 under the control of a controller (not shown).
- next, the focus lens of the optical system 12 is driven to a predetermined second lens position L2 by the optical system control unit 24, and the imaging element 14 and the luminance signal control unit 16 acquire the luminance information of the second image of the subject (step S12).
- the acquired luminance information of the second image is supplied to the distance estimation unit 20 by the DFF/DFD switching unit 18 under the control of the controller (not shown).
- the blur parameter is then calculated in the distance estimation unit 20 under the control of the controller (not shown) (step S14). That is, in the blur parameter calculation unit 26, the difference calculation unit 32 reads the first luminance information from the buffer 38 and calculates its difference from the second luminance information supplied from the DFF/DFD switching unit 18. The second-order derivative calculation unit 34 calculates the second derivative of the second luminance information supplied from the DFF/DFD switching unit 18, then reads the first luminance information from the buffer 38 and calculates its second derivative as well, and computes the average of the two second derivatives. Once the difference and the average of the second derivatives (the derivative information) have been obtained, the blur parameter calculation unit 36 calculates the blur parameter by dividing the former by the latter.
- the blur parameter has a linear relationship with the reciprocal of the subject distance, and the subject distance and the in-focus lens position correspond one to one, so the blur parameter and the in-focus lens position also maintain a one-to-one correspondence, as shown in Fig. 4.
- This relationship is stored in the LUT storage unit 30 as a table (LUT).
- the control parameter calculation unit 28 then calculates the subject distance value corresponding to the blur parameter.
- the distance information corresponding to the subject distance value is represented by the position of the focus lens. Therefore, the in-focus lens position DFD_LF corresponding to the blur parameter obtained by the blur parameter calculation unit 26 can be obtained by linear approximation with reference to the table stored in the LUT storage unit 30 (step S16).
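- the lookup with linear approximation can be sketched as follows; the LUT contents are invented for illustration, and `np.interp` performs the piecewise-linear approximation between neighboring table entries (it requires the blur-parameter axis to be monotonically increasing):

```python
import numpy as np

# Illustrative LUT: blur parameter -> in-focus lens position, one-to-one
# as in Fig. 4 (these numbers are made up for the example).
LUT_BLUR = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
LUT_LENS = np.array([10.0, 25.0, 40.0, 55.0, 70.0])

def lens_position_from_blur(blur_param: float) -> float:
    """Linear approximation between the two nearest LUT entries, as done
    by the control parameter calculation unit 28 (step S16)."""
    return float(np.interp(blur_param, LUT_BLUR, LUT_LENS))
```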
- the estimation error ε of the in-focus lens position DFD_LF obtained here is larger than the allowable error, because the in-focus lens position for a given subject varies slightly with the lens frame mounting error of each individual unit. Therefore, the control parameter calculation unit 28 sets the position DFD_LF + ε, offset from the estimated in-focus lens position by the estimation error ε due to the lens frame mounting error, as the target lens position L(t−1) and inputs it to the optical system control unit 24. The optical system control unit 24 drives the focus lens of the optical system 12 to the target lens position L(t−1) (step S18).
- the position DFD_LF + ε lies between the lens position L2, at which the luminance information of the second image used for calculating the blur parameter was acquired, and the in-focus lens position DFD_LF. By choosing this position, the drive distance of the focus lens can be minimized.
- next, the imaging element 14 and the luminance signal control unit 16 acquire luminance information of the subject at the lens position L(t−1) (step S20).
- the acquired luminance information is supplied to the hill-climbing method calculation unit 22 by the DFF/DFD switching unit 18 under the control of the controller (not shown).
- the hill-climbing method calculation unit 22 extracts the high-frequency components of the supplied luminance information with the HPF 40, and the DFF control parameter calculation unit 42 sums the HPF 40 outputs to calculate the evaluation value h(t−1) (step S22).
- the calculated evaluation value is stored in the evaluation value storage unit 44 together with the lens position when the luminance information from the optical system control unit 24 is acquired.
- the DFF control parameter calculation unit 42 then, via the optical system control unit 24 and based on the lens position stored in the evaluation value storage unit 44, step-drives the focus lens of the optical system 12 by a predetermined amount Δ in the direction of the in-focus lens position DFD_LF (step S24).
- the image sensor 14 and the luminance signal control unit 16 acquire the luminance information of the subject at the driven lens position L(t) (step S26), and the hill-climbing method calculation unit 22 calculates the evaluation value again as described above (step S28).
- the calculated evaluation value is stored in the evaluation value storage unit 44 together with the lens position given from the optical system control unit 24.
- it is then determined whether the change h(t) − h(t−1) of the evaluation value is increasing (step S30). If it is increasing, the current lens position L(t) is set as the previous lens position L(t−1) (step S32), and the process returns to step S24 and repeats.
- when it is determined in step S30 that the change h(t) − h(t−1) of the evaluation value has decreased, the DFF control parameter calculation unit 42 estimates the peak position DFF_LF (step S34). It does so by approximating the evaluation values and lens positions stored in the evaluation value storage unit 44 with a quadratic function and taking the lens position DFF_LF at the peak of the hill. The DFF control parameter calculation unit 42 then gives the calculated lens position DFF_LF to the optical system control unit 24, which drives the focus lens of the optical system 12 to that position (step S36). When the focus lens has been driven to the lens position DFF_LF, focus detection is complete (step S38).
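- the quadratic peak estimation of step S34 can be sketched as follows, assuming at least three (lens position, evaluation value) samples have been stored in the evaluation value storage unit 44:

```python
import numpy as np

def estimate_peak(lens_positions, eval_values):
    """Fit h(L) = a*L**2 + b*L + c to the stored samples and return the
    vertex -b/(2a), i.e. the lens position DFF_LF at the peak of the hill."""
    a, b, _c = np.polyfit(lens_positions, eval_values, 2)
    if a >= 0:  # no maximum found: fall back to the best sampled position
        return lens_positions[int(np.argmax(eval_values))]
    return -b / (2.0 * a)

# e.g. three samples straddling the peak from the step drive:
# estimate_peak([40.0, 42.0, 44.0], [0.8, 1.0, 0.7])  -> about 41.8
```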
- in the above, the evaluation value of the hill-climbing method was obtained by summing the high-frequency components extracted by the HPF 40; alternatively, the variance of the luminance distribution may be used, since the variance increases as focus is approached.
- control of the actuator by the optical system control unit 24 has been described as open-loop, but an encoder may be attached to the actuator to perform feedback control.
- although the first embodiment applies the apparatus to a compact camera, the focus detection apparatus according to the present embodiment can be similarly applied to a single-lens reflex camera 46 as shown in FIG. 6.
- in this case, the optical system 12 is composed of the taking lens 12A, the reflex mirror 12B, and the AF optical systems 12C and 12D that guide the light beam to the AF imaging elements 14A and 14B for focus detection.
- the taking lens 12A has a focus lens for adjusting the focus.
- there are an image sensor 14C for shooting and two AF image sensors (14A, 14B); one of the AF image sensors is arranged at a position optically equivalent to that of the image sensor 14C. In this modification, it is assumed that the AF image sensor 14A is at that position.
- the optical system control unit 24 is configured by an actuator driver circuit for driving the focus lens of the taking lens 12A.
- in this configuration, the distance estimation unit 20 can obtain two pieces of luminance information from the AF imaging elements 14A and 14B at a single predetermined lens position L. That is, the processing of steps S10 and S12 can be performed simultaneously. The blur parameter is then calculated using the two pieces of luminance information obtained at the same time (step S14). Thereafter, as in the first embodiment, the LUT storage unit 30 is referenced and the in-focus lens position DFD_LF corresponding to the blur parameter is estimated by the control parameter calculation unit 28 (step S16).
- a position offset from the estimated in-focus lens position DFD_LF by the estimation error ε due to the lens frame mounting error is set as the target lens position DFD_LF + ε and is input to the optical system control unit 24 (step S18).
- the optical system control unit 24 arranges the focus lens at the target lens position.
- the hill climbing method is then started.
- the hill-climbing method is performed using the luminance information obtained from the AF image sensor 14A, the one of the two AF image sensors (14A, 14B) located at the position equivalent to the image sensor 14C for shooting. That is, the evaluation value h(t) is calculated by the same method as in the first embodiment (step S22), and the in-focus lens position DFF_LF at which the evaluation value peaks is obtained (steps S24 to S34). The focus lens is then controlled to this lens position (step S36), and focus detection is complete (step S38).
- the focus detection apparatus is applied to a compact camera 10 as shown in FIG.
- in the second embodiment, in addition to the optical system 12, the image sensor 14 and luminance signal control unit 16 that function as luminance information acquisition means, the DFF/DFD switching unit 18, the distance estimation unit 20, the hill-climbing method calculation unit 22, and the optical system control unit 24,
- a DFF region extraction unit 48 and an extraction information storage unit 50, used by both the distance estimation unit 20 and the hill-climbing method calculation unit 22, are added.
- the DFF region extraction unit 48 obtains the in-focus lens position of the subject at the closest distance.
- the extraction information storage unit 50 stores the addresses of the selected blocks in which the subject at the closest distance exists.
- first, the focus lens of the optical system 12 is driven to the predetermined first lens position L1, and the luminance information of the first image of the subject is acquired and supplied to the blur parameter calculation unit 26 of the distance estimation unit 20 (step S10). Thereafter, the focus lens of the optical system 12 is driven to the predetermined second lens position L2, and the luminance information of the second image of the subject is acquired and supplied to the blur parameter calculation unit 26 of the distance estimation unit 20 (step S12).
- the blur parameter calculation unit 26 calculates a blur parameter by dividing the difference between the two images taken at different focus lens positions by the average of their respective second derivatives (step S14).
- the blur parameter has a linear relationship with the reciprocal of the subject distance, and the subject distance and the in-focus lens position correspond one to one, so the relationship between the blur parameter and the in-focus lens position is also maintained one to one. This relationship is stored in the LUT storage unit 30 as a LUT.
- the control parameter calculation unit 28 uses the calculated blur parameter and the LUT information to determine the in-focus lens position for the subject by linear interpolation. This in-focus lens position is calculated in pixel units for the edge portions of the subject imaged on the image plane.
- the control parameter calculation unit 28 converts the values of the in-focus lens position into luminance values, thereby obtaining an image called a distance image, as shown in FIG. 9.
- this distance image is passed to the DFF region extraction unit 48, and the in-focus lens position DFD_LF of the subject at the closest distance is obtained (step S16).
- the DFF region extraction unit 48 also selects the blocks in which that subject exists and stores the addresses of the selected blocks (A11 and A15 in the example of FIG. 9) in the extraction information storage unit 50 (step S40).
- the estimation error ε of the in-focus lens position obtained here is larger than the allowable error, because the in-focus lens position for a given subject varies slightly with the lens frame mounting error of each individual unit. Therefore, the DFF region extraction unit 48 sets the position offset from the estimated in-focus lens position by the estimation error ε due to the lens frame mounting error as the target lens position DFD_LF + ε and inputs it to the optical system control unit 24 (step S18).
- the hill-climbing method is then started. That is, the luminance information from the image sensor 14 via the luminance signal control unit 16 is passed to the DFF region extraction unit 48 by the DFF/DFD switching unit 18, which has been switched to the hill-climbing method calculation unit 22 side by the controller (not shown) (step S20). Since the addresses of the blocks in which the subject of interest exists are stored in advance in the extraction information storage unit 50 based on the DFD result, the DFF region extraction unit 48 extracts the luminance information in those blocks by mask processing (step S42). Masks such as those shown in Fig. 10 and Fig. 11 are used for this mask processing.
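- a sketch of this mask processing, assuming the image is divided into a regular grid of square blocks and the stored addresses are (row, column) indices into that grid (block size and address format are our assumptions):

```python
import numpy as np

def extract_blocks(img: np.ndarray, addresses, block: int = 32) -> np.ndarray:
    """Keep only the luminance information inside the stored blocks
    (cf. the masks of Figs. 10 and 11); everything else is zeroed."""
    mask = np.zeros(img.shape, dtype=bool)
    for r, c in addresses:  # e.g. [(1, 1), (1, 5)] for blocks A11 and A15
        mask[r * block:(r + 1) * block, c * block:(c + 1) * block] = True
    return np.where(mask, img, 0)
```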
- the DFF control parameter calculation unit 42 adds the results of the HPF 40 to calculate an evaluation value (step S22).
- the calculated evaluation value is stored in the evaluation value storage unit 44 together with the lens position when the luminance information from the optical system control unit 24 is acquired.
- next, the focus lens is driven by the predetermined amount Δ in the direction of the estimated in-focus lens position (step S24), luminance information is acquired (step S26), and the evaluation value is calculated again (step S28).
- the calculated evaluation value is stored in the evaluation value storage unit 44 together with the lens position given from the optical system control unit 24. While the change h(t) − h(t−1) of the evaluation value is increasing (step S30), this process is repeated.
- the DFF control parameter calculation unit 42 estimates a peak (step S34).
- that is, the evaluation values and lens positions stored in the evaluation value storage unit 44 are approximated with a quadratic function to obtain the lens position DFF_LF at the peak of the hill.
- the DFF control parameter calculation unit 42 gives the calculated lens position DFF_LF to the optical system control unit 24 and drives the focus lens of the optical system 12 to that position (step S36), whereby focus detection is complete (step S38).
- in this way, the subject of interest is extracted by DFD and the hill-climbing method is performed only on the blocks corresponding to the DFD result; the peak of the evaluation value can therefore be calculated without being affected by luminance information outside the subject of interest, and as a result the focusing accuracy can be improved.
- the second embodiment has been described in the case where the region extraction is performed using the distance information obtained as a result of the DFD.
- the region may be extracted using the result of the second derivative obtained in the blur parameter calculation process as shown in FIG.
- first, the luminance information of the first image of the subject is acquired at the first lens position L1 (step S10), and then the luminance information of the second image of the subject is acquired at the second lens position L2 (step S12).
- the second-order derivative calculation unit 34 computes the second derivative of each of the two images taken at the different focus lens positions and calculates their average. This average of the second derivatives is supplied to the DFF region extraction unit 48 as derivative information.
- the DFF region extraction unit 48 extracts the blocks in which the average of the second derivatives of the two supplied images exceeds a threshold as the region for calculating the blur parameter, and stores the position information of those blocks (A11 and A15 in the example of FIG. 13) in the extraction information storage unit 50 (step S44).
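- this block-wise threshold test can be sketched as follows (block size and threshold are illustrative assumptions):

```python
import numpy as np

def select_edge_blocks(mean_d2: np.ndarray, block: int = 32, thresh: float = 4.0):
    """Return the (row, col) addresses of blocks whose average absolute
    second derivative exceeds the threshold, i.e. blocks that contain
    subject edges usable for the blur-parameter calculation (step S44)."""
    rows = mean_d2.shape[0] // block
    cols = mean_d2.shape[1] // block
    addresses = []
    for r in range(rows):
        for c in range(cols):
            tile = mean_d2[r * block:(r + 1) * block, c * block:(c + 1) * block]
            if np.abs(tile).mean() > thresh:
                addresses.append((r, c))
    return addresses
```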
- the blur parameter calculation unit 26 calculates a blur parameter by dividing the difference between the two images taken at the different focus lens positions by the average of their second derivatives (step S14). The control parameter calculation unit 28 then uses the calculated blur parameter and the LUT information stored in the LUT storage unit 30 to determine the in-focus lens position for the subject by linear interpolation. This in-focus lens position is calculated in pixel units for the edge portions of the subject imaged on the image plane, and the position corresponding to the closest distance is taken as the subject's in-focus lens position DFD_LF (step S16).
- the estimation error ε of the in-focus lens position obtained here is larger than the allowable error, because the in-focus lens position for a given subject varies slightly with the lens frame mounting error of each individual unit. Therefore, the position offset from the estimated in-focus lens position by the estimation error ε due to the lens frame mounting error is set as the target lens position DFD_LF + ε and input to the optical system control unit 24 to drive the focus lens (step S18).
- the hill climbing method is performed on the extracted block.
- the subsequent processing is the same as that described for the second embodiment and is omitted here.
- the second-order differentiation extracts the edge portion of the subject, so that the subject region existing on the image plane can be detected.
- although region extraction here is determined based on edge strength, a main subject may instead be extracted from the edge structure.
- the blur parameter calculation by the blur parameter calculation unit 36 in step S14 may also be performed only for the blocks extracted by the DFF region extraction unit 48.
- the second modification can also be applied to a single-lens reflex camera.
- in that case, the same effect can be obtained by supplying the distance information from the control parameter calculation unit 28 (see FIG. 9) to the DFF region extraction unit 48 to extract the region.
- the object region can be extracted from the result of the second derivative.
- by also using the distance information obtained by DFD, erroneous extraction of the subject can be prevented compared with using only the second-derivative result.
- the third modification can also be applied to a single-lens reflex camera.
- in the third embodiment, the focus detection apparatus is configured as shown in FIG. 16.
- solid arrows indicate the flow of signals and information for executing the DFD method
- broken arrows indicate the flow of signals and information for executing the hill-climbing method
- the alternate long and short dash line arrows indicate the flow of signals and information common to the DFD method and the hill-climbing method.
- in the third embodiment, the output of the second-order differential calculation unit 34 is first used for distance estimation in the distance estimation unit 20, as shown by the solid line, and for region extraction by the DFF region extraction unit 48, as in the third modification of the second embodiment. After the distance estimation is completed, as shown by the broken line in the figure, the second-order differential calculation unit 34 takes the second derivative of the luminance signal of the blocks extracted by the DFF region extraction unit 48 and supplies it to the DFF control parameter calculation unit 42 of the hill-climbing method calculation unit 22.
- in other words, the second-order differential calculation unit 34 of the blur parameter calculation unit 26 is shared so that it can also be used by the hill-climbing method calculation unit 22. The frequency characteristic of the second-order differential calculation unit 34 is an HPF characteristic that passes high-frequency components, so when the hill-climbing method is performed, using the second-order differential calculation unit 34 removes the need for the separate HPF of the hill-climbing method calculation unit 22 used in the first and second embodiments. According to the third embodiment, the hill-climbing method calculation unit 22 therefore does not need its own HPF, and the circuit scale can be reduced.
- in the embodiments above, a configuration was described in which the arrangement of the optical system 12 is changed by driving the focus lens (and diaphragm) to acquire two pieces of luminance information with different blurs, and the focus lens position is adjusted to obtain a focused image.
- in the fourth embodiment, an image sensor control unit 52 is instead provided, functioning as arrangement control means that changes the arrangement of the luminance information acquisition means by driving the image sensor 14 in the optical axis direction. Rather than adjusting the arrangement of the focus lens, the image sensor 14 is driven along the optical axis to obtain luminance information with different blurs.
- in this case, the LUT storage unit 30 may store the relationship between the blur parameter and the position of the image sensor 14 as the relationship between the blur parameter and the in-focus position for light from the subject.
- Each calculation unit and calculation unit may be configured by a single piece of hardware such as a DSP or CPU.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Optics & Photonics (AREA)
- Automatic Focus Adjustment (AREA)
- Studio Devices (AREA)
- Focusing (AREA)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US12/120,513 US20080297648A1 (en) | 2005-11-15 | 2008-05-14 | Focus detection apparatus |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2005-330460 | 2005-11-15 | ||
| JP2005330460A JP2007139893A (ja) | 2005-11-15 | 2005-11-15 | Focus detection device |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US12/120,513 Continuation US20080297648A1 (en) | 2005-11-15 | 2008-05-14 | Focus detection apparatus |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2007058100A1 (ja) | 2007-05-24 |
Family
ID=38048487
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2006/322265 Ceased WO2007058100A1 (ja) | 2005-11-15 | 2006-11-08 | Focus detection device |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US20080297648A1 (en) |
| JP (1) | JP2007139893A (ja) |
| WO (1) | WO2007058100A1 (ja) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2009003152A (ja) * | 2007-06-21 | 2009-01-08 | Sharp Corp | Focus adjustment device and focus adjustment method for a camera module |
| JP2021108431A (ja) * | 2019-12-27 | 2021-07-29 | SZ DJI Technology Co., Ltd. | Control device, imaging device, control method, and program |
Families Citing this family (35)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8280194B2 (en) * | 2008-04-29 | 2012-10-02 | Sony Corporation | Reduced hardware implementation for a two-picture depth map algorithm |
| US8218061B2 (en) * | 2008-09-04 | 2012-07-10 | Csr Technology Inc. | Apparatus, method, and manufacture for iterative auto-focus using depth-from-defocus |
| US8194995B2 (en) * | 2008-09-30 | 2012-06-05 | Sony Corporation | Fast camera auto-focus |
| US8553093B2 (en) | 2008-09-30 | 2013-10-08 | Sony Corporation | Method and apparatus for super-resolution imaging using digital imaging devices |
| TW201101799A (en) * | 2009-06-17 | 2011-01-01 | Altek Corp | Digital image sharpening treatment method and its system thereof |
| US8027582B2 (en) * | 2009-12-21 | 2011-09-27 | Sony Corporation | Autofocus with confidence measure |
| US8542313B2 (en) * | 2010-01-27 | 2013-09-24 | Csr Technology Inc. | Depth from defocus calibration |
| WO2011101036A1 (en) | 2010-02-19 | 2011-08-25 | Iplink Limited | Processing multi-aperture image data |
| EP2537332A1 (en) * | 2010-02-19 | 2012-12-26 | Dual Aperture, Inc. | Processing multi-aperture image data |
| JP2012003233A (ja) * | 2010-05-17 | 2012-01-05 | Sony Corp | Image processing apparatus, image processing method, and program |
| US8644697B1 (en) | 2010-08-13 | 2014-02-04 | Csr Technology Inc. | Method for progressively determining depth from defocused images |
| US8433187B2 (en) * | 2011-01-18 | 2013-04-30 | DigitalOptics Corporation MEMS | Distance estimation systems and method based on a two-state auto-focus lens |
| US9501834B2 (en) | 2011-08-18 | 2016-11-22 | Qualcomm Technologies, Inc. | Image capture for later refocusing or focus-manipulation |
| AU2011224051B2 (en) * | 2011-09-14 | 2014-05-01 | Canon Kabushiki Kaisha | Determining a depth map from images of a scene |
| US8340456B1 (en) * | 2011-10-13 | 2012-12-25 | General Electric Company | System and method for depth from defocus imaging |
| JP2013130761A (ja) * | 2011-12-22 | 2013-07-04 | Sony Corp | Imaging apparatus, control method therefor, and program |
| JP2013130762A (ja) | 2011-12-22 | 2013-07-04 | Sony Corp | Imaging apparatus, control method therefor, and program |
| US9049364B2 (en) * | 2012-02-13 | 2015-06-02 | Htc Corporation | Focus adjusting method and image capture device thereof |
| US8896747B2 (en) | 2012-11-13 | 2014-11-25 | Qualcomm Technologies, Inc. | Depth estimation based on interpolation of inverse focus statistics |
| US10237528B2 (en) | 2013-03-14 | 2019-03-19 | Qualcomm Incorporated | System and method for real time 2D to 3D conversion of a video in a digital camera |
| WO2015016085A1 (ja) * | 2013-08-01 | 2015-02-05 | Fujifilm Corp | Imaging method and apparatus |
| JP2015036632A (ja) * | 2013-08-12 | 2015-02-23 | Canon Inc | Distance measuring device, imaging device, and distance measuring method |
| JP5866493B2 (ja) | 2013-11-19 | 2016-02-17 | Panasonic IP Management Co., Ltd. | Imaging apparatus |
| JP5866570B2 (ja) * | 2013-11-19 | 2016-02-17 | Panasonic IP Management Co., Ltd. | Imaging apparatus |
| JP6432038B2 (ja) | 2014-03-19 | 2018-12-05 | Panasonic IP Management Co., Ltd. | Imaging apparatus |
| JP5895270B2 (ja) * | 2014-03-28 | 2016-03-30 | Panasonic IP Management Co., Ltd. | Imaging apparatus |
| JP6300670B2 (ja) * | 2014-07-09 | 2018-03-28 | Canon Inc | Focus adjustment device, focus adjustment method and program, and imaging apparatus |
| US20160255323A1 (en) | 2015-02-26 | 2016-09-01 | Dual Aperture International Co. Ltd. | Multi-Aperture Depth Map Using Blur Kernels and Down-Sampling |
| JP6429724B2 (ja) * | 2015-05-19 | 2018-11-28 | Canon Inc | Imaging apparatus and control method therefor |
| JP6611576B2 (ja) * | 2015-11-30 | 2019-11-27 | Canon Inc | Image processing apparatus and image processing method |
| TWI585394B (zh) * | 2015-12-09 | 2017-06-01 | Utechzone Co., Ltd. | Dynamic automatic focus tracking system |
| KR102382865B1 (ko) * | 2017-06-28 | 2022-04-05 | Samsung Electronics Co., Ltd. | Camera module and electronic device including the camera module |
| KR102382871B1 (ko) * | 2017-07-18 | 2022-04-05 | Samsung Electronics Co., Ltd. | Electronic device for controlling the focus of a lens and method for controlling the same |
| KR102593303B1 (ko) * | 2018-01-16 | 2023-10-25 | Samsung Electronics Co., Ltd. | Electronic device and method for controlling camera autofocus of the electronic device |
| KR102668212B1 (ko) | 2018-09-28 | 2024-05-23 | Samsung Electronics Co., Ltd. | Autofocus method and electronic device performing the same |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH01167610A (ja) * | 1987-11-27 | 1989-07-03 | Univ New York State | Method and apparatus for determining the distances between surface patches of a three-dimensional spatial scene and a camera system |
| JPH03136580A (ja) * | 1989-06-29 | 1991-06-11 | Univ New York State | Method and electronic camera apparatus for determining the distance of objects, automatically focusing, and obtaining focused images |
| JP2000199845A (ja) * | 1999-01-05 | 2000-07-18 | Ricoh Co Ltd | Automatic focusing device and automatic focusing method |
| JP2006003803A (ja) * | 2004-06-21 | 2006-01-05 | Olympus Corp | Focusing information acquisition device and focusing information acquisition method |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP3175175B2 (ja) * | 1991-03-01 | 2001-06-11 | Minolta Co., Ltd. | Focus detection device |
| JPH05181057A (ja) * | 1992-01-06 | 1993-07-23 | Olympus Optical Co Ltd | Automatic focusing device |
| JP2919706B2 (ja) * | 1993-06-17 | 1999-07-19 | Sanyo Electric Co., Ltd. | Autofocus camera |
| JP4265233B2 (ja) * | 2003-02-13 | 2009-05-20 | Nikon Corp | Camera |
| JP2005094432A (ja) * | 2003-09-18 | 2005-04-07 | Matsushita Electric Ind Co Ltd | Image server |
- 2005-11-15: JP application JP2005330460A filed; published as JP2007139893A (status: pending)
- 2006-11-08: PCT application PCT/JP2006/322265 filed; published as WO2007058100A1 (status: ceased)
- 2008-05-14: US application US12/120,513 filed; published as US20080297648A1 (status: abandoned)
Also Published As
| Publication number | Publication date |
|---|---|
| US20080297648A1 (en) | 2008-12-04 |
| JP2007139893A (ja) | 2007-06-07 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2007058100A1 (ja) | | Focus detection device |
| JP6592335B2 (ja) | | Image blur correction apparatus and method |
| JP6600232B2 (ja) | | Image blur correction apparatus and method |
| WO2007086378A1 (ja) | | Focus detection device |
| JP6910765B2 (ja) | | Control device, image stabilization control method, and image stabilization control program |
| US8315512B2 (en) | | Method and apparatus for auto-focus control of digital camera |
| JP6154081B2 (ja) | | Imaging device, imaging device body, and lens barrel |
| US10599015B2 (en) | | Image-capturing apparatus, accessory apparatus and communication control method therefor |
| JP5393300B2 (ja) | | Imaging device |
| JP2011013645A5 (ja) | | |
| JP6808340B2 (ja) | | Lens control device and control method |
| JP6320105B2 (ja) | | Imaging apparatus and control method therefor |
| US20110273574A1 (en) | | Image pickup apparatus |
| JP6838894B2 (ja) | | Focus adjustment device, control method therefor, and program |
| JP7271353B2 (ja) | | Imaging device and wavelength acquisition method |
| JP6154080B2 (ja) | | Imaging device, imaging device body, and lens barrel |
| US20070140677A1 (en) | | Automatic focusing methods and image capture devices utilizing the same |
| JP2007139892A (ja) | | Focus detection device |
| JP4687291B2 (ja) | | Focus adjustment device and imaging device |
| WO2007058099A1 (ja) | | Imaging device |
| JP7087052B2 (ja) | | Lens control device and control method |
| JP6019625B2 (ja) | | Focus adjustment device |
| JP6778014B2 (ja) | | Imaging apparatus, control method therefor, program, and storage medium |
| JP5355252B2 (ja) | | Imaging apparatus and control method therefor |
| JP2005338514A (ja) | | Lens control device and imaging equipment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 06823169; Country of ref document: EP; Kind code of ref document: A1 |