JP6245885B2 - Imaging apparatus and control method thereof - Google Patents

Info

Publication number
JP6245885B2
Authority
JP
Japan
Prior art keywords
distance
image
map
range
measurement range
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2013161267A
Other languages
Japanese (ja)
Other versions
JP2015032144A (en)
JP2015032144A5 (en)
Inventor
隆弘 高橋
知 小松
Original Assignee
キヤノン株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by キヤノン株式会社
Priority to JP2013161267A
Publication of JP2015032144A
Publication of JP2015032144A5
Application granted
Publication of JP6245885B2
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/271: Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • H04N13/204: Image signal generators using stereoscopic image cameras
    • H04N13/207: Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H04N13/236: Image signal generators using stereoscopic image cameras using a single 2D image sensor using varifocal lenses or mirrors

Description

  The present invention relates to an imaging device, and more particularly to an imaging device capable of acquiring a distance map.

  Conventional methods for acquiring a distance map simultaneously with an ornamental image include the stereo method (for example, Patent Document 1), Depth from Defocus (DFD, Patent Document 2), and Depth from Focus (DFF, Patent Document 3). Since these are passive methods that require no special illumination, they are distance map acquisition methods suited to general imaging devices. DFD and DFF exploit the fact that the blur of a captured image varies with the distance of the scene from the imaging device: they calculate a distance map by analyzing the differences in blur among a group of images (two or more) shot under a plurality of shooting conditions. The stereo method, by contrast, calculates a distance map of the target scene by the principle of triangulation, from the parallax obtained by establishing pixel correspondences between captured images of two or more viewpoints. The stereo method takes various forms, such as arranging a plurality of imaging devices, or dividing the pupil of the optical system to acquire images of two viewpoints with a single imaging device.

  A distance map calculated in this way can be applied to various kinds of image processing, such as a cutout function that extracts the neighborhood of the main subject, or a background blur function that artificially narrows the depth of field by blurring areas other than the main subject.

Patent Document 1: Japanese Patent Laid-Open No. 04-138777
Patent Document 2: JP-A-01-167610
Patent Document 3: International Publication No. 2002/0882805

  As described above, in order to perform image processing using a distance map as the photographer intends, a distance map adequate for the desired image processing effect must be acquired. Specifically, for a scene with continuous depth, if blur that changes continuously with depth is to be expressed using a distance map, the distance map must cover the depth range over which the blur is to change. However, as will be described later, in distance map acquisition methods such as DFD, DFF, and the stereo method, the depth range over which a distance map can be acquired (hereinafter, the distance measurement range) depends on the parameters used at acquisition time, so the parameters must be set appropriately to obtain a desired distance measurement range. For example, to obtain a desired effect in image processing using a distance map, parameters must be set so as to obtain a distance map whose distance measurement range suits the shooting scene. Conventionally, however, the distance map could not be confirmed at the time of shooting, so whether the desired image processing result would be obtained could not be verified until the post-shooting processing result was inspected.

  In view of the above problems, an object of the present invention is to provide an imaging apparatus that allows the distance measurement range of the acquired distance map to be confirmed at the time of shooting.

In order to solve the above problems, an imaging apparatus according to the present invention comprises: image acquisition means for acquiring an image; distance map acquisition means for acquiring a first distance map by DFD; distance measurement range map generation means for generating, based on the first distance map, a distance measurement range map indicating the distance measurement range in the image; combining means for generating a composite image in which the image and the distance measurement range map are combined; display means for displaying the composite image; change instruction means for receiving from a user an instruction to change the distance measurement range of the distance map acquisition means; and parameter changing means for changing, based on the distance measurement range changed by the change instruction means, a distance map acquisition parameter used by the distance map acquisition means. The parameter changing means changes the distance map acquisition parameter to a first parameter, changing the F value if the degree of change is greater than a predetermined threshold and changing the focus bracket amount if the degree of change is equal to or less than the predetermined threshold.

A control method of an imaging apparatus according to the present invention comprises: an image acquisition step of acquiring an image; a distance map acquisition step of acquiring a first distance map by DFD; a distance measurement range map generation step of generating, based on the first distance map, a distance measurement range map indicating the distance measurement range in the image; a combining step of generating a composite image in which the image and the distance measurement range map are combined; a display step of displaying the composite image; a change instruction step of receiving from a user an instruction to change the distance measurement range of the distance map acquisition step; and a parameter changing step of changing, based on the distance measurement range changed in the change instruction step, a distance map acquisition parameter used in the distance map acquisition step. The parameter changing step changes the distance map acquisition parameter to a first parameter, changing the F value if the degree of change is greater than a predetermined threshold and changing the focus bracket amount if the degree of change is equal to or less than the predetermined threshold.

  According to the present invention, the distance measurement range of the distance map can be confirmed at the time of shooting, so that the image processing effect desired by the photographer can be obtained.

FIG. 1 is a block diagram illustrating the configuration of the imaging apparatus according to Embodiment 1.
FIG. 2 is a flowchart illustrating the operation of the imaging apparatus according to Embodiment 1.
FIG. 3 is a diagram schematically illustrating a display example on the display unit of the imaging apparatus according to Embodiment 1.
FIG. 4 is a block diagram illustrating the configuration of the imaging apparatus according to Embodiment 2.
FIG. 5 is a plot of cross sections of the PSF.
FIG. 6 is a diagram showing the defocus characteristic of the PSF peak value.
FIG. 7 is a diagram showing the characteristic of the PSF peak ratio.
FIG. 8 is a diagram showing the change of the PSF peak ratio with the change of the FB amount.
FIG. 9 is a diagram showing the FB-amount dependence of the distance measurement range and of the range of the PSF peak ratio.
FIG. 10 is a diagram showing the principle of the stereo method.

  The present invention is implemented as a function of an imaging apparatus such as a digital camera. First, the distance map acquisition principle and distance measurement range of the DFD method, DFF method, and stereo method, which are passive distance map acquisition methods, will be described. Thereafter, specific embodiments of the present invention will be described with reference to the drawings. However, the scope of the present invention is not limited to the examples illustrated in the description of the embodiments.

<DFD distance measurement principle and measurable distance range>
(Principles of distance measurement)
In the DFD method, the same subject is photographed a plurality of times under different shooting conditions, and a distance map is acquired from the differences in blur among the captured images. Here, the case where the focus position is changed between two shots is described. Note that in this specification, unless otherwise specified, the focus bracket amount (FB), i.e., the amount of change in the focus position, is the amount of movement on the sensor side (hereinafter, the image plane): the distance between the image plane at the first focus position and the image plane at the second focus position.

(PSF peak value defocus characteristics)
In the DFD method of this embodiment, the distance is estimated using the defocus characteristic of the peak value of the PSF (Point Spread Function) of the optical system. The description below uses the PSF of an ideal, aberration-free imaging optical system, but the same treatment applies to an actual imaging optical system.

The ideal aberration-free PSF at the focus position has a shape like a Gaussian function, with its peak at the PSF center coordinate and gradually decreasing away from it. FIG. 5 shows this cross-sectional shape of the PSF as a solid line. As defocus increases, the value at the PSF center coordinate decreases and the shape collapses. The dotted lines in FIG. 5 show the PSF cross sections when defocused by 20 μm, 40 μm, 60 μm, and 80 μm, respectively. Here, the value at the center coordinate of the PSF is defined as the "PSF peak value".

  FIG. 6 shows the defocus characteristic of the PSF peak value in an ideal imaging optical system without aberration. The horizontal axis represents the defocus amount, and the vertical axis represents the PSF peak value. The imaging conditions are: focal length of the imaging optical system 18.0 mm, F value 4.00, object distance 3000 mm, focus bracket amount -0.02 mm, wavelength 587.56 nm. As shown in FIG. 6, the PSF peak value is maximal at the focus position, decreases with defocus, and approaches 0 while oscillating like a sinc function.

(PSF peak ratio)
Next, a method for calculating the distance from the PSF peak value is described. As shown in FIG. 6, the PSF peak value depends on the defocus amount. Therefore, if the PSF peak value could be calculated from a captured image, the defocus amount would be known and could be converted into the object distance to the subject. In practice, however, the PSF peak value of the imaging optical system is difficult to obtain accurately from a single image because of the influence of the spatial frequency content of the subject. This influence is therefore removed using a plurality of images shot under different shooting conditions; taking a ratio is an effective way to cancel it. Hereinafter, the ratio of the PSF peak values obtained from two images is defined as the "PSF peak ratio". In the distance calculation of this embodiment, the distance is calculated by matching the theoretically obtained defocus characteristic of the PSF peak ratio of the imaging optical system against the PSF peak ratio values obtained from two actually captured images.

  FIG. 7 shows the theoretically obtained defocus characteristics of the PSF peak values of two images and the defocus characteristic of their PSF peak ratio. The shooting conditions are the same as in FIG. 6. The horizontal axis is the focus position on the image plane side. In FIG. 7, the two dotted curves are the defocus characteristics of the two PSF peak values at different focus positions, and the solid curve is the defocus characteristic of the PSF peak ratio. The peak ratio is normalized by using the larger peak value as the denominator. As a result, the PSF peak ratio has a maximum value of 1, peaks at the intermediate position between the two focus positions, and forms a symmetric curve whose value decreases with distance from the peak.

  If the PSF peak ratio of each point (pixel or pixel group) in the image is obtained from the two actually captured images and applied to the defocus characteristic shown by the solid line in FIG. 7, it is possible to calculate how far the object shown at each point in the image is from the reference focus position. In the case of FIG. 7, the reference focus position is the intermediate position between the focus positions of the two images. Furthermore, depending on which PSF peak value is used for normalization (i.e., which PSF peak value is larger), the front side (imaging device side) can be distinguished from the rear side relative to the reference focus position.

  In order to obtain the object-side distance Z0 from the PSF peak ratio, first, the defocus amount Zi from the focus position on the image plane is obtained from the value of the PSF peak ratio. Next, the image-plane-side distance s′ is obtained from the focal length f and the object distance s by Expression 2, and converted into the object-side distance Z0 by Expression 3 using the defocus amount Zi.
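The equation images are not reproduced in this text. A minimal reconstruction consistent with the symbols defined above (standard thin-lens sign conventions assumed; the exact numbered forms in the original may differ) is:

```latex
% Imaging relation ("Expression 2"): image-plane distance s' from
% focal length f and object distance s
\frac{1}{f} = \frac{1}{s} + \frac{1}{s'}
\quad\Longrightarrow\quad
s' = \frac{f s}{s - f}

% Conversion ("Expression 3"): object-side distance Z_0 for an
% image-plane position displaced by the defocus amount Z_i
\frac{1}{f} = \frac{1}{Z_0} + \frac{1}{s' + Z_i}
\quad\Longrightarrow\quad
Z_0 = \frac{f\,(s' + Z_i)}{s' + Z_i - f}
```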


(PSF peak ratio calculation method from image)
A method for calculating the PSF peak ratio from two actually captured images is described. In the two images, corresponding local regions I1 and I2 are represented by the convolution of the scene s with PSF1 and PSF2, respectively. Letting FI1 and FI2 be their Fourier transforms and S be the Fourier transform of the scene s, their ratio is expressed as Equation 3.

Here, the optical transfer function obtained by Fourier transforming the PSF is the OTF, and the ratio of the two OTFs is OTFr. As shown in Equation 3, the scene spectrum S cancels, so OTFr does not depend on the scene. To obtain the PSF peak ratio PSFr from this OTFr, the average value of OTFr may be taken, as shown in Equation 4: if the PSF peak is at the center of the images I1 and I2, Equation 4 holds for the PSF peak ratio PSFr, and expressing it discretely gives Equation 5. By applying the PSF peak ratio PSFr calculated from the images according to Equation 5 to the defocus characteristic of the PSF peak ratio shown in FIG. 7, the defocus amount of the object shown in the local regions I1 and I2, that is, its distance information, can be obtained.
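The images for Equations 3 to 5 are likewise missing. Read together with the surrounding text, a consistent reconstruction (with u the spatial frequency and N the number of frequency samples; the exact notation in the original may differ) is:

```latex
% Equation 3: the scene spectrum S cancels in the ratio
\mathrm{OTF_r}(u) = \frac{FI_1(u)}{FI_2(u)}
 = \frac{\mathrm{OTF}_1(u)\,S(u)}{\mathrm{OTF}_2(u)\,S(u)}
 = \frac{\mathrm{OTF}_1(u)}{\mathrm{OTF}_2(u)}

% Equations 4 and 5: the PSF peak ratio as the average of OTF_r,
% written continuously and then discretely
\mathrm{PSF_r} \approx \overline{\mathrm{OTF_r}}
\qquad
\mathrm{PSF_r} \approx \frac{1}{N} \sum_{u} \frac{FI_1(u)}{FI_2(u)}
```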

(Distance measurement range)
Next, the distance range that can be measured (hereinafter, the distance measurement range) in distance measurement using the PSF peak ratio is described with reference to FIG. 7. As shown by the solid line in FIG. 7, the defocus characteristic of the PSF peak ratio gradually decreases from the intermediate position between the two focus positions, reaches a minimum, increases again, and repeats this pattern. This is because the defocus characteristic of the PSF peak value oscillates, as shown in FIG. 6. Hereinafter, the largest peak in a defocus characteristic curve such as the PSF peak value or the PSF peak ratio is referred to as the "maximum peak" or "primary peak", and the minimum that appears first on the front side and on the rear side of the maximum peak is referred to as the "primary minimum".

As can be seen from the defocus characteristic of the PSF peak value in FIG. 6, the PSF peak values beyond the primary minimum are small and easily affected by noise, so their ratio fluctuates strongly and has low reliability. The distance measurement range is therefore taken as the range 701 between the position of the front-side primary minimum and the position of the rear-side primary minimum of the maximum peak in the defocus characteristic of the PSF peak ratio (solid line in FIG. 7). In practice, accuracy also decreases due to noise when the PSF peak ratio is close to zero, so it is desirable to set the measurable range slightly narrower than the interval between the front and rear primary minima. In the example of FIG. 7, about -75 μm to 55 μm is the measurable range; the negative direction in the figure is the front side.

  Here, the positions of the primary minima of the PSF peak ratio, which define the measurable range, depend on the positions of the primary minima of the defocus characteristic of the PSF peak value (FIG. 6). That is, as shown in FIG. 7, the position of the front-side primary minimum of the PSF peak ratio corresponds to the position of the front-side primary minimum of the PSF peak value of the image focused further to the rear, of the two images with different focus positions. Conversely, the position of the rear-side primary minimum of the PSF peak ratio corresponds to the position of the rear-side primary minimum of the PSF peak value of the image focused further to the front. In other words, the measurable range is determined by the defocus characteristic of the PSF peak value (the interval between the front and rear primary minima) and the focus bracket amount.

When the F value of the optical system is F and the wavelength of light is λ, the interval between the front-side and rear-side primary minima in the defocus characteristic of the PSF peak value of the optical system (assumed aberration-free) is about 15F²λ ("about" because, strictly speaking, the interval between the front and rear primary minima takes a value between 15F²λ and 16F²λ). Therefore, when the focus bracket amount is FB, the measurable range R is expressed by the following equation:

R = 15F²λ − FB  (6)

(Characteristics of focus bracket amount and PSF peak ratio)
Next, the relationship between the focus bracket amount and the change in the measurable range, and between the focus bracket amount and the change in the range of the PSF peak ratio, is described. FIGS. 8A to 8F show the PSF peak value defocus characteristics and the resulting PSF peak ratio as the focus bracket amount is changed. The focus bracket amount is the horizontal offset between the defocus characteristics (dotted lines) of the two PSF peak values; it increases progressively from FIG. 8A to FIG. 8F. The examples are set so that the value at the intersection of the two PSF peak value defocus characteristics (the intersection of the dotted lines) is 99.8%, 90%, 70%, 50%, 20%, and 5% of the maximum PSF peak value, respectively. It can be seen that the characteristic of the PSF peak ratio (solid line) changes as the focus bracket amount increases. Specifically, the measurable range (the range between the front-side and rear-side primary minimum positions of the maximum peak of the PSF peak ratio) narrows as the focus bracket amount increases. This behavior is apparent from Equation 6.

  The range of the PSF peak ratio (the difference between its maximum value and its primary minimum) widens rapidly as the focus bracket amount increases, and then gradually approaches 1. The wider the range of the PSF peak ratio, the higher the distance resolution and the higher the tolerance to disturbance factors such as noise, improving distance estimation accuracy. Furthermore, as the focus bracket amount increases, the defocus characteristic of the PSF peak ratio becomes steeper, which also improves distance resolution (estimation accuracy): if the slope of the PSF peak ratio is large, a change in its value is easy to detect even for a small distance difference.

FIGS. 9A and 9B show, respectively, the change in the measurable range and the change in the value range of the PSF peak ratio as functions of the focus bracket amount. In FIG. 9A, the horizontal axis is the focus bracket amount and the vertical axis is the measurable range on the image plane side. Similarly, in FIG. 9B, the horizontal axis is the focus bracket amount and the vertical axis is the value range of the PSF peak ratio. Since the PSF peak ratio is normalized, the maximum of this range is 1. A focus bracket amount of 0 is a singular point at which the distance cannot be measured. From FIGS. 9A and 9B it can be seen that as the focus bracket amount increases, the measurable range narrows but the distance resolution (estimation accuracy) improves.

(Guidelines for optimal shooting conditions)
As shown in Equation 6, the distance measurement range R is given as a function of the F value F, the wavelength λ, and the focus bracket amount FB. Furthermore, as seen in FIGS. 9A and 9B, changing the focus bracket amount changes not only the distance measurement range but also the distance resolution (estimation accuracy). Accordingly, when measurement conditions such as a desired distance range and accuracy are given, it is desirable to set the shooting conditions for each image, such as the focus positions and the F value of the optical system, appropriately so as to satisfy those measurement conditions.

  The basic idea is as follows. The smaller the distance range to be measured, the smaller the F value should be when capturing the two images. This is because decreasing the F value makes the depth of field shallower (the defocus characteristic of the PSF peak ratio becomes steeper), so an improvement in distance resolution (estimation accuracy) can be expected. The focus bracket amount may then be determined appropriately according to the F value. For an imaging apparatus whose F value cannot be changed, the focus bracket amount may instead be increased as the distance range to be measured narrows, since, as described above, distance resolution (estimation accuracy) improves as the focus bracket amount increases.

An example of a specific method for determining the F value and the focus bracket amount is as follows. First, the F value F and the focus bracket amount FB are related by the following expression, which treats the focus bracket amount as proportional to the depth of field; k is a coefficient that adjusts the magnitude of the focus bracket amount and λ is the wavelength:

FB = kF²λ  (7)

Substituting Expression 7 into Equation 6 gives the following Expression 8:

R = (15 − k)F²λ  (8)

For example, when a distance measurement range r is given, an F value that satisfies it can be determined using Expression 9 (assuming that the coefficient k and the wavelength λ are predetermined). Specifically, the F value is determined so that r ≤ R, that is,

F ≥ √( r / ((15 − k)λ) )  (9)

For example, the minimum F value satisfying Expression 9 may be selected from among the F values that the imaging optical system can set. Substituting that F value into Expression 7 then gives the focus bracket amount FB. The focus positions (image-plane-side positions) for capturing the two images may be set to rc − FB/2 and rc + FB/2, respectively, with the center position rc of the distance range r as a reference. By the above method, the F value, the focus bracket amount, and the focus positions of the two images can be determined as shooting conditions that cover the desired measurable range.
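As a concrete illustration of this procedure, the sketch below picks the smallest available F number satisfying the reconstructed Expression 9 and derives the focus bracket amount and focus positions from it. The list of available F numbers, the function name, and the default k are illustrative assumptions, not values from the patent.

```python
import math

AVAILABLE_F_NUMBERS = [1.4, 2.0, 2.8, 4.0, 5.6, 8.0, 11.0]  # hypothetical lens stops

def dfd_shooting_parameters(r, rc, k=3.0, lam=587.56e-9):
    """Pick an F number, focus bracket amount FB, and the two focus
    positions for a desired image-plane ranging range r centred at rc.
    r, rc, lam are in metres; k is the coefficient (2 <= k < 4 preferred)."""
    f_min = math.sqrt(r / ((15.0 - k) * lam))        # Expression 9
    candidates = [F for F in AVAILABLE_F_NUMBERS if F >= f_min]
    if not candidates:
        raise ValueError("desired ranging range not achievable with this lens")
    F = min(candidates)                               # smallest F with r <= R
    fb = k * F * F * lam                              # Expression 7
    focus_positions = (rc - fb / 2.0, rc + fb / 2.0)  # two image-plane positions
    return F, fb, focus_positions

# Example: a 0.13 mm image-plane range centred on the main-subject image plane.
print(dfd_shooting_parameters(r=130e-6, rc=0.0))
```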

(Coefficient k)
Next, preferable values of the coefficient k are described. The inventors found the following preferable values by simulation and experiment. The coefficient k should be a value in the range 0 < k < 15: if k is greater than 15, the subject is too blurred and measurement accuracy decreases, and k = 0 is excluded because no blur difference can be obtained from two images at the same focus position. When the purpose of distance measurement is two-layer separation of distances, i.e., determining whether or not the subject falls within a specific distance range, the coefficient k is preferably set in the range 8 < k < 15. The larger the coefficient k, the larger the focus bracket amount and the narrower the measurable range (see FIGS. 8E and 8F). A narrow measurable range means that the PSF peak ratio changes sharply depending on whether a subject exists near a specific distance; therefore, for two-layer separation, it is better to increase the focus bracket amount to some extent.

  On the other hand, when the purpose of distance measurement is multi-layer separation of distances, i.e., determining in which of three or more distance ranges the subject falls, the coefficient k is preferably set in the range 1 < k ≤ 8. As shown in FIGS. 8B to 8D, the smaller the coefficient k, the smaller the focus bracket amount and the wider the measurable range, which suits multi-layer separation. The range 0 < k ≤ 1 is excluded because, although the measurable range widens, the distance resolution falls too low for multi-layer separation (see FIG. 8A). Furthermore, the range 2 ≤ k < 4 is particularly preferable: in this range the balance between the width of the measurable range and the distance resolution is especially good, making it possible to measure a wide distance range with high accuracy (see FIGS. 8B and 8C). As described above, the value of the coefficient k may be set appropriately within the range 0 to 15 according to the purpose of distance measurement.

  As described above, in the DFD method, the distance measurement range and the shooting conditions of the two images are linked: to acquire a correct distance map over a desired distance measurement range, the shooting conditions must be set according to the scene.

<DFF method distance measurement principle and measurable distance range>
In the case of the DFF method, the focus position at which each subject region is most in focus is determined from a plurality of images captured while changing the focus position, and the distance to the object side is calculated from that focus position using the imaging formula of Expression 2. The distance measurement range is determined by the range over which the focus position is moved. However, there is clearly a trade-off: widening the movement range of the focus position takes more time, which forces the focus position steps to be coarser.

<Stereo method>
Next, the distance measurement principle of the stereo method and its measurable distance range are described with reference to FIG. 10. FIG. 10A is a schematic top view of a situation in which a point P in three-dimensional space is photographed using two cameras having the same focal length. The optical axes of the two imaging devices are assumed to be adjusted so as to be parallel and at the same height. FIG. 10B shows the images acquired by the left and right imaging devices.

Assume that the point P appears at Pl(u, v) in the left camera image and at Pr(u′, v′) in the right camera image. Let D be the distance from the cameras to the point P, f the focal length of the imaging devices, b the distance between their optical axes (hereinafter, the baseline length), and d the difference (hereinafter, the parallax) between the positions Pl and Pr of the point P in the images acquired by the two cameras. Then D can be calculated by Expression 10, the usual triangulation relation:

D = bf / d  (10)

Note that because the imaging devices are calibrated so that their optical axes are parallel and at the same height, only the horizontal component of the parallax d needs to be considered, as in Expression 10. When the optical axes and heights of the imaging devices are not so aligned, calibration must be performed in advance. As Expression 10 shows, f is uniquely determined once the optical system is decided, but the baseline length b and the parallax d must be set appropriately according to the resolution required at the distance of the measurement target. The baseline length can be handled by changing the camera arrangement.

  In general, the parallax must be obtained by computing the correspondence between the left and right images for every pixel, which is done by image processing such as block matching, in which a local region of the reference image is searched for in the other image. The minimum resolution of the parallax depends on the search accuracy of the block matching and is at pixel or sub-pixel intervals. The value range of the parallax depends on the search range of the block matching, which is determined by the parallax corresponding to the distance of the nearest object to be measured in the shooting scene and by the allowable calculation time. That is, if the search range is narrow, objects closer than a certain distance cannot be detected. Thus, in the stereo method there is a trade-off between the search range and the measurable range.
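A minimal block-matching sketch of the parallax search just described, assuming rectified grayscale images; the SAD cost, block size, search range, and names are illustrative choices, not taken from the patent.

```python
import numpy as np

def disparity_map(left, right, block=7, d0=64):
    """Integer-pixel disparity by SAD block matching; left is the reference."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            # search horizontally within the range d0 (positive disparity)
            for d in range(0, min(d0, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(ref - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Distance from disparity then follows Expression 10: D = b * f / d,
# with baseline b, focal length f, and disparity d in consistent units.
```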

The distance map acquisition principles and distance measurement ranges of the DFD, DFF, and stereo methods, the passive distance map acquisition methods, have been described above. In all of these methods the distance measurement range is tied to parameters set in advance; it is therefore desirable to be able to adjust those parameters according to the subject and the shooting conditions.
<Example 1>

  Hereinafter, preferred embodiments of the present invention will be described with reference to the drawings. However, the scope of the invention is not limited to the examples illustrated in the description of the embodiments.

(Configuration)
FIG. 1 schematically shows the configuration of the imaging apparatus 1 according to the present embodiment. The imaging apparatus 1 acquires a distance map by the DFD method. The photographing lens 100 guides subject light to the image sensor 102. The exposure control member 101 includes an aperture and a shutter; subject light entering through the photographing lens 100 reaches the image sensor 102 via the exposure control member 101. The image sensor 102 converts the subject light into an electrical signal and outputs it, and is typically a CCD or CMOS sensor. The image forming circuit 103 digitizes the analog signal output from the image sensor 102 and forms an image; it includes an analog/digital conversion circuit, an auto gain control circuit, an auto white balance circuit, a pixel interpolation processing circuit, a color conversion circuit, and so on (none shown). The image forming circuit 103 corresponds to the image acquisition means of the present invention. The exposure control unit 104 controls the exposure control member 101, and the focus control unit 105 controls the focusing of the photographing lens 100; both are controlled using, for example, the TTL system (Through The Lens: a system that controls exposure and focus by measuring the light that actually passes through the photographing lens). The distance map calculation circuit 106 calculates a distance map from two images shot under the shooting conditions computed by the distance map parameter calculation circuit 107, with the focus positions set via the focus control unit 105. The distance map parameter calculation circuit 107 calculates shooting conditions suitable for acquiring a distance map from the focus position controlled by the focus control unit 105, the desired distance measurement range, and the like. The system control circuit 108 governs the operation of the entire imaging apparatus 1, controlling both the optical system for shooting and the digital processing of the captured images. The distance map calculation circuit 106 and the distance map parameter calculation circuit 107 correspond, respectively, to the distance map acquisition means and the parameter changing means of the present invention.

The memory 109 is a flash ROM or the like that stores the operation control data and processing programs used by the system control circuit 108. The nonvolatile memory 110 is an electrically erasable and recordable nonvolatile memory, such as an EEPROM, that stores information such as various adjustment values. The frame memory 110 stores several frames of the images generated by the image forming circuit 103, and the memory control circuit 111 controls the image signals input to and output from the frame memory 110. The distance measurement range map generation circuit 112 generates a distance measurement range map representing the distance measurement range, based on the distance map generated by the distance map calculation circuit 106. The image composition circuit 113 combines the ornamental image generated by the image forming circuit 103 with the distance measurement range map generated by the distance measurement range map generation circuit 112 to produce the display image shown on a display device (not shown). The image output unit 114 displays the image generated by the image forming circuit 103 or the image composition circuit 113 on an image output device such as a display (not shown). The input unit 115 accepts input operations from the user and includes buttons, switches, a touch panel, and the like; in this embodiment the user can instruct adjustment of the distance measurement range through it while checking the display image combined with the distance measurement range map. The image processing unit 116 performs image processing based on the distance map (for example, cutout processing and background blur processing).

(Process flow)
Next, the flow of processing from the start to the end of shooting according to this embodiment is described with reference to the flowchart of FIG. 2. First, in step S201, the photographer aims at the object to be photographed, performs zooming, determines the composition, and sets predetermined shooting conditions such as the shutter speed and the F number. The imaging apparatus 1 may determine some of the shooting conditions automatically. Here it is assumed that the user has set a mode in which a distance map is acquired and the desired image processing is performed.

  Next, in step S202, it is determined whether or not the first switch of the photographing switch has been pressed. If it is not pressed, nothing is done. If it is pressed, the process moves to step S203.

  In step S203, focus adjustment is performed with the composition and shooting conditions determined in step S201. This focus adjustment method can be realized by various methods such as a contrast method and a phase difference method, and is not particularly limited.

  In step S204, the distance map parameter calculation circuit 107 sets the shooting conditions for acquiring the distance map and the parameters for generating it (the distance map acquisition parameters). First, the distance s to the main subject obtained by the focus control in step S203 is acquired as a shooting condition for the distance map. Next, an initial value r0 of the distance measurement range is set. The method of setting the initial value is not particularly limited: it may be set from the main subject distance s and the focal length, or determined from the image magnification implied by the main subject distance. For example, if s = 2000 mm and the main subject is the upper body of a person, and the subsequent image processing is to keep the main subject sharp while blurring the continuously deep background according to depth, then r0 is about 5 m. Such initial values of the distance measurement range may be held in a table, derived from a predetermined relational expression, or inferred from the photographer's setting history; the method is not limited. The distance map parameter calculation circuit 107 determines the focus positions of the two images for acquiring the distance map according to the determined distance measurement range. Here, the focus position of one image is the main subject position, and the focus position of the other image is shifted from the main subject position by a focus bracket amount that satisfies the distance measurement range.

Here, the parameters for the distance map generation performed in step S205 are also set. The distance map may be generated at a resolution comparable to the number of pixels displayable on the display unit (lower than the resolution of the ornamental image). Since the display unit typically has about VGA resolution (640 × 480 pixels), computing the distance map only at positions corresponding to the display pixel count reduces the calculation cost and increases speed, which makes it possible to confirm the distance measurement range of the distance map before the actual shooting.

  Next, in step S205, pre-photographing is performed to obtain a display ornamental image (hereinafter, the preview image) and a display distance map. The system control circuit 108 captures two images according to the shooting parameters set in step S204. The image forming circuit 103 applies predetermined signal processing to the image of the two in which the main subject is in focus, to generate the preview image; its image quality (resolution, etc.) may be lower than that of the actual shooting. Meanwhile, the distance map calculation circuit 106 generates the display distance map from the two images and the parameters set in step S204. The distance calculation computes the distance map from the PSF peak ratio of the two images shot under different conditions, as in Expressions 1 to 5; the size of the local region is arbitrary. Since step S204 specified that the distance map be calculated discretely, regions that were not calculated are either represented by the calculated regions or filled in by interpolation to generate the distance map; the method is not limited.
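A minimal sketch of this display-resolution distance-map computation, under the reconstructed reading of Expressions 3 to 5 above (PSFr approximated by the mean of FI1/FI2 per local region); the window size, the regularization epsilon, and the names are illustrative choices.

```python
import numpy as np

def psf_peak_ratio(region1, region2, eps=1e-6):
    """PSFr of two corresponding local regions, per the reading above."""
    fi1 = np.fft.fft2(region1)            # FI1: Fourier transform of I1
    fi2 = np.fft.fft2(region2)            # FI2: Fourier transform of I2
    otfr = fi1 / (fi2 + eps)              # scene spectrum S cancels (Eq. 3)
    return float(np.real(otfr).mean())    # discrete average (Eq. 5)

def display_distance_map(img1, img2, block=16):
    """Coarse (display-resolution) map: one PSFr value per block."""
    h, w = img1.shape
    ratios = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            sl = np.s_[by * block:(by + 1) * block,
                       bx * block:(bx + 1) * block]
            ratios[by, bx] = psf_peak_ratio(img1[sl], img2[sl])
    # Each PSFr value would then be looked up against the theoretical
    # defocus characteristic (FIG. 7) to obtain a defocus amount.
    return ratios
```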

  Also in step S205, the distance measurement range map generation circuit 112 generates a distance measurement range map based on the generated display distance map. The distance measurement range map represents the range (depth range) over which distance measurement is possible, indicating for each pixel whether its distance can be measured. For example, when the distance map is D and the minimum and maximum values representing the distance measurement range are smin and smax, the distance measurement range map R is R = 1 where smin ≤ D ≤ smax, and R = 0 otherwise.
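A one-line realization of this definition (names illustrative):

```python
import numpy as np

def ranging_range_map(distance_map, smin, smax):
    """1 where the distance lies inside [smin, smax], 0 elsewhere."""
    return ((distance_map >= smin) & (distance_map <= smax)).astype(np.uint8)
```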

Next, in step S206, the image composition circuit 113 generates a composite image combining the distance measurement range map generated in step S205 with the preview image (the ornamental image), and the image output unit 114 displays this composite image on the image display device (not shown). The composite image is displayed so that it can be seen which parts of the preview image fall within the distance measurement range. Specifically, if the preview image is I, the distance measurement range map is R, and the display composite image is I′, the composite image can be generated based on the following equation.

Here, α is a constant that satisfies 0 <α <1.
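The compositing equation image is not reproduced above; one plausible alpha-blend form consistent with the constraint 0 < α < 1 is sketched below, tinting pixels inside the measurable range. The tint colour and the names are assumptions, not from the patent.

```python
import numpy as np

def composite(preview, range_map, alpha=0.4, tint=(255, 128, 0)):
    """preview: HxWx3 uint8; range_map: HxW in {0, 1}, at preview size."""
    out = preview.astype(np.float32)
    mask = range_map.astype(np.float32)[..., None]     # HxWx1 blend weight
    out = (1.0 - alpha * mask) * out + alpha * mask * np.array(tint, np.float32)
    return out.astype(np.uint8)
```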

  This is illustrated in FIG. 3. FIG. 3A shows the display image once the photographer has determined the composition and shooting conditions, for a scene in which the main subject 401 is on the focal plane and the background wall 402 recedes continuously. FIG. 3B shows the state after the first switch is pressed in step S202 and, through steps S203 to S206, the display distance map is acquired and the distance measurement range 403 is displayed superimposed. Here R representing the distance measurement range is rendered as a binary value, but the composite image may be generated by any method that makes the distance measurement range 403 identifiable; for example, shading or pseudo-color may be applied, and the method is not particularly limited.

  In step S207, the photographer observes the composite image and determines whether or not the desired image processing can be realized within the displayed distance measurement range. This state is shown in FIG. 3C.

If the current distance measurement range is acceptable, the photographer operates the input unit 115 and selects OK in FIG. 3C. In response, the distance map parameter calculation circuit 107 performs parameter setting for acquiring the image processing distance map in step S210; specifically, the parameters are set so as to generate a distance map matching the number of pixels (resolution) to be finally recorded. Next, in step S211, the main shooting is performed to obtain the ornamental image and the distance map. The system control circuit 108 captures two images based on the shooting parameters set in step S210. The image forming circuit 103 applies predetermined signal processing to the image of the two in which the main subject is in focus to generate the ornamental image, which is stored after predetermined compression processing. The distance map calculation circuit 106 generates a distance map (the image processing distance map) based on the two images and the parameters set in step S210. Thereafter, in step S212, the image processing unit 116 takes the image processing distance map, performs image processing on the ornamental image based on it, applies predetermined compression processing and the like, and records the result.

  If the photographer wishes to change the distance measurement range in step S207, he or she operates the input unit 115 and selects No in FIG. 3C. The process then moves to step S208, where the photographer is prompted to change the distance measurement range and inputs a change instruction via the input unit 115. For example, as shown in FIG. 3D, the photographer uses the user interface (buttons or a touch panel, not shown) to expand the range from the current range 403 to the range 404. The user interface for changing the range is arbitrary: for example, the displayed distance measurement range may be enlarged, reduced, or translated, or the range may be specified directly.

  In step S209, the distance map parameter calculation circuit 107 calculates shooting parameters that achieve the distance measurement range changed in step S208. Specifically, it changes the shooting parameters according to the degree of change of the distance measurement range in step S208, for example based on the size ratio of the distance measurement range 403 before the change to the distance measurement range 404 after the change. When the degree of change between the pre-change range 403 and the post-change range 404 is larger than a predetermined threshold (when the displacement between the left and right sides of the rectangle of range 403 and those of range 404 exceeds the threshold), the F value is changed by a predetermined amount. Conversely, when the displacement between the pre-change range 403 and the post-change range 404 is equal to or less than the predetermined threshold, the focus bracket amount is changed by a predetermined amount. When the distance measurement range is expanded, the F value is increased or the focus bracket amount is decreased; conversely, when the distance measurement range is reduced, the F value is decreased or the focus bracket amount is increased.

  In other words, in step S209 the F value and the focus bracket amount are recalculated and set as the shooting conditions, preferably taking into account the degree of change in size between the distance measurement range 403 and the distance measurement range 404: when the degree of change is large, that is, when the displacement between the left and right sides of the rectangle of range 403 and those of range 404 is large, the F value is set to a larger value, and when the displacement between range 403 and range 404 is small, the focus bracket amount may be changed by a predetermined amount instead.
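A compact sketch of this update rule as claimed (a threshold on the degree of change selects whether the F value or the focus bracket amount is adjusted); the threshold, step factors, and names are illustrative, since the patent only calls them "predetermined".

```python
def update_parameters(params, old_range, new_range, threshold=0.5,
                      f_step=1.4, fb_step=0.8):
    """params: dict with 'f_number' and 'fb'; ranges are (near, far) tuples."""
    old_size = old_range[1] - old_range[0]
    new_size = new_range[1] - new_range[0]
    change = abs(new_size - old_size) / old_size     # degree of change
    expanding = new_size > old_size
    if change > threshold:
        # expanding the range -> larger F number, shrinking -> smaller
        params['f_number'] *= f_step if expanding else 1.0 / f_step
    else:
        # expanding the range -> smaller focus bracket, shrinking -> larger
        params['fb'] *= fb_step if expanding else 1.0 / fb_step
    return params
```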

  Thereafter, the process returns to step S202, and when the first switch is pressed, the processing from step S203 onward is performed in the same way. FIG. 3E shows the distance measurement range 405 when shooting is performed under the changed shooting conditions.

(Effects)
As described above, the imaging apparatus according to the present embodiment acquires a display distance map at the time of shooting, combines it with the preview, and shows it on the display unit, so that the photographer can easily confirm the current distance measurement range. Furthermore, when the photographer instructs a change of the distance measurement range, the shooting parameters for acquiring the distance map are changed automatically so that the specified range can be measured. The photographer can thus obtain a distance map of the desired distance measurement range and reliably obtain the desired image processing result.

(Modification)
In the above-described embodiment, the distance map acquisition method was the DFD method, but it can also be the DFF method. The DFF method is suitable when the imaging apparatus can be fixed and the subject is stationary. In that case the overall processing flow is unchanged; only the contents of the individual circuits and steps change.

  First, the distance map parameter calculation circuit 107 may hold, for each shooting condition, the focus bracket amount and the initial value of the range. The distance map calculation circuit 106 then computes an evaluation value, such as a contrast value, for the same local region across the plurality of captured images, and calculates the distance map by estimating the distance from the image with the highest evaluation value using Expressions 1 and 2. The distance map created in this way is combined with the ornamental image and displayed exactly as in the DFD case, and the distance measurement range is confirmed with the photographer, who issues a change instruction if necessary. In the DFF case the distance measurement range is the focus bracket range itself, so it may be changed so as to widen the range while keeping the number of images constant.
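A minimal DFF sketch along these lines: per local region, the focus position maximizing a contrast evaluation value is selected; the Laplacian-variance measure, block size, and names are illustrative choices, not prescribed by the patent.

```python
import numpy as np

def dff_index_map(stack, block=16):
    """stack: list of 2-D float images taken at successive focus positions.
    Returns, per block, the index of the sharpest focus position."""
    h, w = stack[0].shape
    n_by, n_bx = h // block, w // block
    best = np.zeros((n_by, n_bx), dtype=np.int32)
    score = np.full((n_by, n_bx), -np.inf)
    for i, img in enumerate(stack):
        # contrast evaluation value: variance of a discrete Laplacian
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        for by in range(n_by):
            for bx in range(n_bx):
                s = lap[by*block:(by+1)*block, bx*block:(bx+1)*block].var()
                if s > score[by, bx]:
                    score[by, bx], best[by, bx] = s, i
    return best  # each index maps to a distance via the imaging formula
```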

  As described above, even with the DFF method the distance measurement range of the distance map can be confirmed and changed before shooting is completed, so that, as with the DFD method, the image processing result desired by the photographer can be obtained reliably.

<Example 2>
Next, as a second embodiment of the present invention, a case where a distance map is acquired by a binocular stereo method and desired image processing is performed will be described with reference to the drawings.

(Configuration)
FIG. 4 shows the configuration of the imaging apparatus according to the present embodiment. Components common to the imaging apparatus 1 shown in FIG. 1 are denoted by the same reference numerals as in FIG. 1, and only the differences are described.

  Since the imaging apparatus 4 according to the present embodiment is a twin-lens stereo system, a second set consisting of a photographing lens 400, an exposure control member 401, and an image sensor 402 is added. The exposure control unit 104 and the focus control unit 105 control both sets of exposure control members and photographing lenses. The image forming circuit 103 forms images from the outputs of the two image sensors 102 and 402. The distance map calculation circuit 403 and the distance map parameter calculation circuit 404 perform, respectively, the distance map calculation and the parameter setting corresponding to the stereo method. The optical axes of the two lenses are assumed to be adjusted. Here the baseline length is fixed, but a mechanism for changing it may be added; there is no particular limitation.

(Process flow)
Next, the flow of processing in the imaging apparatus of the present embodiment is described. It is the same as the flow shown in FIG. 2 for the first embodiment, and only the processing contents of steps S204 to S205 differ, so only the differences are described.

In the present embodiment the distance map is calculated by the stereo method, and in step S204 the calculation parameters for the stereo method are set. Specifically, as shown in Expression 10, the subject distance is determined by the baseline length b and the parallax d. When the baseline length b is fixed, the distance measurement range is determined by the search range of the parallax d (the block matching search range). A search range of d0 means that, for a region at (x, y) in the left (reference) image, the most similar region is searched for within the range (x ± d0, y) of the right image. Here it is assumed that the optical axis directions and heights of the imaging devices are calibrated, so the search range extends only in the horizontal direction; if they are not calibrated, the search range may also have to extend in the vertical direction. The relationship between the parallax search range d0 and the distance measurement range can be derived from Expression 10. If the baseline length b is variable, the distance measurement range may also be adjusted by changing b.
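As an illustration of the relation between the nearest distance to be measured and the search range d0 (from Expression 10, D = bf/d), here is a hypothetical helper; all parameter values are illustrative.

```python
import math

def search_range_px(b, f, d_near, pixel_pitch):
    """b: baseline [m]; f: focal length [m]; d_near: nearest distance to
    measure [m]; pixel_pitch: sensor pixel size [m]. Returns d0 in pixels."""
    disparity_m = b * f / d_near          # disparity on the sensor, in metres
    return math.ceil(disparity_m / pixel_pitch)

# Example: b = 50 mm, f = 18 mm, nearest distance 1 m, 4 um pixels
# -> d0 = ceil(0.05 * 0.018 / 1.0 / 4e-6) = 225 pixels.
```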

  Steps S206 to S208 are the same processes as those in the first embodiment, and thus description thereof is omitted.

  In step S209, the distance measurement range is changed as instructed by the photographer. Specifically, when the photographer instructs an expansion of the distance measurement range toward the near side, this can be handled by increasing the block matching search range d0. When the distance measurement range is to be expanded in the depth direction, Expression 10 shows that the focal length f and the baseline length b may be increased.

(Effects)
As described above, even when the distance map acquisition method is the stereo method, the distance measurement range of the distance map can be confirmed before shooting is completed, and the image processing result desired by the photographer can be obtained reliably.

(Modification)
The twin-lens stereo system was described above as an example, but the present invention can also be applied to an imaging apparatus that divides the pupil of the optical system and acquires images of two viewpoints with a single optical system.

<Other examples>
The specific implementation on the apparatus may be realized by software (a program) or by hardware. For example, the present invention may be implemented by a computer of a system or apparatus (or a device such as a CPU or MPU) that realizes the functions of the above embodiments by reading and executing a program recorded in a storage device, or by a method whose steps are executed by such a computer. To this end, the program is provided to the computer, for example, from various types of recording media that can serve as the storage device (i.e., computer-readable recording media that hold data non-transitorily). Accordingly, the computer (including devices such as a CPU and MPU), the method, the program (including program code and a program product), and the computer-readable recording medium that holds the program non-transitorily are all included within the scope of the present invention.

DESCRIPTION OF SYMBOLS: 1 imaging apparatus; 103 image forming circuit; 106 distance map calculation circuit; 112 distance measurement range map generation circuit; 113 image composition circuit; 114 image output unit

Claims (4)

  1. An imaging apparatus comprising:
    image acquisition means for acquiring an image;
    distance map acquisition means for acquiring a first distance map by DFD;
    distance measurement range map generation means for generating, based on the first distance map, a distance measurement range map indicating a distance measurement range in the image;
    combining means for generating a composite image in which the image and the distance measurement range map are combined;
    display means for displaying the composite image;
    change instruction means for receiving from a user an instruction to change the distance measurement range of the distance map acquisition means; and
    parameter changing means for changing, based on the distance measurement range changed by the change instruction means, a distance map acquisition parameter used by the distance map acquisition means,
    wherein the parameter changing means changes the distance map acquisition parameter to a first parameter, changing the F value if the degree of change is greater than a predetermined threshold and changing the focus bracket amount if the degree of change is equal to or less than the predetermined threshold.
  2. The imaging apparatus according to claim 1, wherein the parameter changing means changes the distance map acquisition parameter to a second parameter based on the resolution of the image to be captured when the change instruction means does not receive an instruction to change the distance measurement range from the user,
    the distance map acquisition means acquires a second distance map using the second parameter, and
    the imaging apparatus further comprises image processing means for performing image processing on the image based on the second distance map.
  3. The imaging apparatus according to claim 2, wherein the resolution of the first distance map is lower than the resolution of the second distance map.
  4. A method of controlling an imaging apparatus, comprising:
    an image acquisition step of acquiring an image;
    a distance map acquisition step of acquiring a first distance map by DFD;
    a distance measurement range map generating step of generating a distance measurement range map indicating a distance measurement range in the image based on the first distance map;
    a combining step of generating a composite image by combining the image and the distance measurement range map;
    a display step of displaying the composite image;
    a change instruction step of receiving, from a user, an instruction to change the distance measurement range in the distance map acquisition step; and
    a parameter changing step of changing a distance map acquisition parameter used in the distance map acquisition step based on the distance measurement range changed in the change instruction step,
    wherein, in the parameter changing step, the distance map acquisition parameter is changed to a first parameter, the F value being changed if the degree of change is greater than a predetermined threshold and the focus bracket amount being changed if the degree of change is equal to or less than the predetermined threshold.
JP2013161267A 2013-08-02 2013-08-02 Imaging apparatus and control method thereof Active JP6245885B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2013161267A JP6245885B2 (en) 2013-08-02 2013-08-02 Imaging apparatus and control method thereof

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013161267A JP6245885B2 (en) 2013-08-02 2013-08-02 Imaging apparatus and control method thereof
US14/340,824 US9961329B2 (en) 2013-08-02 2014-07-25 Imaging apparatus and method of controlling same

Publications (3)

Publication Number Publication Date
JP2015032144A JP2015032144A (en) 2015-02-16
JP2015032144A5 JP2015032144A5 (en) 2016-09-08
JP6245885B2 true JP6245885B2 (en) 2017-12-13

Family

ID=52427242

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2013161267A Active JP6245885B2 (en) 2013-08-02 2013-08-02 Imaging apparatus and control method thereof

Country Status (2)

Country Link
US (1) US9961329B2 (en)
JP (1) JP6245885B2 (en)

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9671595B2 (en) 2013-01-05 2017-06-06 Light Labs Inc. Methods and apparatus for using multiple optical chains in paralell
JP2015036841A (en) 2013-08-12 2015-02-23 キヤノン株式会社 Image processing apparatus, distance measuring apparatus, imaging apparatus, and image processing method
US9467627B2 (en) 2013-10-26 2016-10-11 The Lightco Inc. Methods and apparatus for use with multiple optical chains
US9325906B2 (en) 2013-10-18 2016-04-26 The Lightco Inc. Methods and apparatus relating to a thin camera device
US9736365B2 (en) 2013-10-26 2017-08-15 Light Labs Inc. Zoom related methods and apparatus
US9374514B2 (en) 2013-10-18 2016-06-21 The Lightco Inc. Methods and apparatus relating to a camera including multiple optical chains
US9423588B2 (en) 2013-10-18 2016-08-23 The Lightco Inc. Methods and apparatus for supporting zoom operations
US9426365B2 (en) 2013-11-01 2016-08-23 The Lightco Inc. Image stabilization related methods and apparatus
US9554031B2 (en) 2013-12-31 2017-01-24 Light Labs Inc. Camera focusing related methods and apparatus
US9979878B2 (en) 2014-02-21 2018-05-22 Light Labs Inc. Intuitive camera user interface methods and apparatus
US9462170B2 (en) 2014-02-21 2016-10-04 The Lightco Inc. Lighting methods and apparatus
CN106575366A (en) 2014-07-04 2017-04-19 光实验室股份有限公司 Methods and apparatus relating to detection and/or indicating a dirty lens condition
US10110794B2 (en) 2014-07-09 2018-10-23 Light Labs Inc. Camera device including multiple optical chains and related methods
US9912865B2 (en) 2014-10-17 2018-03-06 Light Labs Inc. Methods and apparatus for supporting burst modes of camera operation
EP3235243A4 (en) 2014-12-17 2018-06-20 Light Labs Inc. Methods and apparatus for implementing and using camera devices
US9544503B2 (en) * 2014-12-30 2017-01-10 Light Labs Inc. Exposure control methods and apparatus
US10600169B2 (en) * 2015-03-26 2020-03-24 Sony Corporation Image processing system and image processing method
US9824427B2 (en) 2015-04-15 2017-11-21 Light Labs Inc. Methods and apparatus for generating a sharp image
US10091447B2 (en) 2015-04-17 2018-10-02 Light Labs Inc. Methods and apparatus for synchronizing readout of multiple image sensors
US10075651B2 (en) 2015-04-17 2018-09-11 Light Labs Inc. Methods and apparatus for capturing images using multiple camera modules in an efficient manner
US9967535B2 (en) * 2015-04-17 2018-05-08 Light Labs Inc. Methods and apparatus for reducing noise in images
US9857584B2 (en) 2015-04-17 2018-01-02 Light Labs Inc. Camera device methods, apparatus and components
WO2016172641A1 (en) 2015-04-22 2016-10-27 The Lightco Inc. Filter mounting methods and apparatus and related camera apparatus
US10129483B2 (en) 2015-06-23 2018-11-13 Light Labs Inc. Methods and apparatus for implementing zoom using one or more moveable camera modules
US10491806B2 (en) 2015-08-03 2019-11-26 Light Labs Inc. Camera device control related methods and apparatus
US9958585B2 (en) 2015-08-17 2018-05-01 Microsoft Technology Licensing, Llc Computer vision depth sensing at video rate using depth from defocus
US10365480B2 (en) 2015-08-27 2019-07-30 Light Labs Inc. Methods and apparatus for implementing and/or using camera devices with one or more light redirection devices
US10051182B2 (en) 2015-10-05 2018-08-14 Light Labs Inc. Methods and apparatus for compensating for motion and/or changing light conditions during image capture
US9749549B2 (en) 2015-10-06 2017-08-29 Light Labs Inc. Methods and apparatus for facilitating selective blurring of one or more image portions
US10225445B2 (en) 2015-12-18 2019-03-05 Light Labs Inc. Methods and apparatus for providing a camera lens or viewing point indicator
US10003738B2 (en) 2015-12-18 2018-06-19 Light Labs Inc. Methods and apparatus for detecting and/or indicating a blocked sensor or camera module
EP3413267A4 (en) * 2016-02-05 2019-02-13 Ricoh Company, Ltd. Object detection device, device control system, imaging device, objection detection method, and program
US10306218B2 (en) 2016-03-22 2019-05-28 Light Labs Inc. Camera calibration apparatus and methods
US9948832B2 (en) 2016-06-22 2018-04-17 Light Labs Inc. Methods and apparatus for synchronized image capture in a device including optical chains with different orientations

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4965840A (en) * 1987-11-27 1990-10-23 State University Of New York Method and apparatus for determining the distances between surface-patches of a three-dimensional spatial scene and a camera system
JP2993610B2 (en) 1990-09-29 1999-12-20 アイシン精機株式会社 Image processing method
US5838836A (en) * 1995-11-06 1998-11-17 Agfa Division-Bayer Corporation Method and apparatus for rough cropping images
US5793900A (en) * 1995-12-29 1998-08-11 Stanford University Generating categorical depth maps using passive defocus sensing
EP1381229B1 (en) 2001-03-30 2008-12-10 National Institute of Advanced Industrial Science and Technology Real-time omnifocus microscope camera
JP2003087545A (en) * 2001-09-07 2003-03-20 Canon Inc Image pickup device, image processor and method
JP3799019B2 (en) * 2002-01-16 2006-07-19 オリンパス株式会社 Stereo shooting device and shooting method of stereo shooting device
US7231087B2 (en) * 2002-10-30 2007-06-12 Metrica, Inc. Matching binary templates against range map derived silhouettes for object pose estimation
JP2007139892A (en) * 2005-11-15 2007-06-07 Olympus Corp Focusing detection device
WO2009011492A1 (en) * 2007-07-13 2009-01-22 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding stereoscopic image format including both information of base view image and information of additional view image
KR20120023431A (en) * 2010-09-03 2012-03-13 삼성전자주식회사 Method and apparatus for converting 2-dimensinal image to 3-dimensional image with adjusting depth of the 3-dimensional image
US8582820B2 (en) * 2010-09-24 2013-11-12 Apple Inc. Coded aperture camera with adaptive image processing
US9035939B2 (en) * 2010-10-04 2015-05-19 Qualcomm Incorporated 3D video control system to adjust 3D video rendering based on user preferences
JP2012124555A (en) 2010-12-06 2012-06-28 Canon Inc Imaging apparatus
JP5870533B2 (en) * 2011-08-09 2016-03-01 株式会社リコー Imaging apparatus and imaging method
CN104081414B (en) * 2011-09-28 2017-08-01 Fotonation开曼有限公司 System and method for coding and decoding light field image file
WO2013054527A1 (en) * 2011-10-12 2013-04-18 パナソニック株式会社 Image capture device, semiconductor integrated circuit, and image capture method
US9208570B2 (en) * 2012-03-28 2015-12-08 Sony Corporation System and method for performing depth estimation by utilizing an adaptive kernel
JP6214183B2 (en) 2012-05-11 2017-10-18 キヤノン株式会社 Distance measuring device, imaging device, distance measuring method, and program
JP5932476B2 (en) 2012-05-17 2016-06-08 キヤノン株式会社 Image processing apparatus, imaging apparatus, distance measuring method, and distance measuring program
US20140184586A1 (en) * 2013-01-02 2014-07-03 International Business Machines Corporation Depth of field visualization

Also Published As

Publication number Publication date
US20150035824A1 (en) 2015-02-05
JP2015032144A (en) 2015-02-16
US9961329B2 (en) 2018-05-01

Similar Documents

Publication Publication Date Title
US10129455B2 (en) Auto-focus method and apparatus and electronic device
US9438792B2 (en) Image-processing apparatus and image-processing method for generating a virtual angle of view
EP3033733B1 (en) Stereo yaw correction using autofocus feedback
US9076204B2 (en) Image capturing device, image capturing method, program, and integrated circuit
US10021290B2 (en) Image processing apparatus, image processing method, image processing program, and image pickup apparatus acquiring a focusing distance from a plurality of images
JP5887267B2 (en) 3D image interpolation apparatus, 3D imaging apparatus, and 3D image interpolation method
JP5657343B2 (en) Electronics
EP2214139B1 (en) Two-dimensional polynomial model for depth estimation based on two-picture matching
CN107948519B (en) Image processing method, device and equipment
JP5932476B2 (en) Image processing apparatus, imaging apparatus, distance measuring method, and distance measuring program
JP5178553B2 (en) Imaging device
US8698943B2 (en) Imaging apparatus and distance measurement method
US9749614B2 (en) Image capturing system obtaining scene depth information and focusing method thereof
JP5934929B2 (en) Image processing apparatus and image processing method
US9055218B2 (en) Image processing apparatus, image processing method, and program for combining the multi-viewpoint image data
US9092875B2 (en) Motion estimation apparatus, depth estimation apparatus, and motion estimation method
CN103200361B (en) Image signal processing apparatus
US9503633B2 (en) Image processing apparatus, image capturing apparatus, image processing method, and storage medium
US8718326B2 (en) System and method for extracting three-dimensional coordinates
JP5173665B2 (en) Image capturing apparatus, distance calculation method thereof, and focused image acquisition method
JP5745416B2 (en) Dither focus evaluation
JP6727791B2 (en) Tracking control device, tracking control method, and imaging device
KR101803712B1 (en) Image processing apparatus, control method, program, and recording medium
US8284261B2 (en) External ranging image pickup apparatus and ranging method
TWI474096B (en) Enhanced image processing with lens motion

Legal Events

Date Code Title Description
A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20160719

A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20160719

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20170622

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20170704

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20170829

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20171017

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20171114

R151 Written notification of patent or utility model registration

Ref document number: 6245885

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R151