JP5159520B2 - Imaging apparatus, automatic focus detection apparatus, and control method thereof

Info

Publication number: JP5159520B2
Application number: JP2008221791A
Other versions: JP2010054968A
Inventor: 稔 廣瀬
Original Assignee: キヤノン株式会社 (Canon Inc.)
Authority: JP (Japan)
Legal status: Expired - Fee Related
Prior art keywords: focus detection, pixels, row, pixel, imaging


Description

  The present invention relates to an imaging apparatus, an automatic focus detection apparatus, and a control method thereof that detect a focus using an auxiliary light source when a subject has low brightness or low contrast.
  Among imaging apparatuses having a solid-state image sensor such as a CCD or CMOS sensor, there are currently apparatuses having a so-called live view function. In an imaging apparatus having the live view function, the image signal continuously read from the image sensor is sequentially output to a display device such as a liquid crystal display mounted on the back of the camera, so that the subject image can be confirmed.
  In addition, general automatic focus detection/adjustment methods for an imaging apparatus that use the light beam passing through the photographing lens include a contrast detection method (also called the blur method) and a phase difference detection method (also called the shift method).
  The contrast detection method is often used in video movie equipment (camcorders) and electronic still cameras, and the image sensor itself serves as the focus detection sensor. This method focuses on the output signal of the image sensor, in particular on its high-frequency component information (contrast information), and takes the position of the photographing lens at which this evaluation value is largest as the in-focus position. However, as the name hill-climbing method suggests, the evaluation value must be calculated while moving the photographing lens in small increments, and the lens must be moved until the evaluation value is found to have reached its maximum, so the method is considered unsuitable for high-speed focus adjustment operation.
  On the other hand, the phase difference detection method is often used in single-lens reflex cameras using silver halide film, and is the technology that has contributed most to the practical application of autofocus (AF) single-lens reflex cameras. In this phase difference detection method, the light beam that has passed through the exit pupil of the photographing lens is divided into two, and the two divided light beams are received by a pair of focus detection sensors. By detecting the shift amount between the signals output according to the respective received light amounts, that is, the relative positional shift amount in the beam splitting direction, the shift amount in the focusing direction of the photographing lens can be obtained directly. Therefore, the amount and direction of the focus shift can be obtained with a single accumulation operation of the focus detection sensors, and a high-speed focus adjustment operation is possible.
  However, in order to divide the light beam that has passed through the exit pupil of the photographing lens into two and obtain a signal corresponding to each light beam, optical path dividing means such as a quick return mirror or a half mirror is generally provided in the imaging optical path, with a focus detection optical system and an AF sensor beyond it. For this reason, the apparatus becomes large and expensive. In addition, during live view the quick return mirror is retracted, so the focus adjustment operation cannot be performed.
  In order to solve such a problem, a technique for providing a phase difference detection function to an image sensor, making a dedicated AF sensor unnecessary, and realizing high-speed phase difference AF is disclosed.
  For example, in Patent Document 1, a pupil division function is provided by decentering the sensitivity region of the light receiving unit with respect to the optical axis of the on-chip microlens in some of the light receiving elements (pixels) of the imaging device. By using these pixels as focus detection pixels and disposing them at a predetermined interval between the imaging pixel groups, focus detection is performed by a phase difference detection method. In addition, since the location where the focus detection pixels are arranged corresponds to a defective portion of the imaging pixel, the image information at this location is created by interpolation from surrounding imaging pixel information.
  In Patent Document 2, the pupil division function is provided by dividing the light receiving portion of some pixels of the image sensor into two. By using these pixels as focus detection pixels and arranging them at a predetermined interval between the imaging pixel groups, focus detection is performed by a phase difference detection method. In this technique as well, since imaging pixels are missing where the focus detection pixels are arranged, the image information at these locations is created by interpolation from surrounding imaging pixel information.
  On the other hand, a camera equipped with so-called passive automatic focus detection projects a contrast pattern onto the subject as auxiliary light from an AF auxiliary light source built in the camera body when the subject has low brightness or low contrast. This camera generates contrast in the subject image received by the AF sensor, and performs focus detection from the subject image.
  Further, as accumulation control of the AF sensor, a monitor circuit for checking the integrated value of the electric signal accumulated in the AF sensor is provided. In general, so-called AGC accumulation control is used, in which accumulation of the AF sensor is terminated when the integrated value reaches a predetermined value, so that a fixed amount of output signal, amplified by an amplifier, can be obtained efficiently.
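The paragraph above describes AGC accumulation control only in words. The following is a minimal sketch of that idea, assuming a hypothetical read_monitor callback and illustrative threshold values; nothing here comes from the patent's actual circuitry.

```python
# Illustrative sketch of AGC accumulation control (assumed model, not the patent's circuit):
# accumulation is stopped as soon as the monitored integrated value reaches a
# target level, so the amplified output stays within a usable range.
def agc_accumulate(read_monitor, target_level, max_time_us, step_us=10):
    """read_monitor() is a hypothetical callback returning the current
    integrated sensor value; returns the accumulation time actually used."""
    elapsed = 0
    while elapsed < max_time_us:
        if read_monitor() >= target_level:   # AGC threshold reached
            break
        elapsed += step_us                   # keep accumulating
    return elapsed
```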
  However, in accumulation control using the auxiliary light source, when the subject distance is short or the reflectance of the subject is high, the integrated value of the AF sensor sometimes exceeds the range that can be processed with one light emission, and the focus detection processing cannot be performed. Conversely, when the subject distance is long or the reflectance of the subject is low, the number of times the auxiliary light must be emitted increases, it takes time to complete the integration of the AF sensor, and in some cases the autofocus operation cannot be performed.
As a technique for solving this problem, Patent Document 3 shortens the accumulation completion time by changing the light emission time or the light emission interval step by step for each emission when the auxiliary light source emits light.
JP 2000-156823 A
JP 2000-292686 A
JP 2000-338386 A
  However, in a conventional imaging apparatus in which some pixels of the image sensor are used as focus detection pixels, providing a dedicated monitor circuit for monitoring the accumulation amount of the AF sensor and performing AGC accumulation control complicates the circuit and increases cost, and is therefore undesirable.
  For this reason, when such a dedicated monitor circuit is not provided, it is necessary to read from the image sensor the charge accumulated while emitting auxiliary light with a predetermined emission amount and to evaluate whether the output signal is at the desired signal level. In the case of underexposure or overexposure, accumulation must be performed again with a changed emission amount or emission time. With this method, therefore, the number of readouts from the image sensor increases, and high-speed focus detection and AF operation cannot be performed. Moreover, wasteful energy is consumed by emitting the auxiliary light a plurality of times.
Therefore, an object of the present invention is to provide an imaging apparatus, an automatic focus detection apparatus, and a control method thereof that make it possible to obtain different exposure amounts among a plurality of distance measurement lines in one shot and, by performing focus detection with a distance measurement line from which a desired exposure amount is obtained, shorten the release time lag through the expanded dynamic range of the multiple ranging lines and adjust the exposure with a small number of flashes.
In order to achieve the above object, an imaging apparatus of the present invention comprises an image sensor having a plurality of pixels arranged in a row direction and a column direction, row selection means for selecting the plurality of pixels row by row at a predetermined period, accumulation means for resetting the charge accumulated in the pixels of the selected row and then accumulating charge in the pixels of that row, and readout means for reading out the charge accumulated in the pixels of the selected row; and light emitting means for emitting a flash toward a subject. The plurality of pixels comprise an imaging pixel group that accumulates charge for imaging, and a focus detection pixel group that is arranged discretely in the imaging region of the image sensor and accumulates charge according to the focus states in a plurality of directions of an imaging optical system that forms a subject image. With a pixel group including a part of the focus detection pixel group in the row direction defined as a focus detection line, the apparatus further comprises control means for causing the light emitting means to emit light at a timing at which different exposure amounts are applied to the plurality of focus detection lines whose charges are read out by the readout means, selection means for selecting one of the focus detection lines, and focus detection means for detecting the focus using the charge accumulated in the focus detection pixels included in the selected focus detection line. The control means controls the readout timing of the pixels in each row by the readout means so that the difference in exposure time between the plurality of focus detection lines becomes a predetermined time.
The automatic focus detection apparatus according to the present invention is mounted on an imaging apparatus that has an image sensor including a plurality of pixels arranged in a row direction and a column direction, row selection means for selecting the plurality of pixels row by row at a predetermined cycle, accumulation means for resetting the charge accumulated in the pixels of the selected row and then accumulating charge in the pixels of that row, and readout means for reading out the charge accumulated in the pixels of the selected row, and that has light emitting means for emitting flash light toward a subject. The plurality of pixels comprise an imaging pixel group that accumulates charge for imaging, and a focus detection pixel group that is arranged discretely in the imaging region of the image sensor and accumulates charge according to the focus states in a plurality of directions of an imaging optical system that forms a subject image. With a row-direction pixel group including a part of the focus detection pixel group defined as a focus detection line, the apparatus comprises control means for causing the light emitting means to emit light at a timing at which different exposure amounts are applied to the plurality of focus detection lines whose charges are read out by the readout means, selection means for selecting from the plurality of focus detection lines at least one focus detection line for which a predetermined exposure amount has been obtained, and focus detection means for detecting the focus using the charge accumulated in the focus detection pixels included in the selected focus detection line. The control means controls the readout timing of the pixels in each row by the readout means so that the difference in exposure time between the plurality of focus detection lines becomes a predetermined time.
The imaging apparatus control method according to the present invention is a control method for an imaging apparatus that has an image sensor including a plurality of pixels arranged in a row direction and a column direction, row selection means for selecting the plurality of pixels row by row at a predetermined cycle, accumulation means for resetting the charge accumulated in the pixels of the selected row and then accumulating charge in the pixels of that row, and readout means for reading out the charge accumulated in the pixels of the selected row, and that has light emitting means for emitting a flash toward a subject, the plurality of pixels comprising an imaging pixel group that accumulates charge for imaging and a focus detection pixel group that is arranged discretely in the imaging region of the image sensor and accumulates charge according to the focus states in a plurality of directions of an imaging optical system that forms a subject image. With a row-direction pixel group including a part of the focus detection pixel group defined as a focus detection line, the method comprises a control step of causing the light emitting means to emit light at a timing at which different exposure amounts are applied to the plurality of focus detection lines from which charges are read out, a selection step of selecting from the plurality of focus detection lines at least one focus detection line for which a predetermined exposure amount has been obtained, and a focus detection step of detecting the focus using the charge accumulated in the focus detection pixels included in the selected focus detection line. In the control step, the readout timing of the pixels in each row by the readout means is controlled so that the difference in exposure time between the plurality of focus detection lines becomes a predetermined time.
According to the first aspect of the present invention, it becomes possible to obtain different exposure amounts among a plurality of distance measurement lines in one shot, and by performing focus detection with a distance measurement line from which a desired exposure amount is obtained, the release time lag can be shortened through the expanded dynamic range of the multiple ranging lines and the exposure can be adjusted with a small number of flashes, so that an imaging apparatus with improved imaging performance and reduced energy consumption can be provided.
According to the imaging apparatus of the second aspect , the influence on the live view can be reduced by making the readout timing period different between the frame that emits light and the frame that does not emit light.
According to the imaging device of the third aspect , it is possible to adjust the exposure amount difference between the focus detection lines without changing the line cycle of the focus detection lines.
According to the imaging apparatus of the fourth aspect , it is possible to obtain output signals having different exposure amounts with respect to one readout from the imaging device. Therefore, the time for obtaining an appropriate signal output in photometry is shortened, and quick photometry and automatic exposure control are possible.
  Embodiments of an imaging apparatus, an automatic focus detection apparatus, and a control method thereof according to the present invention will be described with reference to the drawings. The imaging apparatus of this embodiment is applied to a digital still camera (electronic camera), and the automatic focus detection apparatus is mounted on the electronic camera.
[First Embodiment]
FIG. 1 is a diagram illustrating a configuration of an electronic camera according to the first embodiment. This electronic camera has a structure in which a camera body incorporating an image sensor and a photographing lens are integrated. The first lens group 101 is disposed at the tip of the photographing optical system (imaging optical system) and is held so as to be able to advance and retract in the optical axis direction. The aperture / shutter 102 adjusts the aperture diameter to adjust the amount of light at the time of shooting, and also has a function as an exposure time adjustment shutter at the time of still image shooting.
  The aperture/shutter 102 and the second lens group 103 advance and retract together in the optical axis direction, and perform the zooming operation (zoom function) in conjunction with the advance/retreat operation of the first lens group 101. The third lens group (focus lens) 105 performs focus adjustment by advancing and retreating in the optical axis direction.
  The optical low-pass filter 106 is an optical element for reducing false colors and moire in the captured image. The image sensor 107 includes a CMOS sensor (also simply referred to as CMOS) and its peripheral circuits. As this image sensor, a two-dimensional single-plate color sensor in which a Bayer array primary color mosaic filter is formed on-chip on each light receiving pixel of m pixels in the horizontal direction and n pixels in the vertical direction is used.
  The zoom actuator 111 rotates a cam cylinder (not shown) to drive the first lens group 101, the second lens group 103, and the third lens group 105 to advance and retract in the optical axis direction, and thereby performs the zoom operation.
  The aperture shutter actuator 112 controls the aperture diameter of the aperture / shutter 102 to adjust the amount of photographing light, and controls the exposure time during still image photographing. The focus actuator 114 drives the third lens group 105 to advance and retract in the optical axis direction, and performs focus adjustment.
  The subject illumination electronic flash 115 illuminates the subject during shooting. A flash illumination device using a xenon tube is suitable as the electronic flash 115 for subject illumination, but an illumination device including LEDs that emit light continuously may be used.
  The AF auxiliary light source 116 projects a mask image having a predetermined aperture pattern onto the object field via a light projection lens, thereby improving the focus detection capability for a dark subject or a low-contrast subject.
  The CPU 121 is an in-camera CPU that controls various controls in the camera body, and includes a calculation unit, a ROM, a RAM, an A / D converter, a D / A converter, a communication interface circuit, and the like. The CPU 121 drives various circuits included in the camera body according to a predetermined program stored in the ROM, and performs a series of operations such as AF, shooting, image processing, and recording.
  The electronic flash control circuit 122 controls the lighting of the electronic flash 115 for subject illumination in synchronization with the photographing operation. The auxiliary light circuit 123 performs lighting control of the AF auxiliary light source 116 in synchronization with the focus detection operation.
  The image sensor driving circuit 124 controls the image capturing operation of the image sensor 107, A / D converts the acquired image signal, and transmits it to the CPU 121. The image processing circuit 125 performs processing such as γ conversion, color interpolation, and JPEG compression of the image acquired by the image sensor 107.
  The focus drive circuit 126 controls the focus actuator 114 based on the focus detection result, and drives the third lens group 105 to advance and retract in the optical axis direction to perform focus adjustment. The aperture shutter drive circuit 128 controls the aperture shutter actuator 112 and controls the aperture of the aperture / shutter 102. The zoom drive circuit 129 controls the zoom actuator 111 according to the zoom operation of the photographer.
  The display 131 is an LCD or the like, and displays information related to the shooting mode of the camera, a preview image before shooting, a confirmation image after shooting, a focus state display image at the time of focus detection, and the like. The operation switch 132 includes a power switch, a release (shooting trigger) switch, a zoom operation switch, a shooting mode selection switch, and the like. The flash memory 133 is detachable and records captured images.
FIG. 2 is a diagram illustrating an internal configuration of the image sensor 107. Note that FIG. 2 shows only a minimum configuration for explaining a readout operation described later, and a pixel reset signal and the like are omitted. A photoelectric conversion unit (hereinafter referred to as PD mn ) 201 includes a photodiode, a pixel amplifier, a reset switch, and the like. Here, m is an X-direction address, and m = 0, 1,..., M−1. N is a Y-direction address, and n = 0, 1,..., N−1. In the image sensor 107 of the present embodiment, m × n photoelectric conversion units are two-dimensionally arranged. In the figure, only the photoelectric conversion units (PD 00 , PD 01, etc.) near the upper left are denoted by reference numerals so as not to be complicated.
The selection switch 202 selects the output of the photoelectric conversion unit (PDmn) 201, and is selected row by row by a vertical scanning circuit 208 (row selection unit) described later. The line memory 203 temporarily stores the outputs of the photoelectric conversion units 201, and holds the outputs of one row of photoelectric conversion units 201 selected by the vertical scanning circuit 208; a capacitor is usually used for this memory.
The switch 204 is connected to the horizontal output line 210 and resets the horizontal output line 210 to a predetermined potential VHRST, and is controlled by the signal HRST. The switches 205 (H0 to Hm-1) sequentially output the outputs of the photoelectric conversion units PDmn stored in the line memory 203 to the horizontal output line 210. By sequentially scanning the switches H0 to Hm-1 with the horizontal scanning circuit 206, the outputs of one row of photoelectric conversion units are read out.
  The horizontal scanning circuit 206 sequentially scans the output of the photoelectric conversion unit 201 stored in the line memory 203 and outputs it to the horizontal output line 210.
Data input PHST is a data input of the horizontal scanning circuit 206. Shift clocks PH1 and PH2 are shift clock inputs. Data is set when PH1 = H, and data is latched when PH2 = H. By inputting the shift clock to the terminals of the shift clocks PH1 and PH2, the data input PHST can be sequentially shifted and the switches H 0 to H m−1 can be sequentially turned on. The thinning-out reading setting SKIP is a control terminal input for performing setting at the time of thinning-out reading described later. By setting the skip reading setting SKIP to H level, the horizontal scanning circuit 206 can be skipped at a predetermined interval. Details regarding this read operation will be described later.
The vertical scanning circuit 208 can select the selection switch 202 of the photoelectric conversion unit PD mn by sequentially scanning and outputting signals V 0 to V n−1 . Similar to the horizontal scanning circuit 206, the control signal is controlled by a data input PVST, shift clocks PV1 and PV2 (not shown), and thinning reading setting SKIP.
  Since the operation of the vertical scanning circuit 208 is the same as that of the horizontal scanning circuit 206, detailed description thereof is omitted. In FIG. 2, the control signal of the vertical scanning circuit 208 is omitted.
  FIG. 3 is a diagram illustrating the arrangement of the photoelectric conversion units when all the pixels of the image sensor are read out. In the figure, an arrangement of m × n photoelectric conversion units is shown, and the symbols R, G, and B represent the color filters applied to the photoelectric conversion units. This embodiment uses a Bayer array in which, of the 4 pixels in 2 rows × 2 columns, pixels having G (green) spectral sensitivity are arranged at the two diagonal positions and one pixel each having R (red) and B (blue) spectral sensitivity is arranged at the other two positions.
  In addition, the numbers appended to the upper side and the left side in the drawing are the number of pixels in the X direction (row direction) and the number of pixels in the Y direction (column direction). The shaded pixel portion is a readout target. Here, since all pixels are read out, diagonal lines are drawn for all pixel portions. In addition, a light-shielded OB (optical black) pixel or the like for detecting a black level is usually arranged and read out on the image sensor, but the description thereof is omitted here because it becomes complicated.
  FIG. 4 is a timing chart showing changes in signals at various parts when data of all pixels is read out. Data reading is controlled by the image sensor driving circuit 124 controlled by the CPU 121 sending a pulse to the image sensor. An all-pixel readout operation will be described.
First, the image sensor driving circuit 124 drives the vertical scanning circuit 208 to activate the signal V0. At this time, the outputs of the pixels in the 0th row are each output to the vertical output line 209. In this state, the image sensor driving circuit 124 activates the MEM signal and samples and holds the data of each pixel in the line memory 203.
Next, the image sensor driving circuit 124 activates the data input PHST, inputs the shift clocks PH1 and PH2, sequentially activates the switches H0 to Hm-1, and outputs the pixel outputs to the horizontal output line 210. The pixel outputs are output as the output signal VOUT through the amplifier 207 and converted into digital data by an AD converter (not shown). The converted digital data is subjected to predetermined image processing by the image processing circuit 125.
Next, when the signal V1 is activated by the vertical scanning circuit 208, the pixel outputs of the 1st row are output to the vertical output line 209. As with the 0th row, the data of each pixel in the 1st row is sampled and held in the line memory 203 by the MEM signal. The operation of activating the data input PHST, inputting the shift clocks PH1 and PH2, sequentially activating the switches H0 to Hm-1, and outputting the pixel outputs to the horizontal output line 210 is also the same as for the 0th row. In this way, the image sensor driving circuit 124 sequentially reads up to the (n−1)th row.
  FIG. 5 is a diagram showing the arrangement of the photoelectric conversion units when reading out thinned pixels of the image sensor. FIG. 5 shows an arrangement of m × n photoelectric conversion units as in FIG. 3. In the figure, the shaded pixel portion is a readout target pixel at the time of thinning readout. In this embodiment, 1/3 thinned-out pixels are read in both the X direction and the Y direction.
  FIG. 6 is a timing chart showing changes in signals at various parts when data of thinned pixels is read out. Setting of thinning readout is performed by activating the control terminal of the horizontal scanning circuit (shift register) 206 and the thinning readout setting SKIP (SKIP terminal). By making the SKIP terminal active, the operation of the horizontal scanning circuit 206 and the vertical scanning circuit 208 is changed from sequential scanning for each pixel to sequential scanning for every three pixels. Since this specific method is a known technique, its details are omitted.
In the thinning operation, the image sensor driving circuit 124 first drives the vertical scanning circuit 208 to activate the signal V0. At this time, the pixel outputs of the 0th row are each output to the vertical output line 209. In this state, the image sensor driving circuit 124 activates the MEM signal and samples and holds the data of each pixel in the line memory 203.
Next, the image sensor driving circuit 124 activates the data input PHST and inputs the shift clocks PH1 and PH2. At this time, because the SKIP terminal is set active, the path of the shift register is changed, and the pixel outputs are output to the horizontal output line 210 sequentially every three pixels via the switches H0, H3, H6, ..., Hm-3. The pixel outputs are output as the output signal VOUT through the amplifier 207 and converted into digital data by an AD converter (not shown). The converted digital data is subjected to predetermined image processing by the image processing circuit 125.
Next, similarly to the horizontal scanning circuit 206, the vertical scanning circuit 208 skips the signals V1 and V2, activates the signal V3, and outputs the pixel outputs of the 3rd row to the vertical output line 209. As with the 0th row, the data of each pixel in the 3rd row is sampled and held in the line memory 203 by the MEM signal. The operation of activating the data input PHST, inputting the shift clocks PH1 and PH2, sequentially activating the switches H0, H3, H6, ..., Hm-3, and outputting the pixel outputs to the horizontal output line 210 is the same as for the 0th row. In this way, reading is performed sequentially up to the (n−3)th row; that is, 1/3 thinning readout is performed in both the horizontal and vertical directions.
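As a sanity check of this 1/3 thinning in both directions, here is a minimal numpy sketch; the frame data is a hypothetical stand-in, not sensor output.

```python
import numpy as np

# Minimal sketch of 1/3 thinning readout (hypothetical frame data):
# rows 0, 3, 6, ... and columns 0, 3, 6, ... are kept, matching the
# scanning-circuit behaviour when the SKIP setting is active.
full_frame = np.arange(24 * 24).reshape(24, 24)   # stand-in for part of an m x n sensor
thinned = full_frame[::3, ::3]                     # vertical and horizontal 1/3 thinning
assert thinned.shape == (8, 8)                     # a 24 x 24 block becomes 8 x 8
```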
  FIG. 7 is a diagram showing the arrangement and structure of the imaging pixels. In this embodiment, out of 4 pixels of 2 × 2, pixels having G (green) spectral sensitivity are arranged in two diagonal pixels, and R (red) and B (blue) spectral sensitivity are arranged in the other two pixels. A Bayer array in which one pixel having each of the above is arranged is employed. Further, in the imaging region, focus detection pixels, which will be described later, are distributed in a predetermined rule between a plurality of pixel groups having a Bayer array.
  FIG. 7(a) shows a plane of 2 rows × 2 columns of imaging pixels. As is well known, in the Bayer array the G pixels are arranged diagonally and the R and B pixels occupy the other two positions, and this 2-row × 2-column structure is arranged repeatedly.
FIG. 7(b) shows a cross section taken along line E-E in FIG. 7(a). The on-chip microlens ML is disposed at the front surface of each pixel. Behind the on-chip microlens ML are disposed the R (red) color filter CFR or the G (green) color filter CFG, the photoelectric conversion unit PD of the CMOS sensor (see FIG. 3), and the wiring layer CL. The wiring layer CL forms signal lines for transmitting various signals within the CMOS sensor.
  Here, the on-chip microlens ML and the photoelectric conversion unit PD of the imaging pixel are configured to capture the light beam that has passed through the photographing optical system TL as effectively as possible. In other words, the exit pupil EP of the photographing optical system TL and the photoelectric conversion unit PD are conjugated with each other by the on-chip microlens ML, and the effective area of the photoelectric conversion unit is designed to be large.
  In FIG. 7(b) the incident light beam of the R pixel has been described, but the G pixel and the B (blue) pixel have the same structure. Accordingly, the exit pupil EP corresponding to each of the RGB imaging pixels has a large diameter, and the light flux from the subject is captured efficiently, improving the S/N of the image signal.
  FIG. 8 is a diagram showing the arrangement and structure of focus detection pixels for performing pupil division in the horizontal direction (lateral direction) of the photographing lens. FIG. 8(a) shows a plane of 2 rows × 2 columns of pixels including focus detection pixels. When obtaining an imaging signal, the charge output of the G pixels is the main component of the luminance information. Since human image recognition characteristics are sensitive to luminance information, image quality degradation is easily noticed if G pixels are lost. On the other hand, the R and B pixels acquire color information, and since humans are insensitive to color information, degradation of image quality is difficult to notice even if a few of the pixels that acquire color information are lost.
  Therefore, in the present embodiment, among the 2-row × 2-column pixels, the G pixels are left as imaging pixels, and the R pixel and the B pixel are used as focus detection pixels. In FIG. 8(a), these are shown as the SA pixel and the SB pixel.
FIG. 8(b) shows a cross section taken along line F-F in FIG. 8(a). The structure of the microlens ML and the photoelectric conversion unit PD is the same as that of the imaging pixels shown in FIG. 7(b). In the present embodiment, since the signal of the focus detection pixels is not used for image creation, a transparent film CFW (white) is disposed instead of a color separation color filter.
  Further, since pupil division is performed by the image sensor, the opening of the wiring layer CL is biased in one direction with respect to the center line of the microlens ML. Specifically, since the opening OPHA of the SA pixel is biased to the right side, the photoelectric conversion unit PD of the SA pixel receives the light beam that has passed through the left exit pupil EPHA of the photographic lens TL. Similarly, since the opening OPHB of the SB pixel is biased to the left side, the photoelectric conversion unit PD of the SB pixel receives the light beam that has passed through the right exit pupil EPHB of the photographic lens TL.
  Here, SA pixels are regularly arranged in the horizontal direction, and a subject image acquired by these pixel groups is defined as an A image. The SB pixels are also regularly arranged in the horizontal direction, and the subject image acquired by these pixel groups is defined as a B image. By detecting the relative positions of the A and B images, the amount of defocus (defocus amount) of the subject image can be detected. If it is desired to detect the amount of defocus in the vertical direction (vertical direction), the SA pixel opening OPHA may be biased upward and the SB pixel opening OPHB biased downward. In this way, charges corresponding to the focus states in a plurality of directions of the imaging optical system (imaging optical system) are accumulated in the focus detection pixels.
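The A-image/B-image shift detection described above is stated only in words. The following is a minimal sketch of the underlying idea; the sum-of-absolute-differences search, the function name, and the parameters are assumptions for illustration, not the patent's actual correlation calculation.

```python
import numpy as np

# Minimal sketch of phase-difference shift detection (assumed SAD search):
# the relative shift between the A image (SA pixels) and the B image (SB pixels)
# is found by minimising a sum of absolute differences over candidate shifts.
def image_shift(a_image, b_image, max_shift=8):
    """a_image and b_image are assumed to have equal length, with max_shift
    much smaller than that length; returns the best-matching shift in pixels."""
    a = np.asarray(a_image, dtype=float)
    b = np.asarray(b_image, dtype=float)
    best_shift, best_score = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            diff = a[s:] - b[:len(b) - s]      # overlap for a positive shift
        else:
            diff = a[:len(a) + s] - b[-s:]     # overlap for a negative shift
        score = np.abs(diff).mean()
        if score < best_score:
            best_shift, best_score = s, score
    return best_shift  # proportional to the defocus amount via a lens-dependent factor
```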
  FIG. 9 is a diagram illustrating the arrangement of imaging pixels and focus detection pixels when thinning readout is performed. Pixels are thinned to 1/3 in the X (horizontal) direction and 1/3 in the Y (vertical) direction. In the figure, G is a pixel covered with a green filter, R is a pixel covered with a red filter, and B is a pixel covered with a blue filter; the G, R, and B pixels in the figure are the pixels read out during thinning readout. The white pixels without symbols represent pixels that are not read out during thinning readout.
  SA in the figure is a focus detection pixel formed by offsetting the aperture of the pixel portion in the horizontal direction, and serves as the reference pixel for detecting the horizontal image shift amount relative to the SB pixel described next. Similarly, SB is a focus detection pixel formed by offsetting the aperture of the pixel in the direction opposite to that of the SA pixel, and serves as the reference pixel for detecting the horizontal image shift amount relative to the SA pixel. The hatched portions of the SA and SB pixels are the offset pixel apertures.
  In consideration of the fact that the focus detection pixel group cannot be used for imaging, in the present embodiment, the focus detection pixels are discretely arranged at a certain interval in the X direction and the Y direction. Further, the focus detection pixels are not arranged in the G pixel portion so that the deterioration of the image is not noticeable.
  In this embodiment, one pair of SA and SB pixels is arranged in each block of 8 × 8 pixels (24 × 24 pixels in the pixel arrangement before thinning) indicated by a thick black frame in the drawing. BLOCK_N(i, j) in the figure represents the block name. The pixel arrangement pattern is completed within one block, and when expanded to the entire imaging screen, the pixel groups are arranged appropriately in block units at arbitrary positions of the image sensor.
  FIG. 10 is a diagram illustrating an example of a distance measurement area and a focus detection line composed of the focus detection pixel group included in that area. FIG. 10(a) shows the relationship between the image area (imaging area) of the image sensor and the focus detection area (ranging area). FIG. 10(b) shows the configuration of the focus detection lines included in the focus detection area, with the focus detection area enlarged. Here, the pixel pitch of the image sensor is 5.2 μm, and the focus detection area has a size of 2.2 mm horizontally and 1.0 mm vertically.
  The focus detection line is configured by continuously arranging one block in the horizontal direction, and the number of vertical pixels of one focus detection line is 24 pixels. Accordingly, the number of focus detection lines included in the focus detection area having a vertical width of 1.0 mm (192 pixels) is 8. In addition, as shown in FIG. 9, the SA pixels and SB pixels included in each focus detection line are arranged on the same line.
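As a quick arithmetic check of these figures (all values are the ones quoted in the two paragraphs above), the following sketch reproduces the 192-pixel height and the eight focus detection lines.

```python
# Worked check of the quoted numbers (values taken from the text):
pixel_pitch_um = 5.2          # image sensor pixel pitch
area_height_mm = 1.0          # vertical size of the focus detection area
line_height_px = 24           # vertical pixels per focus detection line (one block)

area_height_px = area_height_mm * 1000 / pixel_pitch_um   # about 192 pixels
num_lines = area_height_px / line_height_px                # about 8 focus detection lines
print(round(area_height_px), round(num_lines))             # -> 192 8
```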
  The operation of the imaging apparatus having the above configuration will be described. FIG. 11 is a timing chart showing an imaging operation sequence during live view using a CMOS sensor. When realizing the live view function, it is necessary to continuously read out image signals from the image sensor. Since a CMOS sensor cannot perform batch transfer and batch readout as in the CCD system, a rolling shutter readout system is generally employed.
  In the rolling shutter readout method, as shown in FIG. 11, after the exposure operation is performed, the image sensor 107 reads the accumulated charge of each pixel in the image sensor 107 as an image signal. This read operation is performed in synchronization with the control pulse vertical synchronization signal VD and the horizontal synchronization signal HD (not shown). The control pulse vertical synchronization signal VD (VD signal) is a signal representing one frame of imaging. In the present embodiment, for example, when the image sensor drive circuit 124 receives a command from the CPU 121 every 1/30 seconds, the image sensor drive circuit 124 sends a control pulse vertical synchronization signal VD to the image sensor 107. That is, in the present embodiment, it is assumed that 30 frames of moving images are captured per second.
  A control pulse horizontal synchronization signal HD (HD signal) is a horizontal synchronization signal of the image sensor 107. In one frame period, a control pulse horizontal synchronizing signal HD having a pulse number corresponding to the number of horizontal lines is transmitted at a predetermined interval, and the horizontal lines are controlled.
  In addition, pixel reset is performed for each horizontal line (indicated by a dotted line in the figure) so that the set accumulation time is reached in synchronization with the horizontal synchronization signal HD. For this reason, in an image signal of one frame, there is a time difference in the accumulation and readout timing for each line. This time difference is limited by the reading speed of one line and the number of lines to be read.
  In addition, when the reading time is limited by the frame rate, as in live view operation, only a part of the entire screen area is read out, or pixels are thinned out in the horizontal and vertical directions for high speed. It is necessary to read out the image signal. In the present embodiment, reading is performed with thinning out to horizontal 1/3 and vertical 1/3.
  When accumulation and reading are executed by the VD signal and the HD signal, the read image signal is transferred to the image processing circuit 125. In the image processing circuit 125, defective pixel correction or the like is performed, and image processing is executed.
  Further, as described above, the image sensor 107 includes, in addition to the imaging pixels, focus detection pixels in which a pupil division function is given to part of the pixel group for so-called phase difference AF. The focus detection pixels are also treated as defective pixels and subjected to defect correction, and the image signal is transferred to the image processing circuit and the display circuit.
  Furthermore, in order to extract the focus detection pixel data included in the image data and detect the focus state of the photographing lens, this data is transferred to a phase difference detection block (not shown) in the image processing circuit 125. In this circuit block, the correlation calculation between the pupil-divided SA pixel group and SB pixel group is performed, and the focus shift amount of the subject is calculated. The CPU 121 controls the focus driving circuit 126 to drive the focus actuator 114 and adjust the focus of the photographing lens.
  The image processing circuit 125 and the CPU 121 constitute a photometric detection unit, and photometry is performed by the photometric detection unit, and exposure conditions such as accumulation time, gain, and aperture are determined. The CPU 121 controls the aperture shutter drive circuit 128 according to the determined aperture value to drive the aperture shutter actuator 112 to perform an aperture operation.
  Here, the accumulation control and focus detection method during auxiliary light emission according to the present embodiment will be described. FIG. 12 is a timing chart showing the CMOS accumulation and readout timing during auxiliary light emission when the image sensor having the pixel arrangement of FIG. 9 is read out with horizontal 1/3 and vertical 1/3 thinning, the accumulation timing of each focus detection line, and the emission timing of the auxiliary light.
  As described above, CMOS reading during live view is sequentially performed from the upper line in synchronization with the vertical synchronization signal VD. In addition, pixel reset of each line is sequentially performed from the upper line in synchronization with the horizontal synchronization signal HD at a set timing, and is controlled so as to have a desired accumulation time.
  FIG. 12 shows the accumulation timing of each focus detection line included in the focus detection area of FIG. 10. Note that only some focus detection lines are shown in order to avoid complicating the figure. Accumulation of focus detection line 1 in the figure starts at time t1, when the pixel reset is performed, and continues until time t1', when the pixel signal is read out. The accumulation period of each focus detection line n is therefore from time tn to time tn' (where n is a value from 1 to 8).
  If the period of the horizontal synchronization signal HD is 25 usec, the difference between the accumulation start times tn and tn+1 of adjacent focus detection lines (for the rows containing the SA pixels) corresponds to 8 readout line periods, because the 24-row block is reduced to 8 rows by the vertical 1/3 thinning, and therefore becomes 200 usec. Similarly, the difference in the accumulation start times of the SB pixel rows of the focus detection lines is 200 usec. In this manner, the pixel signals are read out so that the difference in exposure time between the focus detection lines becomes a predetermined time.
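The 200 usec figure follows directly from the numbers above; the sketch below just reproduces that arithmetic with the values quoted in the text.

```python
# Worked check of the quoted timing (values from the text):
hd_period_us = 25        # horizontal synchronization signal HD period
rows_per_line = 24       # focus detection line height before thinning
thinning = 3             # vertical 1/3 thinning

readout_rows_per_line = rows_per_line // thinning            # 8 rows actually read per line
start_time_diff_us = readout_rows_per_line * hd_period_us    # 8 x 25 usec
print(start_time_diff_us)                                    # -> 200
```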
  FIG. 12 also shows the emission start time ts and the emission end time te of the auxiliary light for focus detection. As is apparent from the figure, the auxiliary light accumulation period of each focus detection line (from tn to te) differs according to the difference in the accumulation start times of the focus detection lines.
  If the auxiliary light accumulation period (from t8 to te) of focus detection line 8 is 100 usec, the auxiliary light accumulation period (from tn to te) of each focus detection line is as shown in FIG. 13. FIG. 13 is a table showing the auxiliary light accumulation period and the difference in exposure amount for each focus detection line, where the difference in exposure amount by the auxiliary light is given relative to focus detection line 8. The output signals corresponding to the auxiliary light exposure of the focus detection lines therefore have stepwise differences in magnitude ranging from about 0 to about 3.9 steps. The output signal of the subject in each focus detection line is the exposure by the auxiliary light plus the amount accumulated by the exposure of the steady light.
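The step values of FIG. 13 are not reproduced here, but they can be recomputed from the quoted timing. The sketch below does so under the stated assumptions: a 100 usec overlap for line 8 and a 200 usec start-time difference between adjacent lines.

```python
import math

# Reproduces the step differences described above (assumed model: line n starts
# 200 usec earlier than line n+1, and line 8 overlaps the emission by 100 usec).
line8_period_us = 100
start_diff_us = 200
for n in range(1, 9):
    period_us = line8_period_us + (8 - n) * start_diff_us   # auxiliary-light accumulation of line n
    steps = math.log2(period_us / line8_period_us)          # exposure difference vs. line 8, in steps
    print(n, period_us, round(steps, 1))
# line 1: 1500 usec, about 3.9 steps; line 8: 100 usec, 0 steps
```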
  Similarly, the output signals of the SA pixel group and SB pixel group for focus detection in each focus detection line also have signal amounts that differ in steps. Therefore, a single auxiliary light emission, accumulation, and readout from the image sensor yields signal outputs of focus detection lines having different signal amounts for one distance measurement area.
  Here, the signal outputs of a plurality of different focus detection lines obtained by accumulation and readout during auxiliary light emission vary depending on the distance of the subject, the reflectance, and the brightness of the steady light.
  Therefore, from the signal outputs of the plurality of different focus detection lines, line outputs that are unsuitable for focus detection because of blackout or saturation are excluded, and only the focus detection lines having a predetermined signal amount that is neither underexposed nor overexposed are selected, which makes appropriate focus detection possible.
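A minimal sketch of this selection step follows; the threshold values and the assumed 12-bit signal range are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of the line selection described above (thresholds are assumed):
# lines whose peak signal is too low (blackout) or too high (saturation) are
# skipped, and focus detection uses only the remaining lines.
def select_focus_detection_lines(line_signals, low=200, high=3800):
    """line_signals: dict {line_number: list of SA/SB pixel values, e.g. 12-bit ADC counts}."""
    usable = []
    for line, values in line_signals.items():
        peak = max(values)
        if low <= peak <= high:          # neither underexposed nor saturated
            usable.append(line)
    return usable
```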
  When pixel saturation due to steady light becomes a problem during accumulation with auxiliary light emission, it can be mitigated by shortening the accumulation time; since this is similar to general automatic exposure control, its explanation is omitted.
  Also, depending on the distance and reflectance of the subject, if a desired output signal cannot be obtained for all focus detection lines, the desired signal can be obtained by changing the amount of auxiliary light emitted and performing accumulation and readout again. It becomes possible to obtain the quantity. Since this is also a known technique, the description thereof is omitted.
  Next, a method for controlling the exposure amount difference by the auxiliary light of each focus detection line will be described. The exposure amount difference due to the auxiliary light of each focus detection line is greatly influenced by the period of the horizontal synchronization signal HD and the line period of the focus detection line. Although it is conceivable that the HD cycle becomes shorter due to the increase in the number of pixels and the increase in the frame rate, in this case, the difference in exposure amount due to the auxiliary light of each focus detection line becomes small.
  The line period is determined by the pixel arrangement pattern, the line configuration pattern, and the vertical thinning rate when reading from the image sensor. Accordingly, when the line cycle is shortened depending on the frame rate and the number of focus detection pixels that can be included in the image sensor, the difference in exposure amount due to the auxiliary light of each focus detection line decreases.
  A first method for adjusting the exposure amount difference will be described with reference to FIGS. 14 and 15. FIGS. 14 and 15 are diagrams showing the auxiliary light exposure period of each focus detection line when the same accumulation period is set but readout from the image sensor is performed with different horizontal synchronization signal HD periods. In FIG. 15 the period of the horizontal synchronization signal HD is set longer than in FIG. 14, so the difference between the accumulation start times t1 and t2 becomes longer and the difference in exposure amount becomes larger. This has the drawback of slowing the frame rate, but the effect on the live view can be reduced by applying it only to the frame in which the auxiliary light is emitted. That is, the influence can be reduced by making the period of the horizontal synchronization signal HD (the period of the readout timing) different between the frame that emits the auxiliary light and the frames that do not.
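To make the dependence concrete, here is a small sketch; only the 8-row line spacing comes from the text, while the 25 usec and 50 usec HD periods are illustrative assumptions.

```python
# Minimal sketch of the first adjustment method (HD period values are assumptions):
# the HD period is lengthened only for the frame in which the auxiliary light
# is emitted, which widens the accumulation-start difference between lines.
def start_time_diff_us(hd_period_us, readout_rows_per_line=8):
    # accumulation-start difference between adjacent focus detection lines
    return hd_period_us * readout_rows_per_line

normal_frame_diff = start_time_diff_us(25)     # 200 usec between adjacent lines
emitting_frame_diff = start_time_diff_us(50)   # 400 usec when HD is doubled for that frame
```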
  A second method for adjusting the exposure amount difference will be described with reference to FIG. FIG. 16 is a diagram showing the timing of CMOS accumulation and readout during auxiliary light emission, the accumulation timing of each focus detection line, the emission timing of auxiliary light, and the amount of light emission. Note that the CMOS accumulation and readout timing, the accumulation timing of each focus detection line, and the auxiliary light emission period are the same as those described with reference to FIGS.
  The wave height of the auxiliary light in FIG. 16 indicates the emission intensity of the auxiliary light, which is controlled stepwise over the period from the emission start time ts to the emission end time te. As is apparent from the figure, the difference in exposure amount between focus detection line 1 and focus detection line 2 due to the auxiliary light is therefore larger than when the auxiliary light is emitted with a constant wave height. With this method, the exposure amount difference due to the auxiliary light of each focus detection line can be adjusted without changing the period of the horizontal synchronization signal HD or the line period of the focus detection lines. In the present embodiment the wave height is controlled in steps, but the control of the auxiliary light wave height is not particularly limited; the wave height may of course be changed continuously, and the same effect can be obtained.
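The following sketch shows how a stepped emission profile is integrated over each line's accumulation window. The particular step values and their direction are assumptions; the actual profile is shown in FIG. 16, which is not reproduced here.

```python
# Minimal sketch of the second adjustment method (profile values are assumptions):
# because each line only captures the auxiliary light emitted after its own reset,
# a stepped amplitude profile changes the exposure differences between lines.
def exposure_from_profile(profile_us, line_start_us):
    """profile_us: list of (duration_us, amplitude) steps starting at ts = 0;
    line_start_us: time at which the line's accumulation begins (after reset)."""
    t, exposure = 0, 0.0
    for duration, amplitude in profile_us:
        seg_start, seg_end = t, t + duration
        overlap = max(0, seg_end - max(seg_start, line_start_us))
        exposure += overlap * amplitude       # light captured from this step
        t = seg_end
    return exposure

profile = [(500, 1.0), (500, 2.0), (500, 4.0)]     # hypothetical stepped amplitude
print(exposure_from_profile(profile, 0))            # earliest line integrates every step
print(exposure_from_profile(profile, 1000))         # a later line integrates only the last step
```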
  The electronic camera performs light emission and accumulation control of auxiliary light so that a plurality of focus detection lines in the distance measurement area of the CMOS sensor have different exposure amounts. Then, the imaging apparatus selects a focus detection line as a desired signal output from a plurality of different signal outputs obtained from the plurality of focus detection lines, and performs focus detection using the focus detection line.
  FIG. 17 is a flowchart showing a focusing operation procedure. This focusing operation process is repeatedly executed by the CPU 121 at predetermined intervals. Note that FIG. 17 describes only the processing related to the focusing operation.
  First, the CPU 121 determines whether or not the operation switch 132 has been pressed to instruct the start of the pre-shooting operation (step S1). When the start of the pre-shooting operation is not instructed, the CPU 121 ends this process as it is.
  On the other hand, when an instruction to start the pre-shooting operation is given, the CPU 121 controls the auxiliary light circuit 123 to cause the AF auxiliary light source 116 to emit light (step S2). Then, the CPU 121 controls the image sensor driving circuit 124, reads the output signal of the image sensor 107, and calculates the exposure amount of each focus detection line (step S3).
  The CPU 121 determines whether or not there is a focus detection line with a desired exposure amount (step S4). If there is no such focus detection line, the CPU 121 returns to the process of step S2, changes the light emission amount, and causes the AF auxiliary light source 116 to emit light again.
  When there is a focus detection line having a desired exposure amount in step S4, the CPU 121 detects the focus by the phase difference detection method using the focus detection line (step S5). The CPU 121 controls the focus driving circuit 126 to drive the focus actuator 114, and moves the third lens group (focus lens) 105 to the focal position (step S6). Thereafter, the CPU ends this process.
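A minimal sketch mirroring steps S1 to S6 of the flowchart follows; every function name on the camera object is a hypothetical stand-in for the drivers described above, not an actual API.

```python
# Minimal sketch mirroring the flowchart above (all camera methods are
# hypothetical stand-ins for the CPU 121's drivers, not a real API).
def focusing_operation(camera):
    if not camera.pre_shoot_requested():           # step S1
        return
    while True:
        camera.emit_af_assist_light()              # step S2
        exposures = camera.read_line_exposures()   # step S3: exposure of each focus detection line
        line = next((l for l, e in exposures.items()
                     if camera.exposure_ok(e)), None)   # step S4: any line with the desired exposure?
        if line is not None:
            break
        camera.change_emission_amount()            # retry with a different light amount
    defocus = camera.phase_difference_af(line)     # step S5: phase difference detection
    camera.move_focus_lens(defocus)                # step S6: drive the focus lens to the focal position
```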
  As described above, according to the imaging apparatus of the first embodiment, in focus detection using the output signal of the CMOS sensor, signal outputs of focus detection lines having different signal amounts can be obtained for one distance measurement area when the auxiliary light is emitted. Therefore, by selecting a focus detection line having the desired signal amount from the plurality of obtained signal outputs, appropriate focus detection can be performed at high speed. Moreover, the energy consumed can be reduced by reducing the number of auxiliary light emissions, and the difference in exposure amount due to the auxiliary light of each focus detection line can also be adjusted.
[Second Embodiment]
Since the configuration of the electronic camera in the second embodiment is the same as that of the first embodiment, the description thereof is omitted here. FIG. 18 is an arrangement diagram showing a pixel arrangement pattern and a photometric line arrangement pattern in the second embodiment.
  A Bayer array is employed in which, of the 4 pixels in 2 rows × 2 columns, pixels having G (green) spectral sensitivity are arranged at the two diagonal positions and one pixel each having R (red) and B (blue) spectral sensitivity is arranged at the other two positions. This Bayer array is developed over the entire imaging region of the image sensor.
  In the imaging region, photometric lines composed of pixels arranged continuously in the horizontal direction are arranged periodically every predetermined number of lines. Here, n is an integer equal to or less than the number of vertical lines of the image sensor. When the accumulation and readout control of the focus detection lines described in the first embodiment is performed in addition to the accumulation and readout control of the photometric lines, the same pixel arrangement pattern as in the first embodiment is used in part of the imaging region.
  Next, an accumulation control method and a photometric method when a photometric flash is emitted will be described. FIG. 19 is a timing chart showing the storage timing and reading timing of the image sensor when the photometric flash is emitted, and the emission timing of the photometric flash. Reading from the image sensor is performed in synchronization with the vertical synchronization signal VD.
  Also, pixel reset for each line is performed collectively on a plurality of lines selected by a horizontal line selection unit (not shown), and is performed in synchronization with the horizontal synchronization signal HD at a set timing. The exposure is controlled to have a desired accumulation time.
  FIG. 19 shows the accumulation timing of the photometric lines of FIG. 18. Note that only some of the photometric lines are shown in order to avoid complicating the figure.
  Accumulation of the photometry line A1 to photometry line An in the figure starts at time ta when the pixel reset is performed and continues until time ta 'when the pixel signal is read out. Similarly, accumulation of the photometry line B1 to photometry line Bn starts at time tb when the pixel reset is performed and continues until time tb '.
  FIG. 19 also shows the emission start time ts and the emission end time te of the photometric flash. As is apparent from the figure, the photometric flash accumulation period of each photometric line (ta to te, tb to te) differs according to the difference in the accumulation start times of the photometric lines. If the flash accumulation period (ta to te) of photometric line An is 16 msec and the flash accumulation period (tb to te) of photometric line Bn is 1 msec, the difference in output signal due to the flash exposure between photometric line An and photometric line Bn is about 4 steps.
  Therefore, when the image sensor is read out once, the photometric calculation unit implemented by the CPU 121 can obtain signal outputs of different magnitudes from the respective photometric lines. By performing photometry based on the output signals of the photometric lines, the photometric calculation unit can expand the dynamic range over which photometry is possible by about four stops.
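  The following sketch is illustrative only and not the patent's photometric calculation: one simple way to combine two photometric lines whose flash exposures differ by a known number of stops. The saturation threshold, full-scale value, and averaging are assumptions.

    def photometry_from_lines(line_an, line_bn, full_scale=1023, stops_apart=4):
        """Estimate brightness from two photometric lines whose flash exposure
        differs by `stops_apart` stops, extending the usable dynamic range.

        Uses the long-exposure line unless it is saturated, in which case the
        short-exposure line is rescaled into the same brightness units.
        """
        peak_long = max(line_an)
        if peak_long < 0.9 * full_scale:          # long-exposure line still usable
            return sum(line_an) / len(line_an)
        gain = 2 ** stops_apart                   # rescale the short-exposure line
        return gain * sum(line_bn) / len(line_bn)

    print(photometry_from_lines([1023, 1023, 1020], [200, 240, 220]))  # 3520.0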
  Also, based on the photometric result of the photometric calculation unit, the automatic exposure control unit implemented by the CPU 121 sets parameters such as the flash emission amount, the shutter speed, and the aperture for the main shooting so that an appropriate exposure amount is obtained during the main shooting.
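  Purely as an illustration of the idea, not the patent's exposure control: a toy sketch that turns a metered value into main-shooting parameters. The EV values, the even split of the correction between shutter speed and flash amount, and the fixed aperture are all hypothetical simplifications; a real controller would follow a program line and hardware limits.

    def plan_main_exposure(measured_ev, target_ev=10.0, base_shutter_s=1/60,
                           base_aperture=4.0, base_flash=1.0):
        """Derive rough main-shooting parameters from a metered EV by splitting
        the EV error evenly between shutter speed and flash amount."""
        delta = target_ev - measured_ev           # stops of additional exposure needed
        shutter_s = base_shutter_s * 2 ** (delta / 2)
        flash = base_flash * 2 ** (delta / 2)
        return {"shutter_s": shutter_s, "aperture": base_aperture, "flash": flash}

    print(plan_main_exposure(measured_ev=8.0))
    # shutter about 1/30 s, aperture 4.0, flash amount doubled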
  Since the photometric calculation unit and the automatic exposure control unit use known techniques, detailed description thereof is omitted.
  As described above, the imaging apparatus according to the second embodiment controls, when the flash for photometry is emitted, the pixel reset timings of a plurality of different photometric lines and the flash emission timing. This makes it possible to obtain output signals with different exposure amounts from a single readout of the image sensor. Therefore, the time required to obtain an appropriate signal output for photometry is shortened, and quick photometry and automatic exposure control become possible.
  The present invention is not limited to the configurations of the above-described embodiments, and is applicable to any configuration capable of achieving the functions recited in the claims or the functions of the configurations of the embodiments. For example, the imaging apparatus of the present invention can be applied to a digital video camera, a digital SLR (single-lens reflex camera), and the like.
  The present invention is useful for an electronic camera provided with an image sensor, particularly an electronic still camera and a movie camera.
FIG. 1 is a diagram showing the configuration of the electronic camera in the first embodiment.
FIG. 2 is a diagram showing the internal configuration of the image sensor 107.
FIG. 3 is a diagram showing the arrangement of the photoelectric conversion units when all pixels of the image sensor are read out.
FIG. 4 is a timing chart showing the changes in the signals of the respective units when the data of all pixels are read out.
FIG. 5 is a diagram showing the arrangement of the photoelectric conversion units when thinned pixels of the image sensor are read out.
FIG. 6 is a timing chart showing the changes in the signals of the respective units when the data of thinned pixels are read out.
FIG. 7 is a diagram showing the arrangement and structure of the imaging pixels.
FIG. 8 is a diagram showing the arrangement and structure of the focus detection pixels for performing pupil division in the horizontal direction (lateral direction) of the photographic lens.
FIG. 9 is a diagram showing the arrangement of the imaging pixels and the focus detection pixels.
FIG. 10 is a diagram showing an example of a distance measurement area and a focus detection line composed of the focus detection pixel group included in that area.
FIG. 11 is a timing chart showing the imaging operation sequence during live view using a CMOS sensor.
FIG. 12 is a timing chart showing the CMOS accumulation and readout timing during auxiliary light emission when the image sensor having the pixel arrangement of FIG. 9 is read out with horizontal 1/3 and vertical 1/3 thinning, the accumulation timing of each focus detection line, and the emission timing of the auxiliary light.
FIG. 13 is a table showing the auxiliary light accumulation period and the difference in exposure amount of each focus detection line.
FIG. 14 is a diagram showing the exposure period by the auxiliary light of each focus detection line.
FIG. 15 is a diagram showing the exposure period by the auxiliary light of each focus detection line.
FIG. 16 is a diagram showing the CMOS accumulation and readout timing during auxiliary light emission, the accumulation timing of each focus detection line, and the emission timing and emission amount of the auxiliary light.
FIG. 17 is a flowchart showing the focusing operation procedure.
FIG. 18 is a layout diagram showing the pixel layout pattern and the photometric line layout pattern in the second embodiment.
FIG. 19 is a timing chart showing the accumulation and readout timing of the image sensor when the photometric flash is emitted, and the emission timing of the photometric flash.
Explanation of symbols
107: Image sensor
116: AF auxiliary light source
121: CPU
123: Auxiliary light circuit
124: Image sensor driving circuit

Claims (6)

  1. An imaging apparatus comprising:
    an image sensor having a plurality of pixels arranged in a row direction and a column direction, row selection means for selecting the plurality of pixels row by row in a predetermined cycle, storage means for resetting the charge accumulated in the pixels of a selected row and storing charge in the pixels of that row after the reset, and readout means for reading out the charge stored in the pixels of the selected row; and
    light emitting means for emitting a flash toward a subject,
    wherein the plurality of pixels include an imaging pixel group that accumulates charge for imaging, and a focus detection pixel group that is arranged discretely in the imaging region of the image sensor and accumulates charge according to focus states, in a plurality of directions, of an imaging optical system that forms a subject image,
    and wherein, with a pixel group in the row direction that includes a part of the focus detection pixel group serving as a focus detection line, the apparatus further comprises:
    control means for causing the light emitting means to emit light such that the plurality of focus detection lines from which charge is read by the readout means receive different exposure amounts;
    selection means for selecting, from among the plurality of focus detection lines, at least one focus detection line having a predetermined exposure amount; and
    focus detection means for detecting a focus using the charge accumulated in the focus detection pixels included in the selected focus detection line,
    the control means controlling the readout timing of the pixels of each row by the readout means so that the difference in exposure time between the plurality of focus detection lines becomes a predetermined time.
  2. The imaging apparatus according to claim 1, wherein the control means varies the period of the readout timing by the readout means between a frame in which the light emitting means emits light and a frame in which the light emitting means does not emit light.
  3. The imaging apparatus according to claim 1, wherein the control means controls the light emission amount of the light emitting means so that the difference in exposure amount between the plurality of focus detection lines is increased.
  4. The imaging apparatus according to claim 1, further comprising:
    photometric means for performing photometry using the output signal of the image sensor when light is emitted at the timings giving the different exposure amounts; and
    exposure control means for adjusting the exposure according to a result of the photometry.
  5. An automatic focus detection apparatus mounted on an imaging apparatus, the imaging apparatus having: an image sensor that includes a plurality of pixels arranged in a row direction and a column direction, row selection means for selecting the plurality of pixels row by row in a predetermined cycle, storage means for resetting the charge accumulated in the pixels of a selected row and storing charge in the pixels of that row after the reset, and readout means for reading out the charge stored in the pixels of the selected row; and light emitting means for emitting a flash toward a subject,
    wherein the plurality of pixels include an imaging pixel group that accumulates charge for imaging, and a focus detection pixel group that is arranged discretely in the imaging region of the image sensor and accumulates charge according to focus states, in a plurality of directions, of an imaging optical system that forms a subject image,
    the automatic focus detection apparatus comprising, with a pixel group in the row direction that includes a part of the focus detection pixel group serving as a focus detection line:
    control means for causing the light emitting means to emit light such that the plurality of focus detection lines from which charge is read by the readout means receive different exposure amounts;
    selection means for selecting, from among the plurality of focus detection lines, at least one focus detection line having a predetermined exposure amount; and
    focus detection means for detecting a focus using the charge accumulated in the focus detection pixels included in the selected focus detection line,
    wherein the control means controls the readout timing of the pixels of each row by the readout means so that the difference in exposure time between the plurality of focus detection lines becomes a predetermined time.
  6. A method of controlling an imaging apparatus, the imaging apparatus having: an image sensor that includes a plurality of pixels arranged in a row direction and a column direction, row selection means for selecting the plurality of pixels row by row in a predetermined cycle, storage means for resetting the charge accumulated in the pixels of a selected row and storing charge in the pixels of that row after the reset, and readout means for reading out the charge stored in the pixels of the selected row; and light emitting means for emitting a flash toward a subject, the plurality of pixels including an imaging pixel group that accumulates charge for imaging and a focus detection pixel group that is arranged discretely in the imaging region of the image sensor and accumulates charge according to focus states, in a plurality of directions, of an imaging optical system that forms a subject image, the method comprising:
    a control step of causing, with a pixel group in the row direction that includes a part of the focus detection pixel group serving as a focus detection line, the light emitting means to emit light such that the plurality of focus detection lines from which charge is read by the readout means receive different exposure amounts;
    a selection step of selecting, from among the plurality of focus detection lines, at least one focus detection line having a predetermined exposure amount; and
    a focus detection step of detecting a focus using the charge accumulated in the focus detection pixels included in the selected focus detection line,
    wherein, in the control step, the readout timing of the pixels of each row by the readout means is controlled so that the difference in exposure time between the plurality of focus detection lines becomes a predetermined time.
JP2008221791A 2008-08-29 2008-08-29 Imaging apparatus, automatic focus detection apparatus, and control method thereof Expired - Fee Related JP5159520B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2008221791A JP5159520B2 (en) 2008-08-29 2008-08-29 Imaging apparatus, automatic focus detection apparatus, and control method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2008221791A JP5159520B2 (en) 2008-08-29 2008-08-29 Imaging apparatus, automatic focus detection apparatus, and control method thereof

Publications (2)

Publication Number Publication Date
JP2010054968A JP2010054968A (en) 2010-03-11
JP5159520B2 true JP5159520B2 (en) 2013-03-06

Family

ID=42070938

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2008221791A Expired - Fee Related JP5159520B2 (en) 2008-08-29 2008-08-29 Imaging apparatus, automatic focus detection apparatus, and control method thereof

Country Status (1)

Country Link
JP (1) JP5159520B2 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8115855B2 (en) * 2009-03-19 2012-02-14 Nokia Corporation Method, an apparatus and a computer readable storage medium for controlling an assist light during image capturing process
JP5699480B2 (en) * 2010-08-17 2015-04-08 株式会社ニコン Focus detection device and camera
JP5737924B2 (en) * 2010-12-16 2015-06-17 キヤノン株式会社 Imaging device
JP5978570B2 (en) * 2011-09-01 2016-08-24 株式会社ニコン Imaging device
JP5774512B2 (en) 2012-01-31 2015-09-09 株式会社東芝 Ranging device
CN104380712B (en) 2012-06-07 2016-06-22 富士胶片株式会社 Camera head and image capture method
JP6234058B2 (en) * 2013-05-09 2017-11-22 キヤノン株式会社 Focus adjustment apparatus and control method thereof
KR102170627B1 (en) 2014-01-08 2020-10-27 삼성전자주식회사 Image sensor
US9392160B2 (en) 2014-06-20 2016-07-12 Samsung Electronics Co., Ltd. Circuit and method providing wide dynamic-range operation of auto-focus(AF) focus state sensor elements, digital imaging device, and computer system including same
JP6445866B2 (en) * 2014-12-26 2018-12-26 キヤノン株式会社 Imaging apparatus, imaging system, and driving method of imaging apparatus

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08160503A (en) * 1994-12-06 1996-06-21 Olympus Optical Co Ltd Focus detecting device for camera
JP2000111791A (en) * 1998-10-08 2000-04-21 Canon Inc Auxiliary light controller for automatic focusing and automatic focusing camera
JP2000330007A (en) * 1999-05-20 2000-11-30 Nikon Corp Focus detector
JP2001128056A (en) * 1999-08-18 2001-05-11 Olympus Optical Co Ltd Camera
JP2002320236A (en) * 2001-04-23 2002-10-31 Canon Inc Imaging apparatus
JP2004157417A (en) * 2002-11-08 2004-06-03 Fuji Photo Film Co Ltd Digital camera and exposure setting method in performing af control
JP4764712B2 (en) * 2005-12-09 2011-09-07 富士フイルム株式会社 Digital camera and control method thereof
JP2007318581A (en) * 2006-05-29 2007-12-06 Casio Comput Co Ltd Imaging apparatus, photographing auxiliary light source emitting/imaging control method, and photographing auxiliary light source emitting/imaging control program
JP4867552B2 (en) * 2006-09-28 2012-02-01 株式会社ニコン Imaging device

Also Published As

Publication number Publication date
JP2010054968A (en) 2010-03-11


Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20110825

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20120417

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20120418

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20120618

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20121113

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20121211

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20151221

Year of fee payment: 3

LAPS Cancellation because of no payment of annual fees