WO2018181615A1 - Electronic device - Google Patents

Electronic device

Info

Publication number
WO2018181615A1
Authority
WO
WIPO (PCT)
Prior art keywords
imaging
image
unit
detection
recording
Prior art date
Application number
PCT/JP2018/013042
Other languages
French (fr)
Japanese (ja)
Inventor
昌也 高橋
直樹 關口
敏之 神原
孝 塩野谷
Original Assignee
Nikon Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nikon Corporation
Publication of WO2018181615A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50: Control of the SSIS exposure
    • H04N25/53: Control of the integration time
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene

Definitions

  • the present invention relates to an electronic device.
  • An imaging device equipped with an image processing technique for generating an image based on a signal from an imaging element is known (see Patent Document 1). Conventionally, further improvement in image quality has been required.
  • according to one aspect, the electronic device includes: an imaging element having a plurality of imaging regions that capture a subject image; a setting unit that sets a first imaging condition in a first imaging region among the plurality of imaging regions and sets, in a second imaging region among the plurality of imaging regions, a second imaging condition different from the first imaging condition; and a correction unit that corrects blurring, caused by movement of the imaging element, of the subject image captured in the first imaging region and the second imaging region.
  • according to another aspect, the electronic device includes an image sensor having a plurality of regions for capturing a subject image, a setting unit that sets a first imaging condition in a first region among the plurality of regions and a second imaging condition in a second region, and a generation unit that generates an image based on the second image.
  • FIG. 7A is a diagram illustrating a recording image, and FIG. 7B is a diagram illustrating a detection image.
  • A diagram illustrating subject areas in an imaging screen.
  • A diagram illustrating a setting screen.
  • FIGS. 12A and 12B are schematic diagrams explaining image blur correction.
  • FIGS. 13A to 13E are schematic diagrams explaining image blur correction.
  • A diagram illustrating a kernel.
  • A diagram illustrating the positions of focus detection pixels on an imaging surface.
  • An enlarged view of a partial region of the imaging surface.
  • FIG. 18A is a diagram illustrating a template image representing an object to be detected, and FIG. 18B is a diagram illustrating a monitor image and a search range.
  • A flowchart explaining the flow of processing according to the first embodiment.
  • A flowchart illustrating the flow of image blur correction processing.
  • FIG. 21A is a diagram illustrating still images, and FIGS. 21B and 21C are diagrams illustrating an overview of still image blur correction according to the second embodiment.
  • A diagram illustrating the imaging timing of the recording image and the display timing of a blur-corrected image.
  • A flowchart illustrating the flow of image blur correction processing.
  • A diagram explaining the outline.
  • FIG. 27A is a still image based on the first recording image, and FIG. 27B is a still image based on the second recording image.
  • A block diagram illustrating the configuration of an imaging system according to Modification 2.
  • A diagram explaining supply of a program to a mobile device.
  • the digital camera 1 in FIG. 1 (hereinafter referred to as camera 1) is an example of an electronic device.
  • the camera 1 may be an interchangeable lens camera or a lens-integrated camera.
  • a camera mounted in a portable terminal such as a smartphone may also be used.
  • it may also be configured as an imaging device such as a video camera or a mobile camera.
  • the camera 1 is equipped with an imaging element 32a as an example of an image sensor.
  • the imaging element 32a performs imaging under the imaging conditions set in the imaging area.
  • the imaging element 32a is configured to be able to perform imaging under different imaging conditions for each imaging region.
  • the image processing unit 33 of the camera 1 performs processing to suppress the influence of image blur caused by shake occurring in the camera 1 (referred to as shake of the camera 1) while the camera 1 is capturing a moving image.
  • Image blur refers to the movement of the subject image on the imaging surface of the image sensor 32a. Details of the camera 1 will be described with reference to the drawings. Note that image blur can also be regarded as the image sensor 32a moving relative to the subject image.
  • FIG. 1 is a block diagram illustrating the configuration of a camera 1 according to an embodiment.
  • the camera 1 includes an imaging optical system 31, an imaging unit 32, an image processing unit 33, a control unit 34, a display unit 35, an operation member 36, a recording unit 37, a photometric sensor 38, and a shake sensor 39.
  • the imaging optical system 31 guides the light flux from the subject to the imaging unit 32.
  • the imaging unit 32 includes an imaging element 32a and a driving unit 32b, and photoelectrically converts an object image formed by the imaging optical system 31.
  • the imaging unit 32 can capture images with the same imaging condition in the entire imaging region of the imaging element 32a, or can capture images with different imaging conditions for each imaging region. Details of the imaging unit 32 will be described later.
  • the drive unit 32b generates drive signals necessary for causing the image sensor 32a to perform accumulation control. Imaging instructions such as charge accumulation time (exposure time), ISO sensitivity (gain), and frame rate are transmitted from the control unit 34 to the drive unit 32b.
  • the image processing unit 33 includes an input unit 33a, a correction unit 33b, and a generation unit 33c.
  • the image data generated by the imaging unit 32 is input to the input unit 33a.
  • the correction unit 33b performs a correction process on the input image data with respect to image blur caused by camera shake. Details of the correction processing will be described later.
  • the generation unit 33c performs image processing on the input image data and the corrected image data to generate an image.
  • the image processing includes, for example, color interpolation processing, pixel defect correction processing, contour enhancement processing, noise reduction processing, white balance adjustment processing, gamma correction processing, display luminance adjustment processing, saturation adjustment processing, and the like. Further, the generation unit 33c generates an image to be displayed by the display unit 35 and an image to be recorded.
  • the control unit 34 is constituted by a CPU, for example, and controls the overall operation of the camera 1. For example, the control unit 34 performs a predetermined exposure calculation based on the photoelectric conversion signals generated by the imaging unit 32, determines exposure conditions such as the charge accumulation time of the imaging element 32a necessary for proper exposure, the aperture value of the imaging optical system 31, and the ISO sensitivity, and gives an instruction to the drive unit 32b. In addition, image processing conditions for adjusting saturation, contrast, sharpness, and the like are determined according to the imaging scene mode set in the camera 1 and the type of the detected subject element, and an instruction is given to the image processing unit 33. The detection of subject elements will be described later.
  • the control unit 34 includes an object detection unit 34a, a setting unit 34b, an imaging control unit 34c, and a lens movement control unit 34d. These are realized as software by the control unit 34 executing a program stored in a nonvolatile memory (not shown). However, these may be configured by an ASIC or the like.
  • the object detection unit 34a performs known object recognition processing and, from the image data generated by the imaging unit 32, detects subject elements such as a person (a person's face), an animal such as a dog or cat (an animal's face), a plant, a vehicle such as a bicycle, an automobile, or a train, a building, a stationary object, a landscape element such as a mountain or a cloud, or a predetermined specific object.
  • the setting unit 34b sets imaging conditions for the imaging area of the imaging device 32a.
  • Imaging conditions include exposure conditions (charge accumulation time, gain, ISO sensitivity, frame rate, and the like) and image processing conditions (for example, white balance adjustment parameters, gamma correction curves, display brightness adjustment parameters, and saturation adjustment parameters).
  • the setting unit 34b can set the same imaging condition for a plurality of imaging areas, or can set different imaging conditions for each of the plurality of imaging areas.
  • the imaging control unit 34c controls the imaging unit 32 (imaging element 32a) and the image processing unit 33 by applying the imaging conditions set in the imaging region by the setting unit 34b.
  • the number of pixels arranged in the imaging region may be singular or plural. Further, the number of pixels arranged between the plurality of imaging regions may be different.
  • the lens movement control unit 34d controls an automatic focus adjustment (autofocus: AF) operation for focusing on a corresponding subject at a predetermined position (called a focus point) on the imaging screen.
  • the lens movement control unit 34d sends a drive signal for moving the focus lens of the imaging optical system 31 to the in-focus position based on the calculation result, that is, a signal for bringing the subject image into focus with the focus lens of the imaging optical system 31, to the lens driving mechanism 31m of the imaging optical system 31.
  • the lens movement control unit 34d functions as a moving unit that moves the focus lens of the imaging optical system 31 in the optical axis direction based on the calculation result.
  • the process performed by the lens movement control unit 34d for the AF operation is also referred to as a focus detection process. Details of the focus detection process will be described later.
  • the display unit 35 displays an image generated by the image processing unit 33, an image subjected to image processing, an image read by the recording unit 37, and the like.
  • the display unit 35 also displays an operation menu screen, a setting screen for setting imaging conditions, and the like.
  • the operation member 36 includes various operation members such as a recording button, a shutter button, and a menu button.
  • the operation member 36 sends an operation signal corresponding to each operation to the control unit 34.
  • the operation member 36 includes a touch operation member provided on the display surface of the display unit 35. In the present embodiment, recording a moving image is referred to as recording.
  • the recording unit 37 records image data or the like on a recording medium including a memory card (not shown) in response to an instruction from the control unit 34.
  • the recording unit 37 reads image data recorded on the recording medium in response to an instruction from the control unit 34.
  • the photometric sensor 38 detects the brightness of the subject and outputs a detection signal.
  • the shake sensor 39 is constituted by, for example, an angular velocity sensor and an acceleration sensor.
  • the shake sensor 39 detects the shake of the camera 1 and outputs a detection signal.
  • the shake of the camera 1 is also referred to as camera shake.
  • FIG. 2 is a cross-sectional view of the image sensor 100.
  • the imaging element 100 includes an imaging chip 111, a signal processing chip 112, and a memory chip 113.
  • the imaging chip 111 is stacked on the signal processing chip 112.
  • the signal processing chip 112 is stacked on the memory chip 113.
  • the imaging chip 111, the signal processing chip 112, and the memory chip 113 are electrically connected by connection units 109.
  • the connection unit 109 is, for example, a bump or an electrode.
  • the imaging chip 111 captures a subject image and generates image data.
  • the imaging chip 111 outputs image data from the imaging chip 111 to the signal processing chip 112.
  • the signal processing chip 112 performs signal processing on the image data output from the imaging chip 111.
  • the memory chip 113 has a plurality of memories and stores image data.
  • the image sensor 100 may include an image pickup chip and a signal processing chip.
  • a storage unit for storing image data may be provided in the signal processing chip or may be provided separately from the imaging device 100.
  • the imaging element 100 may have a configuration in which the imaging chip 111 is stacked on the memory chip 113 and the memory chip 113 is stacked on the signal processing chip 112.
  • the incident light is incident mainly in the positive direction of the Z axis indicated by the white arrow.
  • the left direction of the paper orthogonal to the Z axis is the X axis plus direction
  • the front side of the paper orthogonal to the Z axis and the X axis is the Y axis plus direction.
  • the coordinate axes are displayed in the subsequent figures so that the orientation of each figure can be understood with reference to the coordinate axes in FIG. 2.
  • the imaging chip 111 is, for example, a CMOS image sensor. Specifically, the imaging chip 111 is a backside illumination type CMOS image sensor.
  • the imaging chip 111 includes a microlens layer 101, a color filter layer 102, a passivation layer 103, a semiconductor layer 106, and a wiring layer 108.
  • the imaging chip 111 is arranged in the order of the microlens layer 101, the color filter layer 102, the passivation layer 103, the semiconductor layer 106, and the wiring layer 108 in the positive Z-axis direction.
  • the microlens layer 101 has a plurality of microlenses L.
  • the microlens L condenses incident light on the photoelectric conversion unit 104 described later.
  • the color filter layer 102 has a plurality of types of color filters F having different spectral characteristics.
  • the color filter layer 102 includes a first filter (R) having a spectral characteristic that mainly transmits red component light, second filters (Gb, Gr) having a spectral characteristic that mainly transmits green component light, and a third filter (B) having a spectral characteristic that mainly transmits blue component light.
  • a first filter, a second filter, and a third filter are arranged in a Bayer arrangement.
  • the passivation layer 103 is made of a nitride film or an oxide film, and protects the semiconductor layer 106.
  • the semiconductor layer 106 includes a photoelectric conversion unit 104 and a readout circuit 105.
  • the semiconductor layer 106 includes a plurality of photoelectric conversion units 104 between a first surface 106a that is a light incident surface and a second surface 106b opposite to the first surface 106a.
  • the semiconductor layer 106 includes a plurality of photoelectric conversion units 104 arranged in the X-axis direction and the Y-axis direction.
  • the photoelectric conversion unit 104 has a photoelectric conversion function of converting light into electric charge, and accumulates the charge obtained by the photoelectric conversion.
  • the photoelectric conversion unit 104 is, for example, a photodiode.
  • the semiconductor layer 106 includes a readout circuit 105 on the second surface 106b side of the photoelectric conversion unit 104.
  • a plurality of readout circuits 105 are arranged in the X-axis direction and the Y-axis direction.
  • the readout circuit 105 includes a plurality of transistors, reads out image data generated by the electric charges photoelectrically converted by the photoelectric conversion unit 104, and outputs the image data to the wiring layer 108.
  • the wiring layer 108 has a plurality of metal layers.
  • the metal layer is, for example, an Al wiring, a Cu wiring, or the like.
  • the wiring layer 108 outputs the image data read by the reading circuit 105.
  • the image data is output from the wiring layer 108 to the signal processing chip 112 via the connection unit 109.
  • connection unit 109 may be provided for each photoelectric conversion unit 104. Further, the connection unit 109 may be provided for each of the plurality of photoelectric conversion units 104. When the connection unit 109 is provided for each of the plurality of photoelectric conversion units 104, the pitch of the connection units 109 may be larger than the pitch of the photoelectric conversion units 104. In addition, the connection unit 109 may be provided in a peripheral region of the region where the photoelectric conversion unit 104 is disposed.
  • the signal processing chip 112 has a plurality of signal processing circuits.
  • the signal processing circuit performs signal processing on the image data output from the imaging chip 111.
  • the signal processing circuit includes, for example, an amplifier circuit that amplifies the signal value of the image data, a correlated double sampling circuit that performs noise reduction processing on the image data, and an analog/digital (A/D) conversion circuit that converts an analog signal into a digital signal.
  • a signal processing circuit may be provided for each photoelectric conversion unit 104.
  • a signal processing circuit may be provided for each of the plurality of photoelectric conversion units 104.
  • the signal processing chip 112 has a plurality of through electrodes 110.
  • the through electrode 110 is, for example, a silicon through electrode.
  • the through electrode 110 connects circuits provided in the signal processing chip 112 to each other.
  • the through electrode 110 may also be provided in the peripheral region of the imaging chip 111 and the memory chip 113.
  • some elements constituting the signal processing circuit may be provided in the imaging chip 111.
  • a comparator that compares an input voltage with a reference voltage may be provided in the imaging chip 111, and circuits such as a counter circuit and a latch circuit may be provided in the signal processing chip 112.
  • the memory chip 113 has a plurality of storage units.
  • the storage unit stores image data that has been subjected to signal processing by the signal processing chip 112.
  • the storage unit is a volatile memory such as a DRAM, for example.
  • a storage unit may be provided for each photoelectric conversion unit 104.
  • the storage unit may be provided for each of the plurality of photoelectric conversion units 104.
  • the image data stored in the storage unit is output to the subsequent image processing unit.
  • FIG. 3 is a diagram for explaining the pixel array and the unit area 131 of the imaging chip 111.
  • a state where the imaging chip 111 is observed from the back surface (imaging surface) side is shown.
  • 20 million or more pixels are arranged in a matrix in the pixel region.
  • four adjacent pixels of 2 pixels × 2 pixels form one unit region 131.
  • the grid lines in the figure indicate the concept that adjacent pixels are grouped to form a unit region 131.
  • the number of pixels forming the unit region 131 is not limited to this; it may be, for example, about 1000 pixels such as 32 pixels × 32 pixels, more or fewer than that, or a single pixel.
  • the unit area 131 in FIG. 3 includes a so-called Bayer array composed of four pixels of green pixels Gb, Gr, blue pixels B, and red pixels R.
  • the green pixels Gb and Gr are pixels having a green filter as the color filter F, and receive light in the green wavelength band of incident light.
  • the blue pixel B is a pixel having a blue filter as the color filter F and receives light in the blue wavelength band.
  • the red pixel R is a pixel having a red filter as the color filter F and receives light in the red wavelength band.
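  • (Illustration, not part of the patent text.) A minimal numpy sketch of how such a 2 × 2 Bayer unit region tiles over a pixel area; the placement of Gr, R, B, and Gb within the tile is an assumption made for this sketch:

```python
import numpy as np

# Illustrative 2 x 2 Bayer unit region: two green pixels (Gb, Gr), one blue
# pixel B and one red pixel R. The layout inside the tile is assumed here.
BAYER_TILE = np.array([["Gr", "R"],
                       ["B", "Gb"]])

def bayer_pattern(rows: int, cols: int) -> np.ndarray:
    """Tile the 2 x 2 unit region over a rows x cols pixel area."""
    reps = (rows // 2 + 1, cols // 2 + 1)
    return np.tile(BAYER_TILE, reps)[:rows, :cols]

print(bayer_pattern(4, 6))
```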
  • a plurality of blocks are defined so as to include at least one unit region 131 per block. That is, the minimum unit of one block is one unit area 131. As described above, of the possible values for the number of pixels forming one unit region 131, the smallest number of pixels is one pixel. Therefore, when one block is defined in units of pixels, the minimum number of pixels among the number of pixels that can define one block is one pixel.
  • each block can control the pixels included in it with control parameters that differ from those of other blocks.
  • the unit regions 131 in a block, that is, the pixels in the block, are controlled under the same imaging conditions. In other words, photoelectric conversion signals acquired under different imaging conditions can be obtained from a pixel group included in one block and a pixel group included in another block.
  • examples of control parameters include frame rate, gain, thinning rate, the number of rows or columns whose photoelectric conversion signals are added together, charge accumulation time or accumulation count, and the number of digitization bits (word length).
  • the imaging device 100 can freely perform not only thinning in the row direction (X-axis direction of the imaging chip 111) but also thinning in the column direction (Y-axis direction of the imaging chip 111).
  • the control parameter may be a parameter in image processing.
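  • (Illustration, not part of the patent text.) A minimal sketch of a per-block control structure holding an independent subset of the control parameters listed above; all names and values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class BlockControl:
    """Control parameters that can differ per block (illustrative subset)."""
    frame_rate: float       # frames per second
    gain: float             # analog gain (ISO-equivalent)
    accumulation_ms: float  # charge accumulation time in milliseconds
    thinning_rate: int      # read every n-th row/column (1 = no thinning)

# Imaging conditions keyed by block position (row, col) on the sensor.
block_conditions: dict[tuple[int, int], BlockControl] = {}

def set_block_condition(row: int, col: int, ctrl: BlockControl) -> None:
    block_conditions[(row, col)] = ctrl

# Example: a bright-subject block uses short accumulation and low gain,
# a dark-subject block uses longer accumulation and higher gain.
set_block_condition(0, 0, BlockControl(frame_rate=60, gain=1.0, accumulation_ms=4.0, thinning_rate=1))
set_block_condition(0, 1, BlockControl(frame_rate=60, gain=4.0, accumulation_ms=16.0, thinning_rate=1))
```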
  • FIG. 4 is a diagram for explaining a circuit in the unit region 131.
  • one unit region 131 is formed by four adjacent pixels of 2 pixels ⁇ 2 pixels.
  • the number of pixels included in the unit region 131 is not limited to this, and may be 1000 pixels or more, or may be a minimum of 1 pixel.
  • the two-dimensional position of the unit area 131 is indicated by reference signs A to D.
  • the reset transistor (RST) of the pixel included in the unit region 131 is configured to be turned on and off individually for each pixel.
  • a reset wiring 300 for turning on / off the reset transistor of the pixel A is provided, and a reset wiring 310 for turning on / off the reset transistor of the pixel B is provided separately from the reset wiring 300.
  • a reset line 320 for turning on and off the reset transistor of the pixel C is provided separately from the reset lines 300 and 310.
  • a dedicated reset wiring 330 for turning on and off the reset transistor is also provided for the other pixels D.
  • the pixel transfer transistor (TX) included in the unit region 131 is also configured to be turned on and off individually for each pixel.
  • a transfer wiring 302 for turning on / off the transfer transistor of the pixel A, a transfer wiring 312 for turning on / off the transfer transistor of the pixel B, and a transfer wiring 322 for turning on / off the transfer transistor of the pixel C are separately provided.
  • a dedicated transfer wiring 332 for turning on / off the transfer transistor is provided for the other pixels D.
  • the pixel selection transistor (SEL) included in the unit region 131 is also configured to be turned on and off individually for each pixel.
  • a selection wiring 306 for turning on / off the selection transistor of the pixel A, a selection wiring 316 for turning on / off the selection transistor of the pixel B, and a selection wiring 326 for turning on / off the selection transistor of the pixel C are separately provided.
  • a dedicated selection wiring 336 for turning on and off the selection transistor is provided for the other pixels D.
  • the power supply wiring 304 is commonly connected from the pixel A to the pixel D included in the unit region 131.
  • the output wiring 308 is commonly connected to the pixel D from the pixel A included in the unit region 131.
  • the power supply wiring 304 is commonly connected between a plurality of unit regions, but the output wiring 308 is provided for each unit region 131 individually.
  • the load current source 309 supplies current to the output wiring 308.
  • the load current source 309 may be provided on the imaging chip 111 side or may be provided on the signal processing chip 112 side.
  • with this configuration, charge accumulation, including the charge accumulation start time, the accumulation end time, and the transfer timing, can be controlled for each of the pixels A to D included in the unit region 131.
  • the photoelectric conversion signals of the pixels A to D can be output via the common output wiring 308.
  • a so-called rolling shutter system may be employed, in which charge accumulation is controlled in a regular order with respect to rows and columns for the pixels A to D included in the unit region 131.
  • in this case, photoelectric conversion signals are output in the order "ABCD" in the example of FIG. 4.
  • the charge accumulation time can be controlled for each unit region 131.
  • photoelectric conversion signals with different frame rates between the unit regions 131 can be output.
  • while the unit regions 131 included in some blocks of the imaging chip 111 perform charge accumulation (imaging), the unit regions 131 included in other blocks can be paused, so that imaging and output of photoelectric conversion signals are performed only in those blocks.
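  • (Illustration, not part of the patent text.) A rough simulation of the idea that unit regions in different blocks can accumulate charge under different settings while other blocks are paused; all numbers are arbitrary:

```python
import numpy as np

# Illustrative only: each block of unit regions accumulates charge with its
# own exposure time, and some blocks can be paused entirely.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=(4, 4))          # brightness seen by each unit region

# Two interleaved accumulation-time settings across the blocks.
exposure_ms = np.where(np.indices((4, 4)).sum(axis=0) % 2 == 0, 4.0, 16.0)
active = np.ones((4, 4), dtype=bool)
active[:, 3] = False                                 # pause the unit regions in one block

signal = scene * exposure_ms * active                # accumulated charge (paused regions read 0)
print(np.round(signal, 2))
```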
  • FIG. 5 is a block diagram illustrating a functional configuration of the image sensor 100 corresponding to the circuit illustrated in FIG.
  • the multiplexer 411 sequentially selects the four PDs 104 that form the unit region 131 and outputs each pixel signal to the output wiring 308 provided corresponding to the unit region 131.
  • the multiplexer 411 is formed in the imaging chip 111 together with the PD 104.
  • the pixel signals output through the multiplexer 411 are subjected to correlated double sampling (CDS) and analog/digital (A/D) conversion by a signal processing circuit 412 formed in the signal processing chip 112.
  • the A/D-converted pixel signals are passed to the demultiplexer 413 and stored in the pixel memories 414 corresponding to the respective pixels.
  • the demultiplexer 413 and the pixel memory 414 are formed in the memory chip 113.
  • the arithmetic circuit 415 formed in the memory chip 113 processes the pixel signal stored in the pixel memory 414 and passes it to the subsequent image processing unit.
  • the arithmetic circuit 415 may be provided in the signal processing chip 112. Note that FIG. 5 shows the connections for one unit region 131, but in practice these exist for each unit region 131 and operate in parallel. However, the arithmetic circuit 415 need not exist for each unit region 131; for example, a single arithmetic circuit 415 may perform sequential processing while referring in turn to the values of the pixel memories 414 corresponding to the respective unit regions 131.
  • the arithmetic circuit 415 may be configured to include functions of a control unit, an image processing unit, and the like at the subsequent stage.
  • as described above, the output wiring 308 is provided for each unit region 131. Since the imaging element 100 stacks the imaging chip 111, the signal processing chip 112, and the memory chip 113, using inter-chip electrical connections via the connection units 109 for the output wiring 308 allows the wiring to be routed without enlarging each chip in the surface direction.
  • an imaging condition can be set for each of a plurality of blocks in the imaging device 32a.
  • the imaging control unit 34c of the control unit 34 associates the plurality of regions with these blocks and causes imaging to be performed under the imaging conditions set for each block.
  • the number of pixels constituting the block may be singular or plural.
  • the camera 1 repeats imaging at a predetermined frame rate (for example, 60 fps) when capturing a moving image to be recorded.
  • the monitor moving image is a moving image that is captured before the recording button is operated or before the shutter button is operated.
  • a moving image to be recorded when the recording button is operated is referred to as a recording image
  • a monitoring moving image is referred to as a monitor image.
  • FIG. 6A is a diagram illustrating the arrangement of the first imaging region B1 and the second imaging region B2 set on the imaging surface of the imaging element 32a when a recording image or a monitor image is captured.
  • the first imaging region B1 is composed of blocks of even rows in odd columns of blocks and blocks of odd rows in even columns of blocks.
  • the second imaging region B2 is composed of even-numbered blocks in even-numbered columns of blocks and odd-numbered blocks in odd-numbered columns of blocks.
  • the imaging surface of the imaging element 32a is divided into a checkered pattern by a plurality of blocks belonging to the first imaging region B1 and a plurality of blocks belonging to the second imaging region B2.
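  • (Illustration, not part of the patent text.) A minimal numpy sketch of dividing the blocks into the checkered first imaging region B1 and second imaging region B2; which parity is assigned to B1 is arbitrary here:

```python
import numpy as np

def checkered_regions(block_rows: int, block_cols: int) -> np.ndarray:
    """Return a block map: 1 marks the first imaging region B1, 2 marks B2.

    Blocks are assigned by the parity of (row + column), producing the
    checkered pattern described for FIG. 6A (parity choice is arbitrary).
    """
    r, c = np.indices((block_rows, block_cols))
    return np.where((r + c) % 2 == 1, 1, 2)

print(checkered_regions(4, 6))
```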
  • FIG. 7A and FIG. 7B are diagrams illustrating the recording image 51 and the detection image 52.
  • the control unit 34 generates the recording image 51 based on the photoelectric conversion signal read from the first imaging region B1 of the imaging element 32a that has captured one frame.
  • the control unit 34 generates the detection image 52 based on the photoelectric conversion signal read from the second imaging region B2 of the imaging element 32a.
  • the photoelectric conversion signal is also referred to as image data.
  • An image used for subject detection, focus detection, imaging condition setting, and image generation is referred to as a detection image 52.
  • the recording image 51 is a moving image to be recorded.
  • the recording image 51 is also used to generate a monitor image to be displayed on the display unit 35.
  • the detection image 52 is used for information acquisition for subject detection, information acquisition for focus detection, information acquisition for imaging condition setting, and information acquisition for image generation.
  • based on the image data of a region of interest (described later) in the detection image 52, the control unit 34 causes the object detection unit 34a to perform subject element detection, causes the lens movement control unit 34d to perform focus detection processing, causes the setting unit 34b to perform exposure calculation processing, and causes the image processing unit 33 to perform image generation processing.
  • the control unit 34 sets the density of image data read from the first imaging region B1 of the imaging element 32a for the recording image 51 to a value necessary for the recording image 51.
  • for example, image data may be read out from a smaller number of pixels than the number of pixels included in the first imaging region B1.
  • control unit 34 sets the density of image data read from the second imaging region B2 of the imaging element 32a for the detection image 52 to a value necessary for the above-described information acquisition.
  • the image data may be read out from a smaller number of pixels than the pixels included in the second imaging region B2.
  • the control unit 34 can vary the number of readout signals per block in the first imaging region B1 of the imaging device 32a and the second imaging region B2 of the imaging device 32a. For example, the density of the image data read per predetermined number of pixels may be different between the first imaging area B1 and the second imaging area B2 of the imaging element 32a. When the number of pixels constituting the block is 1, the number of readout signals per block is zero or one.
  • the display unit 35 of the camera 1 has a display resolution lower than the number of pixels of the imaging element 32a.
  • the control unit 34 thins out the image data of the recording image 51 at every predetermined number of data items, or adds together every predetermined number of data items, thereby generating a smaller amount of image data corresponding to the display resolution of the display unit 35.
  • the above addition processing for each predetermined number of data may be performed by the arithmetic circuit 415 provided in the signal processing chip 112 or the memory chip 113, for example.
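  • (Illustration, not part of the patent text.) A minimal sketch of the "addition every predetermined number of data" idea: averaging each small block of pixels to produce image data matching a lower display resolution (single channel, simplified):

```python
import numpy as np

def downsample_by_addition(image: np.ndarray, factor: int) -> np.ndarray:
    """Reduce resolution by averaging each factor x factor block of pixels."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor            # crop to a multiple of factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

full = np.arange(16, dtype=float).reshape(4, 4)      # stand-in for recording image data
print(downsample_by_addition(full, 2))               # each 2x2 block averaged -> 2x2 output
```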
  • control unit 34 generates the recording image 51 and the detection image 52 based on the image data read from the imaging element 32a that has captured one frame.
  • the recording image 51 and the detection image 52 are captured at the same angle of view and include a common subject image. Acquisition of the recording image 51 and acquisition of the detection image 52 can be performed in parallel as shown in FIG.
  • first imaging area B1 and the second imaging area B2 in the imaging area may be divided as shown in FIG. 6B or FIG.
  • the first imaging region B1 is configured by even columns of blocks.
  • the second imaging region B2 is composed of odd-numbered columns of blocks.
  • the first imaging region B1 is configured by odd-numbered rows of blocks.
  • the second imaging region B2 is configured by even rows of blocks.
  • in these cases as well, the control unit 34 generates the recording image 51 based on the image data read from the first imaging region B1 of the imaging element 32a that has captured one frame.
  • the control unit 34 generates the detection image 52 based on the image data read from the second imaging region B2 of the imaging element 32a.
  • the recording image 51 and the detection image 52 are captured at the same angle of view and include a common subject image. Acquisition of the recording image 51 and acquisition of the detection image 52 can be performed in parallel as shown in FIG.
  • the above-described recording image 51 may be used for focus detection processing or the like.
  • the monitor image may be transmitted from the camera 1 to a monitor outside the camera 1, and the monitor image may be displayed on the external monitor.
  • the imaging condition set in the first imaging area B1 for capturing the recording image 51 is referred to as the first imaging condition, and the imaging condition set in the second imaging area B2 for capturing the detection image 52 is referred to as the second imaging condition.
  • the control unit 34 may set the first imaging condition and the second imaging condition to the same condition or different conditions.
  • the control unit 34 sets the first imaging condition set in the first imaging area B1 to a condition suitable for the recording image 51. At this time, the same first imaging condition is set uniformly throughout the first imaging region B1.
  • the control unit 34 sets the second imaging condition to be set in the second imaging area B2 to a condition suitable for the information acquisition or the like.
  • the second imaging condition may be set uniformly to the same condition over the entire second imaging area B2, or different second imaging conditions may be set for each area within the second imaging area B2.
  • for example, the control unit 34 may set, as the second imaging condition for each area within the second imaging region B2, a condition suitable for subject detection, a condition suitable for focus detection, a condition suitable for imaging condition setting, or a condition suitable for image generation.
  • control unit 34 may change the second imaging condition set in the second imaging area B2 for each frame.
  • for example, the second imaging condition set in the second imaging region B2 for the first frame of the detection image 52 is a condition suitable for subject detection, the second imaging condition set for the second frame is a condition suitable for focus detection, the second imaging condition set for the third frame is a condition suitable for imaging condition setting, and the second imaging condition set for the fourth frame is a condition suitable for image generation. In these cases, the same second imaging condition may be set uniformly over the entire second imaging region B2 in each frame, or different second imaging conditions may be set for each area.
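  • (Illustration, not part of the patent text.) A minimal sketch of cycling the purpose of the second imaging condition frame by frame, as in the four-frame example above; the preset values are hypothetical:

```python
DETECTION_PURPOSES = ["subject_detection", "focus_detection",
                      "condition_setting", "image_generation"]

PRESETS = {
    "subject_detection": {"gain": 8.0, "accumulation_ms": 8.0},
    "focus_detection":   {"gain": 4.0, "accumulation_ms": 4.0},
    "condition_setting": {"gain": 2.0, "accumulation_ms": 8.0},
    "image_generation":  {"gain": 1.0, "accumulation_ms": 16.0},
}

def second_condition_for_frame(frame_index: int) -> dict:
    """Pick a second imaging condition by cycling purposes frame by frame."""
    purpose = DETECTION_PURPOSES[frame_index % len(DETECTION_PURPOSES)]
    return {"frame": frame_index, "purpose": purpose, **PRESETS[purpose]}

for i in range(4):
    print(second_condition_for_frame(i))
```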
  • the area ratio between the first imaging region B1 and the second imaging region B2 may be different.
  • based on a user operation or a determination by the control unit 34, the control unit 34 may set the ratio of the imaging surface occupied by the first imaging region B1 higher than that occupied by the second imaging region B2, set the ratios occupied by the first imaging region B1 and the second imaging region B2 equal as illustrated in FIGS. 6A to 6C, or set the ratio occupied by the first imaging region B1 lower than that occupied by the second imaging region B2.
  • the control unit 34 can also set the operation rate of the blocks included in the first imaging region B1 and the operation rate of the blocks included in the second imaging region B2 independently.
  • for example, in the arrangements illustrated in FIGS. 6A to 6C, imaging may be performed with 80% of the blocks included in the first imaging area B1 and with 80% of the blocks included in the second imaging area B2.
  • for example, by driving every other block, half of the blocks are driven and driving of the remaining half of the blocks is stopped.
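  • (Illustration, not part of the patent text.) A minimal sketch of an operation-rate setting that drives only a fraction of the blocks, e.g. every other block at 50% or 8 of 10 blocks at 80%:

```python
import numpy as np

def active_block_mask(num_blocks: int, operation_rate: float) -> np.ndarray:
    """Return a boolean mask of which blocks are driven (illustrative only)."""
    active = np.zeros(num_blocks, dtype=bool)
    step = 1.0 / operation_rate
    # Mark blocks at (approximately) evenly spaced positions as driven.
    active[np.floor(np.arange(0, num_blocks, step)).astype(int)] = True
    return active

print(active_block_mask(10, 0.5))   # every other block driven
print(active_block_mask(10, 0.8))   # 8 of 10 blocks driven
```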
  • FIG. 8 is a diagram illustrating a subject area on the imaging screen of the camera 1.
  • the subject area includes a person 61, a car 62, a bag 63, a mountain 64, a cloud 65, and a cloud 66.
  • the person 61 holds the bag 63 with both hands.
  • the automobile 62 stops at the right rear side of the person 61.
  • a recording image 51 as shown in FIG. 7A is obtained from the image data read from the first imaging area B1.
  • a detection image 52 as shown in FIG. 7B is obtained from the image data read from the second imaging region B2.
  • imaging conditions can be set for each block corresponding to each subject area: for example, imaging conditions are set individually for the blocks included in the area of the person 61, the blocks included in the area of the car 62, the blocks included in the area of the bag 63, the blocks included in the area of the mountain 64, the blocks included in the area of the cloud 65, and the blocks included in the area of the cloud 66. In order to set the imaging conditions appropriately for each of these blocks, it is necessary to capture the above-described detection image 52 under appropriate exposure conditions so that an image without overexposure or underexposure is obtained.
  • whiteout (blown-out highlights) means that gradation is lost in high-luminance portions of the image due to overexposure, and blackout (blocked-up shadows) means that gradation is lost in low-luminance portions of the image due to underexposure. In a detection image 52 in which whiteout or blackout occurs, the outline (edge) of each subject area may not appear, making it difficult to detect subject elements based on the detection image 52.
  • in the present embodiment, by appropriately setting, for the second imaging region B2 of the imaging element 32a, the second imaging condition for acquiring the detection image 52, subject detection, focus detection, imaging condition setting, and image generation based on the detection image 52 can be performed appropriately.
  • the second imaging condition is set as follows.
  • the control unit 34 makes the second gain as the second imaging condition for obtaining the detection image 52 higher than the first gain as the first imaging condition for obtaining the recording image 51.
  • as a result, even when the recording image 51 is a dark image, the object detection unit 34a can accurately detect the subject based on the brighter detection image 52. If the subject can be detected accurately, the image can be appropriately divided for each subject.
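  • (Illustration, not part of the patent text.) A toy heuristic, with assumed values, for choosing a second gain at least as high as the first gain so that the detection image 52 stays bright enough for subject detection in a dark scene:

```python
def choose_detection_gain(first_gain: float, scene_luminance: float,
                          target_luminance: float = 0.5, max_gain: float = 16.0) -> float:
    """Return a second gain >= first gain that lifts a dark scene toward a
    target mean luminance for the detection image (illustrative heuristic)."""
    if scene_luminance <= 0:
        return max_gain
    needed = target_luminance / scene_luminance
    return min(max_gain, max(first_gain, needed))

# Dark scene: recording gain 1.0, but the detection image needs ~5x gain.
print(choose_detection_gain(first_gain=1.0, scene_luminance=0.1))
```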
  • the control unit 34 sets the first imaging area B1 and the second imaging area B2 in the imaging element 32a.
  • the first imaging region B1 is a region where the recording image 51 is captured.
  • the second imaging region B2 is a region where a detection image 52 for detection is captured.
  • the control unit 34 sets a first gain for recording in the first imaging region B1 of the imaging device 32a, and a second gain higher than the first gain in the second imaging region B2 of the imaging device 32a.
  • having set the gains, the control unit 34 causes the second imaging region B2 of the imaging element 32a to perform imaging for the detection image 52, and generates the detection image 52 based on the image data read from the second imaging region B2 of the imaging element 32a.
  • the control unit 34 causes the object detection unit 34 a to detect the subject element based on the detection image 52.
  • the control unit 34 causes the recording image 51 to be captured in the first imaging region B1 of the imaging element 32a.
  • at this point, the recording image 51 is not recorded by the recording unit 37; the recording image 51 is captured in order to generate a monitor image based on the recording image 51.
  • the setting unit 34b divides the recording image 51 based on the subject element detected by the object detection unit 34a.
  • for example, the recording image 51 acquired by the imaging unit 32 is divided into an area of the person 61, an area of the automobile 62, an area of the bag 63, an area of the mountain 64, an area of the cloud 65, an area of the cloud 66, and other areas.
  • the control unit 34 causes the display unit 35 to display the setting screen illustrated in FIG. 9. In FIG. 9, a monitor image 60a is displayed on the display unit 35, and an imaging condition setting screen 70 is displayed on the right side of the monitor image 60a.
  • the setting screen 70 lists frame rate, shutter speed (TV), and gain (ISO sensitivity) in order from the top as an example of setting items for imaging conditions.
  • the frame rate is the number of frames of a moving image captured by the camera 1 per second.
  • the shutter speed corresponds to the exposure time.
  • the gain corresponds to the ISO sensitivity.
  • setting items of the imaging conditions other than those exemplified in FIG. 9 may be added as appropriate. When all the setting items do not fit on the setting screen 70, the other setting items may be displayed by scrolling the setting items up and down.
  • the control unit 34 can set an area selected by a user operation, among the areas divided by the setting unit 34b, as the target for setting (changing) the imaging condition. For example, in the camera 1, which supports touch operation, the user touches the display position of the subject whose imaging condition is to be set (changed) on the display surface of the display unit 35 on which the monitor image 60a is displayed. For example, when the display position of the person 61 is touched, the control unit 34 sets the area corresponding to the person 61 in the monitor image 60a as the target area for setting (changing) the imaging condition, and highlights the outline of that area.
  • an area to be displayed with an emphasized outline indicates an area to be set (changed) for imaging conditions.
  • the highlighted display is, for example, a thick display, a bright display, a display with a different color, a broken line display, a blinking display, or the like.
  • the monitor image 60 a in which the outline of the region corresponding to the person 61 is emphasized is displayed.
  • the highlighted area is a target for setting (changing) the imaging condition.
  • the control unit 34 displays the current shutter speed setting value for the highlighted area (person 61) on the screen (reference numeral 68).
  • the camera 1 is described on the premise of a touch operation.
  • the imaging condition may be set (changed) by operating a button or the like constituting the operation member 36.
  • the setting unit 34b increases or decreases the shutter speed display 68 from the current setting value in accordance with the touch operation, and sends an instruction to the imaging unit 32 (FIG. 1) to change the imaging condition of the unit regions 131 (FIG. 3) of the imaging element 32a corresponding to the displayed area (person 61) in accordance with the touch operation.
  • the decision icon 72 is an operation icon for confirming the set imaging condition.
  • the setting unit 34b performs the setting (change) of the frame rate and gain (ISO) in the same manner as the setting (change) of the shutter speed (TV).
  • the setting unit 34b may set the imaging conditions based on the determination of the control unit 34 without being based on a user operation. For example, when an overexposure or underexposure occurs in an area including a subject having the maximum luminance or the minimum luminance in the image, the setting unit 34b cancels the overexposure or underexposure based on the determination of the control unit 34. Imaging conditions may be set. For the area that is not highlighted (area other than the person 61), the set imaging conditions are maintained.
  • the control unit 34 may display the entire target area brightly, increase the contrast of the entire target area, or display the entire target area blinking.
  • the target area may be surrounded by a frame.
  • the display of the frame surrounding the target area may be a double frame or a single frame, and the display mode such as the line type, color, and brightness of the surrounding frame may be appropriately changed.
  • the control unit 34 may display an indication of an area for which an imaging condition is set, such as an arrow, in the vicinity of the target area.
  • the control unit 34 may darkly display a region other than the target region for which the imaging condition is set (changed), or may display a low contrast other than the target region.
  • when a recording button (not shown) constituting the operation member 36, or a display instructing recording start (for example, a release icon 74 in FIG. 9), is operated, the control unit 34 causes the recording unit 37 to start recording the recording image 51.
  • as the imaging conditions of the recording image 51, different imaging conditions may be applied to each of the divided areas (person 61, car 62, bag 63, mountain 64, cloud 65, cloud 66), or a common imaging condition may be applied to all the areas.
  • the image processing unit 33 performs image processing on the image data acquired by the imaging unit 32. Image processing can also be performed under different image processing conditions for each of the divided areas.
  • the control unit 34 sets the first condition to the sixth condition as the imaging condition for each area divided as described above.
  • the area of the person 61 for which the first condition is set is referred to as the first region 61,
  • the area of the automobile 62 for which the second condition is set is referred to as the second region 62,
  • the area of the bag 63 for which the third condition is set is referred to as the third region 63,
  • the area of the mountain 64 for which the fourth condition is set is referred to as the fourth region 64,
  • the area of the cloud 65 for which the fifth condition is set is referred to as the fifth region 65, and
  • the area of the cloud 66 for which the sixth condition is set is referred to as the sixth region 66.
  • the image data of the recording image 51 set and acquired in this way is recorded by the recording unit 37.
  • when the recording button (not shown) constituting the operation member 36 is operated again, or a display instructing the end of recording is operated, the control unit 34 ends the recording of the recording image 51 by the recording unit 37.
  • FIG. 10 is a diagram illustrating the imaging timing of the recording image 51 (Dv1, Dv2, Dv3, ...), the imaging timing of the detection image 52 (Di, Dii, Diii, Div, ...), and the display timing of the monitor images (LV1, LV2, LV3, ...).
  • the imaging control unit 34c causes the imaging unit 32 to capture the recording image 51 and the detection image 52 under the imaging conditions set by the setting unit 34b.
  • the imaging control unit 34c performs imaging for the recording image 51 in the first imaging area B1 of the imaging element 32a.
  • the image sensor 32a sequentially captures the first frame recording image Dv1, the second frame recording image Dv2, the third frame recording image Dv3, and so on.
  • the imaging of the recording image 51 is repeated by the first imaging area B1 of the imaging element 32a.
  • the imaging control unit 34c causes the second imaging region B2 of the imaging element 32a to perform imaging for the detection image 52 four times during one frame period of imaging of the recording image 51.
  • the image sensor 32a sequentially captures the detection image Di for the first frame, the detection image Dii for the second frame, the detection image Diii for the third frame, and the detection image Div for the fourth frame.
  • four frames of detection images Di, Dii, Diii, and Div are captured as the detection image 52 for one frame period of the recording image 51.
  • the detection image 52 of 4 frames is captured by the second imaging region B2 in parallel with the capturing of the recording image 51 of 1 frame by the first imaging region B1.
  • the detection image Di is, for example, for subject detection
  • the detection image Dii is, for example, for focus detection
  • the detection image Diii is used, for example, for imaging condition setting, and the detection image Div is used, for example, for image generation.
  • the control unit 34 causes the display unit 35 to display the monitor image LV1 of the first frame based on the recording image Dv1 of the first frame. Then, the imaging control unit 34c controls the imaging unit 32 so that the subject position detected based on the detection images Di, Dii, Diii, and Div captured in parallel with the recording image Dv1 of the first frame, and the determined imaging conditions, are reflected in the imaging of the recording image Dv2 of the second frame.
  • the control unit 34 causes the display unit 35 to display the monitor image LV2 of the second frame based on the recording image Dv2 of the second frame. Then, the imaging control unit 34c controls the imaging unit 32 so that the subject position detected based on the detection images Di, Dii, Diii, and Div captured in parallel with the recording image Dv2 of the second frame, and the determined imaging conditions, are reflected in the imaging of the recording image Dv3 of the third frame.
  • the control unit 34 repeats the same processing thereafter.
  • the control unit 34 causes the recording unit 37 to record the image data of the recording image 51 (recording image Dv3 and later) captured after that time.
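  • (Illustration, not part of the patent text.) A simplified, self-contained sketch of the timing described above: one recording frame Dv(n) per iteration in region B1, four detection frames Di to Div in region B2, with the analysis results reflected in the next recording frame; all function names and returned values are stand-ins:

```python
DETECTION_FRAMES = [("Di", "subject_detection"), ("Dii", "focus_detection"),
                    ("Diii", "condition_setting"), ("Div", "image_generation")]

def capture_b1(frame: int, settings: dict) -> str:
    return f"Dv{frame} captured in B1 with settings {settings}"

def capture_b2(frame: int, label: str, purpose: str) -> str:
    return f"{label} captured in B2 during Dv{frame} ({purpose})"

def analyze(detections: list[str]) -> dict:
    # Stand-in for subject detection, focus detection and exposure calculation;
    # the returned values are placeholders, not a real algorithm.
    return {"subject_pos": (120, 80), "focus": "near", "exposure": "auto"}

def capture_loop(num_frames: int) -> None:
    """One recording frame Dv(n) per iteration, four detection frames captured
    in parallel; the analysis of frame n is reflected in frame n + 1."""
    settings: dict = {}
    for n in range(1, num_frames + 1):
        print(capture_b1(n, settings))
        detections = [capture_b2(n, label, purpose) for label, purpose in DETECTION_FRAMES]
        print(f"LV{n} displayed from Dv{n}")
        settings = analyze(detections)

capture_loop(3)
```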
  • the imaging control unit 34c may capture the detection image Di, the detection image Dii, the detection image Diii, and the detection image Div in the second imaging region B2 of the imaging element 32a before starting the imaging of the recording image Dv1 of the first frame in the first imaging region B1 of the imaging element 32a. For example, after the user turns on the camera 1, the detection images Di, Dii, Diii, and Div are captured in the second imaging region B2 before the imaging of the first frame recording image Dv1 in the first imaging region B1 is started.
  • the imaging control unit 34c then controls the imaging unit 32 so that the subject position detected based on the detection images Di, Dii, Diii, and Div, and the determined imaging conditions, are reflected in the imaging of the recording image Dv1 of the first frame.
  • instead of automatically starting to capture the detection images Di, Dii, Diii, and Div, the imaging control unit 34c may wait for an operation related to imaging by the user and then start capturing the detection images Di, Dii, Diii, and Div in the second imaging region B2 of the imaging element 32a.
  • operations related to imaging include, for example, an operation for changing the imaging magnification, an operation for changing the aperture, and an operation related to focus adjustment (for example, selection of a focus point).
  • when such an operation is performed, the imaging control unit 34c captures the detection images Di, Dii, Diii, and Div in the second imaging region B2 of the imaging element 32a. In this way, the detection images Di, Dii, Diii, and Div can be generated before the imaging of the recording image Dv1 of the first frame is performed.
  • when the user performs an operation related to imaging from a menu screen, the imaging control unit 34c may capture the detection images Di, Dii, Diii, and Div in the second imaging region B2 of the imaging element 32a. This is because there is a high possibility that new settings will be made when an operation related to imaging is performed from the menu screen. In this case, the imaging control unit 34c repeatedly captures the detection images Di, Dii, Diii, and Div in the second imaging region B2 of the imaging element 32a during the period in which the user is operating from the menu screen.
  • when the detection images Di, Dii, Diii, and Div are captured before the imaging of the first frame recording image Dv1 is started, the recording image Dv1 can be captured reflecting the subject position detected based on the detection images Di, Dii, Diii, and Div and the determined imaging conditions, and the monitor image LV1 can be displayed based on the recording image Dv1.
  • when the user is not performing an operation related to imaging, the imaging control unit 34c need not capture the detection images Di, Dii, Diii, and Div in the second imaging region B2 of the imaging element 32a. This avoids wastefully capturing the detection images Di, Dii, Diii, and Div when there is little possibility that new imaging settings will be made.
  • when a dedicated button is operated, the imaging control unit 34c may capture the detection images Di, Dii, Diii, and Div in the second imaging region B2 of the imaging element 32a. For example, the imaging control unit 34c may capture the detection images Di, Dii, Diii, and Div in the second imaging region B2 of the imaging element 32a at every predetermined period while the dedicated button is being operated, or may capture them in the second imaging region B2 of the imaging element 32a when the operation of the dedicated button ends. In this way, the detection images Di, Dii, Diii, and Div can be captured at a timing desired by the user.
  • <Image blur correction> In the present embodiment, the recording image 51 read out from the first imaging region B1 of the imaging element 32a is subjected to trimming processing based on the shake of the camera 1 caused by camera shake, thereby reducing image blur in the recorded recording image 51. Such suppression of image blur is also referred to as image blur correction. Details of the image blur correction will be described below.
  • image blur generated in the camera 1 due to camera shake is divided into image blur (also referred to as angle blur) accompanying the rotational movement of the camera 1 and image blur (also referred to as translation blur) accompanying the translational movement of the camera 1.
  • the control unit 34 calculates image blur due to rotational movement of the camera 1 and image blur due to translational movement of the camera 1.
  • FIG. 11 is a diagram illustrating the control unit 34 that functions as a shake correction unit.
  • the control unit 34 includes a shake amount calculation unit 34e and a target movement amount calculation unit 34f.
  • the shake amount calculation unit 34e calculates an image shake in the Y-axis direction due to the rotational motion using a detection signal around the axis (Pitch direction) parallel to the X axis (FIG. 3) by the shake sensor 39. Further, the shake amount calculation unit 34e calculates an image shake in the X-axis direction due to the rotational motion using a detection signal around the axis (Yaw direction) parallel to the Y-axis (FIG. 3) by the shake sensor 39.
  • the shake amount calculation unit 34e further calculates an image shake in the X-axis direction due to translational motion using a detection signal in the X-axis direction by the shake sensor 39. Furthermore, the shake amount calculation unit 34e calculates the image shake in the Y-axis direction due to translational motion using the detection signal in the Y-axis direction from the shake sensor 39.
  • The target movement amount calculation unit 34f adds, for each axis, the image blur in the X-axis and Y-axis directions due to the rotational motion and the image blur in the X-axis and Y-axis directions due to the translational motion calculated by the blur amount calculation unit 34e.
  • In this way, the total image blur in the X-axis direction and the Y-axis direction is calculated. For example, when the direction of the image blur due to the rotational motion and the direction of the image blur due to the translational motion calculated by the blur amount calculation unit 34e for the same axis are the same, the image blur increases as a result of the addition, whereas when the directions of the two calculated image blurs differ, the image blur decreases as a result of the addition. In this way, the addition is performed with a positive or negative sign according to the image blur direction of each axis.
  • Then, based on the image blur in the X-axis and Y-axis directions after the addition, the photographing magnification (calculated from the position of the zoom lens of the imaging optical system 31), and the distance from the camera 1 to the subject (calculated from the position of the focus lens of the imaging optical system 31), the target movement amount calculation unit 34f calculates the image blur amount of the subject image on the image plane (the imaging surface of the imaging element 32a).
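The per-axis, signed addition described above can be sketched as follows. This is a minimal illustration, assuming that angle blur scales with the focal length and translation blur scales with the photographing magnification; the function name and conversion factors are illustrative assumptions, not the embodiment's exact formulation.

```python
import numpy as np

def image_plane_blur(pitch_rad, yaw_rad, shift_x_mm, shift_y_mm,
                     focal_length_mm, magnification):
    # Angle blur: rotation about the Y axis (yaw) displaces the image in X,
    # rotation about the X axis (pitch) displaces it in Y.
    angle_blur_x = focal_length_mm * np.tan(yaw_rad)
    angle_blur_y = focal_length_mm * np.tan(pitch_rad)

    # Translation blur: camera shift scaled by the photographing magnification.
    trans_blur_x = magnification * shift_x_mm
    trans_blur_y = magnification * shift_y_mm

    # Signed addition per axis: components in the same direction add up,
    # components in opposite directions partially cancel.
    blur_x = angle_blur_x + trans_blur_x
    blur_y = angle_blur_y + trans_blur_y
    return blur_x, blur_y  # blur of the subject image on the imaging surface
```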
  • The control unit 34 sets a trimming range W1 smaller than the recording image 51 within the recording image 51.
  • the size of the trimming range W1 is, for example, 60% of the size of the recording image 51.
  • The size of the trimming range W1 may be determined according to the average magnitude of the detection signal from the shake sensor 39.
  • the control unit 34 decreases the trimming range W1 as the average value of the detection signal from the shake sensor 39 in the most recent predetermined time increases, and increases the trimming range W1 as the average value of the detection signal from the shake sensor 39 decreases.
  • The control unit 34 performs image blur correction on the recording image 51 by moving the trimming range W1 set as described above within the recording image 51 in the direction opposite to the shake direction of the camera 1. Therefore, the target movement amount calculation unit 34f calculates, as the target movement amount, the movement direction and movement amount of the trimming range W1 necessary for canceling the above-described image blur amount on the image plane. The target movement amount calculation unit 34f outputs the calculated target movement amount to the correction unit 33b.
  • FIG. 12B illustrates a state in which the trimming range W1 is moved in the recording image 51. That is, since the camera 1 is shaken upward from the state of FIG. 12A and the subject image in the recording image 51 is blurred downward, the trimming range W1 is moved downward by the correction unit 33b. By such image blur correction, even if the position of the subject in the recording image 51 is moved due to camera shake, the subject stays at substantially the same position in the trimming range W1.
  • the area hatched in FIGS. 12A and 12B corresponds to the movement allowance of the trimming range W1.
  • As in the image 51-1 and the image 51-2 illustrated in FIGS. 12A and 12B, the correction unit 33b extracts, from the image data of the recording image 51, the image data of the image corresponding to the trimming range W1, and uses the extracted image data as the image data of the recording image 51 after image blur correction.
  • the control unit 34 performs such image blur correction for each frame constituting the recording image 51.
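A minimal sketch of this trimming-based correction per frame, assuming the frame is a NumPy array and the blur amount has already been converted to pixels; the 60% crop ratio follows the description above, while the array layout and clamping details are illustrative assumptions.

```python
import numpy as np

def correct_frame(frame, blur_x_px, blur_y_px, crop_ratio=0.6):
    h, w = frame.shape[:2]
    ch, cw = int(h * crop_ratio), int(w * crop_ratio)

    # Start from a centered window, then move it opposite to the shake.
    top = (h - ch) // 2 - int(round(blur_y_px))
    left = (w - cw) // 2 - int(round(blur_x_px))

    # The window cannot leave the recording image (cf. FIG. 13(c)/(d)).
    top = min(max(top, 0), h - ch)
    left = min(max(left, 0), w - cw)

    return frame[top:top + ch, left:left + cw]
```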
  • The control unit 34 performs image blur correction similar to that for the recording image 51 not only on the recording image 51 but also on the detection image 52.
  • the control unit 34 also moves a later-described attention area set for the detection image 52 in the detection image 52. This will be described in detail below with reference to FIG.
  • FIG. 13A is a diagram explaining image blur correction for the recording image 51, and FIG. 13B is a diagram explaining image blur correction for the detection image 52 corresponding to the recording image 51 of FIG. 13A.
  • As with the corresponding recording image 51, the control unit 34 sets a trimming range W2 smaller than the detection image 52 within the detection image 52.
  • the size of the trimming range W2 is set to 60% of the size of the detection image 52 as in the case of the recording image 51.
  • The size of the trimming range W2 may be determined according to the average magnitude of the detection signal from the shake sensor 39, as in the case of the recording image 51.
  • The control unit 34 performs image blur correction on the detection image 52 by moving the trimming range W2 set as described above within the detection image 52 in the direction opposite to the shake direction of the camera 1.
  • As in the image 52-2 illustrated in FIG. 13B, the correction unit 33b extracts the image data of the image corresponding to the trimming range W2 from the image data of the detection image 52, and uses the extracted image data as the image data of the detection image 52 after image blur correction.
  • the control unit 34 also moves the attention range 61-2 set for the detection image 52 in the direction opposite to the shake direction of the camera 1 together with the trimming range W2 in the detection image 52.
  • The distance between the trimming range W2 before the movement and the trimming range W2 after the movement is the same as the distance between the attention range 61-2 before the movement and the attention range 61-2 after the movement.
  • The distance between the trimming range W2 before the movement and the trimming range W2 after the movement is, for example, the distance between the center of gravity of the trimming range W2 before the movement and the center of gravity of the trimming range W2 after the movement.
  • This distance corresponds to the amount of movement of the trimming range W2. The same applies to the distance between the attention range 61-2 before the movement and the attention range 61-2 after the movement.
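In other words, the attention range follows the trimming range by exactly the same displacement. A minimal sketch under that reading (the rectangle representation and names are illustrative):

```python
# Rectangles are (top, left, height, width); moving both by the same offset
# keeps the distance between their centroids unchanged.
def move_rect(rect, dx_px, dy_px):
    top, left, h, w = rect
    return (top + dy_px, left + dx_px, h, w)

# usage: both windows move opposite to the camera shake by the same amount
# trimming_w2 = move_rect(trimming_w2, -blur_x_px, -blur_y_px)
# attention_61_2 = move_rect(attention_61_2, -blur_x_px, -blur_y_px)
```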
  • The attention range 61-2 corresponds to a region in the detection image 52 where the object detection unit 34a detects a subject element, a region in the detection image 52 where the lens movement control unit 34d performs focus detection processing, a region in the detection image 52 where the setting unit 34b performs the exposure calculation processing, and a region in the detection image 52 referred to by the image processing unit 33 during image generation processing.
  • The shape of the attention range 61-2 may be a square or a rectangle, or may be a circle or an ellipse.
  • The size of the attention range 61-2 may be changed as appropriate according to the size of the region for detecting the subject element, the region for performing the focus detection processing, the region for performing the exposure calculation processing, and the region referred to during the image generation processing.
  • The attention range 61-2 after the movement may partially overlap the attention range 61-2 before the movement, or may not overlap it at all. The attention range 61-2 after the movement may also include the attention range 61-2 before the movement.
  • As the second imaging condition for obtaining the detection image 52, the control unit 34 can set different imaging conditions for the attention range 61-2 and for the other areas in the second imaging region B2.
  • the control unit 34 changes the setting of the second imaging condition as the attention range 61-2 moves. That is, the second imaging condition for the second imaging region B2 is reset according to the position of the attention range 61-2 after movement.
  • FIG. 13C is a diagram for explaining image blur correction for the recording image 51, and illustrates a state in which the trimming range W1 is moved to the lower limit in the recording image 51.
  • FIG. 13D is a diagram explaining image blur correction for the detection image 52, and illustrates a state in which the trimming range W2 has been moved to the lower limit in the detection image 52. Since the trimming range W1 in FIG. 13C moves within the recording image 51, it cannot move below the position shown in FIG. 13C. Similarly, since the trimming range W2 in FIG. 13D moves within the detection image 52, it cannot move below the position shown in FIG. 13D.
  • In this case, as shown in FIG. 13(d), the control unit 34 continues to move the attention range 61-3 within the trimming range W2 in the direction opposite to the shake direction of the camera 1.
  • the moving direction and moving amount of the attention range 61-3 are the target moving amounts output from the target moving amount calculating unit 34f to the correcting unit 33b.
  • When the subject of interest (for example, the head of a person) moves further downward, the control unit 34 moves the attention range 61-3 within the trimming range W2. This is so that the head of the person remains within the attention range 61-3 after the movement.
  • the distance between the trimming range W2 before the movement and the trimming range W2 after the movement is different from the distance between the attention range 61-3 before the movement and the attention range 61-3 after the movement.
  • The distance between the trimming range W2 before the movement and the trimming range W2 after the movement is, for example, the distance between the center of gravity of the trimming range W2 before the movement and the center of gravity of the trimming range W2 after the movement.
  • the distance here corresponds to the amount of movement of the trimming range W2.
  • As a result, the same subject (in this example, the person's head) is present in the region of the detection image 52 in which the object detection unit 34a detects a subject element, the region in which the lens movement control unit 34d performs focus detection processing, the region in which the setting unit 34b performs the exposure calculation processing, and the region referred to by the image processing unit 33 during image generation processing.
  • the detection image 52 imaged as described above and subjected to image blur correction is used for image processing, focus detection processing, subject detection processing, and exposure condition setting processing.
  • the image processing unit 33 (the generation unit 33c) performs image processing using a kernel having a predetermined size centered on the target pixel P (processing target pixel) in the image data of the detection image 52.
  • FIG. 14 is a diagram illustrating a kernel, and corresponds to the attention area 90 in the monitor image 60a of FIG. In FIG. 14, pixels around the target pixel P (eight pixels in this example) included in the target region 90 (for example, 3 × 3 pixels) centered on the target pixel P are set as the reference pixels Pr1 to Pr8.
  • the position of the target pixel P is the target position, and the positions of the reference pixels Pr1 to Pr8 surrounding the target pixel P are reference positions.
  • the pixel defect correction process is one of image processes performed on a captured image.
  • the image pickup element 32a which is a solid-state image pickup element, may produce pixel defects in the manufacturing process or after manufacturing, and output abnormal level image data. Therefore, the generation unit 33c of the image processing unit 33 corrects the image data output from the pixel in which the pixel defect has occurred, thereby making the image data in the pixel position in which the pixel defect has occurred inconspicuous.
  • For example, the generation unit 33c of the image processing unit 33 takes, as the target pixel P (processing target pixel), the pixel located at the position of a pixel defect recorded in advance in a nonvolatile memory (not illustrated) in a one-frame image, and sets the pixels around the target pixel P included in the target region 90 centered on the target pixel P as the reference pixels Pr1 to Pr8.
  • The generation unit 33c of the image processing unit 33 calculates the maximum value and the minimum value of the image data of the reference pixels Pr1 to Pr8, and when the image data output from the target pixel P exceeds this maximum value or falls below this minimum value, performs Max/Min filter processing that replaces the image data output from the target pixel P with the maximum value or the minimum value. Such processing is performed for all pixel defects whose position information is recorded in the nonvolatile memory (not illustrated).
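A minimal sketch of this Max/Min filter processing, assuming the defect positions are available as a list of (row, column) coordinates recorded in advance; the array handling details are illustrative assumptions.

```python
import numpy as np

def correct_pixel_defects(image, defect_positions):
    out = image.copy()
    h, w = image.shape[:2]
    for (y, x) in defect_positions:            # positions stored in advance
        y0, y1 = max(y - 1, 0), min(y + 2, h)
        x0, x1 = max(x - 1, 0), min(x + 2, w)
        neighbourhood = image[y0:y1, x0:x1].astype(np.int64)
        # Exclude the defective target pixel itself from the reference pixels.
        ref = np.delete(neighbourhood.ravel(),
                        (y - y0) * (x1 - x0) + (x - x0))
        # Clamp the target pixel to the min/max of the reference pixels.
        out[y, x] = np.clip(image[y, x], ref.min(), ref.max())
    return out
```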
  • The generation unit 33c of the image processing unit 33 performs the above-described pixel defect correction processing on the image data of the detection image 52, whereby the influence of pixel defects on the subject detection processing, the focus detection processing, and the exposure calculation processing based on the detection image 52 can be prevented.
  • the pixel defect correction processing is not limited to the detection image 52, and may be performed on the recording image 51.
  • the generation unit 33c of the image processing unit 33 performs the above-described pixel defect correction processing on the image data of the recording image 51, thereby preventing the influence of pixel defects in the recording image 51.
  • Color interpolation processing is one type of image processing performed on a captured image. As illustrated in FIG. 3, in the imaging chip 111 of the imaging device 100, the green pixels Gb and Gr, the blue pixel B, and the red pixel R are arranged in a Bayer array. Since each pixel position lacks the image data of the color components different from that of the color filter F arranged there, the generation unit 33c of the image processing unit 33 performs known color interpolation processing to generate the image data of the missing color components with reference to the image data of surrounding pixel positions.
  • The generation unit 33c of the image processing unit 33 performs the color interpolation processing on the image data of the detection image 52, so that the subject detection processing, the focus detection processing, and the exposure calculation processing based on the color-interpolated detection image 52 can be performed appropriately. Note that the color interpolation processing may be performed not only on the detection image 52 but also on the recording image 51.
  • the generation unit 33c of the image processing unit 33 performs the above-described color interpolation processing on the image data of the recording image 51, whereby the recording image 51 subjected to color interpolation can be obtained.
  • the outline enhancement process is one of image processes performed on a captured image.
  • the generation unit 33c of the image processing unit 33 performs, for example, a known linear filter calculation using a kernel of a predetermined size centered on the pixel of interest P (processing target pixel) in an image of one frame.
  • When the kernel size of the sharpening filter, which is an example of the linear filter, is N × N pixels, the position of the target pixel P is the target position, and the positions of the (N² − 1) reference pixels Pr surrounding the target pixel P are the reference positions. The kernel size may instead be N × M pixels.
  • The generation unit 33c of the image processing unit 33 performs, on each horizontal line, filter processing that replaces the image data of the target pixel P with the result of the linear filter calculation, proceeding for example from the upper horizontal line to the lower horizontal line of the frame image while shifting the target pixel from left to right.
  • The generation unit 33c of the image processing unit 33 performs the above-described contour enhancement processing on the image data of the detection image 52, so that the subject detection processing, the focus detection processing, and the exposure calculation processing based on the contour-enhanced detection image 52 can be performed appropriately.
  • the contour enhancement processing is not limited to the detection image 52 but may be performed on the recording image 51.
  • the generation unit 33c of the image processing unit 33 performs the above-described contour enhancement processing on the image data of the recording image 51, whereby the recording image 51 with the contour enhanced can be obtained.
  • the strength of contour emphasis may be different between the case where it is performed on the image data of the detection image 52 and the case where it is performed on the image data of the recording image 51.
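A minimal sketch of such kernel-based contour enhancement; the 3 × 3 sharpening coefficients and the strength blending are illustrative assumptions, since the embodiment only specifies an N × N (or N × M) linear filter whose strength may differ between the detection image 52 and the recording image 51.

```python
import numpy as np
from scipy.ndimage import convolve

SHARPEN_3X3 = np.array([[ 0, -1,  0],
                        [-1,  5, -1],
                        [ 0, -1,  0]], dtype=np.float64)

def enhance_contours(image, strength=1.0):
    # Replace each target pixel by the linear-filter result, then blend so the
    # detection image and recording image can use different strengths.
    sharpened = convolve(image.astype(np.float64), SHARPEN_3X3, mode='nearest')
    out = (1.0 - strength) * image + strength * sharpened
    return np.clip(out, 0, 255).astype(image.dtype)   # assumes 8-bit data
```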
  • Noise reduction processing is one type of image processing performed on a captured image.
  • the generation unit 33c of the image processing unit 33 performs, for example, a known linear filter calculation using a kernel of a predetermined size centered on the pixel of interest P (processing target pixel) in an image of one frame.
  • When the kernel size of the smoothing filter, which is an example of the linear filter, is N × N pixels, the position of the target pixel P is the target position, and the positions of the (N² − 1) reference pixels Pr surrounding the target pixel P are the reference positions. The kernel size may instead be N × M pixels.
  • The generation unit 33c of the image processing unit 33 performs, on each horizontal line, filter processing that replaces the image data of the target pixel P with the result of the linear filter calculation, proceeding for example from the upper horizontal line to the lower horizontal line of the frame image while shifting the target pixel from left to right.
  • The generation unit 33c of the image processing unit 33 performs the above-described noise reduction processing on the image data of the detection image 52, whereby the influence of noise on the subject detection processing, the focus detection processing, and the exposure calculation processing based on the detection image 52 can be prevented.
  • the noise reduction process may be performed not only on the detection image 52 but also on the recording image 51.
  • the generation unit 33c of the image processing unit 33 performs the above-described noise reduction processing on the image data of the recording image 51, whereby the recording image 51 with reduced noise can be obtained.
  • The degree of noise reduction may be different between the case where it is performed on the image data of the detection image 52 and the case where it is performed on the image data of the recording image 51.
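The noise reduction can be sketched in the same way as the contour enhancement above, with a smoothing kernel in place of the sharpening kernel; the 3 × 3 box kernel is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import convolve

SMOOTH_3X3 = np.full((3, 3), 1.0 / 9.0)   # simple averaging (smoothing) kernel

def reduce_noise(image):
    smoothed = convolve(image.astype(np.float64), SMOOTH_3X3, mode='nearest')
    return smoothed.astype(image.dtype)
```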
  • the lens movement control unit 34d of the control unit 34 performs focus detection processing using signal data (image data) corresponding to a predetermined position (focus point) on the imaging screen.
  • The lens movement control unit 34d (generation unit) of the control unit 34 detects the image shift amounts (phase differences) of a plurality of subject images formed by light beams that have passed through different pupil regions of the imaging optical system 31, and thereby calculates the defocus amount of the imaging optical system 31.
  • The lens movement control unit 34d of the control unit 34 adjusts the focus of the imaging optical system 31 by moving the focus lens of the imaging optical system 31 to the position where the defocus amount is zero (or equal to or less than an allowable value), that is, to the in-focus position.
  • FIG. 15 is a diagram illustrating the position of the focus detection pixel on the imaging surface of the imaging device 32a.
  • focus detection pixels are discretely arranged along the X-axis direction (horizontal direction) of the imaging chip 111.
  • fifteen focus detection pixel lines 160 are provided at predetermined intervals.
  • The focus detection pixels constituting the focus detection pixel line 160 output image data used for focus detection, the recording image 51, and the detection image 52.
  • normal imaging pixels are provided at pixel positions other than the focus detection pixel line 160.
  • the imaging pixels output image data of the recording image 51 and the detection image 52.
  • FIG. 16 is an enlarged view of a part of the focus detection pixel line 160 corresponding to the focus point 80A shown in FIG.
  • a red pixel R, a green pixel G (Gb, Gr), a blue pixel B, and a focus detection pixel are illustrated.
  • the red pixel R, the green pixel G (Gb, Gr), and the blue pixel B are arranged according to the rules of the Bayer arrangement described above.
  • the square area illustrated inside (behind) the microlens L and a color filter (not shown) is a photoelectric conversion unit of the imaging pixel.
  • Each imaging pixel receives a light beam passing through the exit pupil of the imaging optical system 31 (FIG. 1). That is, the red pixel R, the green pixel G (Gb, Gr), and the blue pixel B each have a square-shaped mask opening, and light passing through these mask openings reaches the photoelectric conversion unit of the imaging pixel.
  • the shape of the photoelectric conversion part (mask opening part) of the red pixel R, the green pixel G (Gb, Gr), and the blue pixel B is not limited to a quadrangle, and may be, for example, a circle.
  • the focus detection pixel has two photoelectric conversion units S1 and S2 inside (behind) a microlens L and a color filter (not shown).
  • a first photoelectric conversion unit S1 disposed on the left side of the pixel position and a second photoelectric conversion unit S2 disposed on the right side of the pixel position are included.
  • A first light beam passing through a first region of the exit pupil of the imaging optical system 31 (FIG. 1) is incident on the first photoelectric conversion unit S1, and a second light beam passing through a second region of the exit pupil of the imaging optical system 31 (FIG. 1) is incident on the second photoelectric conversion unit S2.
  • a photoelectric conversion unit and a readout circuit 105 that reads out a photoelectric conversion signal from the photoelectric conversion unit are referred to as “pixels”.
  • the read circuit 105 will be described with an example including a transfer transistor (TX), an amplification transistor (AMP), a reset transistor (RST), and a selection transistor (SEL).
  • However, the scope of the readout circuit 105 need not be the same as in this example.
  • the position of the focus detection pixel line 160 in the imaging chip 111 is not limited to the position illustrated in FIG. Also, the number of focus detection pixel lines 160 is not limited to the example of FIG. For example, focus detection pixels may be arranged at all pixel positions.
  • the focus detection pixel line 160 in the imaging chip 111 may be a line in which focus detection pixels are arranged along the Y-axis direction (vertical direction) of the imaging chip 111.
  • An imaging element in which imaging pixels and focus detection pixels are two-dimensionally arranged as shown in FIG. 16 is known, and detailed illustration and description of these pixels are omitted.
  • Each focus detection pixel may instead be configured to receive only one of the first and second light beams.
  • The lens movement control unit 34d of the control unit 34 detects, based on the focus detection photoelectric conversion signals output from the photoelectric conversion units S1 and S2 of the focus detection pixels, the image shift amount (phase difference) between the pair of images formed by the pair of light beams passing through different regions of the imaging optical system 31 (FIG. 1). The defocus amount is then calculated based on the image shift amount (phase difference). Such defocus amount calculation by the pupil-division phase difference method is well known in the field of cameras, and a detailed description thereof is omitted.
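A minimal sketch of the pupil-division phase difference calculation, assuming the S1 and S2 photoelectric conversion signals of one focus detection pixel line are available as one-dimensional arrays; the sum-of-absolute-differences search and the conversion factor k are illustrative simplifications of the well-known method.

```python
import numpy as np

def image_shift(s1, s2, max_shift=10):
    # Try each relative shift and keep the one with the smallest difference.
    errors = []
    for shift in range(-max_shift, max_shift + 1):
        shifted = np.roll(s1, shift).astype(np.int64)
        diff = np.abs(shifted - s2.astype(np.int64))
        errors.append(diff[max_shift:-max_shift].sum())  # ignore wrapped edges
    return int(np.argmin(errors)) - max_shift            # phase difference [px]

def defocus_amount(s1, s2, k=1.0):
    # k converts the image shift amount into a defocus amount and depends on
    # the optical system (illustrative assumption).
    return k * image_shift(s1, s2)
```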
  • FIG. 17 is an enlarged view of the focus point 80A.
  • the position surrounded by the frame 170 corresponds to the focus detection pixel line 160 (FIG. 15).
  • the lens movement control unit 34d of the control unit 34 performs focus detection processing using signal data from the focus detection pixels indicated by the frame 170 in the detection image 52.
  • The lens movement control unit 34d sets the attention ranges 61-2 and 61-3 in the detection image 52 of FIGS. 13B and 13D in correspondence with the range indicated by the frame 170 in FIG. 17.
  • The lens movement control unit 34d can appropriately perform the focus detection processing by using the signal data of the focus detection pixels of the detection image 52 that has undergone the image blur correction processing. Furthermore, the focus detection processing can be performed appropriately by using, for example, the signal data of the focus detection pixels of the detection image 52 for which the gain is set high or for which image processing suitable for detecting the image shift amount (phase difference) has been performed.
  • The focus detection processing using the pupil-division phase difference method has been exemplified above.
  • The camera 1 may instead perform focus detection by a contrast detection method, in which the focus lens of the imaging optical system 31 is moved to the in-focus position based on the contrast of the subject image, as follows.
  • In the contrast detection method, the control unit 34 moves the focus lens of the imaging optical system 31 and, at each position of the focus lens, performs a known focus evaluation value calculation based on the signal data output from the imaging pixels included in the second imaging region B2 of the imaging element 32a corresponding to the focus point. Then, the position of the focus lens that maximizes the focus evaluation value is obtained as the in-focus position. That is, the lens movement control unit 34d of the control unit 34 performs the focus evaluation value calculation using the signal data from the imaging pixels corresponding to the focus point 80A in the detection image 52.
  • The lens movement control unit 34d sets the attention ranges 61-2 and 61-3 in the detection image 52 of FIGS. 13B and 13D in correspondence with the focus point 80A.
  • The lens movement control unit 34d can appropriately perform the focus detection processing by using the signal data of the detection image 52 that has undergone the image blur correction processing. Furthermore, the focus detection processing can be performed appropriately by using, for example, the signal data of the detection image 52 for which the gain is set high or for which image processing suitable for detecting the image shift amount (phase difference) has been performed.
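A minimal sketch of the contrast detection alternative; move_focus_lens() and capture_focus_area() are hypothetical helpers standing in for the focus lens drive and the readout of the pixels corresponding to the focus point, and the sum-of-squared-differences evaluation value is an illustrative choice of the "known focus evaluation value calculation".

```python
import numpy as np

def focus_evaluation_value(patch):
    # Sharp (in-focus) images have large pixel-to-pixel differences.
    diff = np.diff(patch.astype(np.float64), axis=1)
    return float((diff ** 2).sum())

def contrast_af(lens_positions, move_focus_lens, capture_focus_area):
    best_pos, best_val = None, -1.0
    for pos in lens_positions:
        move_focus_lens(pos)
        patch = capture_focus_area()          # pixels at the focus point
        val = focus_evaluation_value(patch)
        if val > best_val:
            best_pos, best_val = pos, val
    move_focus_lens(best_pos)                 # drive to the in-focus position
    return best_pos
```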
  • FIG. 18A is a diagram illustrating a template image representing an object to be detected
  • FIG. 18B is a diagram illustrating a monitor image 60a and a search range 190.
  • the object detection unit 34a of the control unit 34 detects an object (for example, the bag 63 which is one of the subject elements in FIG. 9) from the monitor image 60a.
  • The object detection unit 34a of the control unit 34 may set the range in which the object is detected to the entire range of the monitor image 60a; however, in order to reduce the detection processing, a part of the monitor image 60a may be used as the search range 190.
  • The object detection unit 34a of the control unit 34 sets the search range 190 in the vicinity of the region including the person 61. Note that the region including the person 61 may itself be set as the search range 190.
  • The object detection unit 34a of the control unit 34 performs subject detection processing using the image data constituting the search range 190 in the detection image 52.
  • The object detection unit 34a sets the attention ranges 61-2 and 61-3 in the detection image 52 of FIGS. 13B and 13D in correspondence with the search range 190 in FIG. 18B.
  • the object detection unit 34a can appropriately perform the subject detection process by using the image data of the detection image 52 subjected to the image blur correction process. Further, for example, by using the image data of the detection image 52 in which the gain is set high or image processing suitable for detection of the subject element is used, subject detection processing can be performed appropriately.
  • the subject detection process is not limited to the detection image 52 but may be performed on the recording image 51.
  • Exposure calculation processing is performed using image data constituting the photometric range.
  • the setting unit 34b sets the imaging condition based on the exposure calculation result as follows. For example, when an overexposure or underexposure occurs in an area including a subject having the maximum luminance or the minimum luminance in the image, the setting unit 34b sets an imaging condition so as to eliminate overexposure or underexposure.
  • The setting unit 34b of the control unit 34 performs exposure calculation processing using the image data constituting the photometric range in the detection image 52.
  • the setting unit 34b sets the attention ranges 61-2 and 61-3 in the detection image 52 in FIGS. 13B and 13D in correspondence with the photometric range.
  • the setting unit 34b can appropriately perform the arithmetic processing by using the image data of the detection image 52 subjected to the image blur correction processing. Further, for example, by using the image data of the detection image 52 in which the gain is set to be low, the arithmetic processing can be appropriately performed.
  • The above applies not only to the photometric range used when performing the exposure calculation processing described above, but also to the photometric (colorimetric) range used when determining the white balance adjustment value and to the photometric range used when determining whether the light source that emits auxiliary photographing light needs to emit it.
  • the process for setting the imaging conditions is not limited to the detection image 52 but may be performed on the recording image 51.
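A minimal sketch of an exposure calculation over the photometric range; the clipping thresholds and the adjustment step are illustrative assumptions, the point being that the imaging condition is adjusted so that neither overexposure nor underexposure remains.

```python
import numpy as np

def adjust_exposure(photometric_patch, current_exposure_time,
                    high_clip=250, low_clip=5, step=1.5):
    max_val = int(np.max(photometric_patch))
    min_val = int(np.min(photometric_patch))
    if max_val >= high_clip:          # overexposure at the brightest subject
        return current_exposure_time / step
    if min_val <= low_clip:           # underexposure at the darkest subject
        return current_exposure_time * step
    return current_exposure_time
```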
  • FIG. 19 is a flowchart for explaining the flow of processing of the camera 1 that captures an image by setting an imaging condition for each region.
  • The control unit 34 activates a program that executes the processing shown in FIG. 19.
  • the control unit 34 sets the first imaging area B1 and the second imaging area B2 in the imaging element 32a, and proceeds to step S20.
  • the first imaging region B1 is a region where the recording image 51 is captured
  • the second imaging region B2 is a region where the detection image 52 is captured.
  • In step S20, the control unit 34 causes the image sensor 32a to start capturing a moving image, and proceeds to step S30.
  • the imaging element 32a repeats the imaging of the recording image 51 in the first imaging area B1, and repeats the imaging of the detection image 52 in the second imaging area B2.
  • In step S30, the control unit 34 determines whether the camera 1 is shaken. When shake of the camera 1 is detected, the control unit 34 makes a positive determination in step S30 and proceeds to step S40. When shake of the camera 1 is not detected, the control unit 34 makes a negative determination in step S30 and proceeds to step S50.
  • In step S40, the control unit 34 performs image blur correction processing on the moving image.
  • FIG. 20 is a diagram for explaining the flow of image blur correction processing.
  • the control unit 34 calculates the amount of image blur.
  • The target movement amount calculation unit 34f calculates the image blur amount on the imaging surface of the imaging element 32a based on the image blur calculated by the blur amount calculation unit 34e, the photographing magnification (calculated from the position of the zoom lens of the imaging optical system 31), and the distance from the camera 1 to the subject (calculated from the position of the focus lens of the imaging optical system 31), and proceeds to step S44.
  • In step S44, the control unit 34 performs image blur correction based on the image blur amount. Specifically, the target movement amount of the trimming range W1 in the recording image 51 is calculated so as to cancel the image blur amount calculated by the target movement amount calculation unit 34f. Then, as illustrated in FIG. 13A, the correction unit 33b moves the trimming range W1 within the recording image 51. By such image blur correction, even if the position of the subject in the recording image 51 moves due to camera shake, the subject stays at substantially the same position within the trimming range W1.
  • the target movement amount calculation unit 34f also applies the target movement amount to the trimming range W2 in the detection image 52. Then, as shown in FIG. 13B, the correction unit 33b moves the trimming range W2 in the detection image 52. By such image blur correction, even if the position of the subject in the detection image 52 is moved due to camera shake, the subject stays at substantially the same position in the trimming range W2.
  • In step S46, the control unit 34 sets the attention range 61-2 for the detection image 52 and, as shown in FIG. 13B, moves the attention range 61-2 within the detection image 52 together with the trimming range W2.
  • When the trimming range W2 cannot be moved further, the control unit 34 moves only the attention range 61-3 downward in the detection image 52 as shown in FIG. 13(d).
  • In step S50 of FIG. 19, the control unit 34 causes the display unit 35 to start displaying the monitor image, and proceeds to step S60.
  • In step S60, the control unit 34 causes the object detection unit 34a to start processing for detecting a subject based on the detection image 52 captured in the second imaging region B2, and proceeds to step S70.
  • the monitor image based on the recording image 51 imaged in the first imaging area B1 is sequentially displayed on the display unit 35.
  • The lens movement control unit 34d of the control unit 34 performs focus detection processing based on the detection image 52 captured in the second imaging region B2, and controls the AF operation for focusing on the subject element corresponding to the predetermined focus point. If the setting for performing the AF operation is not made while the monitor image is being displayed, the lens movement control unit 34d of the control unit 34 performs the AF operation when the AF operation is instructed later.
  • In step S70, the setting unit 34b of the control unit 34 divides the imaging screen captured by the imaging element 32a into a plurality of regions including the subject elements, and proceeds to step S80.
  • In step S80, the control unit 34 displays the regions on the display unit 35. As illustrated in FIG. 9, the control unit 34 highlights the region whose imaging condition is to be set (changed) in the monitor image 60a. The control unit 34 determines the region to be highlighted based on the position of the user's touch operation. In addition, the control unit 34 displays the imaging condition setting screen 70 on the display unit 35 and proceeds to step S90. When the display position of another subject on the display screen is touched with the user's finger, the control unit 34 changes the region including that subject to the region whose imaging condition is to be set (changed) and highlights it.
  • In step S90, the control unit 34 determines whether an AF operation is necessary. For example, when the focus adjustment state changes due to movement of the subject, when the position of the focus point is changed by a user operation, or when execution of the AF operation is instructed by a user operation, the control unit 34 makes a positive determination in step S90 and proceeds to step S100. If the focus adjustment state does not change, the position of the focus point is not changed by a user operation, and execution of the AF operation is not instructed by a user operation, the control unit 34 makes a negative determination in step S90 and proceeds to step S110.
  • In step S100, the control unit 34 performs the AF operation and proceeds to step S110.
  • the lens movement control unit 34d of the control unit 34 performs an AF operation for focusing on a subject corresponding to a predetermined focus point by performing a focus detection process based on the detection image 52 imaged in the second imaging region B2. Control.
  • In step S110, the setting unit 34b of the control unit 34 sets (changes) the imaging condition for the highlighted region in accordance with a user operation, and proceeds to step S120.
  • the setting unit 34b sets (changes) an imaging condition for the first imaging region B1.
  • the setting unit 34b may set (change) the imaging condition for the second imaging region B2.
  • the setting unit 34b continuously sets the initial imaging conditions for the first imaging area B1 and the second imaging area B2.
  • When the imaging condition has been set (changed) and photometry is newly performed to determine the imaging condition, the setting unit 34b performs exposure calculation processing based on the detection image 52 captured in the second imaging region B2.
  • In step S120, the imaging control unit 34c of the control unit 34 sends an instruction to the image processing unit 33 to perform image processing on the image data of the recording image, and proceeds to step S130.
  • Image processing includes the pixel defect correction processing, color interpolation processing, contour enhancement processing, and noise reduction processing.
  • In step S130, the control unit 34 determines whether or not the recording button has been operated.
  • When the recording button is operated, the control unit 34 makes a positive determination in step S130 and proceeds to step S140. If the recording button is not operated, the control unit 34 makes a negative determination in step S130 and returns to step S30.
  • the control unit 34 repeats the above-described processing.
  • In step S140, the control unit 34 sends an instruction to the recording unit 37, starts processing for recording the image data after the image processing on a recording medium (not illustrated), and proceeds to step S150.
  • In step S150, the control unit 34 determines whether an end operation has been performed. For example, when the recording button is operated again, the control unit 34 makes a positive determination in step S150 and ends the processing of FIG. 19. When the end operation is not performed, the control unit 34 makes a negative determination in step S150 and returns to step S30. When returning to step S30, the control unit 34 repeats the above-described processing. If the recording process has been started, the above-described processing is repeated while the recording process continues.
  • As the imaging element 32a, the multilayer imaging element 100 in which imaging conditions can be set for each of a plurality of blocks in the imaging element (imaging chip 111) has been exemplified, but the imaging element 32a does not necessarily have to be configured as a multilayer imaging element.
  • The camera 1 changes the imaging condition of the region in the direction opposite to the shake direction, and performs imaging according to the changed imaging condition.
  • the image of the subject image can be appropriately generated.
  • the subject element can be detected appropriately.
  • focus detection can be performed appropriately.
  • exposure calculation can be performed appropriately.
  • the camera 1 moves the attention range to an area in the direction opposite to the shake direction, and changes the imaging condition set in the attention range. Accordingly, an image of the subject image can be appropriately generated based on the subject image captured in the attention range.
  • the subject element can be detected appropriately. Furthermore, focus detection can be performed appropriately. Moreover, exposure calculation can be performed appropriately.
  • the camera 1 moves the trimming range so that the subject image exists at the same position in the trimming range even if the camera 1 is shaken. Thereby, the image blur of the subject image can be appropriately suppressed.
  • the camera 1 moves the range of interest included in the trimming range along with the movement of the trimming range. This makes it possible to focus on the same subject image even when the camera 1 is shaken.
  • the camera 1 moves only the attention range even if the trimming range cannot be moved. Thereby, even when the image blur of the subject image due to the shake of the camera 1 cannot be suppressed, it is possible to focus on the same subject image.
  • FIG. 13 (e) is a diagram illustrating image blur correction for the detection image 52 in Modification 1 of the first embodiment.
  • the control unit 34 gradually expands the area of the attention range 61-2 of FIG. 13B set for the detection image 52 as the trimming range W2 moves in the detection image 52.
  • the direction in which the area of the attention range 61-2 is expanded is the same as the direction in which the trimming range W2 is moved, and is the direction opposite to the shake of the camera 1.
  • the area of the attention range 61-3 in FIG. 13 (e) is wider downward than the area of the attention range 61-2 in FIG. 13 (b).
  • When the subject of interest (for example, the head of a person) moves further downward, the control unit 34 gradually expands the area of the attention range 61-3 downward within the trimming range W2. This is so that the head of the person can be accommodated in the attention range 61-3 whose area has been expanded.
  • As a result, the same subject (in this example, the person's head) is included in the region referred to by the image processing unit 33 during the image generation processing.
  • control unit 34 changes the setting of the second imaging condition in accordance with the change of the areas of the attention ranges 61-2 and 61-3. That is, the second imaging condition for the second imaging region B2 is reset according to the positions and areas of the attention ranges 61-2 and 61-3 after changing the area.
  • In the second embodiment, the image processing unit 33 of the camera 1 performs processing that suppresses the influence of image blur due to shake of the camera 1 while it is capturing a still image. Details of such a camera 1 are described below.
  • The camera 1 in the second embodiment may or may not be a lens-interchangeable type.
  • It may also be configured as another imaging device such as a smartphone or a video camera.
  • the still image is captured by setting only the first imaging region B1 on the imaging surface of the image sensor 32a.
  • the operating rate of the blocks included in the first imaging area B1 may be set as appropriate, as in the case of the first embodiment. For example, the operating rate may be set to 100%, or the operating rate may be set to 70% or 50%.
  • Alternatively, of the first imaging region B1 and the second imaging region B2 set on the imaging surface, only the first imaging region B1 may be operated and the second imaging region B2 may be paused.
  • the still image to be recorded is a still image captured when the shutter button is operated.
  • a still image that is recorded when the shutter button is operated is referred to as a recording image.
  • a recording image may be captured when a display for instructing capturing of a still image (for example, the release icon 74 in FIG. 9) is operated.
  • FIG. 21 (a) is a diagram illustrating still images taken indoors by the camera 1 without using flash light.
  • The still image in FIG. 21A is taken with different imaging conditions set on the left and right of the screen of the imaging element 32a. For example, when the woman's outfit on the right side of the screen is white and brighter than the man's outfit on the left side of the screen, a longer exposure time is set for the region of the imaging element 32a corresponding to the left side of the screen (the left side of the first imaging region B1) than for the region of the imaging element 32a corresponding to the right side of the screen (the right side of the first imaging region B1).
  • When the exposure time of the region of the imaging element 32a corresponding to the left side of the screen is the first exposure time and the exposure time of the region of the imaging element 32a corresponding to the right side of the screen is the second exposure time, first exposure time > second exposure time. Thereby, it is possible to capture the image so that the man's costume on the left side of the screen is not underexposed while preventing the woman's costume on the right side of the screen from being overexposed.
  • When the amount of illumination light is reduced, for example by the lighting effects of the venue where the still image of FIG. 21A is taken, the first exposure time and the second exposure time set in the imaging element 32a become longer than the exposure time referred to as the camera-shake limit (for example, 1/60 second). For this reason, the camera 1 performs processing that suppresses the influence of image blur due to shake of the camera 1 while it is capturing the still image.
  • FIG. 21B and FIG. 21C are schematic diagrams for explaining the outline of still image blur correction according to the second embodiment.
  • <Image blur correction> When image blur correction of a still image is performed, an exposure time shorter than the camera-shake-limit exposure time (1/60 second) is used (a shortened exposure time described later: for example, 1/300 second).
  • A plurality (n) of first recording images 51L are captured with the shortened exposure time by the region of the imaging element 32a corresponding to the left side of the screen, and a plurality (m) of second recording images 51R are captured with the shortened exposure time by the region of the imaging element 32a corresponding to the right side of the screen.
  • The control unit 34 determines the number n of first recording images so that the sum of the shortened exposure times of the n first recording images 51L (n × shortened exposure time) equals the first exposure time. Likewise, it determines the number m of second recording images so that the sum of the shortened exposure times of the m second recording images 51R (m × shortened exposure time) equals the second exposure time.
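As a hypothetical numerical example (the exposure times here are illustrative, not taken from the embodiment): if the first exposure time is 1/30 second, the second exposure time is 1/60 second, and the shortened exposure time is 1/300 second, then n = (1/30) ÷ (1/300) = 10 first recording images 51L and m = (1/60) ÷ (1/300) = 5 second recording images 51R are captured, so that n > m as described below.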
  • The control unit 34 and the correction unit 33b subject the n first recording images 51 (DL-1 to DL-n) captured in the region of the imaging element 32a corresponding to the left side of the screen to a first synthesis process in which the images are superimposed with the positions of their feature points aligned. Since such a synthesis process is known, a detailed description thereof is omitted.
  • Similarly, the control unit 34 and the correction unit 33b subject the m second recording images 51 (DR-1 to DR-m) captured in the region of the imaging element 32a corresponding to the right side of the screen to a second synthesis process in which the images are superimposed with the positions of their feature points aligned.
  • The correction unit 33b further combines the first image after the first synthesis process and the second image after the second synthesis process to obtain a composite image as shown in FIG. 21A.
  • the correction unit 33b sets the image data of the composite image as image data of a recording image after image blur correction. In this way, the control unit 34 and the correction unit 33b perform image blur correction on the still image.
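A minimal sketch of this synthesis, assuming the short-exposure frames are NumPy arrays; the brute-force translation search stands in for the known feature-point alignment, and the accumulated sum approximates the image that the full exposure time would have produced.

```python
import numpy as np

def estimate_offset(ref, img, search=8):
    # Find the translation of img that best matches the reference frame.
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            err = np.abs(ref[search:-search, search:-search].astype(np.int64) -
                         shifted[search:-search, search:-search]).mean()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def synthesize(frames):
    # Align every frame to the first one and accumulate.
    acc = frames[0].astype(np.float64)
    for img in frames[1:]:
        dy, dx = estimate_offset(frames[0], img)
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc  # composite equivalent to the full exposure time
```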
  • FIG. 22 is a diagram illustrating the imaging timing of the recording images 51L (DL-1, DL-2, DL-3, ...), the imaging timing of the recording images 51R (DR-1, DR-2, DR-3, ...), and the display timing of the display (blur-corrected image) based on the recording images after blur correction.
  • The imaging control unit 34c performs the processing before the recording button is operated (before time t1) described in the first embodiment until the release operation (for example, pressing of the shutter button) is performed at time t2.
  • When the release operation is performed at time t2, the imaging control unit 34c performs imaging at the timing of FIG. 22 with the first imaging region B1 set on the imaging surface of the imaging element 32a. In other words, the imaging control unit 34c causes the imaging unit 32 to capture the recording image 51 under the imaging conditions set for the first imaging region B1 by the setting unit 34b.
  • It is assumed that the object detection unit 34a detects the man on the left side of the screen and the woman on the right side of the screen based on the detection image Di of FIG. 10 described above, and that the setting unit 34b divides the first imaging region B1 into the screen left side and the screen right side based on the detection result.
  • It is also assumed that the setting unit 34b sets different imaging conditions (for example, the first exposure time and the second exposure time) for the left side of the screen and the right side of the screen by performing an exposure calculation based on the detection image Diii of FIG. 10 described above.
  • The imaging control unit 34c performs imaging for image blur correction. That is, the first recording images 51L are captured with the shortened exposure time by the region of the imaging element 32a corresponding to the left side of the screen in the first imaging region B1, while the second recording images 51R are captured with the shortened exposure time by the region of the imaging element 32a corresponding to the right side of the screen in the first imaging region B1.
  • Even after the imaging of the second recording images 51R has ended, the first recording images 51L continue to be captured by the region of the imaging element 32a corresponding to the left side of the screen. This is because n > m.
  • The size of the first recording images DL-(n-m+1) to DL-n is larger than the size of the first recording images DL-1 to DL-m (in this example, they are wider in the right direction), so they include part of the woman located on the right side of the screen. As a result, even if the shake of the camera 1 increases after the imaging of the second recording images 51R has ended (even if, as in FIG. 21B, camera shake causes the region of the imaging element 32a to shift to the left with respect to the subjects), a composite image as shown in FIG. 21A can be obtained without a shortage of image data when the first image after the first synthesis process (FIG. 21B) and the second image after the second synthesis process (FIG. 21C) are combined.
  • the correction unit 33b sets the image data of the composite image as image data of a recording image after image blur correction.
  • When image blur correction is not required, the imaging control unit 34c performs normal imaging. That is, the first exposure time is set in the region of the imaging element 32a corresponding to the left side of the screen in the first imaging region B1, the second exposure time is set in the region of the imaging element 32a corresponding to the right side of the screen in the first imaging region B1, and one recording image 51 (51L and 51R) is captured.
  • The control unit 34 determines whether or not the shutter button has been operated in step S210, which follows step S110 of FIG. 19.
  • When the shutter button is operated, the control unit 34 makes a positive determination in step S210 and proceeds to step S220. If the shutter button is not operated, the control unit 34 makes a negative determination in step S210 and returns to step S30.
  • the control unit 34 repeats the processing from step S30 to S210.
  • In step S220, the control unit 34 determines whether image blur correction is necessary for the still image.
  • When image blur correction is necessary, the control unit 34 makes a positive determination in step S220 and proceeds to step S230. If both the first exposure time and the second exposure time described above are shorter than the exposure time set in advance as the camera-shake limit, the control unit 34 makes a negative determination in step S220 and proceeds to step S250.
  • In step S230, the control unit 34 captures a plurality of images with the shortened exposure time.
  • That is, n first recording images 51L are captured with the shortened exposure time by the region of the imaging element 32a corresponding to the left side of the screen in the first imaging region B1, and the second recording images 51R are captured with the shortened exposure time by the region of the imaging element 32a corresponding to the right side of the screen in the first imaging region B1.
  • In the region of the imaging element 32a corresponding to the left side of the screen in the first imaging region B1, imaging of the first recording images 51L continues even after the imaging of the second recording images 51R on the right side of the screen has ended.
  • In step S240, the control unit 34 and the correction unit 33b perform image blur correction on the recording images. Specifically, a synthesis process is performed in which the plurality of recording images are superimposed with the positions of the feature points in each image aligned.
  • In step S250, the control unit 34 performs normal imaging. That is, the first exposure time is set in the region of the imaging element 32a corresponding to the left side of the screen in the first imaging region B1, the second exposure time is set in the region of the imaging element 32a corresponding to the right side of the screen in the first imaging region B1, and single recording images 51L and 51R are captured.
  • In step S260, the imaging control unit 34c of the control unit 34 sends an instruction to the image processing unit 33 to perform image processing on the image data of the recording image, and proceeds to step S270.
  • Image processing includes the pixel defect correction processing, color interpolation processing, contour enhancement processing, and noise reduction processing.
  • In step S270, the control unit 34 sends an instruction to the recording unit 37 to record the image data after the image processing on a recording medium (not illustrated), and proceeds to step S280.
  • In step S280, the control unit 34 determines whether an end operation has been performed. When the end operation is performed, the control unit 34 makes a positive determination in step S280 and ends the processing illustrated in FIG. If the end operation is not performed, the control unit 34 makes a negative determination in step S280 and returns to step S10. When returning to step S10, the control unit 34 repeats the above-described processing.
  • The camera 1 can appropriately perform image blur correction that suppresses the influence of image blur due to camera shake on a subject image captured in a plurality of imaging regions under different imaging conditions.
  • the camera 1 may cause image blurring when, for example, different exposure times are set for the left side of the screen for setting the first imaging condition and the right side of the screen for setting the second imaging condition, or when the left side of the screen and the right side of the screen are set.
  • image blur correction can be appropriately performed on the left side and the right side of the screen.
  • The camera 1 increases the size of the first recording image 51L, for example, by expanding the area on the left side of the screen for setting the first imaging condition. Therefore, when the first image after the first image blur correction (FIG. 21B) is combined with the second image after the second image blur correction (FIG. 21C), a composite image as shown in FIG. 21A can be obtained without lack of image data.
  • For example, the camera 1 enlarges the left-side range of the first imaging area B1 in the rightward direction. By enlarging the left-side range of the first imaging area B1, a wider range of the subject image can be included in the first recording image 51L.
  • The camera 1 performs composition by shifting the position of each of the plurality of first recording images 51L so as to match the positions of the other first recording images 51L. As a result, a composite image in which the plurality of first recording images 51L are aligned with one another can be obtained.
  • The camera 1 changes the position of the first recording image 51L so that the subject image of the first recording image 51L overlaps the position of the subject image of the other first recording images 51L (alignment). Thereby, even if the position of the subject image is shifted between the plurality of first recording images 51L, a composite image in which the shift of the subject image, that is, the image blur, is suppressed can be obtained.
  • The third embodiment suppresses the influence of image blur due to the shake of the camera 1 that is capturing a still image, in a manner different from that of the second embodiment.
  • The camera 1 in the third embodiment may be either a lens-interchangeable type or a lens-integrated type, and may also be configured as an imaging device such as a smartphone or a video camera.
  • In the third embodiment, the first shortened exposure time is set in the area of the image sensor 32a corresponding to the left side of the screen, the second shortened exposure time (first shortened exposure time > second shortened exposure time) is set in the area of the image sensor 32a corresponding to the right side of the screen, and n recording images 51-1 to 51-n are captured. The third embodiment differs from the second embodiment in that the number of first recording images 51L captured by the area of the image sensor 32a corresponding to the left side of the screen and the number of second recording images 51R captured by the area of the image sensor 32a corresponding to the right side of the screen are the same (n).
  • Hereinafter, the differences from the second embodiment will be mainly described with reference to the drawings.
  • FIG. 24 is a schematic diagram for explaining an outline of still image blur correction according to the third embodiment.
  • The control unit 34 determines the number n of recording images and the first shortened exposure time such that the sum of the first shortened exposure times of the n first recording images 51L (n × first shortened exposure time) becomes the first exposure time. Similarly, the control unit 34 determines the second shortened exposure time such that the sum of the second shortened exposure times of the n second recording images 51R (n × second shortened exposure time) becomes the second exposure time, as in the sketch below.
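A small numerical sketch of this relationship follows. One frame count n is chosen for both screen sides, here derived from the longer first exposure time and an assumed camera-shake limit; the function and variable names are illustrative.

```python
import math

def split_exposures(first_exposure, second_exposure, shake_limit):
    # Pick one frame count n for both screen sides, large enough that even
    # the longer first exposure stays below the camera-shake limit per frame.
    n = max(1, math.ceil(max(first_exposure, second_exposure) / shake_limit))
    first_short = first_exposure / n    # n x first_short  == first_exposure
    second_short = second_exposure / n  # n x second_short == second_exposure
    return n, first_short, second_short

# Example: first exposure 1/15 s, second exposure 1/60 s, limit 1/60 s
# gives n = 4 with shortened times of 1/60 s and 1/240 s.
n, t1, t2 = split_exposures(1 / 15, 1 / 60, 1 / 60)
```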
  • The control unit 34 and the correction unit 33b perform a composition process in which the n recording images 51-1 to 51-n are superimposed so that the positions of the feature points in each image are aligned. Since such a composition process is known, a detailed description thereof will be omitted.
  • the correction unit 33b sets the image data of the composite image as image data of a recording image after image blur correction. In this way, the control unit 34 and the correction unit 33b perform image blur correction on the still image.
  • FIG. 25 illustrates the imaging timing of the recording images 51L (DL1, DL2, DL3, ...) in the area of the imaging element 32a corresponding to the left side of the screen, the imaging timing of the recording images 51R (DR1, DR2, DR3, ...) in the area of the imaging element 32a corresponding to the right side of the screen, and the display timing of the display image (blur-corrected image) based on the recording images after blur correction.
  • The imaging control unit 34c performs the processing before the recording button is operated (before time t1) described in the first embodiment, until a release operation (for example, a shutter button pressing operation) is performed at time t3.
  • When the release operation is performed at time t3, the imaging control unit 34c performs imaging at the timing of FIG. 25 with the first imaging area B1 set on the imaging surface of the imaging element 32a. In other words, the imaging control unit 34c causes the imaging unit 32 to capture the recording image 51 under the imaging conditions set in the first imaging region B1 by the setting unit 34b. As in the second embodiment, it is assumed that the object detection unit 34a detects a man on the left side of the screen and a woman on the right side of the screen based on the detection image Di in FIG. 10 described above, and that the setting unit 34b divides the screen into the left side and the right side based on the detection result.
  • It is also assumed that the setting unit 34b performs an exposure calculation based on the detection image Diii of FIG. 10 described above and sets different imaging conditions (for example, the above-described first exposure time and second exposure time) for the left side and the right side of the screen.
  • The imaging control unit 34c then performs imaging for image blur correction. That is, the first shortened exposure time is set in the area of the image pickup element 32a corresponding to the left side of the screen in the first image pickup area B1, the second shortened exposure time is set in the area of the image pickup element 32a corresponding to the right side of the screen in the first image pickup area B1, and n recording images 51 (51L and 51R) are captured on each side.
  • The number n of recording images 51 to be captured is the same on the left side and the right side of the screen, although the shortened exposure time per image differs: the second shortened exposure time is shorter than the first shortened exposure time.
  • As a result, even if the camera 1 shakes greatly, the image data does not become insufficient between the left side and the right side of the screen, and a composite image as shown in FIG. 24 can be obtained.
  • the correction unit 33b sets the image data of the composite image as image data of a recording image after image blur correction.
  • the camera 1 can appropriately perform image blur correction that suppresses the influence of image blur due to the shake of the camera 1 on an image of a subject image captured under different imaging conditions in a plurality of imaging regions.
  • Even when image blurring occurs in the camera 1 because, for example, different exposure times are set for the left side of the screen for setting the first imaging condition and the right side of the screen for setting the second imaging condition, or when the sizes of the two sides are different, image blur correction can be appropriately performed.
  • The camera 1 can appropriately perform image blur correction on the subject image captured at different exposure times on the left side of the screen for setting the first imaging condition and on the right side of the screen for setting the second imaging condition.
  • the camera 1 can maintain imaging simultaneity on the left side of the screen for setting the first imaging condition and the right side of the screen for setting the second imaging condition.
  • In the fourth embodiment, the image blur correction of a moving image according to the first embodiment is performed in parallel under two conditions (for example, with different widths of the trimming range).
  • The camera 1 in the fourth embodiment may be either a lens-interchangeable type or a lens-integrated type, and may also be configured as an imaging device such as a smartphone or a video camera.
  • the control unit 34 sets four imaging areas on the imaging surface of the imaging element 32a when imaging a recording image or a monitor image.
  • FIG. 26 is a diagram illustrating the arrangement of the first imaging area B1, the second imaging area B2, the third imaging area C1, and the fourth imaging area C2 set on the imaging surface of the imaging element 32a. As shown in the partially enlarged view of FIG. 26, a block unit U, which takes four of the plurality of blocks defined in the image sensor 32a as one unit, is repeatedly arranged in the horizontal direction and the vertical direction of the pixel region.
  • the block unit U includes a block belonging to the first imaging area B1, a block belonging to the second imaging area B2, a block belonging to the third imaging area C1, and a block belonging to the fourth imaging area C2.
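As a minimal sketch of this arrangement, the following code tiles a 2 × 2 block unit U over the block grid of the imaging surface and reports which imaging area a given block belongs to. The concrete placement of the four areas inside the unit is an assumption made for illustration, not taken from FIG. 26 itself.

```python
# Repeating 2 x 2 block unit U; the layout of the four areas inside the
# unit is an assumed example.
UNIT_U = [["B1", "B2"],
          ["C1", "C2"]]

def area_of_block(bx, by):
    # bx, by: horizontal and vertical block indices on the imaging surface.
    return UNIT_U[by % 2][bx % 2]

# Example: list the blocks belonging to the first imaging area B1
# within a 4 x 4 patch of blocks.
b1_blocks = [(bx, by) for by in range(4) for bx in range(4)
             if area_of_block(bx, by) == "B1"]
```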
  • the control unit 34 generates the first recording image 51A based on the photoelectric conversion signal read from the first imaging region B1 of the imaging element 32a that has captured one frame. Further, the control unit 34 generates the first detection image 52A based on the photoelectric conversion signal read from the second imaging region B2 of the imaging element 32a. The control unit 34 further generates a second recording image 51B based on the photoelectric conversion signal read from the third imaging region C1 of the imaging element 32a that has captured one frame. In addition, the control unit 34 generates the second detection image 52B based on the photoelectric conversion signal read from the fourth imaging region C2 of the imaging element 32a.
  • The first recording image 51A, the first detection image 52A, the second recording image 51B, and the second detection image 52B are all captured at the same angle of view and include a common subject image. These images can be taken in parallel.
  • Hereinafter, the imaging condition set in the first imaging area B1 for capturing the first recording image 51A is referred to as the first imaging condition, and the imaging condition set in the second imaging area B2 for capturing the first detection image 52A is referred to as the second imaging condition.
  • the imaging condition set in the third imaging area C1 for capturing the second recording image 51B is referred to as a third imaging condition, and the imaging condition set in the fourth imaging area C2 for capturing the second detection image 52B. Will be referred to as the fourth imaging condition.
  • The control unit 34 may set the first imaging condition and the second imaging condition, and the third imaging condition and the fourth imaging condition, to the same conditions, or may set them to different conditions.
  • The control unit 34 performs different image blur corrections on the moving images captured in parallel as described above. That is, image blur correction is performed under different conditions for the set of the first recording image 51A and the first detection image 52A and for the set of the second recording image 51B and the second detection image 52B.
  • For example, the control unit 34 sets a trimming range W1 that is 90% of the size of the first recording image 51A in the first recording image 51A, and similarly sets a trimming range W2 in the first detection image 52A.
  • the image blur correction processing after setting the trimming ranges W1 and W2 is the same as that in the first embodiment.
  • The control unit 34 also sets a trimming range W3 that is 60% of the size of the second recording image 51B in the second recording image 51B, and similarly sets a trimming range W4 in the second detection image 52B.
  • the image blur correction processing after setting the trimming ranges W3 and W4 is the same as that in the first embodiment.
  • In general, the trimming ranges are often set to be narrow in order to sufficiently secure a movement allowance for the trimming ranges W1 to W4 when the camera 1 shakes greatly. For this reason, the screen size of a moving image after image blur correction tends to be small.
  • In the fourth embodiment, it is possible, for example, to usually employ the image blur correction with the wide trimming range and to employ the image blur correction with the narrow trimming range when the camera 1 shakes greatly.
  • As described above, image blur correction of a moving image can be performed in parallel under two conditions (for example, with different widths of the trimming range), as in the sketch below.
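A rough sketch of such parallel correction follows: the same frame is stabilized by moving a trimming range opposite to the detected image shift, once with a wide range (like W1, 90%) and once with a narrow range (like W3, 60%). Frames are assumed to be NumPy arrays and the shift an integer pixel offset; the names are illustrative.

```python
import numpy as np

def stabilize_frame(frame, shift_yx, trim_ratio):
    # Cut out a trimming range of the given ratio, moved opposite to the
    # detected image shift so that the subject stays at the same position.
    h, w = frame.shape[:2]
    th, tw = int(h * trim_ratio), int(w * trim_ratio)
    max_dy, max_dx = (h - th) // 2, (w - tw) // 2  # movement allowance
    dy = int(np.clip(-shift_yx[0], -max_dy, max_dy))
    dx = int(np.clip(-shift_yx[1], -max_dx, max_dx))
    top, left = (h - th) // 2 + dy, (w - tw) // 2 + dx
    return frame[top:top + th, left:left + tw]

# The same frame corrected in parallel with a wide and a narrow range.
frame = np.zeros((1080, 1920), dtype=np.uint8)
wide = stabilize_frame(frame, (12, -8), 0.90)    # larger corrected screen
narrow = stabilize_frame(frame, (12, -8), 0.60)  # larger movement allowance
```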
  • Further, the image blur correction of a moving image according to the first embodiment and the image blur correction of a still image according to the second or third embodiment may be performed in parallel.
  • the still image is captured by the third imaging region C1 (FIG. 26) set on the imaging surface of the imaging element 32a.
  • the operating rate of the blocks included in the third imaging region C1 may be set as appropriate, as in the case of the first embodiment. For example, the operating rate may be set to 100%, or the operating rate may be set to 70% or 50%.
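The operating rate can be pictured as selecting a subset of a region's blocks to drive while leaving the rest paused. The following sketch picks roughly the requested fraction of blocks with even spacing, which is only one possible policy; the names are illustrative.

```python
def active_blocks(blocks, operating_rate):
    # Return the blocks to operate so that roughly `operating_rate`
    # (e.g. 1.0, 0.7 or 0.5) of the region is driven; the others pause.
    if operating_rate >= 1.0:
        return list(blocks)
    step = 1.0 / operating_rate
    picked, next_index = [], 0.0
    for i, block in enumerate(blocks):
        if i >= next_index:
            picked.append(block)
            next_index += step
    return picked

# Example: an operating rate of 70% over 10 blocks drives 7 of them.
assert len(active_blocks(list(range(10)), 0.7)) == 7
```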
  • the fourth imaging region C2 (FIG. 26) set on the imaging surface of the imaging element 32a is paused. In FIG. 26, the position of the third imaging area C1 and the position of the fourth imaging area C2 may be combined and set as the third imaging area C1 without setting the fourth imaging area C2.
  • the first recording image 51A is generated based on the photoelectric conversion signal read from the first imaging region B1 of the imaging element 32a that has captured one frame.
  • the control unit 34 generates the first detection image 52A based on the photoelectric conversion signal read from the second imaging region B2 of the imaging element 32a.
  • the control unit 34 further generates a second recording image 51 based on the photoelectric conversion signal read from the third imaging region C1 of the imaging element 32a that has captured one frame.
  • the first recording image 51A and the first detection image 52A are moving images.
  • the second recording image 51 is a still image.
  • the control unit 34 performs image blur correction on the moving image on the first recording image 51A and the first detection image 52A, as in the first embodiment. In addition, the control unit 34 performs image blur correction on the still image on the second recording image 51 as in the second embodiment or the third embodiment.
  • image blur correction for a moving image and image blur correction for a still image can be performed in parallel.
  • Furthermore, two still images to be recorded may be captured in parallel. When the first still image to be recorded is captured, the first still image is captured by the first imaging region B1 (FIG. 26) set on the imaging surface of the imaging element 32a.
  • The operating rate of the blocks included in the first imaging region B1 may be set as appropriate, as in the case of the first embodiment. For example, the operating rate may be set to 100%, or the operating rate may be set to 70% or 50%.
  • the second imaging region B2 (FIG. 26) set on the imaging surface of the imaging element 32a is paused. In FIG. 26, the position of the first imaging area B1 and the position of the second imaging area B2 may be combined and set as the first imaging area B1 without setting the second imaging area B2.
  • When the second still image to be recorded is captured, the second still image is captured by the third imaging area C1 (FIG. 26) set on the imaging surface of the imaging element 32a.
  • the operating rate of the blocks included in the third imaging region C1 may be set as appropriate, as in the case of the first embodiment. For example, the operating rate may be set to 100%, or the operating rate may be set to 70% or 50%.
  • the fourth imaging region C2 (FIG. 26) set on the imaging surface of the imaging element 32a is paused. In FIG. 26, the position of the third imaging area C1 and the position of the fourth imaging area C2 may be combined and set as the third imaging area C1 without setting the fourth imaging area C2.
  • the first recording image 51A is generated based on the photoelectric conversion signal read from the first imaging region B1 of the imaging element 32a that has captured one frame.
  • the control unit 34 generates the second recording image 51B based on the photoelectric conversion signal read from the third imaging region C1 of the imaging element 32a that has captured one frame.
  • the first recording image 51A is a first still image.
  • the second recording image 51B is a second still image.
  • The control unit 34 performs still image blur correction on each of the first recording image 51A and the second recording image 51B, as in the second embodiment or the third embodiment.
  • image blur correction is performed under different conditions for the first recording image 51A and the second recording image 51B.
  • FIG. 27 is a diagram illustrating still images obtained by capturing waterfalls under different conditions.
  • FIG. 27A is a still image based on the first recording image 51A, and FIG. 27B is a still image based on the second recording image 51B. The still images in FIG. 27A and FIG. 27B are captured by setting different imaging conditions for the flow of water and for the background such as rocks.
  • the imaging control unit 34c makes the shortened exposure time of the first recording image 51A shorter than the shortened exposure time of the second recording image 51B.
  • the imaging control unit 34c makes the number of first recording images 51A smaller than the number of second recording images 51B.
  • still image blur correction can be performed in parallel under two conditions (for example, different shortened exposure times).
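Running the two corrections side by side can be sketched as below, assuming each recording image is available as a stack of NumPy frames captured with its own shortened exposure time and frame count; correct_still stands in for the composition described in the second and third embodiments, and the names are illustrative.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def correct_still(frames):
    # Stand-in for the multi-frame composition of the second and third
    # embodiments: here the already-aligned frames are simply averaged.
    return np.mean([np.asarray(f, dtype=np.float64) for f in frames], axis=0)

def correct_two_conditions(frames_51a, frames_51b):
    # Perform still image blur correction for the first and second
    # recording images in parallel, each under its own condition.
    with ThreadPoolExecutor(max_workers=2) as pool:
        future_a = pool.submit(correct_still, frames_51a)
        future_b = pool.submit(correct_still, frames_51b)
        return future_a.result(), future_b.result()
```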
  • Furthermore, when the first still image to be recorded is captured, the first still image is captured by the first imaging area B1 (FIG. 26) set on the imaging surface of the imaging element 32a.
  • When the second still image to be recorded is captured, the second still image is captured by the second imaging region B2 (FIG. 26) set on the imaging surface of the image sensor 32a.
  • When the third still image to be recorded is captured, the third still image is captured by the third imaging region C1 (FIG. 26) set on the imaging surface of the imaging element 32a.
  • When the fourth still image to be recorded is captured, the fourth still image is captured by the fourth imaging region C2 (FIG. 26) set on the imaging surface of the image sensor 32a.
  • The operating rates of the blocks included in the first imaging area B1, the second imaging area B2, the third imaging area C1, and the fourth imaging area C2 may be set as appropriate, as in the case of the first embodiment.
  • the first recording image 51A is generated based on the photoelectric conversion signal read from the first imaging region B1 of the imaging device 32a that has captured one frame.
  • the control unit 34 generates the second recording image 51B based on the photoelectric conversion signal read from the second imaging region B2 of the imaging element 32a that has captured one frame.
  • the control unit 34 generates a third recording image 51C based on the photoelectric conversion signal read from the third imaging region C1 of the imaging element 32a that has captured one frame.
  • the control unit 34 generates a fourth recording image 51D based on the photoelectric conversion signal read from the fourth imaging region C2 of the imaging device 32a that has captured one frame.
  • the first recording image 51A is a first still image.
  • the second recording image 51B is a second still image.
  • the third recording image 51C is a third still image.
  • the fourth recording image 51D is a fourth still image.
  • The control unit 34 performs still image blur correction on each of the first recording image 51A, the second recording image 51B, the third recording image 51C, and the fourth recording image 51D, as in the second embodiment or the third embodiment.
  • image blur correction is performed for each of the recording images 51A to 51D under different conditions.
  • still image blur correction can be performed in parallel under four conditions (for example, different shortened exposure times).
  • In the above description, the camera 1 has been described as an example of an electronic device. However, the electronic device may be a mobile device having the image sensor 32a and a camera function, such as a high-function mobile phone like a smartphone, a tablet terminal, or a wearable device.
  • In the above description, the camera 1 in which the imaging unit 32 and the control unit 34 are configured as a single electronic device has been described as an example. Instead, the imaging unit 32 and the control unit 34 may be provided separately, and an imaging system 1A in which the imaging unit 32 is controlled from the control unit 34 via communication may be configured.
  • Hereinafter, an example in which the imaging device 1001 including the imaging unit 32 is controlled from the display device 1002 including the control unit 34 will be described with reference to FIG. 28.
  • FIG. 28 is a block diagram illustrating the configuration of an imaging system 1A according to the second modification.
  • The imaging system 1A includes an imaging device 1001 and a display device 1002.
  • the imaging apparatus 1001 includes a first communication unit 1003 in addition to the imaging optical system 31, the imaging unit 32, and the photometric sensor 38 described in the above embodiment.
  • the display device 1002 includes a second communication unit 1004 in addition to the image processing unit 33, the control unit 34, the display unit 35, the operation member 36, and the recording unit 37 described in the above embodiment.
  • the first communication unit 1003 and the second communication unit 1004 can perform bidirectional image data communication using, for example, a well-known wireless communication technology or optical communication technology. Note that the imaging device 1001 and the display device 1002 may be connected by a wired cable, and the first communication unit 1003 and the second communication unit 1004 may perform bidirectional image data communication.
  • The control unit 34 controls the imaging unit 32 by performing data communication via the second communication unit 1004 and the first communication unit 1003. For example, by transmitting and receiving predetermined control data between the imaging device 1001 and the display device 1002, the display device 1002 divides the screen into a plurality of regions based on the image as described above, sets a different imaging condition for each divided region, and reads out the photoelectric conversion signal photoelectrically converted in each region.
  • Since the monitor image acquired on the imaging device 1001 side and transmitted to the display device 1002 is displayed on the display unit 35 of the display device 1002, the user can perform remote operation from the display device 1002 at a position away from the imaging device 1001.
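The format of the control data exchanged between the display device 1002 and the imaging device 1001 is not specified here; purely as a hypothetical illustration, one region-setting command could be serialized as follows (the message fields and values are assumptions, not the actual protocol).

```python
import json

def make_region_command(region_name, exposure_time, iso, frame_rate):
    # Hypothetical control data sent from the display device 1002 to the
    # imaging device 1001 to set imaging conditions for one divided region.
    return json.dumps({
        "command": "set_imaging_condition",
        "region": region_name,
        "exposure_time": exposure_time,
        "iso": iso,
        "frame_rate": frame_rate,
    }).encode("utf-8")

# Example: apply an assumed first imaging condition to the left-side region.
packet = make_region_command("left", 1 / 60, 400, 60)
```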
  • the display device 1002 can be configured by a high-function mobile phone 250 such as a smartphone, for example.
  • the imaging device 1001 can be configured by an electronic device including the above-described stacked imaging element 100.
  • Although the example in which the object detection unit 34a, the setting unit 34b, the imaging control unit 34c, and the lens movement control unit 34d are provided in the control unit 34 of the display device 1002 has been described, a part of the object detection unit 34a, the setting unit 34b, the imaging control unit 34c, and the lens movement control unit 34d may be provided in the imaging device 1001.
  • The program can be supplied to a mobile device such as the camera 1, the high-function mobile phone 250, or the tablet terminal described above by, for example, infrared communication or short-range wireless communication from the personal computer 205 storing the program, as illustrated in FIG. 29.
  • The program may be loaded into the personal computer 205 by setting a recording medium 204 such as a CD-ROM storing the program in the personal computer 205, or by a method via a communication line 201 such as a network. When the program is supplied via the communication line 201, the program is stored in the storage device 203 of the server 202 connected to the communication line.
  • the program can be directly transmitted to the mobile device via a wireless LAN access point (not shown) connected to the communication line 201.
  • a recording medium 204B such as a memory card storing the program may be set in the mobile device.
  • the program can be supplied as various forms of computer program products, such as provision via a recording medium or a communication line.
  • the object detection unit 34a of the control unit 34 detects the subject element based on the detection image Di, and the setting unit 34b divides the recording image 51 into regions including the subject element.
  • Instead of dividing the recording image 51 on the basis of the detection image Di, the control unit 34 may divide the recording image 51 on the basis of an output signal from a photometric sensor 38 provided separately from the image sensor 32a.
  • For example, the control unit 34 divides the recording image 51 into a foreground and a background based on the output signal from the photometric sensor 38. Specifically, the recording image 51 acquired by the image sensor 32b is divided into a foreground area corresponding to the area determined to be the foreground from the output signal of the photometric sensor 38 and a background area corresponding to the area determined to be the background from the output signal of the photometric sensor 38.
  • The control unit 34 further sets the first imaging region B1 and the second imaging region B2 at the position corresponding to the foreground area on the imaging surface of the imaging element 32a. On the other hand, the control unit 34 sets only the first imaging region B1 at the position corresponding to the background area on the imaging surface of the imaging element 32a.
  • The control unit 34 uses the recording image 51 captured in the first imaging area B1 for display of the monitor image, and uses the detection image 52 captured in the second imaging area B2 for detection of subject elements, focus detection, and exposure calculation.
  • For the background area, of the first imaging area B1 and the second imaging area B2 set on the imaging surface, the first imaging area B1 may be operated and the second imaging area B2 may be paused.
  • According to the fourth modification, by using the output signal from the photometric sensor 38, the monitor image acquired by the image sensor 32b can be divided into regions. Further, the recording image 51 and the detection image 52 can be obtained for the foreground area, and only the recording image 51 can be obtained for the background area.
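A minimal sketch of this division follows, modeling the output of the photometric sensor 38 as a coarse two-dimensional brightness map and using a simple threshold as a stand-in for the foreground/background decision; both the thresholding rule and the names are assumptions made for illustration.

```python
import numpy as np

def split_foreground_background(photometry_map, threshold):
    # photometry_map: coarse per-area output of the photometric sensor 38,
    # modeled here as a small 2-D array of brightness values.
    foreground = photometry_map >= threshold
    return foreground, ~foreground

def assign_imaging_areas(foreground):
    # Foreground areas get both the first imaging area B1 (recording image)
    # and the second imaging area B2 (detection image); background areas
    # get only the first imaging area B1.
    return [["B1+B2" if fg else "B1" for fg in row] for row in foreground]

photometry = np.array([[0.2, 0.8, 0.9],
                       [0.1, 0.7, 0.3]])
fg, bg = split_foreground_background(photometry, 0.5)
layout = assign_imaging_areas(fg)
# layout == [['B1', 'B1+B2', 'B1+B2'], ['B1', 'B1+B2', 'B1']]
```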
  • the imaging optical system 31 described above may include a zoom lens or tilt lens.
  • The lens movement control unit 34d adjusts the angle of view of the imaging optical system 31 by moving the zoom lens in the optical axis direction. That is, by moving the zoom lens, the image formed by the imaging optical system 31 can be adjusted, for example, to obtain an image of a wide range of subjects or to obtain a large image of a distant subject. Further, the lens movement control unit 34d can adjust the distortion of the image formed by the imaging optical system 31 by moving the tilt lens in a direction orthogonal to the optical axis. In order to adjust the state of the image formed by the imaging optical system 31 (for example, the state of the angle of view or the state of distortion of the image), it is preferable to use the preprocessed image data described above, and based on this idea, the above-described preprocessing may be performed.
  • Although the imaging element 32a described above has been described as being divided in advance into a plurality of blocks belonging to the first imaging area B1 and a plurality of blocks belonging to the second imaging area B2, this is not a limitation. The positions of the first imaging region B1 and the second imaging region B2 in the image sensor 32a may be set according to, for example, the brightness, type, or shape of the subject.
  • The imaging areas in the imaging element 32a are not limited to the first imaging area B1 and the second imaging area B2; an imaging area in which imaging conditions different from those set for the first imaging area B1 and the second imaging area B2 are set and in which imaging is performed may also be used.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)

Abstract

This electronic device is equipped with: an imaging element which is provided with a plurality of imaging regions and captures a subject image; a setting unit which sets first imaging conditions for a first imaging region from among the plurality of imaging regions and sets second imaging conditions differing from the first imaging conditions for a second imaging region from among the plurality of imaging regions; and a correction unit which performs, with respect to images of the subject image captured in the first imaging region and in the second imaging region, correction for blurring of the subject image due to movement of the imaging element.

Description

電子機器Electronics
 本発明は、電子機器に関する。 The present invention relates to an electronic device.
 撮像素子からの信号により画像を生成する画像処理技術を搭載した撮像装置が知られている(特許文献1参照)。
 従来から画像の画質向上が要求されていた。
An imaging device equipped with an image processing technique for generating an image based on a signal from an imaging element is known (see Patent Document 1).
Conventionally, image quality improvement has been required.
日本国特開2006-197192号公報Japanese Unexamined Patent Publication No. 2006-197192
 発明の第1の態様によると、電子機器は、被写体像を撮像する複数の撮像領域を有する撮像素子と、複数の前記撮像領域のうち第1撮像領域に第1撮像条件を設定し、複数の前記撮像領域のうち第2撮像領域に前記第1撮像条件とは異なる第2撮像条件を設定する設定部と、前記第1撮像領域及び前記第2撮像領域で撮像された前記被写体像の画像に対して前記撮像素子の動きによる前記被写体像のぶれの補正を行う補正部と、を備える。
 発明の第2の態様によると、電子機器は、被写体像を撮像する複数の領域を有する撮像素子と、複数の前記領域のうち第1領域に第1撮像条件を設定し、第2領域に前記第1撮像条件とは異なる第2撮像条件を設定する設定部と、前記第1領域及び前記第2領域で撮像されて生成された第1画像と、前記第1領域で撮像されて生成された第2画像と、に基づいて画像を生成する生成部と、を備える。
According to the first aspect of the invention, the electronic device sets the first imaging condition in the first imaging region among the plurality of imaging regions, the imaging element having a plurality of imaging regions that capture the subject image, and the plurality of imaging regions. A setting unit that sets a second imaging condition different from the first imaging condition in the second imaging area of the imaging area, and an image of the subject image captured in the first imaging area and the second imaging area And a correction unit that corrects blurring of the subject image due to the movement of the image sensor.
According to the second aspect of the invention, the electronic device sets an image sensor having a plurality of regions for capturing a subject image, a first imaging condition in the first region among the plurality of regions, and the second region in the first region. A setting unit for setting a second imaging condition different from the first imaging condition, a first image generated by imaging in the first area and the second area, and an image generated by imaging in the first area And a generation unit that generates an image based on the second image.
第1の実施の形態によるカメラの構成を例示するブロック図である。It is a block diagram which illustrates the composition of the camera by a 1st embodiment. 積層型の撮像素子の断面図である。It is sectional drawing of a laminated type image pick-up element. 撮像チップの画素配列と単位領域を説明する図である。It is a figure explaining the pixel arrangement | sequence and unit area | region of an imaging chip. 単位領域における回路を説明する図である。It is a figure explaining the circuit in a unit area. 撮像素子の機能的構成を示すブロック図である。It is a block diagram which shows the functional structure of an image pick-up element. 図6(a)~図6(c)は、撮像素子の撮像面に設定される第1撮像領域および第2撮像領域の配置を例示する図である。6A to 6C are diagrams illustrating the arrangement of the first imaging region and the second imaging region set on the imaging surface of the imaging device. 図7(a)は、記録用画像を例示する図、図7(b)は、検出用画像を例示する図である。FIG. 7A is a diagram illustrating a recording image, and FIG. 7B is a diagram illustrating a detection image. 撮像画面における被写体領域を例示する図である。It is a figure which illustrates the photographic subject field in an imaging screen. 設定画面を例示する図である。It is a figure which illustrates a setting screen. 記録用画像の撮像タイミングと、検出用画像の撮像タイミングと、モニタ用画像の表示タイミングとを例示する図である。It is a figure which illustrates the imaging timing of the image for recording, the imaging timing of the image for a detection, and the display timing of the image for a monitor. ぶれ補正部として機能する制御部を説明する図である。It is a figure explaining the control part which functions as a shake correction part. 図12(a)および図12(b)は、像ぶれ補正を説明する模式図である。FIGS. 12A and 12B are schematic diagrams for explaining image blur correction. 図13(a)~図13(e)は、像ぶれ補正を説明する模式図である。FIGS. 13A to 13E are schematic diagrams for explaining image blur correction. カーネルを例示する図である。It is a figure which illustrates a kernel. 撮像面における焦点検出用画素の位置を例示する図である。It is a figure which illustrates the position of the pixel for focus detection in an imaging surface. 焦点検出画素ラインの一部の領域を拡大した図である。It is the figure which expanded the one part area | region of the focus detection pixel line. フォーカスポイントを拡大した図である。It is the figure which expanded the focus point. 図18(a)は、検出しようとする対象物を表すテンプレート画像を例示する図であり、図18(b)は、モニタ用画像および探索範囲を例示する図である。FIG. 18A is a diagram illustrating a template image representing an object to be detected, and FIG. 18B is a diagram illustrating a monitor image and a search range. 第1の実施の形態による処理の流れを説明するフローチャートである。It is a flowchart explaining the flow of the process by 1st Embodiment. 像ぶれ補正処理の流れを説明するフローチャートである。6 is a flowchart illustrating a flow of image blur correction processing. 図21(a)は、静止画を例示する図、図21(b)および図21(c)は、第2の実施の形態による静止画ぶれ補正の概要を説明する図である。FIG. 21A is a diagram illustrating still images, and FIG. 21B and FIG. 21C are diagrams illustrating an overview of still image blur correction according to the second embodiment. 記録用画像の撮像タイミングと、ぶれ補正画像の表示タイミングとを例示する図である。It is a figure which illustrates the imaging timing of the image for recording, and the display timing of a blurring correction image. 像ぶれ処理の流れを説明するフローチャートである。10 is a flowchart illustrating a flow of image blur processing. 第3の実施の形態による静止画ぶれ補正の概要を説明する図である。It is a figure explaining the outline | summary of the still image blurring correction | amendment by 3rd Embodiment. 記録用画像の撮像タイミングと、ぶれ補正画像の表示タイミングとを例示する図である。It is a figure which illustrates the imaging timing of the image for recording, and the display timing of a blurring correction image. 撮像素子の撮像面に設定される第1撮像領域、第2撮像領域、第3撮像領域および第4撮像領域の配置を例示する図である。It is a figure which illustrates arrangement | positioning of the 1st imaging area set to the imaging surface of an imaging device, a 2nd imaging area, a 3rd imaging area, and a 4th imaging area. 
図27(a)は、第1の記録用画像による静止画であり、図27(b)は、第2の記録用画像による静止画である。FIG. 27A is a still image based on the first recording image, and FIG. 27B is a still image based on the second recording image. 変形例2による撮像システムの構成を例示するブロック図である。It is a block diagram which illustrates the composition of the imaging system by modification 2. モバイル機器へのプログラムの供給を説明する図である。It is a figure explaining supply of the program to a mobile device.
(第1の実施の形態)
 第1の実施の形態による撮像装置を搭載する電子機器について説明する。図1のデジタルカメラ1(以降、カメラ1と称する)は、電子機器の一例である。カメラ1は、レンズ交換式のカメラでも、レンズ一体型のカメラでもよい。また、スマートフォン等の携帯端末に搭載されるカメラでもよい。また、ビデオカメラやモバイルカメラ等の撮像装置として構成してもよい。
(First embodiment)
An electronic apparatus equipped with the imaging device according to the first embodiment will be described. The digital camera 1 in FIG. 1 (hereinafter referred to as camera 1) is an example of an electronic device. The camera 1 may be an interchangeable lens camera or a lens-integrated camera. Moreover, the camera mounted in portable terminals, such as a smart phone, may be used. Moreover, you may comprise as imaging devices, such as a video camera and a mobile camera.
 カメラ1には、イメージセンサの一例として撮像素子32aが搭載される。撮像素子32aは、撮像領域に設定された撮像条件で撮像を行う。また、撮像素子32aは、撮像領域ごとに異なる撮像条件で撮像を行うことが可能に構成されている。第1の実施の形態では、カメラ1の画像処理部33が、動画を撮像中のカメラ1に生じた揺動(カメラ1の振れと称する)に伴う像ぶれの影響を抑える処理を行う。像ぶれは、撮像素子32aの撮像面の被写体像が動くことをいう。このようなカメラ1の詳細について、図面を参照して説明する。なお、像ぶれは、被写体像に対して撮像素子32aが動くともいえる。 The camera 1 is equipped with an image sensor 32a as an example of an image sensor. The imaging element 32a performs imaging under the imaging conditions set in the imaging area. In addition, the imaging element 32a is configured to be able to perform imaging under different imaging conditions for each imaging region. In the first embodiment, the image processing unit 33 of the camera 1 performs a process of suppressing the influence of image blur caused by the shaking (referred to as shaking of the camera 1) occurring in the camera 1 that is capturing a moving image. Image blur refers to the movement of the subject image on the imaging surface of the image sensor 32a. Details of the camera 1 will be described with reference to the drawings. Note that image blur can be said to be that the image sensor 32a moves relative to the subject image.
<カメラの説明>
 図1は、一実施の形態によるカメラ1の構成を例示するブロック図である。図1において、カメラ1は、撮像光学系31と、撮像部32と、画像処理部33と、制御部34と、表示部35と、操作部材36と、記録部37と、測光用センサ38と、振れセンサ39とを有する。
<Explanation of camera>
FIG. 1 is a block diagram illustrating the configuration of a camera 1 according to an embodiment. In FIG. 1, a camera 1 includes an imaging optical system 31, an imaging unit 32, an image processing unit 33, a control unit 34, a display unit 35, an operation member 36, a recording unit 37, and a photometric sensor 38. And a shake sensor 39.
 撮像光学系31は、被写体からの光束を撮像部32へ導く。撮像部32は、撮像素子32aおよび駆動部32bを含み、撮像光学系31によって結像された被写体の像を光電変換する。撮像部32は、撮像素子32aにおける撮像領域の全域において同じ撮像条件で撮像したり、撮像領域ごとに異なる撮像条件で撮像したりすることができる。撮像部32の詳細については後述する。駆動部32bは、撮像素子32aに蓄積制御を行わせるために必要な駆動信号を生成する。駆動部32bに対する電荷蓄積時間(露光時間)、ISO感度(ゲイン)、フレームレートなどの撮像指示は、制御部34から駆動部32bへ送信される。 The imaging optical system 31 guides the light flux from the subject to the imaging unit 32. The imaging unit 32 includes an imaging element 32a and a driving unit 32b, and photoelectrically converts an object image formed by the imaging optical system 31. The imaging unit 32 can capture images with the same imaging condition in the entire imaging region of the imaging element 32a, or can capture images with different imaging conditions for each imaging region. Details of the imaging unit 32 will be described later. The drive unit 32b generates a drive signal necessary for causing the image sensor 32a to perform accumulation control. Imaging instructions such as charge accumulation time (exposure time), ISO sensitivity (gain), and frame rate for the drive unit 32b are transmitted from the control unit 34 to the drive unit 32b.
 画像処理部33は、入力部33aと、補正部33bと、生成部33cとを含む。入力部33aには、撮像部32によって生成された画像データが入力される。補正部33bは、上記入力された画像データに対して、手ぶれに伴う像ぶれに対して補正処理を行う。補正処理の詳細については後述する。生成部33cは、上記入力された画像データと補正処理後の画像データとに対する画像処理を行い、画像を生成する。画像処理には、例えば、色補間処理、画素欠陥補正処理、輪郭強調処理、ノイズ低減処理、ホワイトバランス調整処理、ガンマ補正処理、表示輝度調整処理、彩度調整処理等が含まれる。さらに、生成部33cは、表示部35により表示する画像や、記録する画像を生成する。 The image processing unit 33 includes an input unit 33a, a correction unit 33b, and a generation unit 33c. The image data generated by the imaging unit 32 is input to the input unit 33a. The correction unit 33b performs a correction process on the input image data with respect to image blur caused by camera shake. Details of the correction processing will be described later. The generation unit 33c performs image processing on the input image data and the corrected image data to generate an image. The image processing includes, for example, color interpolation processing, pixel defect correction processing, contour enhancement processing, noise reduction processing, white balance adjustment processing, gamma correction processing, display luminance adjustment processing, saturation adjustment processing, and the like. Further, the generation unit 33c generates an image to be displayed by the display unit 35 and an image to be recorded.
 制御部34は、例えばCPUによって構成され、カメラ1による全体の動作を制御する。例えば、制御部34は、撮像部32で生成された光電変換信号に基づいて所定の露出演算を行い、適正露出に必要な撮像素子32aの電荷蓄積時間、撮像光学系31の絞り値、ISO感度等の露出条件を決定して駆動部32bへ指示する。また、カメラ1に設定されている撮像シーンモードや、検出した被写体要素の種類に応じて、彩度、コントラスト、シャープネス等を調整する画像処理条件を決定して画像処理部33へ指示する。被写体要素の検出については後述する。 The control unit 34 is constituted by a CPU, for example, and controls the overall operation of the camera 1. For example, the control unit 34 performs a predetermined exposure calculation based on the photoelectric conversion signal generated by the imaging unit 32, the charge accumulation time of the imaging element 32a necessary for proper exposure, the aperture value of the imaging optical system 31, and the ISO sensitivity. The exposure condition such as is determined and an instruction is given to the drive unit 32b. In addition, image processing conditions for adjusting saturation, contrast, sharpness, and the like are determined and instructed to the image processing unit 33 according to the imaging scene mode set in the camera 1 and the type of the detected subject element. The detection of the subject element will be described later.
 制御部34には、物体検出部34aと、設定部34bと、撮像制御部34cと、レンズ移動制御部34dとが含まれる。これらは、制御部34が不図示の不揮発性メモリに格納されているプログラムを実行することにより、ソフトウェア的に実現されるが、これらをASIC等により構成しても構わない。 The control unit 34 includes an object detection unit 34a, a setting unit 34b, an imaging control unit 34c, and a lens movement control unit 34d. These are realized as software by the control unit 34 executing a program stored in a nonvolatile memory (not shown). However, these may be configured by an ASIC or the like.
 物体検出部34aは、公知の物体認識処理を行うことにより、撮像部32によって生成された画像データから、人物(人物の顔)、犬、猫などの動物(動物の顔)、植物、自転車、自動車、電車などの乗物、建造物、静止物、山、雲などの風景、あらかじめ定められた特定の物体などの被写体要素を検出する。 The object detection unit 34a performs a known object recognition process, and from the image data generated by the imaging unit 32, an animal (animal face) such as a person (person's face), a dog or a cat, a plant, a bicycle, A subject element such as a vehicle, a vehicle such as a train, a building, a stationary object, a landscape such as a mountain or a cloud, or a predetermined specific object is detected.
 設定部34bは、撮像素子32aの撮像領域に対して撮像条件を設定する。撮像条件は、上記露出条件(電荷蓄積時間、ゲイン、ISO感度、フレームレート等)と、上記画像処理条件(例えば、ホワイトバランス調整用パラメータ、ガンマ補正カーブ、表示輝度調整パラメータ、彩度調整パラメータ等)とを含む。なお、設定部34bは、複数の撮像領域に同じ撮像条件を設定することも、複数の撮像領域において異なる撮像条件をそれぞれ設定することも可能である。 The setting unit 34b sets imaging conditions for the imaging area of the imaging device 32a. Imaging conditions include the exposure conditions (charge accumulation time, gain, ISO sensitivity, frame rate, etc.) and the image processing conditions (for example, white balance adjustment parameters, gamma correction curves, display brightness adjustment parameters, saturation adjustment parameters, etc.) ). Note that the setting unit 34b can set the same imaging condition for a plurality of imaging areas, or can set different imaging conditions for each of the plurality of imaging areas.
 撮像制御部34cは、設定部34bによって撮像領域に設定された撮像条件を適用して撮像部32(撮像素子32a)、画像処理部33を制御する。撮像領域に配置される画素の数は、単数でも複数でもよい。また、複数の撮像領域間で配置される画素の数が異なっていてもよい。 The imaging control unit 34c controls the imaging unit 32 (imaging element 32a) and the image processing unit 33 by applying the imaging conditions set in the imaging region by the setting unit 34b. The number of pixels arranged in the imaging region may be singular or plural. Further, the number of pixels arranged between the plurality of imaging regions may be different.
 レンズ移動制御部34dは、撮像画面の所定の位置(フォーカスポイントと呼ぶ)において、対応する被写体に対してフォーカスを合わせる自動焦点調節(オートフォーカス:AF)動作を制御する。フォーカスを合わせると、被写体の像の尖鋭度が高まる。すなわち、レンズ移動機構31mによって撮像光学系31のフォーカスレンズを光軸方向に移動させることにより、撮像光学系31による像を調節する。レンズ移動制御部34dは、演算結果に基づいて、撮像光学系31のフォーカスレンズを合焦位置へ移動させるための駆動信号、例えば被写体の像を撮像光学系31のフォーカスレンズで調節するための信号を、撮像光学系31のレンズ駆動機構31mに送る。このように、レンズ移動制御部34dは、演算結果に基づいて、撮像光学系31のフォーカスレンズを光軸方向に移動させる移動部として機能する。レンズ移動制御部34dがAF動作のために行う処理は、焦点検出処理とも呼ばれる。焦点検出処理の詳細については後述する。 The lens movement control unit 34d controls an automatic focus adjustment (autofocus: AF) operation for focusing on a corresponding subject at a predetermined position (called a focus point) on the imaging screen. When the focus is adjusted, the sharpness of the subject image increases. That is, the image by the imaging optical system 31 is adjusted by moving the focus lens of the imaging optical system 31 in the optical axis direction by the lens moving mechanism 31m. The lens movement control unit 34d is a drive signal for moving the focus lens of the imaging optical system 31 to the in-focus position based on the calculation result, for example, a signal for adjusting the subject image with the focus lens of the imaging optical system 31. Is sent to the lens driving mechanism 31m of the imaging optical system 31. In this way, the lens movement control unit 34d functions as a moving unit that moves the focus lens of the imaging optical system 31 in the optical axis direction based on the calculation result. The process performed by the lens movement control unit 34d for the AF operation is also referred to as a focus detection process. Details of the focus detection process will be described later.
 表示部35は、画像処理部33によって生成された画像や画像処理された画像、記録部37によって読み出された画像などを表示する。表示部35は、操作メニュー画面や、撮像条件を設定するための設定画面等の表示も行う。 The display unit 35 displays an image generated by the image processing unit 33, an image processed image, an image read by the recording unit 37, and the like. The display unit 35 also displays an operation menu screen, a setting screen for setting imaging conditions, and the like.
 操作部材36は、録画ボタン、シャッターボタンやメニューボタン等の種々の操作部材によって構成される。操作部材36は、各操作に対応する操作信号を制御部34へ送出する。操作部材36には、表示部35の表示面に設けられたタッチ操作部材も含まれる。本実施の形態では、動画を記録することを録画と称する。 The operation member 36 includes various operation members such as a recording button, a shutter button, and a menu button. The operation member 36 sends an operation signal corresponding to each operation to the control unit 34. The operation member 36 includes a touch operation member provided on the display surface of the display unit 35. In the present embodiment, recording a moving image is referred to as recording.
 記録部37は、制御部34からの指示に応じて、不図示のメモリカードなどで構成される記録媒体に画像データなどを記録する。また、記録部37は、制御部34からの指示に応じて記録媒体に記録されている画像データを読み出す。 The recording unit 37 records image data or the like on a recording medium including a memory card (not shown) in response to an instruction from the control unit 34. The recording unit 37 reads image data recorded on the recording medium in response to an instruction from the control unit 34.
 測光用センサ38は、被写体の明るさを検出し、検出信号を出力する。振れセンサ39は、例えば角速度センサおよび加速度センサによって構成される。振れセンサ39は、カメラ1の振れを検出し、検出信号を出力する。カメラ1の振れは、手ぶれとも称される。
 本実施の形態では、カメラ1の振れによって像ぶれが生じるものとする。
 なお、振れセンサ39による検出信号に基づいて像ぶれを検出する代わりに、撮像された被写体の画像データに基づいて像ぶれを検出してもよい。
The photometric sensor 38 detects the brightness of the subject and outputs a detection signal. The shake sensor 39 is constituted by, for example, an angular velocity sensor and an acceleration sensor. The shake sensor 39 detects the shake of the camera 1 and outputs a detection signal. The shake of the camera 1 is also referred to as camera shake.
In the present embodiment, it is assumed that image blur occurs due to camera 1 shake.
Instead of detecting the image blur based on the detection signal from the shake sensor 39, the image blur may be detected based on the image data of the captured subject.
<積層型の撮像素子の説明>
 上述した撮像素子32aの一例として積層型の撮像素子100について説明する。図2は、撮像素子100の断面図である。撮像素子100は、撮像チップ111と、信号処理チップ112と、メモリチップ113とを備える。撮像チップ111は、信号処理チップ112に積層されている。信号処理チップ112は、メモリチップ113に積層されている。撮像チップ111および信号処理チップ112、信号処理チップ112およびメモリチップ113は、それぞれ接続部109により電気的に接続されている。接続部109は、例えばバンプや電極である。撮像チップ111は、被写体像を撮像して画像データを生成する。撮像チップ111は、画像データを撮像チップ111から信号処理チップ112へ出力する。信号処理チップ112は、撮像チップ111から出力された画像データに対して信号処理を施す。メモリチップ113は、複数のメモリを有し、画像データを記憶する。
 なお、撮像素子100は、撮像チップおよび信号処理チップで構成されてもよい。撮像素子100が撮像チップおよび信号処理チップで構成されている場合、画像データを記憶するための記憶部は、信号処理チップに設けられてもよいし、撮像素子100とは別に設けていてもよい。また、撮像素子100は、撮像チップ111がメモリチップ113に積層され、メモリチップ113が信号処理チップ112に積層される構成でもよい。
<Description of Laminated Image Sensor>
A laminated image sensor 100 will be described as an example of the image sensor 32a described above. FIG. 2 is a cross-sectional view of the image sensor 100. The imaging element 100 includes an imaging chip 111, a signal processing chip 112, and a memory chip 113. The imaging chip 111 is stacked on the signal processing chip 112. The signal processing chip 112 is stacked on the memory chip 113. The imaging chip 111, the signal processing chip 112, the signal processing chip 112, and the memory chip 113 are electrically connected by a connection unit 109. The connection unit 109 is, for example, a bump or an electrode. The imaging chip 111 captures a subject image and generates image data. The imaging chip 111 outputs image data from the imaging chip 111 to the signal processing chip 112. The signal processing chip 112 performs signal processing on the image data output from the imaging chip 111. The memory chip 113 has a plurality of memories and stores image data.
Note that the image sensor 100 may include an image pickup chip and a signal processing chip. When the imaging device 100 is configured by an imaging chip and a signal processing chip, a storage unit for storing image data may be provided in the signal processing chip or may be provided separately from the imaging device 100. . In addition, the imaging element 100 may have a configuration in which the imaging chip 111 is stacked on the memory chip 113 and the memory chip 113 is stacked on the signal processing chip 112.
 図2に示すように、入射光は、主に白抜き矢印で示すZ軸プラス方向へ向かって入射する。また、座標軸に示すように、Z軸に直交する紙面左方向をX軸プラス方向、Z軸およびX軸に直交する紙面手前方向をY軸プラス方向とする。以降のいくつかの図においては、図2の座標軸を基準として、それぞれの図の向きがわかるように座標軸を表示する。 As shown in FIG. 2, the incident light is incident mainly in the positive direction of the Z axis indicated by the white arrow. Further, as shown in the coordinate axes, the left direction of the paper orthogonal to the Z axis is the X axis plus direction, and the front side of the paper orthogonal to the Z axis and the X axis is the Y axis plus direction. In the following several figures, the coordinate axes are displayed so that the orientation of each figure can be understood with reference to the coordinate axes in FIG.
 撮像チップ111は、例えば、CMOSイメージセンサである。撮像チップ111は、具体的には、裏面照射型のCMOSイメージセンサである。撮像チップ111は、マイクロレンズ層101、カラーフィルタ層102、パッシベーション層103、半導体層106、および配線層108を有する。撮像チップ111は、Z軸プラス方向に向かってマイクロレンズ層101、カラーフィルタ層102、パッシベーション層103、半導体層106、および配線層108の順に配置されている。 The imaging chip 111 is, for example, a CMOS image sensor. Specifically, the imaging chip 111 is a backside illumination type CMOS image sensor. The imaging chip 111 includes a microlens layer 101, a color filter layer 102, a passivation layer 103, a semiconductor layer 106, and a wiring layer 108. The imaging chip 111 is arranged in the order of the microlens layer 101, the color filter layer 102, the passivation layer 103, the semiconductor layer 106, and the wiring layer 108 in the positive Z-axis direction.
 マイクロレンズ層101は、複数のマイクロレンズLを有する。マイクロレンズLは、入射した光を後述する光電変換部104に集光する。カラーフィルタ層102は、分光特性の異なる複数種類のカラーフィルタFを有する。カラーフィルタ層102は、具体的には、主に赤色成分の光を透過させる分光特性の第1フィルタ(R)と、主に緑色成分の光を透過させる分光特性の第2フィルタ(Gb、Gr)と、主に青色成分の光を透過させる分光特性の第3フィルタ(B)と、を有する。カラーフィルタ層102は、例えば、ベイヤー配列により第1フィルタ、第2フィルタおよび第3フィルタが配置されている。パッシベーション層103は、窒化膜や酸化膜で構成され、半導体層106を保護する。 The microlens layer 101 has a plurality of microlenses L. The microlens L condenses incident light on the photoelectric conversion unit 104 described later. The color filter layer 102 has a plurality of types of color filters F having different spectral characteristics. Specifically, the color filter layer 102 includes a first filter (R) having a spectral characteristic that mainly transmits red component light and a second filter (Gb, Gr) that has a spectral characteristic that mainly transmits green component light. ) And a third filter (B) having a spectral characteristic that mainly transmits blue component light. In the color filter layer 102, for example, a first filter, a second filter, and a third filter are arranged in a Bayer arrangement. The passivation layer 103 is made of a nitride film or an oxide film, and protects the semiconductor layer 106.
 半導体層106は、光電変換部104および読出回路105を有する。半導体層106は、光の入射面である第1面106aと第1面106aの反対側の第2面106bとの間に複数の光電変換部104を有する。半導体層106は、光電変換部104がX軸方向およびY軸方向に複数配列されている。光電変換部104は、光を電荷に変換する光電変換機能を有する。また、光電変換部104は、光電変換信号による電荷を蓄積する。光電変換部104は、例えば、フォトダイオードである。半導体層106は、光電変換部104よりも第2面106b側に読出回路105を有する。半導体層106は、読出回路105がX軸方向およびY軸方向に複数配列されている。読出回路105は、複数のトランジスタにより構成され、光電変換部104によって光電変換された電荷により生成される画像データを読み出して配線層108へ出力する。 The semiconductor layer 106 includes a photoelectric conversion unit 104 and a readout circuit 105. The semiconductor layer 106 includes a plurality of photoelectric conversion units 104 between a first surface 106a that is a light incident surface and a second surface 106b opposite to the first surface 106a. The semiconductor layer 106 includes a plurality of photoelectric conversion units 104 arranged in the X-axis direction and the Y-axis direction. The photoelectric conversion unit 104 has a photoelectric conversion function of converting light into electric charge. In addition, the photoelectric conversion unit 104 accumulates charges based on the photoelectric conversion signal. The photoelectric conversion unit 104 is, for example, a photodiode. The semiconductor layer 106 includes a readout circuit 105 on the second surface 106b side of the photoelectric conversion unit 104. In the semiconductor layer 106, a plurality of readout circuits 105 are arranged in the X-axis direction and the Y-axis direction. The readout circuit 105 includes a plurality of transistors, reads out image data generated by the electric charges photoelectrically converted by the photoelectric conversion unit 104, and outputs the image data to the wiring layer 108.
 配線層108は、複数の金属層を有する。金属層は、例えば、Al配線、Cu配線等である。配線層108は、読出回路105により読み出された画像データが出力される。画像データは、接続部109を介して配線層108から信号処理チップ112へ出力される。 The wiring layer 108 has a plurality of metal layers. The metal layer is, for example, an Al wiring, a Cu wiring, or the like. The wiring layer 108 outputs the image data read by the reading circuit 105. The image data is output from the wiring layer 108 to the signal processing chip 112 via the connection unit 109.
 なお、接続部109は、光電変換部104ごとに設けられていてもよい。また、接続部109は、複数の光電変換部104ごとに設けられていてもよい。接続部109が複数の光電変換部104ごとに設けられている場合、接続部109のピッチは、光電変換部104のピッチよりも大きくてもよい。また、接続部109は、光電変換部104が配置されている領域の周辺領域に設けられていてもよい。 Note that the connection unit 109 may be provided for each photoelectric conversion unit 104. Further, the connection unit 109 may be provided for each of the plurality of photoelectric conversion units 104. When the connection unit 109 is provided for each of the plurality of photoelectric conversion units 104, the pitch of the connection units 109 may be larger than the pitch of the photoelectric conversion units 104. In addition, the connection unit 109 may be provided in a peripheral region of the region where the photoelectric conversion unit 104 is disposed.
 信号処理チップ112は、複数の信号処理回路を有する。信号処理回路は、撮像チップ111から出力された画像データに対して信号処理を行う。信号処理回路は、例えば、画像データの信号値を増幅するアンプ回路、画像データのノイズの低減処理を行う相関二重サンプリング回路およびアナログ信号をデジタル信号に変換するアナログ/デジタル(A/D)変換回路等である。信号処理回路は、光電変換部104ごとに設けられていてもよい。 The signal processing chip 112 has a plurality of signal processing circuits. The signal processing circuit performs signal processing on the image data output from the imaging chip 111. The signal processing circuit includes, for example, an amplifier circuit that amplifies the signal value of the image data, a correlated double sampling circuit that performs noise reduction processing of the image data, and analog / digital (A / D) conversion that converts the analog signal into a digital signal. Circuit etc. A signal processing circuit may be provided for each photoelectric conversion unit 104.
 また、信号処理回路は、複数の光電変換部104ごとに設けられていてもよい。信号処理チップ112は、複数の貫通電極110を有する。貫通電極110は、例えばシリコン貫通電極である。貫通電極110は、信号処理チップ112に設けられた回路を互いに接続する。貫通電極110は、撮像チップ111の周辺領域、メモリチップ113にも設けられてもよい。なお、信号処理回路を構成する一部の素子を撮像チップ111に設けてもよい。例えば、アナログ/デジタル変換回路の場合、入力電圧と基準電圧の比較を行う比較器を撮像チップ111に設け、カウンター回路やラッチ回路等の回路を、信号処理チップ112に設けてもよい。 Further, a signal processing circuit may be provided for each of the plurality of photoelectric conversion units 104. The signal processing chip 112 has a plurality of through electrodes 110. The through electrode 110 is, for example, a silicon through electrode. The through electrode 110 connects circuits provided in the signal processing chip 112 to each other. The through electrode 110 may also be provided in the peripheral region of the imaging chip 111 and the memory chip 113. Note that some elements constituting the signal processing circuit may be provided in the imaging chip 111. For example, in the case of an analog / digital conversion circuit, a comparator that compares an input voltage with a reference voltage may be provided in the imaging chip 111, and circuits such as a counter circuit and a latch circuit may be provided in the signal processing chip 112.
 メモリチップ113は、複数の記憶部を有する。記憶部は、信号処理チップ112で信号処理が施された画像データを記憶する。記憶部は、例えば、DRAM等の揮発性メモリである。記憶部は、光電変換部104ごとに設けられていてもよい。また、記憶部は、複数の光電変換部104ごとに設けられていてもよい。記憶部に記憶された画像データは、後段の画像処理部に出力される。 The memory chip 113 has a plurality of storage units. The storage unit stores image data that has been subjected to signal processing by the signal processing chip 112. The storage unit is a volatile memory such as a DRAM, for example. A storage unit may be provided for each photoelectric conversion unit 104. In addition, the storage unit may be provided for each of the plurality of photoelectric conversion units 104. The image data stored in the storage unit is output to the subsequent image processing unit.
 FIG. 3 is a diagram explaining the pixel array of the imaging chip 111 and a unit region 131, and in particular shows the imaging chip 111 as observed from the back surface (imaging surface) side. In the pixel region, for example, 20 million or more pixels are arranged in a matrix. In the example of FIG. 3, four adjacent pixels (2 pixels × 2 pixels) form one unit region 131. The grid lines in the figure illustrate the concept of adjacent pixels being grouped to form a unit region 131. The number of pixels forming a unit region 131 is not limited to this; it may be about 1000 pixels, for example 32 pixels × 32 pixels, it may be more or fewer than that, and it may even be a single pixel.
 As shown in the partially enlarged view of the pixel region, the unit region 131 in FIG. 3 contains a so-called Bayer array consisting of four pixels: green pixels Gb and Gr, a blue pixel B, and a red pixel R. The green pixels Gb and Gr have a green filter as their color filter F and receive light in the green wavelength band of the incident light. Similarly, the blue pixel B has a blue filter as its color filter F and receives light in the blue wavelength band, and the red pixel R has a red filter as its color filter F and receives light in the red wavelength band.
 In the present embodiment, a plurality of blocks are defined such that each block contains at least one unit region 131; that is, the minimum unit of a block is one unit region 131. As described above, the smallest possible number of pixels forming one unit region 131 is one pixel, so when a block is defined in units of pixels, the smallest number of pixels that can define a block is also one pixel. Each block can control the pixels it contains with control parameters that differ from block to block; within a block, the unit regions 131, that is, the pixels in that block, are controlled under the same imaging conditions. In other words, photoelectric conversion signals captured under different imaging conditions can be acquired from the pixel group contained in one block and the pixel group contained in another block. Examples of control parameters are the frame rate, the gain, the thinning rate, the number of rows or columns whose photoelectric conversion signals are added together, the charge accumulation time or the number of accumulations, and the number of digitization bits (word length). The imaging element 100 can freely perform thinning not only in the row direction (the X-axis direction of the imaging chip 111) but also in the column direction (the Y-axis direction of the imaging chip 111). The control parameters may also be parameters used in image processing.
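 The following is a minimal Python sketch of how per-block control parameters could be represented in software. The class, field names, and numerical values are hypothetical illustrations and are not defined in the present specification.

```python
from dataclasses import dataclass

@dataclass
class BlockCondition:
    frame_rate_fps: float       # frames per second for this block
    gain: float                 # analog gain (relates to ISO sensitivity)
    thinning_rate: int          # read every Nth row/column (1 = no thinning)
    accumulation_time_s: float  # charge accumulation (exposure) time
    bit_depth: int              # digitization word length

# The imaging surface is divided into blocks; each block is keyed by (row, col).
conditions = {}
for row in range(4):
    for col in range(4):
        conditions[(row, col)] = BlockCondition(
            frame_rate_fps=60.0, gain=1.0, thinning_rate=1,
            accumulation_time_s=1 / 60, bit_depth=12)

# A single block can be overridden independently of its neighbours.
conditions[(2, 3)].gain = 4.0
conditions[(2, 3)].accumulation_time_s = 1 / 240
print(conditions[(2, 3)])
```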
 FIG. 4 is a diagram explaining the circuit of a unit region 131. In the example of FIG. 4, one unit region 131 is formed by four adjacent pixels (2 pixels × 2 pixels). As described above, the number of pixels contained in the unit region 131 is not limited to this; it may be 1000 pixels or more, or as few as one pixel. The two-dimensional positions within the unit region 131 are denoted by reference signs A to D.
 The reset transistors (RST) of the pixels contained in the unit region 131 can be turned on and off individually for each pixel. In FIG. 4, a reset wiring 300 for turning the reset transistor of pixel A on and off is provided, and a reset wiring 310 for turning the reset transistor of pixel B on and off is provided separately from the reset wiring 300. Similarly, a reset wiring 320 for turning the reset transistor of pixel C on and off is provided separately from the reset wirings 300 and 310, and a dedicated reset wiring 330 for turning its reset transistor on and off is provided for the remaining pixel D.
 The transfer transistors (TX) of the pixels contained in the unit region 131 can likewise be turned on and off individually for each pixel. In FIG. 4, a transfer wiring 302 for turning the transfer transistor of pixel A on and off, a transfer wiring 312 for turning the transfer transistor of pixel B on and off, and a transfer wiring 322 for turning the transfer transistor of pixel C on and off are provided separately, and a dedicated transfer wiring 332 for turning its transfer transistor on and off is provided for the remaining pixel D.
 Furthermore, the selection transistors (SEL) of the pixels contained in the unit region 131 can also be turned on and off individually for each pixel. In FIG. 4, a selection wiring 306 for turning the selection transistor of pixel A on and off, a selection wiring 316 for turning the selection transistor of pixel B on and off, and a selection wiring 326 for turning the selection transistor of pixel C on and off are provided separately, and a dedicated selection wiring 336 for turning its selection transistor on and off is provided for the remaining pixel D.
 The power supply wiring 304 is connected in common to pixels A to D contained in the unit region 131, and the output wiring 308 is likewise connected in common to pixels A to D contained in the unit region 131. While the power supply wiring 304 is connected in common across a plurality of unit regions, the output wiring 308 is provided individually for each unit region 131. A load current source 309 supplies current to the output wiring 308; the load current source 309 may be provided on the imaging chip 111 side or on the signal processing chip 112 side.
 By individually turning the reset transistors and transfer transistors of the unit region 131 on and off, charge accumulation, including the accumulation start time, the accumulation end time, and the transfer timing, can be controlled for each of pixels A to D contained in the unit region 131. In addition, by individually turning the selection transistors of the unit region 131 on and off, the photoelectric conversion signals of pixels A to D can be output through the common output wiring 308.
 Here, a so-called rolling shutter method is known, in which charge accumulation is controlled in a regular order over the rows and columns for pixels A to D contained in the unit region 131. When pixels are selected row by row and the columns are then designated by the rolling shutter method, the photoelectric conversion signals are output in the order "ABCD" in the example of FIG. 4.
 By configuring the circuit with the unit region 131 as the basic unit in this way, the charge accumulation time can be controlled for each unit region 131. For example, photoelectric conversion signals captured at different frame rates can be output from different unit regions 131. Also, by pausing the unit regions 131 contained in other blocks while the unit regions 131 contained in some blocks of the imaging chip 111 perform charge accumulation (imaging), imaging can be performed only in predetermined blocks of the imaging chip 111 and their photoelectric conversion signals can be output. Furthermore, the blocks in which charge accumulation (imaging) is performed (the blocks subject to accumulation control) can be switched from frame to frame, so that imaging is performed sequentially in different blocks of the imaging chip 111 and the photoelectric conversion signals are output.
 FIG. 5 is a block diagram showing the functional configuration of the imaging element 100 corresponding to the circuit illustrated in FIG. 4. A multiplexer 411 selects the four PDs 104 forming the unit region 131 in turn and outputs each pixel signal to the output wiring 308 provided for that unit region 131. The multiplexer 411 is formed in the imaging chip 111 together with the PDs 104.
 The pixel signals output via the multiplexer 411 undergo correlated double sampling (CDS) and analog/digital (A/D) conversion in a signal processing circuit 412 formed in the signal processing chip 112. The A/D-converted pixel signals are passed to a demultiplexer 413 and stored in pixel memories 414 corresponding to the respective pixels. The demultiplexer 413 and the pixel memories 414 are formed in the memory chip 113.
 An arithmetic circuit 415 formed in the memory chip 113 processes the pixel signals stored in the pixel memories 414 and passes them to the image processing unit in the subsequent stage. The arithmetic circuit 415 may instead be provided in the signal processing chip 112. Although FIG. 5 shows the connections for a single unit region 131, these connections actually exist for each unit region 131 and operate in parallel. However, the arithmetic circuit 415 need not exist for every unit region 131; for example, a single arithmetic circuit 415 may process sequentially while referring in turn to the values of the pixel memories 414 corresponding to each unit region 131.
 The arithmetic circuit 415 may also be configured to include the functions of the control unit, the image processing unit, and so on in the subsequent stage.
 As described above, an output wiring 308 is provided for each unit region 131. Since the imaging element 100 stacks the imaging chip 111, the signal processing chip 112, and the memory chip 113, using inter-chip electrical connections through the connection portions 109 for these output wirings 308 allows the wiring to be routed without enlarging the chips in the planar direction.
<Block control of the imaging element>
 In the present embodiment, imaging conditions can be set for each of a plurality of blocks in the imaging element 32a. The imaging control unit 34c of the control unit 34 associates the plurality of regions described above with these blocks and causes imaging to be performed under the imaging conditions set for each block. The number of pixels constituting a block may be one or more.
 When capturing a moving image to be recorded, the camera 1 repeats imaging at a predetermined frame rate (for example, 60 fps). The same applies when capturing a moving image for monitoring. A monitoring moving image is a moving image captured before the recording button is operated or before the shutter button is operated. In the present embodiment, the moving image recorded when the recording button is operated is referred to as the recording image, and the moving image for monitoring is referred to as the monitor image. FIG. 6(a) is a diagram illustrating the arrangement of a first imaging region B1 and a second imaging region B2 set on the imaging surface of the imaging element 32a when the recording image or the monitor image is captured.
<First imaging region and second imaging region>
 According to FIG. 6(a), the first imaging region B1 is composed of the blocks in even rows of the odd block columns and the blocks in odd rows of the even block columns. The second imaging region B2 is composed of the blocks in even rows of the even block columns and the blocks in odd rows of the odd block columns. In this way, the imaging surface of the imaging element 32a is divided in a checkered pattern into a plurality of blocks belonging to the first imaging region B1 and a plurality of blocks belonging to the second imaging region B2.
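 The following is a minimal Python sketch of such a checkered assignment of blocks to the two regions. The function name is hypothetical, and whether B1 falls on the "odd" or "even" squares depends on the indexing convention; the 1-based convention of the description above is assumed.

```python
def region_of_block(row: int, col: int) -> str:
    """Checkered division of the block grid as in FIG. 6(a): blocks whose
    row and column indices have different parity go to B1, the rest to B2
    (assuming 1-based block indices)."""
    return "B1" if (row + col) % 2 == 1 else "B2"

# Print a small block map (1-based indices) to visualise the pattern.
for row in range(1, 5):
    print(" ".join(region_of_block(row, col) for col in range(1, 5)))
```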
<Recording image and detection image>
 FIGS. 7(a) and 7(b) are diagrams illustrating a recording image 51 and a detection image 52. The control unit 34 generates the recording image 51 based on the photoelectric conversion signals read out from the first imaging region B1 of the imaging element 32a that has captured one frame, and generates the detection image 52 based on the photoelectric conversion signals read out from the second imaging region B2 of the imaging element 32a.
 In the present embodiment, the photoelectric conversion signals are also referred to as image data, and the image used for subject detection, focus detection, imaging condition setting, and image generation is referred to as the detection image 52. As described above, the recording image 51 is the moving image to be recorded.
 The recording image 51 is also used to generate the monitor image displayed on the display unit 35. The detection image 52 is used to acquire information for subject detection, information for focus detection, information for imaging condition setting, and information for image generation. Specifically, based on the image data of a region of interest, described later, in the detection image 52, the control unit 34 causes the object detection unit 34a to detect subject elements, causes the lens movement control unit 34d to perform focus detection processing, causes the setting unit 34b to perform exposure calculation processing, and causes the image processing unit 33 to perform image generation processing.
<Number of readout signals per block>
 The control unit 34 sets the density of the image data read out from the first imaging region B1 of the imaging element 32a for the recording image 51 to the value required for the recording image 51. When it is not necessary to read out image data from all of the pixels contained in the first imaging region B1 of the imaging element 32a, image data may be read out from fewer pixels than are contained in the first imaging region B1.
 Similarly, the control unit 34 sets the density of the image data read out from the second imaging region B2 of the imaging element 32a for the detection image 52 to the value required for the information acquisition described above. When it is not necessary to read out image data from all of the pixels contained in the second imaging region B2 of the imaging element 32a, image data may be read out from fewer pixels than are contained in the second imaging region B2.
 The control unit 34 can make the number of readout signals per block differ between the first imaging region B1 and the second imaging region B2 of the imaging element 32a. For example, the density of image data read out per predetermined number of pixels may differ between the first imaging region B1 and the second imaging region B2 of the imaging element 32a.
 Note that when the number of pixels constituting a block is one, the number of readout signals per block is zero or one.
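 As a simple illustration of differing readout densities, the Python sketch below reads only a subset of the pixels of a block. The function and the 32 × 32 block size are hypothetical; the specification only requires that the density can differ between the two regions.

```python
import numpy as np

def read_block(block: np.ndarray, step: int) -> np.ndarray:
    """Read only every `step`-th pixel in both directions, emulating a
    lower readout density for a block (step=1 reads every pixel)."""
    return block[::step, ::step]

rng = np.random.default_rng(0)
block = rng.integers(0, 4096, size=(32, 32))   # one hypothetical 32x32-pixel block
full = read_block(block, step=1)               # density used for the recording image
sparse = read_block(block, step=4)             # lower density used for the detection image
print(full.size, "signals vs", sparse.size, "signals per block")
```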
 In general, the display unit 35 of the camera 1 has a lower display resolution than the number of pixels of the imaging element 22. To display the monitor image on the display unit 35, the control unit 34 thins out the image data of the recording image 51 every predetermined number of data, or adds the data together every predetermined number of data, thereby generating a smaller number of image data than that of the recording image 51 so as to match the display resolution of the display unit 35.
 The above addition processing for every predetermined number of data may be performed, for example, by the arithmetic circuit 415 provided in the signal processing chip 112 or the memory chip 113.
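 The Python sketch below illustrates the two reduction options mentioned above, thinning and averaging, for converting the recording image to the monitor resolution. The function name, reduction factor, and image size are assumptions for illustration only.

```python
import numpy as np

def to_monitor(image: np.ndarray, factor: int, mode: str = "average") -> np.ndarray:
    """Reduce a recording image to monitor resolution either by thinning
    (keep one pixel per factor x factor tile) or by averaging each tile."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor     # crop to a multiple of factor
    image = image[:h, :w]
    if mode == "thin":
        return image[::factor, ::factor]
    tiles = image.reshape(h // factor, factor, w // factor, factor)
    return tiles.mean(axis=(1, 3))

recording = np.arange(1920 * 1080, dtype=float).reshape(1080, 1920)
monitor = to_monitor(recording, factor=2)
print(recording.shape, "->", monitor.shape)
```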
 As described above, the control unit 34 generates the recording image 51 and the detection image 52 from the image data read out from the imaging element 32a that has captured one frame. According to FIGS. 7(a) and 7(b), the recording image 51 and the detection image 52 are captured at the same angle of view and contain an image of the same subject. Acquisition of the recording image 51 and acquisition of the detection image 52 can be performed in parallel, as shown in FIG. 10 described later.
<Other examples of dividing the first imaging region and the second imaging region>
 Instead of dividing the imaging surface into the first imaging region B1 and the second imaging region B2 in a checkered pattern, the division may be made as shown in FIG. 6(b) or FIG. 6(c). In the example of FIG. 6(b), the first imaging region B1 is composed of the even block columns and the second imaging region B2 is composed of the odd block columns. In the example of FIG. 6(c), the first imaging region B1 is composed of the odd block rows and the second imaging region B2 is composed of the even block rows.
 In both FIG. 6(b) and FIG. 6(c), the control unit 34 generates the recording image 51 based on the image data read out from the first imaging region B1 of the imaging element 32a that has captured one frame, and generates the detection image 52 based on the image data read out from the second imaging region B2 of the imaging element 32a. The recording image 51 and the detection image 52 are captured at the same angle of view and contain an image of the same subject. Acquisition of the recording image 51 and acquisition of the detection image 52 can be performed in parallel, as shown in FIG. 10 described later.
 The recording image 51 described above may also be used for focus detection processing and the like. Instead of displaying the monitor image on the display unit 35, the monitor image may be transmitted from the camera 1 to a monitor outside the camera 1 and displayed on that external monitor.
 In the present embodiment, the imaging condition set in the first imaging region B1 that captures the recording image 51 is called the first imaging condition, and the imaging condition set in the second imaging region B2 that captures the detection image 52 is called the second imaging condition. The control unit 34 may set the first imaging condition and the second imaging condition to the same condition or to different conditions.
 As an example, the control unit 34 sets the first imaging condition, set in the first imaging region B1, to a condition suitable for the recording image 51; in this case, the same first imaging condition is set uniformly over the entire first imaging region B1. Meanwhile, the control unit 34 sets the second imaging condition, set in the second imaging region B2, to a condition suitable for the information acquisition described above. The same second imaging condition may be set uniformly over the entire second imaging region B2, or a different second imaging condition may be set for each area within the second imaging region B2.
 For example, when the conditions suitable for acquiring information for subject detection, focus detection, imaging condition setting, and image generation differ from one another, the control unit 34 may set, as the second imaging condition for the second imaging region B2, a condition suitable for subject detection, a condition suitable for focus detection, a condition suitable for imaging condition setting, and a condition suitable for image generation in the respective areas within the second imaging region B2.
 The control unit 34 may also change the second imaging condition set in the second imaging region B2 from frame to frame. For example, the second imaging condition set in the second imaging region B2 for the first frame of the detection image 52 is a condition suitable for subject detection, that for the second frame is a condition suitable for focus detection, that for the third frame is a condition suitable for imaging condition setting, and that for the fourth frame is a condition suitable for image generation. In these cases, the same second imaging condition may be set uniformly over the entire second imaging region B2 in each frame, or a different second imaging condition may be set for each area.
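 The frame-by-frame cycling described above can be sketched as follows in Python. The per-purpose condition values are placeholders, not values from this specification, and the function name is hypothetical.

```python
# Hypothetical per-purpose conditions for the second imaging region B2.
B2_CONDITIONS = [
    {"purpose": "subject detection",        "gain": 8.0, "exposure_s": 1 / 120},
    {"purpose": "focus detection",          "gain": 4.0, "exposure_s": 1 / 240},
    {"purpose": "imaging condition setting", "gain": 2.0, "exposure_s": 1 / 60},
    {"purpose": "image generation",         "gain": 1.0, "exposure_s": 1 / 60},
]

def detection_condition_for_frame(frame_index: int) -> dict:
    """Return the condition applied to B2 for a given detection frame,
    cycling through the four purposes frame by frame."""
    return B2_CONDITIONS[frame_index % len(B2_CONDITIONS)]

for i in range(6):
    print(i, detection_condition_for_frame(i)["purpose"])
```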
<Area ratio between the first imaging region and the second imaging region>
 Furthermore, in FIGS. 6(a) to 6(c), the area ratio between the first imaging region B1 and the second imaging region B2 may be varied. For example, based on a user operation or on the determination of the control unit 34, the control unit 34 may set the proportion of the imaging surface occupied by the first imaging region B1 higher than the proportion occupied by the second imaging region B2, may set the proportions occupied by the first imaging region B1 and the second imaging region B2 equal as illustrated in FIGS. 6(a) to 6(c), or may set the proportion occupied by the first imaging region B1 lower than the proportion occupied by the second imaging region B2.
<Operating rate of the first imaging region and the second imaging region>
 The control unit 34 can also set the operating rate of the blocks contained in the first imaging region B1 and the operating rate of the blocks contained in the second imaging region B2 individually. For example, when the operating rates of the blocks contained in the first imaging region B1 and of the blocks contained in the second imaging region B2 are both set to 80%, imaging is performed with 80% of the blocks contained in the first imaging region B1 and with 80% of the blocks contained in the second imaging region B2 illustrated in FIGS. 6(a) to 6(c).
 In contrast, when the operating rates of the blocks contained in the first imaging region B1 and the second imaging region B2 are set to 50% and 80% respectively, for example, imaging is performed with half of the blocks contained in the first imaging region B1 and with 80% of the blocks contained in the second imaging region B2 illustrated in FIGS. 6(a) to 6(c). When imaging is performed with half of the blocks contained in the first imaging region B1, half of the blocks are driven, for example by driving every other block, and driving of the remaining half of the blocks is paused.
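 A minimal sketch of selecting which blocks of a region to drive for a given operating rate is shown below in Python. The even-spreading strategy and the function name are assumptions; the description only gives driving every other block as one example.

```python
def active_blocks(n_blocks: int, operating_rate: float) -> list[bool]:
    """Choose which blocks of a region to drive so that roughly
    `operating_rate` of them perform imaging while the others are paused.
    Active blocks are spread evenly (e.g. 0.5 drives every other block)."""
    active = []
    accumulator = 0.0
    for _ in range(n_blocks):
        accumulator += operating_rate
        if accumulator >= 1.0:
            active.append(True)
            accumulator -= 1.0
        else:
            active.append(False)
    return active

print(sum(active_blocks(100, 0.5)), "of 100 blocks driven at 50%")
print(sum(active_blocks(100, 0.8)), "of 100 blocks driven at 80%")
```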
<Example of subjects>
 FIG. 8 is a diagram illustrating subject areas on the imaging screen of the camera 1. In FIG. 8, the subject areas include a person 61, an automobile 62, a bag 63, a mountain 64, a cloud 65, and a cloud 66. The person 61 holds the bag 63 with both hands, and the automobile 62 is parked at the right rear of the person 61. When the first imaging region B1 and the second imaging region B2 are defined as in FIG. 6(a), FIG. 6(b), or FIG. 6(c), a recording image 51 such as that in FIG. 7(a) is obtained from the image data read out from the first imaging region B1, and a detection image 52 such as that in FIG. 7(b) is obtained from the image data read out from the second imaging region B2.
 As one example of setting different imaging conditions for the blocks of the imaging element 32a in correspondence with the subjects, imaging conditions can be set for the blocks corresponding to each subject area. For example, imaging conditions are set individually for the blocks contained in the area of the person 61, the blocks contained in the automobile 62, the blocks contained in the bag 63, the blocks contained in the mountain 64, the blocks contained in the cloud 65, and the blocks contained in the cloud 66. To set imaging conditions appropriately for each of these blocks, the detection image 52 described above must be captured under appropriate exposure conditions so that an image free of whiteout and blackout is obtained. Whiteout means that the gradation of the data in the high-luminance portions of the image is lost due to overexposure, and blackout means that the gradation of the data in the low-luminance portions of the image is lost due to underexposure.
 This is because, in a detection image 52 in which whiteout or blackout has occurred, the outlines (edges) of the individual subject areas, for example, do not appear, and it becomes difficult to detect the subject elements based on the detection image 52.
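 One simple way to check a detection image for whiteout and blackout is to measure the fraction of pixels clipped at the ends of the value range, as in the Python sketch below. The function name, 8-bit range, and thresholds are assumptions for illustration; the specification does not prescribe a particular check.

```python
import numpy as np

def clipping_ratios(image: np.ndarray, low: int = 0, high: int = 255) -> tuple[float, float]:
    """Fraction of pixels whose data are crushed to black (blackout, underexposure)
    or blown to white (whiteout, overexposure); gradation is lost in both cases."""
    under = float(np.mean(image <= low))
    over = float(np.mean(image >= high))
    return under, over

rng = np.random.default_rng(1)
detection = rng.integers(0, 256, size=(480, 640))   # stand-in for a detection image
under, over = clipping_ratios(detection)
print(f"blackout: {under:.1%}, whiteout: {over:.1%}")
```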
<Imaging conditions set in the second imaging region>
 In the first embodiment, therefore, by appropriately setting the second imaging condition for acquiring the detection image 52 in the second imaging region B2 of the imaging element 32a, subject detection, focus detection, imaging condition setting, and image generation based on the detection image 52 can be performed appropriately. In the first embodiment, the second imaging condition is set as follows.
 If the detection image 52 were too dark or too bright, the edges, color differences, and so on in the imaging screen would be difficult to extract, and the subject would be difficult to detect. In contrast, when the detection image 52 is obtained with appropriate brightness, free of whiteout and blackout, the edges, color differences, and so on in the screen are easier to extract, and improved accuracy of subject detection and subject recognition can be expected. The control unit 34 therefore makes, for example, a second gain serving as the second imaging condition for obtaining the detection image 52 higher than a first gain serving as the first imaging condition for obtaining the recording image 51. Since a second gain higher than the first gain of the recording image 51 is set for the detection image 52, the object detection unit 34a can accurately detect the subject based on the bright detection image 52 even when the recording image 51 is a dark image. When the subject can be detected accurately, the image can be appropriately divided for each subject.
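 The effect of applying a higher gain to the second imaging region can be sketched as follows in Python. The gain values, full-scale value, and function name are placeholders; the specification only states that the second gain is higher than the first gain.

```python
import numpy as np

def apply_gain(raw: np.ndarray, gain: float, full_scale: int = 4095) -> np.ndarray:
    """Amplify raw pixel values by `gain` and clip to the sensor's output range."""
    return np.clip(raw * gain, 0, full_scale)

rng = np.random.default_rng(2)
raw_scene = rng.integers(0, 300, size=(8, 8)).astype(float)   # a dark scene

first_gain, second_gain = 1.0, 8.0                # placeholder values, second > first
recording = apply_gain(raw_scene, first_gain)     # first imaging region B1
detection = apply_gain(raw_scene, second_gain)    # second imaging region B2, brighter

print("recording mean:", recording.mean(), "detection mean:", detection.mean())
```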
<Detection of subject elements based on the detection image>
 For example, when the main switch of the camera 1 is turned on, the control unit 34 sets the first imaging region B1 and the second imaging region B2 in the imaging element 32a. As described above, the first imaging region B1 is the region that captures the recording image 51, and the second imaging region B2 is the region that captures the detection image 52 used for detection.
 Next, the control unit 34 sets the first gain for recording in the first imaging region B1 of the imaging element 32a, and sets the second gain, which is higher than the first gain, in the second imaging region B2 of the imaging element 32a. Having set the gains, the control unit 34 causes the second imaging region B2 of the imaging element 32a to perform imaging for the detection image 52, and generates the detection image 52 based on the image data read out from the second imaging region B2 of the imaging element 32a. The control unit 34 then causes the object detection unit 34a to detect the subject elements based on the detection image 52.
 Meanwhile, the control unit 34 causes the first imaging region B1 of the imaging element 32a to capture the recording image 51. Before the recording button is operated, the recording image 51 is not recorded by the recording unit 37, but it is still captured in order to generate the monitor image from the recording image 51. The setting unit 34b divides the recording image 51 based on the subject elements detected by the object detection unit 34a.
<Setting imaging conditions for each area>
 Referring to FIG. 8, the recording image 51 acquired by the imaging unit 32 is divided, for example, into the area of the person 61, the area of the automobile 62, the area of the bag 63, the area of the mountain 64, the area of the cloud 65, the area of the cloud 66, and the remaining area. When the subject detection processing has been performed by the object detection unit 34a and the image has been divided automatically by the setting unit 34b, the control unit 34 causes the display unit 35 to display a setting screen such as that illustrated in FIG. 9. In FIG. 9, a monitor image 60a is displayed on the display unit 35, and an imaging condition setting screen 70 is displayed to the right of the monitor image 60a.
 The setting screen 70 lists, from the top, the frame rate, the shutter speed (TV), and the gain (ISO sensitivity) as examples of imaging condition setting items. The frame rate is the number of frames of the moving image captured by the camera 1 per second, the shutter speed corresponds to the exposure time, and the gain corresponds to the ISO sensitivity. Setting items other than those illustrated in FIG. 9 may be added as appropriate. When not all of the setting items fit on the setting screen 70, the remaining setting items may be displayed by scrolling the setting items up and down.
 In the present embodiment, the control unit 34 can make the area selected by a user operation, among the areas divided by the setting unit 34b, the target of imaging condition setting (change). For example, on the camera 1 capable of touch operation, the user touches the display position of the subject whose imaging condition is to be set (changed) on the display surface of the display unit 35 on which the monitor image 60a is displayed. When the display position of the person 61 is touched, for example, the control unit 34 makes the area corresponding to the person 61 in the monitor image 60a the target area for imaging condition setting (change) and displays the outline of the area corresponding to the person 61 with emphasis.
 In FIG. 9, the area displayed with its outline emphasized indicates the area that is the target of imaging condition setting (change). Emphasized display means, for example, thick display, bright display, display in a different color, display with broken lines, blinking display, or the like. In the example of FIG. 9, it is assumed that the monitor image 60a is displayed with the outline of the area corresponding to the person 61 emphasized; in this case, the emphasized area is the target of imaging condition setting (change). For example, on the camera 1 capable of touch operation, when the user touches the shutter speed (TV) display 71, the control unit 34 displays the current setting value of the shutter speed for the emphasized area (the person 61) on the screen (reference numeral 68).
 In the following description, the camera 1 is described on the premise of touch operation, but the imaging conditions may instead be set (changed) by operating buttons or the like constituting the operation member 36.
 When the up icon 71a or the down icon 71b of the shutter speed (TV) is touched by the user, the setting unit 34b increases or decreases the shutter speed display 68 from the current setting value in accordance with the touch operation, and also instructs the imaging unit 32 (FIG. 1) to change, in accordance with the touch operation, the imaging conditions of the unit regions 131 (FIG. 3) of the imaging element 32a corresponding to the emphasized area (the person 61). The decision icon 72 is an operation icon for confirming the set imaging condition. The setting unit 34b sets (changes) the frame rate and the gain (ISO) in the same manner as the shutter speed (TV).
 Although the setting unit 34b has been described as setting the imaging conditions based on user operations, it is not limited to this. The setting unit 34b may set the imaging conditions based on the determination of the control unit 34 rather than on a user operation. For example, when whiteout or blackout has occurred in an area containing the subject with the maximum or minimum luminance in the image, the setting unit 34b may, based on the determination of the control unit 34, set the imaging conditions so as to eliminate the whiteout or blackout.
 For the areas that are not emphasized (the areas other than the person 61), the imaging conditions already set are maintained.
 Instead of emphasizing the outline of the area that is the target of imaging condition setting (change), the control unit 34 may display the entire target area brightly, display the entire target area with increased contrast, or display the entire target area blinking. The target area may also be enclosed in a frame; the frame may be a double frame or a single frame, and the display mode of the enclosing frame, such as its line type, color, and brightness, may be changed as appropriate. The control unit 34 may also display, near the target area, an indication such as an arrow pointing to the area that is the target of imaging condition setting. The control unit 34 may instead display the areas other than the target area of imaging condition setting (change) darkly, or display them with reduced contrast.
 When the recording button (not shown) constituting the operation member 36, or a display instructing the start of recording (for example, the release icon 74 in FIG. 9), is operated, the control unit 34 causes the recording unit 37 to start recording the recording image 51.
 As the imaging conditions of the recording image 51, different imaging conditions may be applied to each of the divided areas (the person 61, the automobile 62, the bag 63, the mountain 64, the cloud 65, and the cloud 66), or a common imaging condition may be applied to the divided areas. The image processing unit 33 then performs image processing on the image data acquired by the imaging unit 32; this image processing may also be performed under different image processing conditions for each of the divided areas.
 For example, the control unit 34 sets a first condition through a sixth condition as the imaging conditions for the areas divided as described above. Hereafter, the area of the person 61 for which the first condition is set is called the first area 61, the area of the automobile 62 for which the second condition is set is called the second area 62, the area of the bag 63 for which the third condition is set is called the third area 63, the area of the mountain 64 for which the fourth condition is set is called the fourth area 64, the area of the cloud 65 for which the fifth condition is set is called the fifth area 65, and the area of the cloud 66 for which the sixth condition is set is called the sixth area 66.
 The image data of the recording image 51 set and acquired in this way is recorded by the recording unit 37. When the recording button (not shown) constituting the operation member 36 is operated again, or a display instructing the end of recording is operated, the control unit 34 causes the recording unit 37 to end the recording of the recording image 51.
<Timing of capturing the recording image 51 and the detection image 52>
 FIG. 10 is a diagram illustrating the capture timing of the recording image 51 (Dv1, Dv2, Dv3, ...), the capture timing of the detection image 52 (Di, Dii, Diii, Div, ...), and the display timing of the monitor image (LV1, LV2, LV3, ...). When the main switch of the camera 1 is turned on at time t0, the imaging control unit 34c causes the imaging unit 32 to capture the recording image 51 and the detection image 52 under the imaging conditions set by the setting unit 34b. The imaging control unit 34c causes imaging for the recording image 51 to be performed in the first imaging region B1 of the imaging element 32a, whereby the imaging element 32a sequentially captures the recording image Dv1 of the first frame, the recording image Dv2 of the second frame, the recording image Dv3 of the third frame, and so on. In this example, capture of the recording image 51 is repeated with the first imaging region B1 of the imaging element 32a.
 Further, during one frame period of capturing the recording image 51, the imaging control unit 34c causes the second imaging region B2 of the imaging element 32a to perform imaging for the detection image 52 four times, whereby the imaging element 32a sequentially captures the detection image Di of the first frame, the detection image Dii of the second frame, the detection image Diii of the third frame, and the detection image Div of the fourth frame. In this example, four frames of detection images Di, Dii, Diii, and Div are captured as the detection image 52 for one frame period of the recording image 51.
 In the example of FIG. 10, four frames of the detection image 52 are captured with the second imaging region B2 in parallel with the capture of one frame of the recording image 51 with the first imaging region B1. Of the four detection images Di, Dii, Diii, and Div, the detection image Di is used, for example, for subject detection, the detection image Dii for focus detection, the detection image Diii for imaging condition setting, and the detection image Div for image generation.
 The control unit 34 also causes the display unit 35 to display the monitor image LV1 of the first frame based on the recording image Dv1 of the first frame. The imaging control unit 34c then controls the imaging unit 32 so that the subject position detected, and the imaging conditions determined, based on the detection images Di, Dii, Diii, and Div captured in parallel with the capture of the recording image Dv1 of the first frame are reflected in the capture of the recording image Dv2 of the second frame.
 Similarly, the control unit 34 causes the display unit 35 to display the monitor image LV2 of the second frame based on the recording image Dv2 of the second frame. The imaging control unit 34c then controls the imaging unit 32 so that the subject position detected, and the imaging conditions determined, based on the detection images Di, Dii, Diii, and Div captured in parallel with the capture of the recording image Dv2 of the second frame are reflected in the capture of the recording image Dv3 of the third frame. The control unit 34 repeats the same processing thereafter.
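 The per-frame control flow of FIG. 10 can be sketched as the Python loop below. All the functions are hypothetical stand-ins that return strings; the point of the sketch is only the ordering: one recording frame in B1 and four detection frames in B2 per period, with the detection results reflected in the following recording frame.

```python
def capture_recording_frame(n, settings):
    """Stand-in for capturing recording frame Dvn in region B1."""
    return f"Dv{n} (captured with settings: {settings})"

def capture_detection_frames(n):
    """Stand-in for capturing Di, Dii, Diii, Div in region B2."""
    return [f"D{n}.{k}" for k in ("i", "ii", "iii", "iv")]

def detect_subject(di):
    return f"subject position from {di}"

def decide_conditions(dii, diii, div):
    return f"conditions from {dii}, {diii}, {div}"

def capture_loop(n_frames):
    settings = None                                       # nothing decided before frame 1
    for n in range(1, n_frames + 1):
        dv = capture_recording_frame(n, settings)         # B1: one recording frame
        di, dii, diii, div = capture_detection_frames(n)  # B2: four detection frames, in parallel
        print(f"monitor image LV{n} generated from {dv}")
        # The detection results are reflected in the capture of frame n + 1.
        settings = (detect_subject(di), decide_conditions(dii, diii, div))

capture_loop(3)
```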
 When the recording button of the camera 1 is operated at time t1, the control unit 34 causes the recording unit 37 to record the image data of the recording image 51 captured after that point (from the recording image Dv3 onward).
 Note that the imaging control unit 34c may cause the detection image Di, the detection image Dii, the detection image Diii, and the detection image Div to be captured in the second imaging region B2 of the imaging element 32a before starting to capture the recording image Dv1 of the first frame in the first imaging region B1 of the imaging element 32a. For example, after the user performs the power-on operation of the camera 1, the detection image Di, the detection image Dii, the detection image Diii, and the detection image Div are captured in the second imaging region B2 prior to the capture of the recording image Dv1 of the first frame in the first imaging region B1. The imaging control unit 34c then controls the imaging unit 32 so that the subject position detected, and the imaging conditions determined, based on the detection images Di, Dii, Diii, and Div are reflected in the capture of the recording image Dv1 of the first frame.
 Also, after the power-on operation of the camera 1, instead of the imaging control unit 34c automatically starting the capture of the detection images Di, Dii, Diii, and Div, the capture of the detection images Di, Dii, Diii, and Div may be started only after the following operations. For example, when an operation related to imaging is performed by the user, the imaging control unit 34c starts the capture of the detection images Di, Dii, Diii, and Div in the second imaging region B2 of the imaging element 32a. Operations related to imaging include, for example, an operation to change the imaging magnification, an operation to change the aperture, and an operation related to focus adjustment (for example, selection of a focus point). When the change operation is completed and the new setting is confirmed, the imaging control unit 34c causes the detection images Di, Dii, Diii, and Div to be captured in the second imaging region B2 of the imaging element 32a. This allows the detection images Di, Dii, Diii, and Div to be generated before the recording image Dv1 of the first frame is captured.
 Furthermore, the imaging control unit 34c may cause the detection images Di, Dii, Diii, and Div to be captured in the second imaging region B2 of the imaging element 32a when, for example, a menu operation is being performed on the screen of the display unit 35. The reason is that when an operation related to imaging is being performed from the menu screen, there is a high possibility that a new setting will be made. In this case, the imaging control unit 34c causes the capture of the detection images Di, Dii, Diii, and Div in the second imaging region B2 of the imaging element 32a to be repeated during the period in which the user is operating via the menu screen.
 Note that when the detection image Di, the detection image Dii, the detection image Diii, and the detection image Div are captured before starting to capture the recording image Dv1 of the first frame, the capture of the recording image Dv1 may be started after the detection images Di, Dii, Diii, and Div have been captured. For example, the imaging control unit 34c controls the imaging unit 32 so that the recording image Dv1 is captured with the subject position detected, and the imaging conditions determined, based on the detection images Di, Dii, Diii, and Div reflected, and so that the monitor image LV1 can be displayed based on the recording image Dv1.
 Also, when an operation not related to imaging is being performed, such as an operation for reproducing and displaying an image, an operation during reproduction display, or an operation for setting the clock, the imaging control unit 34c need not cause the detection images Di, Dii, Diii, and Div to be captured in the second imaging region B2 of the imaging element 32a. The reason is to avoid capturing the detection images Di, Dii, Diii, and Div needlessly when there is little possibility that a new setting for imaging will be made.
 Furthermore, when the operation member 36 has a dedicated button for instructing the capture of the detection image Di, the detection image Dii, and the detection image Diii, the imaging control unit 34c may cause the detection images Di, Dii, Diii, and Div to be captured in the second imaging region B2 of the imaging element 32a when the dedicated button is operated by the user.
 また、ユーザが専用ボタンを操作している間に操作部材36が操作信号を出力し続ける構成を有する場合には、撮像制御部34cは、専用ボタンの操作が行われている期間中、所定の周期ごとに撮像素子32aの第2撮像領域B2において検出用画像Di、Dii、Diii、Divを撮像させてもよいし、専用ボタンの操作が終了した時点で撮像素子32aの第2撮像領域B2において検出用画像Di、Dii、Diii、Divを撮像させてもよい。これにより、ユーザが所望するタイミングで検出用画像Di、Dii、Diii、Divを撮像させることができる。 In addition, when the operation member 36 continues to output the operation signal while the user operates the dedicated button, the imaging control unit 34c performs a predetermined operation during the period during which the dedicated button is operated. The detection images Di, Dii, Diii, Div may be imaged in the second imaging region B2 of the imaging device 32a every period, or in the second imaging region B2 of the imaging device 32a when the operation of the dedicated button is finished. The detection images Di, Dii, Diii, and Div may be captured. As a result, the detection images Di, Dii, Diii, and Div can be captured at a timing desired by the user.
 なお、以上の説明では、撮像素子32aの第2撮像領域B2において、それぞれ用途が異なる4フレームの検出用画像Di、Dii、Diii、Divを撮像する例を説明したが、必ずしも4フレームの検出画像を撮像しなくてもよい。例えば、検出用画像Diを被写体検出用と焦点調節用とに共用できる場合は、検出用画像Diと、撮像条件設定用に用いる検出用画像Diiとを撮像すればよい。
 検出用画像Di、Dii、Diii、Divをいずれの用途に用いるかは、ユーザが表示部35に表示されるメニュー画面から選択、決定可能に構成してよい。
In the above description, an example in which four frames of detection images Di, Dii, Diii, and Div having different uses are captured in the second imaging region B2 of the imaging element 32a has been described. Does not have to be imaged. For example, if the detection image Di can be shared for subject detection and focus adjustment, the detection image Di and the detection image Dii used for setting the imaging conditions may be captured.
The use of the detection images Di, Dii, Diii, and Div may be configured so that the user can select and determine from the menu screen displayed on the display unit 35.
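As one way to visualize the trigger conditions described above, the following is a minimal Python sketch; the function and set names (should_capture_detection_images and so on) are hypothetical and are not part of the embodiment, which only describes the behavior in prose.

```python
# Hypothetical sketch of when the imaging control unit might start capturing the
# detection images Di..Div in the second imaging region B2.
IMAGING_RELATED = {"zoom_change", "aperture_change", "focus_point_select"}
NON_IMAGING = {"playback", "playback_operation", "clock_setting"}

def should_capture_detection_images(operation: str, menu_open: bool,
                                    dedicated_button: bool) -> bool:
    """Return True when the detection images should be captured in B2."""
    if dedicated_button:              # a dedicated button always triggers capture
        return True
    if menu_open:                     # a new setting is likely while the menu is open
        return True
    if operation in IMAGING_RELATED:  # capture once the new setting is confirmed
        return True
    if operation in NON_IMAGING:      # avoid wasteful capture
        return False
    return False

# Example: the user changes the aperture, so detection images are captured first.
assert should_capture_detection_images("aperture_change", False, False)
```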
<Image blur correction>
In the first embodiment, image blur in the captured recording image 51 is suppressed by applying a trimming process to the recording image 51 read out from the first imaging region B1 of the image sensor 32a, on the basis of the shake of the camera 1 caused by hand movement. Such suppression of image blur is also referred to as image blur correction. Details of the image blur correction are described below.
In general, image blur generated in the camera 1 by hand shake is divided into image blur accompanying the rotational movement of the camera 1 (also referred to as angular blur) and image blur accompanying the translational movement of the camera 1 (also referred to as translational blur). The control unit 34 calculates both the image blur due to the rotational movement of the camera 1 and the image blur due to the translational movement of the camera 1.
FIG. 11 is a diagram illustrating the control unit 34 functioning as a blur correction unit. The control unit 34 includes a blur amount calculation unit 34e and a target movement amount calculation unit 34f.
The blur amount calculation unit 34e calculates the image blur in the Y-axis direction due to the rotational movement using the detection signal of the shake sensor 39 about the axis parallel to the X axis (FIG. 3), that is, in the pitch direction. The blur amount calculation unit 34e also calculates the image blur in the X-axis direction due to the rotational movement using the detection signal of the shake sensor 39 about the axis parallel to the Y axis (FIG. 3), that is, in the yaw direction.
The blur amount calculation unit 34e further calculates the image blur in the X-axis direction due to the translational movement using the detection signal of the shake sensor 39 in the X-axis direction, and calculates the image blur in the Y-axis direction due to the translational movement using the detection signal of the shake sensor 39 in the Y-axis direction.
The target movement amount calculation unit 34f adds, for each axis, the image blur in the X-axis and Y-axis directions due to the rotational movement and the image blur in the X-axis and Y-axis directions due to the translational movement calculated by the blur amount calculation unit 34e, and thereby obtains the image blur in the X-axis direction and in the Y-axis direction. For example, when the image blur due to the rotational movement and the image blur due to the translational movement calculated by the blur amount calculation unit 34e for a given axis have the same direction, the addition makes the image blur larger, whereas when the two calculated image blurs have opposite directions, the addition makes the image blur smaller. In this way, the addition is performed with positive or negative signs according to the direction of the image blur on each axis.
Next, the target movement amount calculation unit 34f calculates the image blur amount on the image plane (the imaging surface of the image sensor 32a) with respect to the subject image, on the basis of the summed image blur in the X-axis and Y-axis directions, the imaging magnification (calculated from the position of the zoom lens of the imaging optical system 31), and the distance from the camera 1 to the subject (calculated from the position of the focus lens of the imaging optical system 31).
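The following is a minimal Python sketch of this signed per-axis addition and the conversion to an image-plane blur amount. The embodiment only states that the imaging magnification and the subject distance are used for the conversion; the linear scaling form below, and the function names, are assumptions made for illustration.

```python
def sum_blur_per_axis(rot_blur: float, trans_blur: float) -> float:
    """Signed addition of rotational and translational blur for one axis.
    Components in the same direction reinforce; opposite directions cancel."""
    return rot_blur + trans_blur

def image_plane_blur(total_blur_x: float, total_blur_y: float,
                     magnification: float, subject_distance: float):
    """Hypothetical scaling of the summed blur to the image plane (sensor surface).
    The exact conversion used by the embodiment is not specified."""
    scale = magnification / max(subject_distance, 1e-6)
    return total_blur_x * scale, total_blur_y * scale

# Example: rotation and translation blur in opposite directions partially cancel.
tx = sum_blur_per_axis(rot_blur=+0.8, trans_blur=-0.3)   # -> 0.5
ty = sum_blur_per_axis(rot_blur=-0.2, trans_blur=-0.1)   # -> -0.3
dx, dy = image_plane_blur(tx, ty, magnification=0.05, subject_distance=2.0)
```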
FIGS. 12(a) and 12(b) are schematic diagrams for explaining the image blur correction. In FIG. 12(a), the control unit 34 sets, within the recording image 51, a trimming range W1 that is smaller than the recording image 51. The size of the trimming range W1 is, for example, 60% of the size of the recording image 51. Note that the size of the trimming range W1 may instead be determined from the average magnitude of the detection signal of the shake sensor 39. For example, the control unit 34 makes the trimming range W1 smaller as the average value of the detection signal of the shake sensor 39 over the most recent predetermined time becomes larger, and makes the trimming range W1 larger as that average value becomes smaller.
The control unit 34 performs image blur correction on the recording image 51 by moving the trimming range W1 set as described above within the recording image 51 in the direction opposite to the shake direction of the camera 1. For this purpose, the target movement amount calculation unit 34f calculates, as the target movement amount, the movement direction and movement amount of the trimming range W1 required to cancel the above-described image blur amount on the image plane. The target movement amount calculation unit 34f outputs the calculated target movement amount to the correction unit 33b.
FIG. 12(b) illustrates a state in which the trimming range W1 has been moved within the recording image 51. That is, it shows a state in which the camera 1 has shaken upward from the state of FIG. 12(a), the subject image in the recording image 51 has consequently blurred downward, and the trimming range W1 has therefore been moved downward by the correction unit 33b. With such image blur correction, even if the position of the subject in the recording image 51 moves due to hand shake, the subject stays at substantially the same position within the trimming range W1. The hatched areas in FIGS. 12(a) and 12(b) correspond to the movement margin of the trimming range W1.
As with the image 51-1 and the image 51-2 illustrated in FIGS. 12(a) and 12(b), the correction unit 33b extracts, from the image data of the recording image 51, the image data of the image corresponding to the trimming range W1, and uses the extracted image data as the image data of the recording image 51 after image blur correction. The control unit 34 performs such image blur correction for each frame constituting the recording image 51.
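The following is a minimal NumPy sketch of this trimming-based correction for one frame. The function name, the pixel-based shift argument, and the handling of the 60% crop ratio are illustrative assumptions, not the embodiment's implementation.

```python
import numpy as np

def stabilize_frame(frame: np.ndarray, subject_shift_xy, crop_ratio: float = 0.6):
    """Shift a crop window (trimming range W1) so it follows the apparent motion
    of the subject image (i.e. opposite to the camera shake) and extract it.
    frame: H x W x C recording image; subject_shift_xy: shift of the subject
    image in pixels (x right, y down)."""
    h, w = frame.shape[:2]
    ch, cw = int(h * crop_ratio), int(w * crop_ratio)
    top = (h - ch) // 2 + int(round(subject_shift_xy[1]))
    left = (w - cw) // 2 + int(round(subject_shift_xy[0]))
    # The window cannot leave the frame (see the movement limit described later).
    top = max(0, min(top, h - ch))
    left = max(0, min(left, w - cw))
    return frame[top:top + ch, left:left + cw]

# Camera shaken upward -> subject image shifted downward -> window moves downward.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
corrected = stabilize_frame(frame, subject_shift_xy=(0.0, +40.0))
```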
In the first embodiment, the control unit 34 performs image blur correction similar to that for the recording image 51 not only on the recording image 51 but also on the detection image 52. The control unit 34 further moves, within the detection image 52, an attention range (described later) that is set for the detection image 52. This is described in detail below with reference to FIG. 13.
FIGS. 13(a) to 13(d) are schematic diagrams for explaining the image blur correction. FIG. 13(a) is a diagram for explaining the image blur correction for the recording image 51, and FIG. 13(b) is a diagram for explaining the image blur correction for the detection image 52 corresponding to the recording image 51 of FIG. 13(a). In FIG. 13(b), the control unit 34 sets, within the detection image 52, a trimming range W2 that is smaller than the detection image 52, in the same manner as for the corresponding recording image 51. As in the case of the recording image 51, the size of the trimming range W2 is set to 60% of the size of the detection image 52. Also as in the case of the recording image 51, the size of the trimming range W2 may instead be determined from the average magnitude of the detection signal of the shake sensor 39.
The control unit 34 performs image blur correction on the detection image 52 by moving the trimming range W2 set as described above within the detection image 52 in the direction opposite to the shake direction of the camera 1. As with the image 52-2 illustrated in FIG. 13(b), the correction unit 33b extracts, from the image data of the detection image 52, the image data of the image corresponding to the trimming range W2, and uses the extracted image data as the image data of the detection image 52 after image blur correction.
The control unit 34 also moves the attention range 61-2 set for the detection image 52 within the detection image 52, together with the trimming range W2, in the direction opposite to the shake direction of the camera 1. At this time, the distance between the trimming range W2 before the movement and the trimming range W2 after the movement is the same as the distance between the attention range 61-2 before the movement and the attention range 61-2 after the movement. Here, the distance between the trimming range W2 before the movement and the trimming range W2 after the movement is, for example, the distance between the centroid of the trimming range W2 before the movement and the centroid of the trimming range W2 after the movement; this distance corresponds to the movement amount of the trimming range W2. The same applies to the distance between the attention range 61-2 before the movement and the attention range 61-2 after the movement. The attention range 61-2 corresponds to the area in the detection image 52 in which the object detection unit 34a detects subject elements, the area in the detection image 52 in which the lens movement control unit 34d performs the focus detection processing, the area in the detection image 52 in which the setting unit 34b performs the exposure calculation processing, and the area in the detection image 52 that the image processing unit 33 refers to during the image generation processing. The shape of the attention range 61-2 may be a square or a rectangle, or may be a circle or an ellipse. The size of the attention range 61-2 may be changed as appropriate according to the size of the area in which subject elements are detected, the area in which the focus detection processing is performed, the area in which the exposure calculation processing is performed, and the area referred to during the image generation processing. Note that the attention range 61-2 after the movement may partially overlap the attention range 61-2 before the movement (it may include part of the attention range 61-2 before the movement) or may not overlap it at all; the attention range 61-2 after the movement may also entirely contain the attention range 61-2 before the movement.
Within the second imaging conditions used to obtain the detection image 52, the control unit 34 can set imaging conditions for the attention range 61-2 that differ from those for the other areas of the second imaging region B2. When the attention range 61-2 is moved together with the trimming range W2 within the detection image 52, the control unit 34 changes the setting of the second imaging conditions in accordance with the movement of the attention range 61-2. That is, the second imaging conditions for the second imaging region B2 are set again in accordance with the position of the attention range 61-2 after the movement.
FIG. 13(c) is a diagram for explaining the image blur correction for the recording image 51, and illustrates a state in which the trimming range W1 has been moved to its lower limit within the recording image 51. FIG. 13(d) is a diagram for explaining the image blur correction for the detection image 52, and illustrates a state in which the trimming range W2 has been moved to its lower limit within the detection image 52. Because the trimming range W1 of FIG. 13(c) moves within the recording image 51, it cannot move below the position shown in FIG. 13(c). Similarly, because the trimming range W2 of FIG. 13(d) moves within the detection image 52, it cannot move below the position shown in FIG. 13(d).
However, as shown in FIG. 13(d), the control unit 34 continues to move the attention range 61-3 set for the detection image 52 within the trimming range W2 in the direction opposite to the shake direction of the camera 1. The movement direction and movement amount of the attention range 61-3 are the target movement amount output from the target movement amount calculation unit 34f to the correction unit 33b.
After the trimming range W2 has been moved to its lower limit within the detection image 52, if the subject of interest (for example, the head of a person) moves further downward, the control unit 34 moves the attention range 61-3 within the trimming range W2 so that the head of the person fits within the attention range 61-3 after the movement. At this time, the distance between the trimming range W2 before the movement and the trimming range W2 after the movement differs from the distance between the attention range 61-3 before the movement and the attention range 61-3 after the movement. Here, the distance between the trimming range W2 before the movement and the trimming range W2 after the movement is, for example, the distance between the centroid of the trimming range W2 before the movement and the centroid of the trimming range W2 after the movement, and this distance corresponds to the movement amount of the trimming range W2; the same applies to the distance between the attention range 61-3 before the movement and the attention range 61-3 after the movement. As a result, even if the camera 1 shakes, the same subject (in this example, the head of the person) remains present in the area of the detection image 52 in which the object detection unit 34a detects subject elements, the area in which the lens movement control unit 34d performs the focus detection processing, the area in which the setting unit 34b performs the exposure calculation processing, and the area that the image processing unit 33 refers to during the image generation processing. Consequently, compared with the case where the subject of interest changes due to the shake of the camera 1, the influence on subject element detection, focus detection, exposure calculation, and image generation can be suppressed.
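A minimal one-axis sketch of this behavior is given below; the function name, coordinate convention, and limit values are assumptions made only for illustration.

```python
def move_ranges(trim_top: int, attn_top: int, target_shift: int,
                trim_top_max: int, attn_limits: tuple[int, int]):
    """The trimming range W2 and the attention range both follow the target
    shift, but W2 is clamped so it stays inside the detection image; once W2
    hits its limit, only the attention range keeps following the subject
    within W2 (so the two movement distances then differ)."""
    new_trim_top = min(max(trim_top + target_shift, 0), trim_top_max)
    new_attn_top = min(max(attn_top + target_shift, attn_limits[0]), attn_limits[1])
    return new_trim_top, new_attn_top

# W2 is already at its lower limit, so only the attention range moves further down.
trim_top, attn_top = move_ranges(trim_top=400, attn_top=520, target_shift=60,
                                 trim_top_max=400, attn_limits=(400, 800))
```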
<Processing using the detection image 52>
In the first embodiment, the detection image 52 captured as described above and subjected to the image blur correction is used for image processing, focus detection processing, subject detection processing, and exposure condition setting processing. An example of each of these processes is described here.
<Example of image processing>
The image processing unit 33 (generation unit 33c) performs image processing using, for example, a kernel of a predetermined size centered on a target pixel P (the pixel to be processed) in the image data of the detection image 52. FIG. 14 is a diagram illustrating such a kernel, and corresponds to the attention area 90 in the monitor image 60a of FIG. 9. In FIG. 14, the pixels surrounding the target pixel P (eight pixels in this example) contained in the attention area 90 (for example, 3 × 3 pixels) centered on the target pixel P are denoted as reference pixels Pr1 to Pr8. The position of the target pixel P is the target position, and the positions of the reference pixels Pr1 to Pr8 surrounding the target pixel P are the reference positions.
(1) Pixel defect correction processing
Pixel defect correction processing is one of the image processing operations performed on a captured image. In general, the image sensor 32a, which is a solid-state image sensor, may develop pixel defects during or after manufacture and output image data of abnormal level. The generation unit 33c of the image processing unit 33 therefore corrects the image data output from a pixel in which a pixel defect has occurred, so that the image data at the defective pixel position becomes inconspicuous.
For example, the generation unit 33c of the image processing unit 33 takes, in an image of one frame, the pixel at a pixel defect position recorded in advance in a nonvolatile memory (not shown) as the target pixel P (the pixel to be processed), and takes the pixels surrounding the target pixel P contained in the attention area 90 centered on the target pixel P as the reference pixels Pr1 to Pr8.
The generation unit 33c of the image processing unit 33 calculates the maximum value and the minimum value of the image data at the reference pixels Pr1 to Pr8 and, when the image data output from the target pixel P exceeds the maximum value or falls below the minimum value, performs a Max, Min filter process that replaces the image data output from the target pixel P with the maximum value or the minimum value. This processing is performed for all pixel defects whose position information is recorded in the nonvolatile memory (not shown).
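A minimal NumPy sketch of this Max, Min filter for one defective pixel is given below; the function name, the assumption of a single-channel image, and the interior-pixel defect positions are illustrative.

```python
import numpy as np

def correct_defective_pixel(img: np.ndarray, y: int, x: int) -> None:
    """Clamp the value at a known defective pixel (y, x) into the range spanned
    by its eight neighbours Pr1..Pr8 (Max, Min filter described above)."""
    neighbours = img[y - 1:y + 2, x - 1:x + 2].astype(np.int64)
    ref = np.delete(neighbours.reshape(-1), 4)   # drop the centre (target pixel P)
    lo, hi = ref.min(), ref.max()
    v = int(img[y, x])
    if v > hi:
        img[y, x] = hi
    elif v < lo:
        img[y, x] = lo

# Example: apply the correction to every recorded defect position (assumed interior).
img = np.random.randint(0, 4096, size=(64, 64), dtype=np.uint16)
defect_positions = [(10, 12), (31, 5)]           # assumed, read from nonvolatile memory
for (y, x) in defect_positions:
    correct_defective_pixel(img, y, x)
```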
In the first embodiment, the generation unit 33c of the image processing unit 33 performs the above-described pixel defect correction processing on the image data of the detection image 52, thereby preventing pixel defects from affecting the subject detection processing, focus detection processing, and exposure calculation processing based on the detection image 52.
Note that the pixel defect correction processing is not limited to the detection image 52 and may also be performed on the recording image 51. By having the generation unit 33c of the image processing unit 33 perform the above-described pixel defect correction processing on the image data of the recording image 51, the influence of pixel defects on the recording image 51 can be prevented.
(2) Color interpolation processing
Color interpolation processing is one of the image processing operations performed on a captured image. As illustrated in FIG. 3, in the imaging chip 111 of the image sensor 100, the green pixels Gb and Gr, the blue pixels B, and the red pixels R are arranged in a Bayer array. At each pixel position, image data of the color components other than that of the color filter F arranged at that position is missing, and the generation unit 33c of the image processing unit 33 therefore performs known color interpolation processing to generate the image data of the missing color components by referring to the image data of the surrounding pixel positions.
In the first embodiment, the generation unit 33c of the image processing unit 33 performs the color interpolation processing on the image data of the detection image 52, so that the subject detection processing, focus detection processing, and exposure calculation processing can be performed appropriately on the basis of the color-interpolated detection image 52.
Note that the color interpolation processing is not limited to the detection image 52 and may also be performed on the recording image 51. By having the generation unit 33c of the image processing unit 33 perform the above-described color interpolation processing on the image data of the recording image 51, a color-interpolated recording image 51 can be obtained.
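As an illustration of what such known color interpolation does, the following is a minimal sketch that fills in the green component at non-green positions of an RGGB Bayer mosaic by averaging the four neighbouring green samples; the actual interpolation used by the embodiment is not specified and may be more elaborate.

```python
import numpy as np

def interpolate_green(raw: np.ndarray) -> np.ndarray:
    """Fill in the green component at red/blue positions of an RGGB Bayer mosaic
    by bilinear averaging (interior pixels only); returns the green plane."""
    h, w = raw.shape
    raw = raw.astype(np.float64)
    green = raw.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (y + x) % 2 == 0:          # R or B position in an RGGB layout
                green[y, x] = (raw[y - 1, x] + raw[y + 1, x] +
                               raw[y, x - 1] + raw[y, x + 1]) / 4.0
    return green
```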
(3) Contour enhancement processing
Contour enhancement processing is one of the image processing operations performed on a captured image. The generation unit 33c of the image processing unit 33 performs, for example, a known linear filter operation using a kernel of a predetermined size centered on the target pixel P (the pixel to be processed) in an image of one frame. When the kernel size of a sharpening filter, which is one example of a linear filter, is N × N pixels, the position of the target pixel P is the target position, and the positions of the (N² − 1) reference pixels Pr surrounding the target pixel P are the reference positions. Note that the kernel size may be N × M pixels.
The generation unit 33c of the image processing unit 33 performs the filter processing, in which the image data at the target pixel P is replaced by the result of the linear filter operation, on each horizontal line, for example from the upper horizontal line toward the lower horizontal line of the frame image, shifting the target pixel from left to right on each horizontal line.
In the first embodiment, the generation unit 33c of the image processing unit 33 performs the above-described contour enhancement processing on the image data of the detection image 52, so that the subject detection processing, focus detection processing, and exposure calculation processing can be performed appropriately on the basis of the contour-enhanced detection image 52.
Note that the contour enhancement processing is not limited to the detection image 52 and may also be performed on the recording image 51. By having the generation unit 33c of the image processing unit 33 perform the above-described contour enhancement processing on the image data of the recording image 51, a recording image 51 with enhanced contours can be obtained. The strength of the contour enhancement may differ between the case where it is applied to the image data of the detection image 52 and the case where it is applied to the image data of the recording image 51.
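The following is a minimal sketch of such a linear sharpening filter with a 3 × 3 kernel (N = 3, so there are N² − 1 = 8 reference pixels), sweeping the target pixel line by line from the top as described above; the particular kernel coefficients and the 8-bit clipping are assumptions.

```python
import numpy as np

# An assumed 3 x 3 sharpening kernel.
SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=np.float64)

def sharpen(img: np.ndarray) -> np.ndarray:
    """Replace each interior target pixel P by the linear-filter result, scanning
    horizontal lines from top to bottom and pixels from left to right."""
    h, w = img.shape
    out = img.astype(np.float64)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = img[y - 1:y + 2, x - 1:x + 2].astype(np.float64)
            out[y, x] = np.clip((window * SHARPEN).sum(), 0, 255)  # assumes 8-bit data
    return out.astype(img.dtype)
```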
(4) Noise reduction processing
Noise reduction processing is one of the image processing operations performed on a captured image. The generation unit 33c of the image processing unit 33 performs, for example, a known linear filter operation using a kernel of a predetermined size centered on the target pixel P (the pixel to be processed) in an image of one frame. When the kernel size of a smoothing filter, which is one example of a linear filter, is N × N pixels, the position of the target pixel P is the target position, and the positions of the (N² − 1) reference pixels Pr surrounding the target pixel P are the reference positions. Note that the kernel size may be N × M pixels.
The generation unit 33c of the image processing unit 33 performs the filter processing, in which the image data at the target pixel P is replaced by the result of the linear filter operation, on each horizontal line, for example from the upper horizontal line toward the lower horizontal line of the frame image, shifting the target pixel from left to right on each horizontal line.
In the first embodiment, the generation unit 33c of the image processing unit 33 performs the above-described noise reduction processing on the image data of the detection image 52, thereby preventing noise from affecting the subject detection processing, focus detection processing, and exposure calculation processing based on the detection image 52.
Note that the noise reduction processing is not limited to the detection image 52 and may also be performed on the recording image 51. By having the generation unit 33c of the image processing unit 33 perform the above-described noise reduction processing on the image data of the recording image 51, a recording image 51 with reduced noise can be obtained. The degree of noise reduction may differ between the case where it is applied to the image data of the detection image 52 and the case where it is applied to the image data of the recording image 51.
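The smoothing variant differs from the sharpening sketch above only in the kernel; a minimal, assumed 3 × 3 averaging kernel is shown below, applied with the same line-by-line sweep.

```python
import numpy as np

# An assumed 3 x 3 smoothing (averaging) kernel for noise reduction.
SMOOTH = np.full((3, 3), 1.0 / 9.0)

def smooth(img: np.ndarray) -> np.ndarray:
    """Replace each interior target pixel P by the average of its 3 x 3 window."""
    h, w = img.shape
    out = img.astype(np.float64)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = (img[y - 1:y + 2, x - 1:x + 2] * SMOOTH).sum()
    return out.astype(img.dtype)
```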
<Example of focus detection processing>
An example of the focus detection processing performed by the control unit 34 (lens movement control unit 34d) is described. The lens movement control unit 34d of the control unit 34 performs the focus detection processing using signal data (image data) corresponding to a predetermined position (focus point) on the imaging screen. The AF operation of the present embodiment focuses, for example, on the subject corresponding to the focus point selected by the user from among a plurality of focus points on the imaging screen. The lens movement control unit 34d (generation unit) of the control unit 34 calculates the defocus amount of the imaging optical system 31 by detecting the image shift amounts (phase differences) of a plurality of subject images formed by light fluxes that have passed through different pupil regions of the imaging optical system 31. The lens movement control unit 34d of the control unit 34 adjusts the focus of the imaging optical system 31 by moving the focus lens of the imaging optical system 31 to the position at which the defocus amount becomes zero (or not more than an allowable value), that is, to the in-focus position.
FIG. 15 is a diagram illustrating the positions of the focus detection pixels on the imaging surface of the image sensor 32a. In the present embodiment, focus detection pixels are provided discretely, arranged along the X-axis direction (horizontal direction) of the imaging chip 111. In the example of FIG. 15, fifteen focus detection pixel lines 160 are provided at predetermined intervals. The focus detection pixels constituting the focus detection pixel lines 160 output image data for focus detection and for the recording image 51 and the detection image 52. In the imaging chip 111, ordinary imaging pixels are provided at pixel positions other than the focus detection pixel lines 160. The imaging pixels output image data for the recording image 51 and the detection image 52.
FIG. 16 is an enlarged view of a part of the focus detection pixel line 160 corresponding to the focus point 80A shown in FIG. 15. FIG. 16 illustrates red pixels R, green pixels G (Gb, Gr), blue pixels B, and focus detection pixels. The red pixels R, green pixels G (Gb, Gr), and blue pixels B are arranged in accordance with the Bayer array rules described above.
For the red pixels R, green pixels G (Gb, Gr), and blue pixels B, the square area illustrated inside (behind) the microlens L and the color filter (not shown) represents the photoelectric conversion unit of the imaging pixel. Each imaging pixel receives the light flux passing through the exit pupil of the imaging optical system 31 (FIG. 1). That is, the red pixels R, green pixels G (Gb, Gr), and blue pixels B each have a square mask opening, and the light passing through these mask openings reaches the photoelectric conversion units of the imaging pixels. Note that the shape of the photoelectric conversion unit (mask opening) of the red pixels R, green pixels G (Gb, Gr), and blue pixels B is not limited to a quadrangle and may be, for example, circular.
Each focus detection pixel has two photoelectric conversion units S1 and S2 inside (behind) the microlens L and the color filter (not shown): for example, a first photoelectric conversion unit S1 arranged on the left side of the pixel position and a second photoelectric conversion unit S2 arranged on the right side of the pixel position. With this arrangement, a first light flux passing through a first region of the exit pupil of the imaging optical system 31 (FIG. 1) enters the first photoelectric conversion unit S1, and a second light flux passing through a second region of the exit pupil of the imaging optical system 31 (FIG. 1) enters the second photoelectric conversion unit S2.
In the present embodiment, for example, the photoelectric conversion unit and the readout circuit 105 (FIG. 2) that reads out the photoelectric conversion signal of the photoelectric conversion unit are collectively referred to as a "pixel". The readout circuit 105 is described as an example including a transfer transistor (TX), an amplification transistor (AMP), a reset transistor (RST), and a selection transistor (SEL), but the scope of the readout circuit 105 does not necessarily have to be as in this example.
Note that the positions of the focus detection pixel lines 160 in the imaging chip 111 are not limited to the positions illustrated in FIG. 15. The number of focus detection pixel lines 160 is also not limited to the example of FIG. 15; for example, focus detection pixels may be arranged at all pixel positions.
The focus detection pixel lines 160 in the imaging chip 111 may also be lines in which focus detection pixels are arranged along the Y-axis direction (vertical direction) of the imaging chip 111. Image sensors in which imaging pixels and focus detection pixels are arranged two-dimensionally as in FIG. 16 are known, and detailed illustration and description of these pixels are omitted.
Furthermore, although the example of FIG. 16 describes a configuration in which each focus detection pixel receives both the first and second light fluxes for focus detection, each focus detection pixel may instead be configured to receive only one of the first and second light fluxes for focus detection.
Based on the focus detection photoelectric conversion signals output from the photoelectric conversion units S1 and S2 of the focus detection pixels, the lens movement control unit 34d of the control unit 34 detects the image shift amount (phase difference) between the pair of images formed by the pair of light fluxes passing through different regions of the imaging optical system 31 (FIG. 1). The defocus amount is then calculated on the basis of the image shift amount (phase difference). Such defocus amount calculation by the pupil-division phase difference method is well known in the field of cameras, and a detailed description is therefore omitted.
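Since the embodiment relies on the known pupil-division phase difference calculation, the following is only a minimal sketch of how an image shift might be estimated from the two signal sequences (from S1 and S2) along a focus detection pixel line by minimizing a sum of absolute differences; the shift search and the constant used to convert shift to defocus are assumptions, not the embodiment's method.

```python
import numpy as np

def image_shift(a, b, max_shift: int = 16) -> int:
    """Estimate the shift (in pixels) between the S1 sequence `a` and the S2
    sequence `b`; assumes len(a) == len(b) > max_shift."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    n = len(a)
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, s), min(n, n + s)
        cost = np.abs(a[lo:hi] - b[lo - s:hi - s]).mean()
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift

def defocus_amount(shift_pixels: int, k: float = 0.02) -> float:
    """Assumed linear conversion from image shift (phase difference) to a
    defocus amount; the real conversion depends on the optics."""
    return k * shift_pixels
```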
It is assumed that the focus point 80A (FIG. 15) has been selected by the user in the monitor image 60a illustrated in FIG. 9. FIG. 17 is an enlarged view of the focus point 80A. The position enclosed by the frame 170 in FIG. 17 corresponds to the focus detection pixel line 160 (FIG. 15).
In the present embodiment, the lens movement control unit 34d of the control unit 34 performs the focus detection processing using the signal data from the focus detection pixels indicated by the frame 170 in the detection image 52. The lens movement control unit 34d sets the attention ranges 61-2 and 61-3 in the detection image 52 of FIGS. 13(b) and 13(d) so as to correspond to the range indicated by the frame 170 in FIG. 17.
By using the signal data of the focus detection pixels of the detection image 52 that has undergone the image blur correction processing, the lens movement control unit 34d can perform the focus detection processing appropriately. The focus detection processing can also be performed appropriately by using the signal data of the focus detection pixels of a detection image 52 for which, for example, the gain has been set higher or image processing suitable for detecting the image shift amount (phase difference) has been applied.
The above description illustrates the focus detection processing using the pupil-division phase difference method. In the case of a contrast detection method, in which the focus lens of the imaging optical system 31 is moved to the in-focus position on the basis of the magnitude of the contrast of the subject image, the processing can be performed as follows.
When the contrast detection method is used, the control unit 34 moves the focus lens of the imaging optical system 31 and, at each position of the focus lens, performs a known focus evaluation value calculation based on the signal data output from the imaging pixels contained in the second imaging region B2 of the image sensor 32a corresponding to the focus point. The position of the focus lens that maximizes the focus evaluation value is then obtained as the in-focus position.
That is, the lens movement control unit 34d of the control unit 34 performs the focus evaluation value calculation using the signal data from the imaging pixels corresponding to the focus point 80A in the detection image 52. The lens movement control unit 34d sets the attention ranges 61-2 and 61-3 in the detection image 52 of FIGS. 13(b) and 13(d) so as to correspond to the range indicated by the frame of the focus point 80A in FIG. 15.
By using the signal data of the detection image 52 that has undergone the image blur correction processing, the lens movement control unit 34d can perform the focus detection processing appropriately. The focus detection processing can also be performed appropriately by using the signal data of a detection image 52 for which, for example, the gain has been set higher or image processing suitable for detecting the image shift amount (phase difference) has been applied.
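The following is a minimal sketch of such a contrast-detection search, using the variance of the focus-point area as an assumed focus evaluation value; the embodiment only refers to a "known focus evaluation value calculation" and does not specify it.

```python
import numpy as np

def focus_evaluation(roi: np.ndarray) -> float:
    """Assumed focus evaluation value: variance of the pixel values in the
    focus-point area; higher contrast gives a higher value."""
    return float(np.var(roi.astype(np.float64)))

def contrast_af(capture_roi_at, lens_positions):
    """Sweep the focus lens over candidate positions, evaluate the contrast of
    the focus-point area at each position, and return the position with the
    maximum focus evaluation value as the in-focus position.
    capture_roi_at: callable(position) -> np.ndarray of the focus-point area."""
    best_pos, best_val = None, -1.0
    for pos in lens_positions:
        val = focus_evaluation(capture_roi_at(pos))
        if val > best_val:
            best_val, best_pos = val, pos
    return best_pos
```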
<Subject detection processing>
FIG. 18(a) is a diagram illustrating a template image representing the object to be detected, and FIG. 18(b) is a diagram illustrating the monitor image 60a and a search range 190. The object detection unit 34a of the control unit 34 detects an object (for example, the bag 63, which is one of the subject elements in FIG. 9) from the monitor image 60a. The object detection unit 34a of the control unit 34 may set the range in which the object is detected to the entire range of the monitor image 60a, but a part of the monitor image 60a may instead be used as the search range 190 in order to lighten the detection processing.
A case in which the bag 63 carried by the person 61 is detected in the monitor image 60a illustrated in FIG. 9 is described. The object detection unit 34a of the control unit 34 sets the search range 190 in the vicinity of the region containing the person 61. Note that the region containing the person 61 may itself be set as the search range.
In the present embodiment, the object detection unit 34a of the control unit 34 performs the subject detection processing using the image data constituting the search range 190 in the detection image 52. The object detection unit 34a sets the attention ranges 61-2 and 61-3 in the detection image 52 of FIGS. 13(b) and 13(d) so as to correspond to the search range 190 of FIG. 18.
By using the image data of the detection image 52 that has undergone the image blur correction processing, the object detection unit 34a can perform the subject detection processing appropriately. The subject detection processing can also be performed appropriately by using the image data of a detection image 52 for which, for example, the gain has been set higher or image processing suitable for detecting subject elements has been applied.
Note that the subject detection processing is not limited to the detection image 52 and may also be performed on the recording image 51.
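A minimal sketch of template matching within the search range is given below; the sum-of-absolute-differences score and the exhaustive sliding-window search are assumptions, as the embodiment does not specify the matching measure.

```python
import numpy as np

def find_template(search_area: np.ndarray, template: np.ndarray):
    """Slide the template image over the search range 190 and return the
    top-left offset (y, x) with the smallest sum of absolute differences.
    Assumes the template is smaller than the search area."""
    sh, sw = search_area.shape
    th, tw = template.shape
    t = template.astype(np.float64)
    best, best_pos = float("inf"), (0, 0)
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            patch = search_area[y:y + th, x:x + tw].astype(np.float64)
            score = np.abs(patch - t).sum()
            if score < best:
                best, best_pos = score, (y, x)
    return best_pos
```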
<Processing for setting imaging conditions>
For example, when the setting unit 34b of the control unit 34 divides the area of the imaging screen, sets different imaging conditions for the divided areas, and then performs photometry again to determine the exposure conditions, it performs the exposure calculation processing using the image data constituting the photometric range of the imaging screen. The setting unit 34b sets the imaging conditions on the basis of the result of the exposure calculation as follows. For example, when blown-out highlights or crushed shadows occur in an area containing the subject with the maximum or minimum luminance in the image, the setting unit 34b sets the imaging conditions so as to eliminate the blown-out highlights or the crushed shadows.
In the present embodiment, the setting unit 34b of the control unit 34 performs the exposure calculation processing using the image data constituting the photometric range in the detection image 52. The setting unit 34b sets the attention ranges 61-2 and 61-3 in the detection image 52 of FIGS. 13(b) and 13(d) so as to correspond to the photometric range.
By using the image data of the detection image 52 that has undergone the image blur correction processing, the setting unit 34b can perform the calculation processing appropriately. The calculation processing can also be performed appropriately by using the image data of a detection image 52 for which, for example, the gain has been set lower.
The same applies not only to the photometric range used for the exposure calculation processing described above, but also to the photometric (colorimetric) range used when determining the white balance adjustment value, the photometric range used when determining whether auxiliary photographing light needs to be emitted by a light source that emits such light, and the photometric range used when determining the amount of auxiliary photographing light to be emitted by that light source.
The processing for setting the imaging conditions is also not limited to the detection image 52 and may be performed on the recording image 51.
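A minimal sketch of the kind of adjustment described above is shown below: a histogram of the photometric range is checked for clipping at either end and the exposure for that area is nudged accordingly. The thresholds, the EV step, and the 8-bit assumption are illustrative; the embodiment's actual exposure calculation is not specified.

```python
import numpy as np

def adjust_exposure_ev(photometric_area: np.ndarray, current_ev: float,
                       clip_fraction: float = 0.02, step_ev: float = 1.0 / 3.0) -> float:
    """If a noticeable fraction of pixels in the photometric range is blown out,
    lower the exposure; if a noticeable fraction is crushed to black, raise it."""
    pixels = photometric_area.ravel()
    blown = np.mean(pixels >= 250)    # assumed 8-bit data and clipping thresholds
    crushed = np.mean(pixels <= 5)
    if blown > clip_fraction:
        return current_ev - step_ev   # reduce exposure to recover highlights
    if crushed > clip_fraction:
        return current_ev + step_ev   # raise exposure to recover shadows
    return current_ev
```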
<Description of flowchart>
FIG. 19 is a flowchart explaining the flow of processing of the camera 1 that captures images with imaging conditions set for each region. For example, when the main switch of the camera 1 is turned on, the control unit 34 starts a program that executes the processing shown in FIG. 19. In step S10, the control unit 34 sets the first imaging region B1 and the second imaging region B2 on the image sensor 32a, and proceeds to step S20. As described above, the first imaging region B1 is the region in which the recording image 51 is captured, and the second imaging region B2 is the region in which the detection image 52 is captured.
In step S20, the control unit 34 causes the image sensor 32a to start capturing a moving image, and proceeds to step S30. The image sensor 32a repeatedly captures the recording image 51 in the first imaging region B1 and repeatedly captures the detection image 52 in the second imaging region B2. In step S30, the control unit 34 determines whether the camera 1 has shaken. When shake of the camera 1 is detected, the control unit 34 makes an affirmative determination in step S30 and proceeds to step S40; when shake of the camera 1 is not detected, it makes a negative determination in step S30 and proceeds to step S50.
In step S40, the control unit 34 performs the image blur correction processing on the moving image. FIG. 20 is a diagram explaining the flow of the image blur correction processing. In step S42 of FIG. 20, the control unit 34 calculates the image blur amount. Specifically, the target movement amount calculation unit 34f calculates the image blur amount on the imaging surface of the image sensor 32a on the basis of the image blur calculated by the blur amount calculation unit 34e, the imaging magnification (calculated from the position of the zoom lens of the imaging optical system 31), and the distance from the camera 1 to the subject (calculated from the position of the focus lens of the imaging optical system 31), and then the processing proceeds to step S44.
In step S44, the control unit 34 performs image blur correction based on the image blur amount. Specifically, the target movement amount of the trimming range W1 in the recording image 51 is calculated so as to cancel the image blur amount calculated by the target movement amount calculation unit 34f. Then, as shown in FIG. 13(a), the correction unit 33b moves the trimming range W1 within the recording image 51. With such image blur correction, even if the position of the subject in the recording image 51 moves due to hand shake, the subject stays at substantially the same position within the trimming range W1.
The target movement amount calculation unit 34f also applies this target movement amount to the trimming range W2 in the detection image 52. Then, as shown in FIG. 13(b), the correction unit 33b moves the trimming range W2 within the detection image 52. With such image blur correction, even if the position of the subject in the detection image 52 moves due to hand shake, the subject stays at substantially the same position within the trimming range W2.
In step S46, the control unit 34 sets the attention range 61-2 for the detection image 52 and, as shown in FIG. 13(b), moves the attention range 61-2 within the detection image 52 together with the trimming range W2, and then proceeds to step S50 of FIG. 19. After the trimming range W2 has been moved to the lower limit of FIG. 13(b), the control unit 34 moves only the attention range 61-3 downward within the detection image 52, as shown in FIG. 13(d). By moving the attention ranges 61-2 and 61-3, the same subject continues to receive attention even when the camera 1 shakes, so that the influence on subject element detection, focus detection, exposure calculation, and image generation can be suppressed compared with the case where the subject of interest changes.
In step S50 of FIG. 19, the control unit 34 causes the display unit 35 to start displaying the monitor image, and proceeds to step S60. In step S60, the control unit 34 causes the object detection unit 34a to start the processing of detecting the subject on the basis of the detection image 52 captured in the second imaging region B2, and proceeds to step S70.
Once the display of the monitor image has started, the monitor image based on the recording image 51 captured in the first imaging region B1 is displayed sequentially on the display unit 35. When the camera is set to perform the AF operation while the monitor image is displayed, the lens movement control unit 34d of the control unit 34 controls the AF operation that focuses on the subject element corresponding to a predetermined focus point by performing the focus detection processing based on the detection image 52 captured in the second imaging region B2. When the camera is not set to perform the AF operation while the monitor image is displayed, the lens movement control unit 34d of the control unit 34 performs the AF operation at the time when the AF operation is later instructed.
In step S70, the setting unit 34b of the control unit 34 divides the imaging screen captured by the image sensor 32a into a plurality of regions containing the subject elements, and proceeds to step S80. In step S80, the control unit 34 displays the regions on the display unit 35. As illustrated in FIG. 9, the control unit 34 highlights the region of the monitor image 60a that is the target of the setting (changing) of the imaging conditions. The control unit 34 determines the region to be highlighted on the basis of the position of the user's touch operation. The control unit 34 also causes the display unit 35 to display the imaging condition setting screen 70, and proceeds to step S90. Note that, when the display position of another subject on the display screen is touched with the user's finger, the control unit 34 changes the region containing that subject into the region that is the target of the setting (changing) of the imaging conditions and highlights it.
 In step S90, the control unit 34 determines whether an AF operation is necessary. For example, when the focus adjustment state has changed because the subject moved, when the position of the focus point has been changed by a user operation, or when execution of the AF operation has been instructed by a user operation, the control unit 34 makes an affirmative determination in step S90 and proceeds to step S100. When the focus adjustment state has not changed, the position of the focus point has not been changed by a user operation, and execution of the AF operation has not been instructed by a user operation, the control unit 34 makes a negative determination in step S90 and proceeds to step S110.
 In step S100, the control unit 34 causes the AF operation to be performed and proceeds to step S110. The lens movement control unit 34d of the control unit 34 performs focus detection processing based on the detection image 52 captured in the second imaging region B2, thereby controlling the AF operation that focuses on the subject corresponding to the predetermined focus point.
 In step S110, the setting unit 34b of the control unit 34 sets (changes) the imaging conditions for the highlighted region in accordance with the user operation and proceeds to step S120. The setting unit 34b sets (changes), for example, the imaging conditions for the first imaging region B1. The setting unit 34b may also set (change) the imaging conditions for the second imaging region B2.
 If no user operation for setting (changing) the imaging conditions is performed, the setting unit 34b keeps the initially set imaging conditions for the first imaging region B1 and the second imaging region B2.
 When the setting unit 34b performs photometry again with the imaging conditions set (changed) in order to determine the imaging conditions, it performs the exposure calculation processing based on the detection image 52 captured in the second imaging region B2.
 In step S120, the imaging control unit 34c of the control unit 34 sends an instruction to the image processing unit 33, causes it to perform image processing on the image data of the recording image, and proceeds to step S130. The image processing includes the pixel defect correction processing, color interpolation processing, contour enhancement processing, and noise reduction processing described above.
 In step S130, the control unit 34 determines whether the recording button has been operated. When the recording button (not shown) of the operation member 36 or the display icon 74 (Fig. 9) instructing recording is operated, the control unit 34 makes an affirmative determination in step S130 and proceeds to step S140. When the recording button is not operated, the control unit 34 makes a negative determination in step S130 and returns to step S30. When returning to step S30, the control unit 34 repeats the processing described above.
 In step S140, the control unit 34 sends an instruction to the recording unit 37, starts the process of recording the image data after image processing on a recording medium (not shown), and proceeds to step S150. In step S150, the control unit 34 determines whether an end operation has been performed. The control unit 34 makes an affirmative determination in step S150, for example when the recording button is operated again, and ends the processing of Fig. 19. When no end operation is performed, the control unit 34 makes a negative determination in step S150 and returns to step S30. When returning to step S30, the control unit 34 repeats the processing described above. When the recording process has already been started, the processing described above is repeated while the recording process continues.
 In the above description, the stacked image sensor 100, in which imaging conditions can be set for each of a plurality of blocks of the image sensor (imaging chip 111), was given as an example of the image sensor 32a; however, the image sensor 32a does not necessarily have to be configured as a stacked image sensor.
 According to the first embodiment described above, the following operational effects are obtained.
(1) When the subject image moves on the imaging surface of the image sensor 32a due to, for example, a shake of the camera 1, the camera 1 changes the imaging conditions of the region in the direction opposite to the shake direction and performs imaging with the changed imaging conditions. This makes it possible to generate an image of the subject image appropriately. Subject elements can also be detected appropriately. Furthermore, focus detection can be performed appropriately, and the exposure calculation can be performed appropriately.
(2) When the subject image moves on the imaging surface of the image sensor 32a, the camera 1 moves the attention range to the region in the direction opposite to the shake direction and changes the imaging conditions set for the attention range. This makes it possible to generate an image of the subject image appropriately based on the subject image captured in the attention range. Subject elements can also be detected appropriately. Furthermore, focus detection can be performed appropriately, and the exposure calculation can be performed appropriately.
(3) The camera 1 moves the trimming range so that the subject image remains at the same position within the trimming range even when the camera 1 shakes. This makes it possible to appropriately suppress image blur of the subject image.
(4) The camera 1 moves the attention range included in the trimming range together with the movement of the trimming range. This makes it possible to keep attention on the same subject image even when the camera 1 shakes.
(5) Even when the trimming range can no longer be moved, the camera 1 moves only the attention range. This makes it possible to keep attention on the same subject image even when the image blur of the subject image due to the shake of the camera 1 cannot be fully suppressed.
--- Modification 1 of the first embodiment ---
 Instead of moving the attention ranges 61-2 and 61-3 of Figs. 13(b) and 13(d), which are set for the detection image 52, within the detection image 52, the control unit 34 may enlarge the areas of the attention ranges 61-2 and 61-3. Fig. 13(e) is a diagram explaining image blur correction for the detection image 52 in Modification 1 of the first embodiment. For example, the control unit 34 gradually enlarges the area of the attention range 61-2 of Fig. 13(b), set for the detection image 52, as the trimming range W2 moves within the detection image 52. The direction in which the area of the attention range 61-2 is enlarged is the same as the direction in which the trimming range W2 is moved, that is, the direction opposite to the shake of the camera 1. The area of the attention range 61-3 in Fig. 13(e) extends further downward than the area of the attention range 61-2 in Fig. 13(b).
 After the trimming range W2 has been moved to its lower limit within the detection image 52, if the subject of attention (for example, a person's head) moves further downward, the control unit 34 gradually enlarges the area of the attention range 61-3 downward within the trimming range W2, so that the person's head remains within the enlarged attention range 61-3. Thus, even when the camera 1 shakes, the same subject (the person's head in this example) remains present in the region of the detection image 52 in which the object detection unit 34a detects subject elements, the region in which the lens movement control unit 34d performs focus detection processing, the region in which the setting unit 34b performs exposure calculation processing, and the region to which the image processing unit 33 refers during image generation processing. As a result, the influence on subject element detection, focus detection, exposure calculation, and image generation can be suppressed compared with the case where the subject of attention changes due to the shake of the camera 1.
 When enlarging the areas of the attention ranges 61-2 and 61-3, the control unit 34 changes the setting of the second imaging condition in accordance with the change in the areas of the attention ranges 61-2 and 61-3. That is, the second imaging condition for the second imaging region B2 is set again in accordance with the position and area of the attention ranges 61-2 and 61-3 after the area change.
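 The enlargement of the attention range in this modification can be pictured with the following minimal sketch. The representation of the range as (top, bottom) rows, the sign convention of subject_dy, and the helper used to re-apply the second imaging condition are all assumptions introduced here for illustration; the patent does not specify such an interface.

```python
def expand_attention_range(attn_top, attn_bottom, subject_dy, img_h):
    # Widen the range in the direction in which the subject image drifts
    # (opposite to the camera shake) instead of moving the whole range.
    if subject_dy > 0:                      # subject image drifting downward
        attn_bottom = min(img_h, attn_bottom + subject_dy)
    elif subject_dy < 0:                    # subject image drifting upward
        attn_top = max(0, attn_top + subject_dy)
    return attn_top, attn_bottom

def reapply_second_condition(blocks_in_rows, condition, attn_top, attn_bottom):
    # Hypothetical helper: re-set the second imaging condition on the blocks of
    # the second imaging region B2 that now fall inside the enlarged range.
    for block in blocks_in_rows(attn_top, attn_bottom):
        block.set_condition(condition)
```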
(Second Embodiment)
 In the second embodiment, the image processing unit 33 of the camera 1 performs processing that suppresses the influence of image blur caused by a shake of the camera 1 while a still image is being captured. Details of such a camera 1 are described below.
 As in the first embodiment, the camera 1 of the second embodiment may or may not be of the interchangeable-lens type. It may also be configured as an imaging device such as a smartphone or a video camera.
 In the second embodiment, when a still image to be recorded is captured, only the first imaging region B1 is set on the imaging surface of the image sensor 32a and the still image is captured. The operating rate of the blocks included in the first imaging region B1 may be set as appropriate, as in the first embodiment. For example, the operating rate may be set to 100%, or to 70% or 50%.
 Instead of setting only the first imaging region B1 on the imaging surface of the image sensor 32a, of the first imaging region B1 and the second imaging region B2 set on the imaging surface, only the first imaging region B1 may be operated while the second imaging region B2 is suspended.
 The still image to be recorded is a still image captured when the shutter button is operated. In the second embodiment, the still image recorded when the shutter button is operated is referred to as the recording image. The recording image may also be captured when a display instructing still image capture (for example, the release icon 74 of Fig. 9) is operated.
 Fig. 21(a) is a diagram illustrating a still image captured indoors by the camera 1 without using flash light. The still image of Fig. 21(a) was captured with different imaging conditions set for the left and right sides of the screen of the image sensor 32a. For example, when the woman's costume on the right side of the screen is white and brighter than the man's costume on the left side of the screen, a longer exposure time is set for the region of the image sensor 32a corresponding to the left side of the screen (the left side of the first imaging region B1) than for the region of the image sensor 32a corresponding to the right side of the screen (the right side of the first imaging region B1). If the exposure time of the region of the image sensor 32a corresponding to the left side of the screen is the first exposure time and the exposure time of the region corresponding to the right side of the screen is the second exposure time, then first exposure time > second exposure time. This makes it possible to capture the image so that the man's costume on the left side of the screen is not underexposed while preventing the woman's costume on the right side of the screen from being overexposed.
 For example, when the amount of illumination light is reduced by the stage effects of the venue where the still image of Fig. 21(a) is taken, the first exposure time and the second exposure time set in the image sensor 32a become longer than the exposure time referred to as the hand-shake limit (for example, 1/60 second). The camera 1 therefore performs processing to suppress the influence of image blur caused by the shake of the camera 1 while the still image is being captured. Figs. 21(b) and 21(c) are schematic diagrams explaining the outline of still image blur correction according to the second embodiment.
<Image blur correction>
 In the second embodiment, when image blur correction for a still image is performed, a plurality of recording images are captured with an exposure time shorter than the exposure time regarded as the hand-shake limit (1/60 second), namely a shortened exposure time described later (for example, 1/300 second). For example, the region of the image sensor 32a corresponding to the left side of the screen captures a plurality (n) of first recording images 51L with the shortened exposure time, and the region of the image sensor 32a corresponding to the right side of the screen captures a plurality (m) of second recording images 51R with the shortened exposure time. In this example, n > m.
 The control unit 34 determines the number n of first recording images so that the sum of the shortened exposure times of the n first recording images 51L (n x shortened exposure time) equals the first exposure time. It also determines the number m of second recording images so that the sum of the shortened exposure times of the m second recording images 51R (m x shortened exposure time) equals the second exposure time.
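 As a numerical illustration of this relation, a minimal sketch follows, assuming the shortened exposure time is fixed at 1/300 second as in the example above; the function name and the rounding rule are illustrative assumptions.

```python
def frame_counts(first_exposure, second_exposure, shortened=1 / 300):
    # n x shortened = first exposure time, m x shortened = second exposure time
    n = max(1, round(first_exposure / shortened))   # left side of the screen
    m = max(1, round(second_exposure / shortened))  # right side of the screen
    return n, m

# Example: first exposure 1/10 s and second exposure 1/30 s give
# n = 30 and m = 10, so n > m as in the case described above.
print(frame_counts(1 / 10, 1 / 30))   # (30, 10)
```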
 Then, as shown in Fig. 21(b), the control unit 34 and the correction unit 33b perform a first composition process that superimposes the n first recording images 51 (DL-1 to DL-n), captured in the region of the image sensor 32a corresponding to the left side of the screen, with the positions of the feature points in each image aligned. Since such composition processing is known, a detailed description is omitted.
 As shown in Fig. 21(c), the control unit 34 and the correction unit 33b also perform a second composition process that superimposes the m second recording images 51 (DR-1 to DR-m), captured in the region of the image sensor 32a corresponding to the right side of the screen, with the positions of the feature points in each image aligned. The correction unit 33b further combines the first image after the first composition process and the second image after the second composition process to obtain a composite image as in Fig. 21(a). The correction unit 33b takes the image data of the composite image as the image data of the recording image after image blur correction. The control unit 34 and the correction unit 33b perform image blur correction for a still image in this manner.
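 As a rough illustration of such a composition, the following sketch aligns frames by a simple translation estimated with phase correlation and then accumulates them; this stands in for the known feature-point alignment referred to above and is not the patented processing itself. Grayscale frames of equal size, held as NumPy arrays, are assumed.

```python
import numpy as np

def estimate_shift(ref, img):
    # Peak of the cross-power spectrum gives the integer translation of img
    # relative to ref (translation-only model).
    f = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:     # map wrap-around peaks to negative shifts
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def compose(frames):
    # Align every frame to the first one and accumulate, so that the sum of the
    # shortened exposures approximates the originally required exposure time.
    ref = frames[0].astype(np.float64)
    acc = ref.copy()
    for img in frames[1:]:
        dy, dx = estimate_shift(ref, img.astype(np.float64))
        acc += np.roll(np.roll(img.astype(np.float64), dy, axis=0), dx, axis=1)
    return acc
```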
<Timing of capturing the recording images 51L and 51R>
 Fig. 22 is a diagram illustrating the capture timing of the recording images 51L (DL-1, DL-2, DL-3, ...), the capture timing of the recording images 51R (DR-1, DR-2, DR-3, ...), and the display timing of the display based on the blur-corrected recording image (blur-corrected image). Until a release operation (for example, pressing the shutter button) is performed at time t2, the imaging control unit 34c performs the processing described in the first embodiment for before the recording button is operated (before time t1).
 When the release operation is performed at time t2, the imaging control unit 34c performs imaging at the timing of Fig. 22 with the first imaging region B1 set on the imaging surface of the image sensor 32a. That is, the imaging control unit 34c causes the imaging unit 32 to capture the recording image 51 under the imaging conditions set for the first imaging region B1 by the setting unit 34b. In this example, it is assumed that the object detection unit 34a detects the man on the left side of the screen and the woman on the right side of the screen based on the detection image Di of Fig. 10 described above, and the setting unit 34b divides the first imaging region B1 into the left and right sides of the screen based on the detection result.
<Imaging with image blur correction>
 The setting unit 34b performs the exposure calculation based on the detection image Diii of Fig. 10 described above, thereby setting different imaging conditions for the left and right sides of the screen (for example, the first exposure time and the second exposure time mentioned above). When at least one of the first exposure time and the second exposure time is longer than the exposure time predetermined as the hand-shake limit (for example, 1/60 second), the imaging control unit 34c performs imaging with image blur correction. That is, the region of the image sensor 32a corresponding to the left side of the screen in the first imaging region B1 captures n first recording images 51L with the shortened exposure time, and the region of the image sensor 32a corresponding to the right side of the screen in the first imaging region B1 captures m second recording images 51R with the shortened exposure time.
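 The decision between the two modes can be summarized by the following sketch; the threshold of 1/60 second and the shortened exposure time of 1/300 second are the example values mentioned above, and the returned dictionary is purely illustrative.

```python
HANDSHAKE_LIMIT = 1 / 60     # example hand-shake limit from the description
SHORTENED = 1 / 300          # example shortened exposure time

def plan_still_capture(first_exposure, second_exposure):
    if first_exposure > HANDSHAKE_LIMIT or second_exposure > HANDSHAKE_LIMIT:
        # Blur-corrected imaging: many short exposures on each half of the screen.
        return {"mode": "blur_corrected",
                "left_frames": max(1, round(first_exposure / SHORTENED)),
                "right_frames": max(1, round(second_exposure / SHORTENED)),
                "frame_exposure": SHORTENED}
    # Normal imaging: one frame with the exposure time of each half as it is.
    return {"mode": "normal",
            "left_exposure": first_exposure,
            "right_exposure": second_exposure}
```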
 According to Fig. 22, even after the capture of the m second recording images 51R by the region of the image sensor 32a corresponding to the right side of the screen has finished, the region of the image sensor 32a corresponding to the left side of the screen continues to capture first recording images 51L. This is because n > m. In the second embodiment, when first recording images 51L are captured after the capture of the second recording images 51R has finished, the region of the image sensor 32a corresponding to the left side of the screen in the first imaging region B1 is widened (extended) toward the right side of the screen, relative to the time before the capture of the second recording images 51R finished, and the first recording images 51L are captured. This is done by setting a shortened exposure time corresponding to the first exposure time for at least part of the region of the image sensor 32a corresponding to the right side of the screen in the first imaging region B1.
 In Fig. 21(b), the size of the first recording images DL-(n-m+1) to DL-n is larger than the size of the first recording images DL-1 to DL-m (wider toward the right in this example), so they include part of the woman located on the right side of the screen. Consequently, even if the shake of the camera 1 becomes large after the capture of the second recording images 51R has finished (even if, due to hand shake, the region of the image sensor 32a shifts to the left relative to the persons in Fig. 21(b)), the image data does not run short when the first image after the first composition process (Fig. 21(b)) and the second image after the second composition process (Fig. 21(c)) are combined, and a composite image as shown in Fig. 21(a) can be obtained. The correction unit 33b takes the image data of this composite image as the image data of the recording image after image blur correction.
<Normal imaging>
 When the first exposure time and the second exposure time set by the setting unit 34b are both shorter than the exposure time predetermined as the hand-shake limit (for example, 1/60 second), image blur correction is unnecessary and the imaging control unit 34c performs normal imaging. That is, the first exposure time is set for the region of the image sensor 32a corresponding to the left side of the screen in the first imaging region B1, the second exposure time is set for the region of the image sensor 32a corresponding to the right side of the screen in the first imaging region B1, and a single recording image 51 (51L and 51R) is captured.
<Description of the flowchart>
 The flow of processing for performing image blur correction for a still image will be described with reference to the flowchart of Fig. 23. In Fig. 23, the processing from step S10 to step S110 is the same as the processing of Fig. 19, so its description is omitted. In step S210, which follows step S110 of Fig. 19, the control unit 34 determines whether the shutter button has been operated. When the shutter button (not shown) of the operation member 36 or the display icon 74 (Fig. 9) instructing imaging is operated, the control unit 34 makes an affirmative determination in step S210 and proceeds to step S220. When the shutter button is not operated, the control unit 34 makes a negative determination in step S210 and returns to step S30. When returning to step S30, the control unit 34 repeats the processing from step S30 to step S210.
 In step S220, the control unit 34 determines whether image blur correction is necessary for the still image. When at least one of the first exposure time and the second exposure time described above is longer than the exposure time predetermined as the hand-shake limit (for example, 1/60 second), the control unit 34 makes an affirmative determination in step S220 and proceeds to step S230. When the first exposure time and the second exposure time are both shorter than the exposure time predetermined as the hand-shake limit, the control unit 34 makes a negative determination in step S220 and proceeds to step S250.
 In step S230, the control unit 34 captures a plurality of images with the shortened exposure time. In the example of Fig. 22, the region of the image sensor 32a corresponding to the left side of the screen in the first imaging region B1 captures n first recording images 51L with the shortened exposure time, and the region of the image sensor 32a corresponding to the right side of the screen in the first imaging region B1 captures m second recording images 51R with the shortened exposure time. At this time, when first recording images 51L are captured after the capture of the second recording images 51R has finished, the region of the image sensor 32a corresponding to the left side of the screen in the first imaging region B1 is widened toward the right side of the screen, relative to the time before the capture of the second recording images 51R finished, and the first recording images 51L are captured.
 In step S240, the control unit 34 and the correction unit 33b perform image blur correction on the recording images. Specifically, a composition process is performed in which the plurality of captured recording images are superimposed with the positions of the feature points in each image aligned.
 In step S250, the control unit 34 performs normal imaging. That is, the first exposure time is set for the region of the image sensor 32a corresponding to the left side of the screen in the first imaging region B1, the second exposure time is set for the region of the image sensor 32a corresponding to the right side of the screen in the first imaging region B1, and a single recording image 51L, 51R is captured.
 In step S260, the imaging control unit 34c of the control unit 34 sends an instruction to the image processing unit 33, causes it to perform image processing on the image data of the recording image, and proceeds to step S270. The image processing includes the pixel defect correction processing, color interpolation processing, contour enhancement processing, and noise reduction processing described above.
 In step S270, the control unit 34 sends an instruction to the recording unit 37, causes it to record the image data after image processing on a recording medium (not shown), and proceeds to step S280. In step S280, the control unit 34 determines whether an end operation has been performed. When an end operation is performed, the control unit 34 makes an affirmative determination in step S280 and ends the processing of Fig. 23. When no end operation is performed, the control unit 34 makes a negative determination in step S280 and returns to step S10. When returning to step S10, the control unit 34 repeats the processing described above.
 According to the second embodiment described above, the following operational effects are obtained.
(1) The camera 1 can appropriately perform, for each region, image blur correction that suppresses the influence of image blur caused by the shake of the camera 1 on images of the subject image captured under different imaging conditions in a plurality of imaging regions.
(2) The camera 1 can appropriately perform image blur correction for the left side and the right side of the screen respectively, for example when different exposure times are set for the left side of the screen, for which the first imaging condition is set, and the right side of the screen, for which the second imaging condition is set, or when the magnitude of the image blur differs between the left side and the right side of the screen.
(3) The camera 1 enlarges, for example, the region on the left side of the screen for which the first imaging condition is set, thereby increasing the size of the first recording images 51L.
 Thus, even if, for example, the shake of the camera 1 becomes large after the capture of the second recording images 51R has finished, the image data does not run short when the first image after the first image blur correction (Fig. 21(b)) and the second image after the second image blur correction (Fig. 21(c)) are combined, and a composite image as shown in Fig. 21(a) can be obtained.
(4) The camera 1 enlarges, for example, the left-side range of the first imaging region B1 toward the right. By enlarging the left-side range of the first imaging region B1, a wider range of the subject image can be included in the first recording images 51L.
(5) The camera 1 performs composition by shifting (aligning) the positions of the plurality of first recording images 51L so that they match the positions of the other first recording images 51L.
 This makes it possible to obtain a composite image in which the plurality of first recording images 51L are aligned with one another.
(6) The camera 1 performs composition by changing (aligning) the position of a first recording image 51L so that its subject image overlaps the position of the subject image of another first recording image 51L.
 Thus, even if the position of the subject image is shifted between the plurality of first recording images 51L, a composite image in which that shift of the subject image, that is, the image blur, is suppressed can be obtained.
(Third Embodiment)
 The third embodiment suppresses the influence of image blur caused by a shake of the camera 1 while a still image is being captured, in a manner different from the second embodiment.
 As in the first and second embodiments, the camera 1 of the third embodiment may or may not be of the interchangeable-lens type. It may also be configured as an imaging device such as a smartphone or a video camera.
 Compared with the second embodiment, the third embodiment differs in that a first shortened exposure time is set for the region of the image sensor 32a corresponding to the left side of the screen, a second shortened exposure time (first shortened exposure time > second shortened exposure time) is set for the region of the image sensor 32a corresponding to the right side of the screen, and n recording images 51-1 to 51-n are captured. In other words, the number of first recording images 51L captured by the region of the image sensor 32a corresponding to the left side of the screen and the number of second recording images 51R captured by the region of the image sensor 32a corresponding to the right side of the screen are the same (n).
 The following description focuses on the differences from the second embodiment, with reference to the drawings.
<Image blur correction>
 Fig. 24 is a schematic diagram explaining the outline of still image blur correction according to the third embodiment.
 The control unit 34 determines, for example, the number n of first recording images and the first shortened exposure time so that the sum of the first shortened exposure times of the n first recording images 51L (n x first shortened exposure time) equals the first exposure time. It also determines the second shortened exposure time so that the sum of the second shortened exposure times of the n second recording images 51R (n x second shortened exposure time) equals the second exposure time.
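 As a numerical illustration of this split, the following sketch fixes the frame count n from the longer (first) exposure time and then derives the two shortened exposure times; the base value of 1/300 second used to pick n is an assumption for the example.

```python
BASE_SHORTENED = 1 / 300     # assumed value used only to choose n

def split_exposures(first_exposure, second_exposure):
    n = max(1, round(first_exposure / BASE_SHORTENED))
    first_shortened = first_exposure / n      # n x first_shortened  = first exposure
    second_shortened = second_exposure / n    # n x second_shortened = second exposure
    return n, first_shortened, second_shortened

# Example: 1/10 s and 1/30 s give n = 30, 1/300 s per left-side frame and
# 1/900 s per right-side frame, so first shortened > second shortened.
print(split_exposures(1 / 10, 1 / 30))
```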
 Then, as shown in Fig. 24, the control unit 34 and the correction unit 33b perform a composition process that superimposes the n recording images 51-1 to 51-n with the positions of the feature points in each image aligned. Since such composition processing is known, a detailed description is omitted. The correction unit 33b takes the image data of the composite image as the image data of the recording image after image blur correction. The control unit 34 and the correction unit 33b perform image blur correction for a still image in this manner.
<Timing of capturing the recording images 51>
 Fig. 25 is a diagram illustrating the capture timing of the recording images 51L (DL1, DL2, DL3, ...) in the region of the image sensor 32a corresponding to the left side of the screen, the capture timing of the recording images 51R (DR1, DR2, DR3, ...) in the region of the image sensor 32a corresponding to the right side of the screen, and the display timing of the display based on the blur-corrected recording image (blur-corrected image). Until a release operation (for example, pressing the shutter button) is performed at time t3, the imaging control unit 34c performs the processing described in the first embodiment for before the recording button is operated (before time t1).
 When the release operation is performed at time t3, the imaging control unit 34c performs imaging at the timing of Fig. 25 with the first imaging region B1 set on the imaging surface of the image sensor 32a. That is, the imaging control unit 34c causes the imaging unit 32 to capture the recording images 51 under the imaging conditions set for the first imaging region B1 by the setting unit 34b. As in the second embodiment, it is assumed that the object detection unit 34a detects the man on the left side of the screen and the woman on the right side of the screen based on the detection image Di of Fig. 10 described above, and the setting unit 34b divides the region into the left and right sides of the screen based on the detection result.
<Imaging with image blur correction>
 As in the second embodiment, the setting unit 34b performs the exposure calculation based on the detection image Diii of Fig. 10 described above, thereby setting different imaging conditions for the left and right sides of the screen (for example, the first exposure time and the second exposure time mentioned above). When at least one of the first exposure time and the second exposure time is longer than the exposure time predetermined as the hand-shake limit (for example, 1/60 second), the imaging control unit 34c performs imaging with image blur correction. That is, the first shortened exposure time is set for the region of the image sensor 32a corresponding to the left side of the screen in the first imaging region B1, the second shortened exposure time is set for the region of the image sensor 32a corresponding to the right side of the screen in the first imaging region B1, and n recording images 51 (51L and 51R) are captured.
 According to Fig. 25, the number n of recording images 51 captured is the same on the left and right sides of the screen. This is because the second shortened exposure time is made shorter than the first shortened exposure time as the shortened exposure time per image. In the third embodiment, the same number n of recording images is captured on the left and right sides of the screen, so even if the shake of the camera 1 becomes large, the image data does not run short between the left and right sides of the screen, and a composite image as shown in Fig. 24 can be obtained. The correction unit 33b takes the image data of this composite image as the image data of the recording image after image blur correction.
<Normal imaging>
 When the first exposure time and the second exposure time set by the setting unit 34b are both shorter than the exposure time predetermined as the hand-shake limit (for example, 1/60 second), image blur correction is unnecessary and the imaging control unit 34c performs normal imaging.
 According to the third embodiment described above, the following operational effects are obtained.
(1) The camera 1 can appropriately perform image blur correction that suppresses the influence of image blur caused by the shake of the camera 1 on images of the subject image captured under different imaging conditions in a plurality of imaging regions.
(2) The camera 1 can appropriately perform image blur correction, for example when different exposure times are set for the left side of the screen, for which the first imaging condition is set, and the right side of the screen, for which the second imaging condition is set, or when the magnitude of the image blur differs between the left side and the right side of the screen.
(3) The camera 1 can appropriately perform image blur correction, for example, on images of the subject image captured with different exposure times on the left side of the screen, for which the first imaging condition is set, and the right side of the screen, for which the second imaging condition is set.
(4) The camera 1 can maintain the simultaneity of imaging, for example, between the left side of the screen, for which the first imaging condition is set, and the right side of the screen, for which the second imaging condition is set.
(Fourth Embodiment)
 In the fourth embodiment, the moving image blur correction of the first embodiment is performed in parallel under two conditions (for example, with trimming ranges of different widths).
 As in the first and second embodiments, the camera 1 of the fourth embodiment may or may not be of the interchangeable-lens type. It may also be configured as an imaging device such as a smartphone or a video camera.
 In the fourth embodiment, when capturing recording images and monitor images, the control unit 34 sets four imaging regions on the imaging surface of the image sensor 32a. Fig. 26 is a diagram illustrating the arrangement of the first imaging region B1, the second imaging region B2, the third imaging region C1, and the fourth imaging region C2 set on the imaging surface of the image sensor 32a. According to the partially enlarged view of Fig. 26, a block unit U consisting of four of the plurality of blocks defined in the image sensor 32a is repeatedly arranged in the horizontal and vertical directions of the pixel region.
 The block unit U includes a block belonging to the first imaging region B1, a block belonging to the second imaging region B2, a block belonging to the third imaging region C1, and a block belonging to the fourth imaging region C2. The control unit 34 generates, for example, the first recording image 51A based on the photoelectric conversion signals read from the first imaging region B1 of the image sensor 32a that captured one frame, and generates the first detection image 52A based on the photoelectric conversion signals read from the second imaging region B2 of the image sensor 32a. The control unit 34 further generates the second recording image 51B based on the photoelectric conversion signals read from the third imaging region C1 of the image sensor 32a that captured one frame, and generates the second detection image 52B based on the photoelectric conversion signals read from the fourth imaging region C2 of the image sensor 32a.
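 The repeating arrangement of the block unit U can be pictured with the following sketch, which treats the imaging surface as a grid of blocks and labels each block with the imaging region it belongs to; the 2 x 2 layout inside the unit is an assumption for illustration, since only the repetition of the unit in the horizontal and vertical directions is specified.

```python
import numpy as np

B1, B2, C1, C2 = 0, 1, 2, 3   # first, second, third and fourth imaging regions

def block_area_map(rows, cols):
    unit = np.array([[B1, B2],
                     [C1, C2]])               # assumed layout inside the unit U
    tiled = np.tile(unit, (rows // 2 + 1, cols // 2 + 1))
    return tiled[:rows, :cols]

# A frame read-out can then be split into the four images by masking the blocks:
# recording image 51A from B1, detection image 52A from B2,
# recording image 51B from C1, detection image 52B from C2.
```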
 The first recording image 51A, the first detection image 52A, the second recording image 51B, and the second detection image 52B are all captured at the same angle of view and contain the image of a common subject. These images can be captured in parallel.
 In the fourth embodiment, the imaging condition set for the first imaging region B1 that captures the first recording image 51A is called the first imaging condition, and the imaging condition set for the second imaging region B2 that captures the first detection image 52A is called the second imaging condition. The imaging condition set for the third imaging region C1 that captures the second recording image 51B is called the third imaging condition, and the imaging condition set for the fourth imaging region C2 that captures the second detection image 52B is called the fourth imaging condition.
 The control unit 34 may set the first and second imaging conditions, and the third and fourth imaging conditions, to the same conditions or to different conditions.
 In the fourth embodiment, the control unit 34 performs different image blur corrections on the moving images captured in parallel as described above. For example, image blur correction is performed under different conditions for the first recording image 51A and first detection image 52A on the one hand, and for the second recording image 51B and second detection image 52B on the other. Specifically, the control unit 34 sets, within the first recording image 51A, a trimming range W1 that is 90% of the size of the first recording image 51A. The same applies to the trimming range W2 set within the first detection image 52A. The image blur correction processing after setting the trimming ranges W1 and W2 is the same as in the first embodiment.
 On the other hand, the control unit 34 sets, within the second recording image 51B, a trimming range W3 that is 60% of the size of the second recording image 51B. The same applies to the trimming range W4 set within the second detection image 52B. The image blur correction processing after setting the trimming ranges W3 and W4 is also the same as in the first embodiment.
 In general, the trimming ranges W1 to W4 are often set narrow in order to secure a sufficient movement margin for the case where the camera 1 shakes greatly. For this reason, the screen size of the moving image after image blur correction tends to be small. By setting a plurality of trimming ranges, however, it becomes possible to normally adopt the image blur correction using the wide trimming range and to adopt the image blur correction using the narrow trimming range when the shake of the camera 1 is large.
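 Selecting between the two corrections might look like the following sketch, assuming the shake amplitude per frame is available in pixels; the 90% and 60% figures are the example ratios given above, and the margin computation is an illustrative simplification.

```python
def select_corrected_frame(frame_wide, frame_narrow, shake_px, frame_height,
                           wide_ratio=0.9):
    # Margin available on one side of the wide (90 %) trimming range, in pixels.
    wide_margin = frame_height * (1.0 - wide_ratio) / 2
    if abs(shake_px) <= wide_margin:
        return frame_wide     # the correction with the wide trimming range suffices
    return frame_narrow       # fall back to the narrow (60 %) trimming range result
```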
 According to the fourth embodiment described above, image blur correction of a moving image can be performed in parallel under two conditions (for example, with trimming ranges of different widths).
--- Modification 1 of the fourth embodiment ---
 In Modification 1 of the fourth embodiment, the moving image blur correction of the first embodiment and the still image blur correction of the second or third embodiment are performed in parallel.
 In Modification 1 of the fourth embodiment, when a still image to be recorded is captured, the still image is captured, for example, by the third imaging region C1 (Fig. 26) set on the imaging surface of the image sensor 32a. The operating rate of the blocks included in the third imaging region C1 may be set as appropriate, as in the first embodiment. For example, the operating rate may be set to 100%, or to 70% or 50%. The fourth imaging region C2 (Fig. 26) set on the imaging surface of the image sensor 32a is suspended.
 Alternatively, instead of setting the fourth imaging region C2 in Fig. 26, the position of the third imaging region C1 and the position of the fourth imaging region C2 may be combined and set together as the third imaging region C1.
 In Modification 1 of the fourth embodiment, the first recording image 51A is generated based on the photoelectric conversion signals read from the first imaging region B1 of the image sensor 32a that captured one frame. The control unit 34 also generates the first detection image 52A based on the photoelectric conversion signals read from the second imaging region B2 of the image sensor 32a. The control unit 34 further generates the second recording image 51 based on the photoelectric conversion signals read from the third imaging region C1 of the image sensor 32a that captured one frame. The first recording image 51A and the first detection image 52A are moving images. The second recording image 51 is a still image.
 The control unit 34 performs image blur correction for a moving image on the first recording image 51A and the first detection image 52A, as in the first embodiment. The control unit 34 also performs image blur correction for a still image on the second recording image 51, as in the second or third embodiment.
 According to Modification 1 of the fourth embodiment, image blur correction for a moving image and image blur correction for a still image can be performed in parallel.
--- Modification 2 of the fourth embodiment ---
 In Modification 2 of the fourth embodiment, the still image blur correction of the second or third embodiment is performed in parallel under two conditions. The two conditions differ, for example, in the shortened exposure time or in the number of recording images that are aligned and combined.
 In Modification 2 of the fourth embodiment, when a first still image to be recorded is captured, the first still image is captured, for example, by the first imaging region B1 (Fig. 26) set on the imaging surface of the image sensor 32a. The operating rate of the blocks included in the first imaging region B1 may be set as appropriate, as in the first embodiment. For example, the operating rate may be set to 100%, or to 70% or 50%. The second imaging region B2 (Fig. 26) set on the imaging surface of the image sensor 32a is suspended.
 Alternatively, instead of setting the second imaging region B2 in Fig. 26, the position of the first imaging region B1 and the position of the second imaging region B2 may be combined and set together as the first imaging region B1.
 Also in Modification 2 of the fourth embodiment, when a second still image to be recorded is captured, the second still image is captured, for example, by the third imaging region C1 (Fig. 26) set on the imaging surface of the image sensor 32a. The operating rate of the blocks included in the third imaging region C1 may be set as appropriate, as in the first embodiment. For example, the operating rate may be set to 100%, or to 70% or 50%. The fourth imaging region C2 (Fig. 26) set on the imaging surface of the image sensor 32a is suspended.
 Alternatively, instead of setting the fourth imaging region C2 in Fig. 26, the position of the third imaging region C1 and the position of the fourth imaging region C2 may be combined and set together as the third imaging region C1.
In the second modification of the fourth embodiment, the control unit 34 generates the first recording image 51A based on the photoelectric conversion signals read from the first imaging region B1 of the image sensor 32a that has captured one frame. The control unit 34 also generates the second recording image 51B based on the photoelectric conversion signals read from the third imaging region C1 of the image sensor 32a that has captured one frame. The first recording image 51A is the first still image. The second recording image 51B is the second still image.
 The control unit 34 performs still-image blur correction on each of the first recording image 51A and the second recording image 51B, as in the second or third embodiment. For example, image blur correction is performed under different conditions for the first recording image 51A and the second recording image 51B. FIG. 27 illustrates still images of a waterfall captured under different conditions: FIG. 27(a) is a still image based on the first recording image 51A, and FIG. 27(b) is a still image based on the second recording image 51B. The still images in FIGS. 27(a) and 27(b) are captured with different imaging conditions set for the flowing water and for the background such as rocks.
 For example, by setting a shorter shortened exposure time for the region of the image sensor 32a corresponding to the fast-moving water than for the region corresponding to the background such as rocks, the flowing water can be captured while suppressing image blur in the background. However, one may want different combinations of shortened exposure time and number of recording images depending on whether the water flow is to be nearly frozen to express its granularity, or is to be rendered without freezing the flow. For this reason, the imaging control unit 34c makes the shortened exposure time of the first recording image 51A shorter than that of the second recording image 51B. Alternatively, or in addition, the imaging control unit 34c makes the number of first recording images 51A smaller than the number of second recording images 51B.
 According to the second modification of the fourth embodiment, still-image blur correction can be performed in parallel under two conditions (for example, different shortened exposure times).
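As a rough illustration of running the two conditions side by side, the following sketch combines two bursts with different assumed shortened exposure times and frame counts; the numerical values, burst simulation, and the trivial alignment step are assumptions for this example, not values taken from the embodiment.

```python
# Sketch: two still-image pipelines under different conditions, matching the
# relation described above (51A: shorter exposure, fewer frames; 51B: longer
# exposure, more frames).
import numpy as np

def capture_burst(n_frames, exposure_s, rng):
    # Simulate n short-exposure frames; brightness scales with exposure time.
    return [exposure_s * rng.random((4, 4)) for _ in range(n_frames)]

def blur_corrected_still(frames):
    # Align (trivially, here) and combine the short exposures into one still.
    return np.mean(np.stack(frames), axis=0)

rng = np.random.default_rng(0)
# First recording image 51A: shorter shortened exposure time and fewer frames.
still_51A = blur_corrected_still(capture_burst(n_frames=4, exposure_s=1 / 500, rng=rng))
# Second recording image 51B: longer shortened exposure time and more frames.
still_51B = blur_corrected_still(capture_burst(n_frames=8, exposure_s=1 / 125, rng=rng))
```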
--- Modification 3 of the fourth embodiment ---
 In the third modification of the fourth embodiment, the still-image blur correction of the second or third embodiment is performed in parallel under four conditions. The four conditions differ, for example, in the shortened exposure time, or in the number of recording images that are aligned and combined.
In the third modification of the fourth embodiment, when the first still image to be recorded is captured, it is captured, for example, by the first imaging region B1 (FIG. 26) set on the imaging surface of the image sensor 32a. When the second still image to be recorded is captured, it is captured by the second imaging region B2 (FIG. 26) set on the imaging surface of the image sensor 32a. Further, when the third still image to be recorded is captured, it is captured by the third imaging region C1 (FIG. 26) set on the imaging surface of the image sensor 32a. When the fourth still image to be recorded is captured, it is captured by the fourth imaging region C2 (FIG. 26) set on the imaging surface of the image sensor 32a.
 The operating rates of the blocks included in the first imaging region B1, the second imaging region B2, the third imaging region C1, and the fourth imaging region C2 may each be set as appropriate, as in the first embodiment.
In the third modification of the fourth embodiment, the control unit 34 generates the first recording image 51A based on the photoelectric conversion signals read from the first imaging region B1 of the image sensor 32a that has captured one frame. The control unit 34 also generates the second recording image 51B based on the photoelectric conversion signals read from the second imaging region B2, the third recording image 51C based on the photoelectric conversion signals read from the third imaging region C1, and the fourth recording image 51D based on the photoelectric conversion signals read from the fourth imaging region C2 of the image sensor 32a that has captured one frame.
 The first recording image 51A is the first still image, the second recording image 51B is the second still image, the third recording image 51C is the third still image, and the fourth recording image 51D is the fourth still image.
 The control unit 34 performs still-image blur correction on each of the first recording image 51A, the second recording image 51B, the third recording image 51C, and the fourth recording image 51D, as in the second or third embodiment. For example, image blur correction is performed under different conditions for each of the recording images 51A to 51D.
 According to the third modification of the fourth embodiment, still-image blur correction can be performed in parallel under four conditions (for example, different shortened exposure times).
The following modifications are also within the scope of the invention described above, and one or more of the modifications may be combined with the embodiments or modifications described above.
(Modification 1)
 In the first embodiment described above, the camera 1 was described as an example of an electronic device. The electronic device may instead be a mobile electronic device equipped with the image sensor 32a, such as a high-function mobile phone with a camera function (for example, a smartphone), a tablet terminal, or a wearable device.
(Modification 2)
 In the first embodiment described above, the camera 1, in which the imaging unit 32 and the control unit 34 are configured as a single electronic device, was described as an example. Instead, for example, the imaging unit 32 and the control unit 34 may be provided separately to configure an imaging system 1A in which the control unit 34 controls the imaging unit 32 via communication.
 An example in which the imaging device 1001 including the imaging unit 32 is controlled from the display device 1002 including the control unit 34 is described below with reference to FIG. 28.
FIG. 28 is a block diagram illustrating the configuration of the imaging system 1A according to Modification 2. In FIG. 28, the imaging system 1A includes an imaging device 1001 and a display device 1002. The imaging device 1001 includes a first communication unit 1003 in addition to the imaging optical system 31, the imaging unit 32, and the photometric sensor 38 described in the above embodiments. The display device 1002 includes a second communication unit 1004 in addition to the image processing unit 33, the control unit 34, the display unit 35, the operation member 36, and the recording unit 37 described in the above embodiments.
The first communication unit 1003 and the second communication unit 1004 can perform bidirectional image data communication using, for example, a well-known wireless communication technology or optical communication technology.
 Alternatively, the imaging device 1001 and the display device 1002 may be connected by a wired cable, and the first communication unit 1003 and the second communication unit 1004 may perform bidirectional image data communication over the cable.
In the imaging system 1A, the control unit 34 controls the imaging unit 32 by performing data communication via the second communication unit 1004 and the first communication unit 1003. For example, by transmitting and receiving predetermined control data between the imaging device 1001 and the display device 1002, the display device 1002 divides the screen into a plurality of regions based on the image as described above, sets different imaging conditions for each divided region, and reads out the photoelectric conversion signals photoelectrically converted in each region.
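Purely as an illustration of the kind of control data that might be exchanged, the following sketch serializes per-region imaging conditions into a message sent from the display device 1002 to the imaging device 1001; the message structure and field names are hypothetical assumptions and are not a protocol defined by this disclosure.

```python
# Sketch of a hypothetical control message carrying per-region imaging conditions.
from dataclasses import dataclass, asdict
import json

@dataclass
class RegionCondition:
    region: str          # e.g. "B1" or "B2" (assumed region identifiers)
    exposure_s: float    # exposure time set for that imaging region
    iso: int             # gain / sensitivity
    frame_rate: float    # readout rate for the region

control_message = {
    "command": "set_imaging_conditions",
    "regions": [asdict(RegionCondition("B1", 1 / 60, 100, 30.0)),
                asdict(RegionCondition("B2", 1 / 240, 400, 120.0))],
}
payload = json.dumps(control_message)   # sent via the second communication unit 1004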
According to Modification 2, the monitor image acquired on the imaging device 1001 side and transmitted to the display device 1002 is displayed on the display unit 35 of the display device 1002, so that the user can operate the camera remotely from the display device 1002 located away from the imaging device 1001.
 The display device 1002 can be configured, for example, by a high-function mobile phone 250 such as a smartphone. The imaging device 1001 can be configured by an electronic device including the stacked image sensor 100 described above.
 Although an example has been described in which the object detection unit 34a, the setting unit 34b, the imaging control unit 34c, and the lens movement control unit 34d are provided in the control unit 34 of the display device 1002, some of the object detection unit 34a, the setting unit 34b, the imaging control unit 34c, and the lens movement control unit 34d may instead be provided in the imaging device 1001.
(Modification 3)
 The program can be supplied to a mobile device such as the camera 1, the high-function mobile phone 250, or a tablet terminal described above, for example, by transmitting it from a personal computer 205 storing the program to the mobile device by infrared communication or short-range wireless communication, as illustrated in FIG. 29.
The program may be supplied to the personal computer 205 by setting a recording medium 204, such as a CD-ROM storing the program, in the personal computer 205, or may be loaded onto the personal computer 205 via a communication line 201 such as a network. When the program is supplied via the communication line 201, it is stored in advance in a storage device 203 or the like of a server 202 connected to the communication line.
 The program can also be transmitted directly to the mobile device via a wireless LAN access point (not shown) connected to the communication line 201. Furthermore, a recording medium 204B such as a memory card storing the program may be set in the mobile device. In this way, the program can be supplied as a computer program product in various forms, such as provision via a recording medium or via a communication line.
(Modification 4)
 In the first embodiment described above, the object detection unit 34a of the control unit 34 detects the subject elements based on the detection image Di, and the setting unit 34b divides the recording image 51 into regions containing the subject elements. In Modification 4, instead of dividing the recording image 51 based on the detection image Di, the control unit 34 may divide the recording image 51 into regions based on the output signal from the photometric sensor 38, which is separate from the image sensor 32a.
The control unit 34 divides the recording image 51 into a foreground and a background based on the output signal from the photometric sensor 38. Specifically, the recording image 51 acquired by the image sensor 32b is divided into a foreground region corresponding to the area determined to be the foreground from the output signal of the photometric sensor 38, and a background region corresponding to the area determined to be the background from the output signal of the photometric sensor 38.
 The control unit 34 further sets the first imaging region B1 and the second imaging region B2, as illustrated in FIGS. 6(a) to 6(c), at the positions on the imaging surface of the image sensor 32a corresponding to the foreground region. On the other hand, at the positions on the imaging surface of the image sensor 32a corresponding to the background region, the control unit 34 sets only the first imaging region B1. The control unit 34 uses the recording image 51 captured in the first imaging region B1 for displaying the monitor image, and uses the detection image 52 captured in the second imaging region B2 for detecting subject elements, for focus detection, and for exposure calculation.
 Note that, instead of setting only the first imaging region B1 on the imaging surface of the image sensor 32a, of the first imaging region B1 and the second imaging region B2 set on the imaging surface, only the first imaging region B1 may be operated while the second imaging region B2 is paused.
 According to Modification 4, by using the output signal from the photometric sensor 38, the monitor image acquired by the image sensor 32b can be divided into regions. In addition, both the recording image 51 and the detection image 52 can be obtained for the foreground region, while only the recording image 51 is obtained for the background region.
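A minimal sketch of this region assignment, assuming a simple brightness threshold on the photometric sensor output (the threshold rule, block layout, and labels are illustrative assumptions, not the disclosed decision logic), could look as follows.

```python
# Sketch: a foreground/background mask derived from the photometric sensor
# output decides which blocks get both imaging regions (recording + detection)
# and which get the recording region only.
import numpy as np

photometry = np.array([[0.2, 0.8, 0.9],
                       [0.1, 0.7, 0.8],
                       [0.1, 0.2, 0.3]])   # photometric sensor output per block (toy values)

foreground = photometry > 0.5              # assumed foreground criterion

block_setup = np.where(foreground,
                       "B1+B2",            # foreground: recording image 51 and detection image 52
                       "B1")               # background: recording image 51 only
print(block_setup)
```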
The imaging optical system 31 described above may include a zoom lens or a tilt lens. The lens movement control unit 34d adjusts the angle of view of the imaging optical system 31 by moving the zoom lens in the optical axis direction. That is, by moving the zoom lens, the image formed by the imaging optical system 31 can be adjusted, for example, to capture a wide range of subjects or to obtain a large image of a distant subject.
 The lens movement control unit 34d can also adjust distortion of the image formed by the imaging optical system 31 by moving the tilt lens in a direction orthogonal to the optical axis.
 Based on the idea that it is preferable to use the preprocessed image data described above in order to adjust the state of the image formed by the imaging optical system 31 (for example, the state of the angle of view or the state of image distortion), the preprocessing described above may be performed.
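As background, the relation between focal length and angle of view that the zoom adjustment relies on can be written, for an image sensor of width $w$ and a lens of focal length $f$, as the standard optics formula below (a general relation, not a value or derivation taken from this disclosure):

$$\theta = 2\arctan\!\left(\frac{w}{2f}\right)$$

For example, with $w = 36$ mm, a focal length of $f = 50$ mm gives $\theta \approx 39.6^\circ$, while $f = 100$ mm gives $\theta \approx 20.4^\circ$; moving the zoom lens toward a longer focal length therefore narrows the angle of view.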
The image sensor 32a described above was described as being divided in advance into a plurality of blocks belonging to the first imaging region B1 and a plurality of blocks belonging to the second imaging region B2, as shown in FIG. 6, but the invention is not limited to this. For example, the positions of the first imaging region B1 and the second imaging region B2 in the image sensor 32a may be set according to the brightness, type, shape, or the like of the subject, or may be set by a selection operation by the user.
 The imaging regions in the image sensor 32a are not limited to the first imaging region B1 and the second imaging region B2; an imaging region for which an imaging condition different from the imaging conditions set for the first imaging region B1 and the second imaging region B2 is set may also be provided.
Although various embodiments and modifications have been described above, the present invention is not limited to these. Other aspects conceivable within the scope of the technical idea of the present invention are also included within the scope of the present invention.
 The embodiments and modifications described above may also be combined as appropriate.
The disclosure of the following priority application is hereby incorporated by reference:
 Japanese Patent Application No. 2017-072714 (filed on March 31, 2017)
1…Camera
1A…Imaging system
31…Imaging optical system
32…Imaging unit
32a, 100…Image sensor
33…Image processing unit
33a…Input unit
33b…Correction unit
33c…Generation unit
34…Control unit
34a…Object detection unit
34b…Setting unit
34c…Imaging control unit
34d…Lens movement control unit
35…Display unit
38…Photometric sensor
90…Region of interest
1001…Imaging device
1002…Display device
P…Pixel of interest

Claims (11)

  1.  An electronic device comprising:
      an image sensor having a plurality of imaging regions that capture a subject image;
      a setting unit that sets a first imaging condition in a first imaging region among the plurality of imaging regions and sets a second imaging condition different from the first imaging condition in a second imaging region among the plurality of imaging regions; and
      a correction unit that corrects, in an image of the subject image captured in the first imaging region and the second imaging region, blurring of the subject image caused by movement of the image sensor.
  2.  The electronic device according to claim 1, wherein the setting unit sets, as the first imaging condition, the first imaging region to capture an image with a first exposure time, and sets, as the second imaging condition, the second imaging region to capture an image with a second exposure time shorter than the first exposure time.
  3.  The electronic device according to claim 2, wherein the setting unit sets the first imaging condition and the second imaging condition such that the start time of the first exposure time and the start time of the second exposure time coincide.
  4.  The electronic device according to claim 2, wherein the setting unit sets the first imaging condition and the second imaging condition such that the end time of the first exposure time and the end time of the second exposure time coincide.
  5.  The electronic device according to any one of claims 1 to 4, wherein the correction unit performs combining processing on a plurality of images of the subject image captured in the first imaging region and the second imaging region.
  6.  The electronic device according to any one of claims 1 to 5, wherein the first imaging region includes a first photoelectric conversion unit that converts light into electric charge, and the second imaging region includes a second photoelectric conversion unit that converts light into electric charge.
  7.  The electronic device according to claim 6, wherein the first imaging region includes a plurality of the first photoelectric conversion units, and the second imaging region includes a plurality of the second photoelectric conversion units.
  8.  The electronic device according to claim 7, wherein a plurality of the first photoelectric conversion units are arranged in the first imaging region in a first direction and in a second direction intersecting the first direction, and a plurality of the second photoelectric conversion units are arranged in the second imaging region in the first direction and the second direction.
  9.  The electronic device according to claim 8, wherein the image sensor has a plurality of the first imaging regions arranged in the first direction and the second direction.
  10.  The electronic device according to claim 9, wherein the image sensor has a plurality of the second imaging regions arranged in the first direction and the second direction.
  11.  An electronic device comprising:
      an image sensor having a plurality of regions that capture a subject image;
      a setting unit that sets a first imaging condition in a first region among the plurality of regions and sets a second imaging condition different from the first imaging condition in a second region; and
      a generation unit that generates an image based on a first image generated by imaging in the first region and the second region and a second image generated by imaging in the first region.
PCT/JP2018/013042 2017-03-31 2018-03-28 Electronic device WO2018181615A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-072714 2017-03-31
JP2017072714A JP2020098943A (en) 2017-03-31 2017-03-31 Electronic apparatus

Publications (1)

Publication Number Publication Date
WO2018181615A1 true WO2018181615A1 (en) 2018-10-04

Family

ID=63677888

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/013042 WO2018181615A1 (en) 2017-03-31 2018-03-28 Electronic device

Country Status (2)

Country Link
JP (1) JP2020098943A (en)
WO (1) WO2018181615A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6994665B1 (en) 2021-06-08 2022-01-14 パナソニックIpマネジメント株式会社 Imaging device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006197192A (en) * 2005-01-13 2006-07-27 Sony Corp Imaging device and processing method of imaging result
JP2014178603A (en) * 2013-03-15 2014-09-25 Nikon Corp Imaging device
JP2014179778A (en) * 2013-03-14 2014-09-25 Nikon Corp Signal processing apparatus, imaging device, imaging apparatus, and electronic apparatus
JP2015033036A (en) * 2013-08-05 2015-02-16 株式会社ニコン Imaging device, control method for imaging device, and control program
JP2015142342A (en) * 2014-01-30 2015-08-03 オリンパス株式会社 Imaging apparatus, image generation method and image generation program
JP2016192606A (en) * 2015-03-30 2016-11-10 株式会社ニコン Electronic apparatus and program
JP2016192605A (en) * 2015-03-30 2016-11-10 株式会社ニコン Electronic apparatus, recording medium and program


Also Published As

Publication number Publication date
JP2020098943A (en) 2020-06-25

Similar Documents

Publication Publication Date Title
JP6604384B2 (en) Imaging device
WO2017057279A1 (en) Imaging device, image processing device and display device
JP6516014B2 (en) Imaging apparatus and image processing apparatus
WO2017170716A1 (en) Image pickup device, image processing device, and electronic apparatus
JP2024045553A (en) Imaging device
WO2018181612A1 (en) Electronic device
WO2018181615A1 (en) Electronic device
WO2017170717A1 (en) Image pickup device, focusing device, and electronic apparatus
JP2018056944A (en) Imaging device and imaging element
JP6516016B2 (en) Imaging apparatus and image processing apparatus
WO2018181613A1 (en) Electronic device
WO2018181614A1 (en) Electronic device
WO2018181611A1 (en) Electronic device
WO2018181610A1 (en) Electronic device
JP2018056945A (en) Imaging device and imaging element
JP6516015B2 (en) Imaging apparatus and image processing apparatus
JP6589989B2 (en) Imaging device
JP6604385B2 (en) Imaging device
JP6551533B2 (en) Imaging apparatus and image processing apparatus
JP6589988B2 (en) Imaging device
WO2017170719A1 (en) Image pickup device and electronic apparatus
WO2017057280A1 (en) Imaging device and subject detection device
WO2017170718A1 (en) Image pickup device, subject detection device, and electronic apparatus
WO2017057268A1 (en) Imaging device and control device
WO2017057267A1 (en) Imaging device and focus detection device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18776268

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18776268

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP