WO2017057295A1 - Control device and imaging device - Google Patents

Control device and imaging device

Info

Publication number
WO2017057295A1
WO2017057295A1 (PCT application PCT/JP2016/078316)
Authority
WO
WIPO (PCT)
Prior art keywords
imaging
unit
region
pixel
imaging condition
Prior art date
Application number
PCT/JP2016/078316
Other languages
English (en)
Japanese (ja)
Inventor
孝 塩野谷
敏之 神原
直樹 關口
Original Assignee
株式会社ニコン
Priority date
Filing date
Publication date
Application filed by 株式会社ニコン (Nikon Corporation)
Priority to JP2017543273A (granted as JP6589989B2)
Publication of WO2017057295A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/75 Circuitry for compensating brightness variation in the scene by influencing optical camera components
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N 25/46 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled, by combining or binning pixels
    • H04N 25/50 Control of the SSIS exposure
    • H04N 25/53 Control of the integration time
    • H04N 25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N 25/68 Noise processing, e.g. detecting, correcting, reducing or removing noise, applied to defects

Definitions

  • the present invention relates to a control device and an imaging device.
  • An imaging apparatus equipped with an imaging element capable of setting different imaging conditions for each area of the screen is known (see Patent Document 1).
  • The imaging device includes: an imaging element having an imaging region that images a subject; a setting unit that sets an imaging condition for the imaging region; a selection unit that selects, from among the pixels in the imaging region, pixels to be used for interpolation; and a control unit that controls the setting of the imaging condition set in the imaging region by the setting unit, using a signal interpolated from the signals output from the pixels selected by the selection unit, wherein at least some of the pixels selected by the selection unit differ according to the imaging condition set by the setting unit.
  • The imaging device according to the fifteenth aspect of the present invention includes: an imaging element having a first imaging region set to image a subject under a first imaging condition, a second imaging region set to image the subject under a second imaging condition different from the first imaging condition, and a third imaging region set to image the subject under a third imaging condition different from the second imaging condition; a selection unit that selects, from among the pixels included in the second imaging region and the pixels included in the third imaging region, a pixel to be used for interpolation of a pixel included in the first imaging region; and a control unit that controls the setting of the imaging condition set in the first imaging region, using a signal interpolated from the signal output from the pixel selected by the selection unit.
  • The imaging device includes: an imaging element having a first imaging region set to image a subject under a first imaging condition and a second imaging region set to image the subject under a second imaging condition different from the first imaging condition; a selection unit that selects, from among the pixels included in the first imaging region and the pixels included in the second imaging region, a pixel to be used for interpolation of a pixel included in the first imaging region; and a control unit that controls the setting of the imaging condition set in the first imaging region, using a signal interpolated from the signal output from the pixel selected by the selection unit.
  • An imaging device includes: an imaging element having a first imaging region that images a subject, a second imaging region that images the subject, and a third imaging region that images the subject; a setting unit that sets the imaging condition of the first imaging region to a first imaging condition, sets the imaging condition of the second imaging region to a second imaging condition different from the first imaging condition, and sets the imaging condition of the third imaging region to a third imaging condition whose difference from the first imaging condition is smaller than the difference between the first imaging condition and the second imaging condition; a selection unit that selects, from among the pixels included in the second imaging region and the pixels included in the third imaging region, a pixel to be used for interpolation of a pixel included in the first imaging region; and a control unit that controls the setting of the imaging conditions by the setting unit, the control unit controlling the setting of the imaging condition set in the first imaging region using a signal interpolated from the signal output from the pixel selected by the selection unit.
  • An imaging device includes: an imaging element having a first imaging region that images a subject, a second imaging region that images the subject, and a third imaging region that images the subject and whose distance from the first imaging region is longer than the distance between the first imaging region and the second imaging region; a setting unit that sets the imaging condition of the second imaging region to an imaging condition different from the imaging condition of the first imaging region; and a selection unit that selects, from among the pixels included in the first imaging region, the pixels included in the second imaging region, and the pixels included in the third imaging region, a pixel to be used for interpolation of a pixel included in the first imaging region.
  • An imaging device includes: an imaging element having an imaging region that images a subject; a setting unit that sets an imaging condition for the imaging region; and a control unit that controls the setting of the imaging condition set in the imaging region by the setting unit, using a signal interpolated from a signal output from a pixel that is included in the imaging region and selected as a pixel to be used for interpolation, wherein at least some of the selected pixels differ according to the imaging condition set by the setting unit.
  • An imaging device includes: an imaging element having a first imaging region set to image a subject under a first imaging condition and a second imaging region set to image the subject under a second imaging condition different from the first imaging condition; and a control unit that controls the setting of the imaging condition set in the first imaging region, using a signal interpolated from a signal output from a pixel that is selected, from among the pixels included in the first imaging region and the pixels included in the second imaging region, as a pixel to be used for interpolation of a pixel included in the first imaging region.
  • An imaging device includes: an imaging element having an imaging region that images a subject; a setting unit that sets an imaging condition for the imaging region; and a control unit that controls the setting of the imaging condition set in the imaging region by the setting unit, using a signal whose noise has been reduced by a signal output from a pixel that is included in the imaging region and selected as a pixel that outputs a signal to be used for noise reduction, wherein at least some of the selected pixels differ according to the imaging condition set by the setting unit.
  • An imaging device includes: an imaging element having a first imaging region set to image a subject under a first imaging condition, a second imaging region set to image the subject under a second imaging condition different from the first imaging condition, and a third imaging region set to image the subject under a third imaging condition different from the second imaging condition; a selection unit that selects, from among the pixels included in the second imaging region and the pixels included in the third imaging region, a pixel that outputs a signal to be used for reducing noise in the signal of a pixel included in the first imaging region; and a control unit that controls the setting of the imaging condition set in the first imaging region, using a signal whose noise has been reduced by the signal output from the pixel selected by the selection unit.
  • An imaging device includes: an imaging element having a first imaging region set to image a subject under a first imaging condition and a second imaging region set to image the subject under a second imaging condition different from the first imaging condition; and a control unit that controls the setting of the imaging conditions.
  • An imaging device includes: an imaging element having an imaging region that images a subject; a setting unit that sets an imaging condition for the imaging region; and a control unit that controls the setting of the imaging condition set in the imaging region by the setting unit, using a signal that has been image-processed with a signal output from a pixel selected as a pixel to be used for the image processing, wherein at least some of the selected pixels differ according to the imaging condition set by the setting unit.
  • A control device includes: a selection unit that selects a signal to be used for interpolation from among the signals output from the pixels included in an imaging region of an imaging element; and a control unit that controls the setting of the imaging condition set in the imaging region, using a signal interpolated with the signal selected by the selection unit, wherein at least some of the pixels selected by the selection unit differ according to the imaging condition set in the imaging region.
  • A control device according to the twenty-sixth aspect of the present invention includes: a selection unit that selects a signal to be used for reducing noise from among the signals output from the pixels included in an imaging region of an imaging element; and a control unit that controls the setting of the imaging condition set in the imaging region, using a signal whose noise has been reduced by the signal selected by the selection unit, wherein at least some of the pixels selected by the selection unit differ according to the imaging condition set in the imaging region.
  • FIG. 7A is a diagram illustrating the vicinity of the boundary of the first region in the live view image
  • FIG. 7B is an enlarged view of the vicinity of the boundary
  • FIG. 7C is an enlarged view of the target pixel and the reference pixels.
  • FIG. 7D is an enlarged view of the target pixel and the reference pixel in the second embodiment.
  • FIG. 8A is a diagram illustrating the arrangement of photoelectric conversion signals output from the pixels
  • FIG. 8B is a diagram illustrating interpolation of image data of the G color component
  • FIG. 8C is a diagram illustrating the image data of the G color component after interpolation.
  • FIG. 9A is a diagram obtained by extracting image data of the R color component from FIG. 8A
  • FIG. 9B is a diagram illustrating interpolation of the color difference component Cr
  • FIG. 9C is a diagram illustrating interpolation of the image data of the color difference component Cr.
  • FIG. 10A is a diagram obtained by extracting B color component image data from FIG. 8A
  • FIG. 10B is a diagram illustrating interpolation of the color difference component Cb
  • FIG. 10C is a diagram illustrating interpolation of the image data of the color difference component Cb. Further figures illustrate the positions of the focus detection pixels on the imaging surface and an enlarged view of a part of that area.
  • FIG. 14A is a diagram illustrating a template image representing an object to be detected
  • FIG. 14B is a diagram illustrating a live view image and a search range. A further figure is a flowchart explaining the flow of processing for setting imaging conditions for each area and capturing an image.
  • FIG. 16A to 16C are diagrams illustrating the arrangement of the first region and the second region on the imaging surface of the imaging device.
  • FIG. 20 is a block diagram illustrating the configuration of an imaging system according to Modification 8. Further figures illustrate supply of a program to a mobile device, the configuration of the camera according to a third embodiment (block diagram), the correspondence between each block and a plurality of selection units in the third embodiment (schematic diagram), a cross-sectional view of a stacked image sensor, and schematic diagrams of the processing of first image data and second image data relating to image processing and to focus detection processing.
  • a digital camera will be described as an example of an electronic device equipped with the image processing apparatus according to this embodiment.
  • the camera 1 (FIG. 1) is configured to be able to capture images under different conditions for each region of the imaging surface of the image sensor 32a.
  • the image processing unit 33 performs appropriate processing in areas with different imaging conditions. Details of the camera 1 will be described with reference to the drawings.
  • FIG. 1 is a block diagram illustrating the configuration of the camera 1 according to the first embodiment.
  • the camera 1 includes an imaging optical system 31, an imaging unit 32, an image processing unit 33, a control unit 34, a display unit 35, an operation member 36, and a recording unit 37.
  • the imaging optical system 31 guides the light flux from the object scene to the imaging unit 32.
  • the imaging unit 32 includes an imaging element 32a and a driving unit 32b, and photoelectrically converts an object image formed by the imaging optical system 31.
  • the imaging unit 32 can capture images under the same conditions over the entire imaging surface of the imaging device 32a, or can perform imaging under different conditions for each region of the imaging surface of the imaging device 32a. Details of the imaging unit 32 will be described later.
  • the drive unit 32b generates a drive signal necessary for causing the image sensor 32a to perform accumulation control.
  • An imaging instruction such as a charge accumulation time for the imaging unit 32 is transmitted from the control unit 34 to the driving unit 32b.
  • the image processing unit 33 includes an input unit 33a, a selection unit 33b, and a generation unit 33c.
  • Image data acquired by the imaging unit 32 is input to the input unit 33a.
  • the selection unit 33b performs preprocessing on the input image data. Details of the preprocessing will be described later.
  • the generation unit 33c generates an image based on the input image data and the preprocessed image data.
  • the generation unit 33c performs image processing on the image data.
  • Image processing includes, for example, color interpolation processing, pixel defect correction processing, edge enhancement processing, noise reduction processing, white balance adjustment processing, gamma correction processing, display luminance adjustment processing, saturation adjustment processing, and the like.
  • The generation unit 33c generates an image to be displayed by the display unit 35.
  • The control unit 34 is constituted by a CPU, for example, and controls the overall operation of the camera 1. For example, the control unit 34 performs a predetermined exposure calculation based on the photoelectric conversion signals acquired by the imaging unit 32, determines the exposure conditions required for proper exposure, such as the charge accumulation time (exposure time) of the image sensor 32a, the aperture value of the imaging optical system 31, and the ISO sensitivity, and instructs the drive unit 32b accordingly.
  • image processing conditions for adjusting saturation, contrast, sharpness, and the like are determined and instructed to the image processing unit 33 according to the imaging scene mode set in the camera 1 and the type of the detected subject element. The detection of the subject element will be described later.
  • the control unit 34 includes an object detection unit 34a, a setting unit 34b, an imaging control unit 34c, and an AF calculation unit 34d. These are realized as software by the control unit 34 executing a program stored in a nonvolatile memory (not shown). However, these may be configured by an ASIC or the like.
  • The object detection unit 34a performs known object recognition processing and detects, from the image acquired by the imaging unit 32, subject elements such as a person (a person's face), an animal such as a dog or cat (an animal's face), a plant, a vehicle such as a bicycle, automobile, or train, a building, a stationary object, a landscape element such as a mountain or cloud, or a predetermined specific object.
  • the setting unit 34b divides the imaging screen by the imaging unit 32 into a plurality of regions including the subject element detected as described above.
  • the setting unit 34b further sets imaging conditions for a plurality of areas.
  • Imaging conditions include the exposure conditions (charge accumulation time, gain, ISO sensitivity, frame rate, etc.) and the image processing conditions (for example, white balance adjustment parameters, gamma correction curves, display brightness adjustment parameters, saturation adjustment parameters, etc.) ).
  • the same imaging conditions can be set for all of the plurality of areas, or different imaging conditions can be set for the plurality of areas.
  • The imaging control unit 34c controls the imaging unit 32 (image sensor 32a) and the image processing unit 33 by applying the imaging conditions set for each region by the setting unit 34b. In this way, the imaging unit 32 can be made to perform imaging under exposure conditions that differ from region to region, and the image processing unit 33 can be made to perform image processing under image processing conditions that differ from region to region. Each region may contain any number of pixels, for example 1000 pixels or 1 pixel, and the number of pixels may differ between regions.
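  • As a purely illustrative sketch (not part of the disclosure), the per-region control performed by the imaging control unit 34c can be pictured as mapping each divided region to the sensor blocks it covers and driving those blocks with that region's conditions; the Python names used here (ImagingConditions, Region, apply_region_conditions) are hypothetical.

      from dataclasses import dataclass

      @dataclass
      class ImagingConditions:
          shutter_speed_s: float   # charge accumulation time (exposure time)
          iso: int                 # gain / ISO sensitivity
          frame_rate_fps: float

      @dataclass
      class Region:
          name: str                       # e.g. "region 61 (person)"
          blocks: list                    # sensor blocks covered by this region
          conditions: ImagingConditions

      def apply_region_conditions(regions, apply_block_condition):
          """Sketch of imaging control unit 34c: every block belonging to a region is
          driven with that region's imaging conditions, so different regions of one
          frame are exposed under different conditions."""
          for region in regions:
              for block in region.blocks:
                  apply_block_condition(block, region.conditions)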
  • the AF calculation unit 34d controls an automatic focus adjustment (auto focus: AF) operation for focusing on a corresponding subject at a predetermined position (referred to as a focus detection position) on the imaging screen.
  • the AF calculation unit 34d sends a drive signal for moving the focus lens of the imaging optical system 31 to the in-focus position based on the calculation result to the drive unit 32b.
  • the process performed by the AF calculation unit 34d for automatic focus adjustment is also referred to as a focus detection process. Details of the focus detection process will be described later.
  • The display unit 35 reproduces and displays the image generated by the image processing unit 33, the image after image processing, the image read out by the recording unit 37, and the like.
  • the display unit 35 also displays an operation menu screen, a setting screen for setting imaging conditions, and the like.
  • the operation member 36 is composed of various operation members such as a release button and a menu button.
  • the operation member 36 sends an operation signal corresponding to each operation to the control unit 34.
  • the operation member 36 includes a touch operation member provided on the display surface of the display unit 35.
  • the recording unit 37 records image data or the like on a recording medium including a memory card (not shown) in response to an instruction from the control unit 34.
  • the recording unit 37 reads image data recorded on the recording medium in response to an instruction from the control unit 34.
  • FIG. 2 is a cross-sectional view of the image sensor 100.
  • the imaging element 100 includes an imaging chip 111, a signal processing chip 112, and a memory chip 113.
  • the imaging chip 111 is stacked on the signal processing chip 112.
  • the signal processing chip 112 is stacked on the memory chip 113.
  • The imaging chip 111 and the signal processing chip 112, and the signal processing chip 112 and the memory chip 113, are electrically connected by connection units 109.
  • the connection unit 109 is, for example, a bump or an electrode.
  • the imaging chip 111 captures a light image from a subject and generates image data.
  • the imaging chip 111 outputs image data from the imaging chip 111 to the signal processing chip 112.
  • the signal processing chip 112 performs signal processing on the image data output from the imaging chip 111.
  • the memory chip 113 has a plurality of memories and stores image data.
  • The image sensor 100 may instead include only an imaging chip and a signal processing chip. In that case, a storage unit for storing image data may be provided in the signal processing chip or provided separately from the image sensor 100.
  • the incident light is incident mainly in the positive direction of the Z axis indicated by the white arrow.
  • the left direction of the paper orthogonal to the Z axis is the X axis plus direction
  • the front side of the paper orthogonal to the Z axis and X axis is the Y axis plus direction.
  • In the drawings referred to below, coordinate axes are displayed so that the orientation of each figure can be understood with reference to the coordinate axes of FIG. 2.
  • the imaging chip 111 is, for example, a CMOS image sensor. Specifically, the imaging chip 111 is a backside illumination type CMOS image sensor.
  • the imaging chip 111 includes a microlens layer 101, a color filter layer 102, a passivation layer 103, a semiconductor layer 106, and a wiring layer 108.
  • the imaging chip 111 is arranged in the order of the microlens layer 101, the color filter layer 102, the passivation layer 103, the semiconductor layer 106, and the wiring layer 108 in the positive Z-axis direction.
  • the microlens layer 101 has a plurality of microlenses L.
  • the microlens L condenses incident light on the photoelectric conversion unit 104 described later.
  • the color filter layer 102 includes a plurality of color filters F.
  • the color filter layer 102 has a plurality of types of color filters F having different spectral characteristics.
  • the color filter layer 102 includes a first filter (R) having a spectral characteristic that mainly transmits red component light and a second filter (Gb, Gr) that has a spectral characteristic that mainly transmits green component light. ) And a third filter (B) having a spectral characteristic that mainly transmits blue component light.
  • the passivation layer 103 is made of a nitride film or an oxide film, and protects the semiconductor layer 106.
  • the semiconductor layer 106 includes a photoelectric conversion unit 104 and a readout circuit 105.
  • the semiconductor layer 106 includes a plurality of photoelectric conversion units 104 between a first surface 106a that is a light incident surface and a second surface 106b opposite to the first surface 106a.
  • the semiconductor layer 106 includes a plurality of photoelectric conversion units 104 arranged in the X-axis direction and the Y-axis direction.
  • The photoelectric conversion unit 104 has a photoelectric conversion function of converting light into electric charge, and accumulates the photoelectrically converted charge.
  • the photoelectric conversion unit 104 is, for example, a photodiode.
  • the semiconductor layer 106 includes a readout circuit 105 on the second surface 106b side of the photoelectric conversion unit 104.
  • a plurality of readout circuits 105 are arranged in the X-axis direction and the Y-axis direction.
  • the readout circuit 105 includes a plurality of transistors, reads out image data generated by the electric charges photoelectrically converted by the photoelectric conversion unit 104, and outputs the image data to the wiring layer 108.
  • the wiring layer 108 has a plurality of metal layers.
  • the metal layer is, for example, an Al wiring, a Cu wiring, or the like.
  • the wiring layer 108 outputs the image data read by the reading circuit 105.
  • the image data is output from the wiring layer 108 to the signal processing chip 112 via the connection unit 109.
  • connection unit 109 may be provided for each photoelectric conversion unit 104. Further, the connection unit 109 may be provided for each of the plurality of photoelectric conversion units 104. When the connection unit 109 is provided for each of the plurality of photoelectric conversion units 104, the pitch of the connection units 109 may be larger than the pitch of the photoelectric conversion units 104. In addition, the connection unit 109 may be provided in a peripheral region of the region where the photoelectric conversion unit 104 is disposed.
  • the signal processing chip 112 has a plurality of signal processing circuits.
  • the signal processing circuit performs signal processing on the image data output from the imaging chip 111.
  • The signal processing circuits include, for example, an amplifier circuit that amplifies the signal value of the image data, a correlated double sampling circuit that performs noise reduction processing on the image data, and an analog-to-digital (A/D) conversion circuit that converts the analog signal into a digital signal.
  • a signal processing circuit may be provided for each photoelectric conversion unit 104.
  • a signal processing circuit may be provided for each of the plurality of photoelectric conversion units 104.
  • the signal processing chip 112 has a plurality of through electrodes 110.
  • The through electrode 110 is, for example, a through-silicon via.
  • the through electrode 110 connects circuits provided in the signal processing chip 112 to each other.
  • the through electrode 110 may also be provided in the peripheral region of the imaging chip 111 and the memory chip 113.
  • some elements constituting the signal processing circuit may be provided in the imaging chip 111.
  • a comparator that compares an input voltage with a reference voltage may be provided in the imaging chip 111, and circuits such as a counter circuit and a latch circuit may be provided in the signal processing chip 112.
  • the memory chip 113 has a plurality of storage units.
  • the storage unit stores image data that has been subjected to signal processing by the signal processing chip 112.
  • the storage unit is a volatile memory such as a DRAM, for example.
  • a storage unit may be provided for each photoelectric conversion unit 104.
  • the storage unit may be provided for each of the plurality of photoelectric conversion units 104.
  • the image data stored in the storage unit is output to the subsequent image processing unit.
  • FIG. 3 is a diagram for explaining the pixel array and the unit area 131 of the imaging chip 111.
  • a state where the imaging chip 111 is observed from the back surface (imaging surface) side is shown.
  • 20 million or more pixels are arranged in a matrix in the pixel region.
  • Four adjacent pixels of 2 pixels × 2 pixels form one unit region 131.
  • the grid lines in the figure indicate the concept that adjacent pixels are grouped to form a unit region 131.
  • The number of pixels forming the unit region 131 is not limited to this; it may be, for example, about 1000 pixels (32 pixels × 32 pixels), more or fewer than that, or even a single pixel.
  • the unit area 131 in FIG. 3 includes a so-called Bayer array composed of four pixels of green pixels Gb, Gr, blue pixels B, and red pixels R.
  • the green pixels Gb and Gr are pixels having a green filter as the color filter F, and receive light in the green wavelength band of incident light.
  • the blue pixel B is a pixel having a blue filter as the color filter F and receives light in the blue wavelength band
  • the red pixel R is a pixel having a red filter as the color filter F and having a red wavelength band. Receives light.
  • a plurality of blocks are defined so as to include at least one unit region 131 per block. That is, the minimum unit of one block is one unit area 131. As described above, of the possible values for the number of pixels forming one unit region 131, the smallest number of pixels is one pixel. Therefore, when one block is defined in units of pixels, the minimum number of pixels among the number of pixels that can define one block is one pixel.
  • Each block can control pixels included in each block with different control parameters. In each block, all the unit areas 131 in the block, that is, all the pixels in the block are controlled under the same imaging condition. That is, photoelectric conversion signals having different imaging conditions can be acquired between a pixel group included in a certain block and a pixel group included in another block.
  • Examples of control parameters include frame rate, gain, thinning rate, the number of addition rows or addition columns whose photoelectric conversion signals are added, charge accumulation time or accumulation count, and the number of digitization bits (word length).
  • the imaging device 100 can freely perform not only thinning in the row direction (X-axis direction of the imaging chip 111) but also thinning in the column direction (Y-axis direction of the imaging chip 111).
  • the control parameter may be a parameter in image processing.
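  • For illustration only, the per-block control parameters listed above could be grouped in a structure such as the following; the class and field names are hypothetical and not taken from the disclosure.

      from dataclasses import dataclass

      @dataclass
      class BlockControlParameters:
          # Control parameters that can differ from block to block
          # (one block = one or more unit regions 131), as listed above.
          frame_rate_fps: float
          gain: float
          thinning_rate: float        # fraction of rows/columns skipped during readout
          added_rows: int             # number of rows whose signals are added
          added_columns: int          # number of columns whose signals are added
          accumulation_time_s: float  # charge accumulation time
          accumulation_count: int     # number of accumulations
          word_length_bits: int       # digitization bit number (word length)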
  • FIG. 4 is a diagram for explaining a circuit in the unit region 131.
  • One unit region 131 is formed by four adjacent pixels of 2 pixels × 2 pixels.
  • the number of pixels included in the unit region 131 is not limited to this, and may be 1000 pixels or more, or may be a minimum of 1 pixel.
  • the two-dimensional position of the unit area 131 is indicated by reference signs A to D.
  • the reset transistor (RST) of the pixel included in the unit region 131 is configured to be turned on and off individually for each pixel.
  • a reset wiring 300 for turning on / off the reset transistor of the pixel A is provided, and a reset wiring 310 for turning on / off the reset transistor of the pixel B is provided separately from the reset wiring 300.
  • a reset line 320 for turning on and off the reset transistor of the pixel C is provided separately from the reset lines 300 and 310.
  • a dedicated reset wiring 330 for turning on and off the reset transistor is also provided for the other pixels D.
  • the pixel transfer transistor (TX) included in the unit region 131 is also configured to be turned on and off individually for each pixel.
  • a transfer wiring 302 for turning on / off the transfer transistor of the pixel A, a transfer wiring 312 for turning on / off the transfer transistor of the pixel B, and a transfer wiring 322 for turning on / off the transfer transistor of the pixel C are separately provided.
  • a dedicated transfer wiring 332 for turning on / off the transfer transistor is provided for the other pixels D.
  • the pixel selection transistor (SEL) included in the unit region 131 is also configured to be turned on and off individually for each pixel.
  • a selection wiring 306 for turning on / off the selection transistor of the pixel A, a selection wiring 316 for turning on / off the selection transistor of the pixel B, and a selection wiring 326 for turning on / off the selection transistor of the pixel C are separately provided.
  • a dedicated selection wiring 336 for turning on and off the selection transistor is provided for the other pixels D.
  • the power supply wiring 304 is commonly connected from the pixel A to the pixel D included in the unit region 131.
  • the output wiring 308 is commonly connected to the pixel D from the pixel A included in the unit region 131.
  • the power supply wiring 304 is commonly connected between a plurality of unit regions, but the output wiring 308 is provided for each unit region 131 individually.
  • the load current source 309 supplies current to the output wiring 308.
  • the load current source 309 may be provided on the imaging chip 111 side or may be provided on the signal processing chip 112 side.
  • With this wiring, charge accumulation, including the accumulation start time, accumulation end time, and transfer timing, can be controlled independently for each of the pixels A to D included in the unit region 131.
  • the photoelectric conversion signals of the pixels A to D can be output via the common output wiring 308.
  • There is also a so-called rolling shutter system in which charge accumulation is controlled in a regular order of rows and columns for the pixels A to D included in the unit region 131. When pixels are selected row by row and then columns are designated in this rolling shutter system, photoelectric conversion signals are output in the order "ABCD" in the example of FIG. 4.
  • the charge accumulation time can be controlled for each unit region 131.
  • By resting the unit regions 131 included in other blocks while causing the unit regions 131 included in some of the blocks (accumulation-control target blocks) to perform charge accumulation (imaging), imaging can be carried out only in predetermined blocks of the imaging chip 111 and their photoelectric conversion signals can be output.
  • The output wiring 308 is provided for each of the unit regions 131. Since the image sensor 100 is a stack of the imaging chip 111, the signal processing chip 112, and the memory chip 113, using the chip-to-chip electrical connections of the connection units 109 for the output wiring 308 allows the wiring to be routed without enlarging each chip in the planar direction.
  • an imaging condition can be set for each of a plurality of blocks in the imaging device 32a.
  • the control unit 34 associates the plurality of regions with the block and causes the imaging to be performed under an imaging condition set for each region.
  • FIG. 5 is a diagram schematically showing an image of a subject formed on the image sensor 32a of the camera 1.
  • the camera 1 photoelectrically converts the subject image to obtain a live view image before an imaging instruction is given.
  • the live view image refers to a monitor image that is repeatedly imaged at a predetermined frame rate (for example, 60 fps).
  • the control unit 34 sets the same imaging condition over the entire area of the imaging chip 111 (that is, the entire imaging screen) before the setting unit 34b divides the area.
  • Setting the same imaging condition means setting a common imaging condition for the entire imaging screen; conditions whose apex values vary by less than about 0.3, for example, are regarded as the same.
  • the imaging conditions set to be the same throughout the imaging chip 111 are determined based on the exposure conditions corresponding to the photometric value of the subject luminance or the exposure conditions manually set by the user.
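  • As a minimal sketch of the "regarded as the same" criterion above (the 0.3 apex figure comes from the text; the function name is hypothetical):

      APEX_TOLERANCE = 0.3   # variations below roughly 0.3 apex are treated as equal

      def is_same_imaging_condition(apex_a, apex_b, tolerance=APEX_TOLERANCE):
          """Two exposure settings count as 'the same imaging condition' when their
          apex values differ by less than the tolerance mentioned above."""
          return abs(apex_a - apex_b) < tolerance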
  • an image including a person 61a, an automobile 62a, a bag 63a, a mountain 64a, and clouds 65a and 66a is formed on the imaging surface of the imaging chip 111.
  • the person 61a holds the bag 63a with both hands.
  • the automobile 62a stops at the right rear of the person 61a.
  • the control unit 34 divides the screen of the live view image into a plurality of regions as follows. First, a subject element is detected from the live view image by the object detection unit 34a. The subject element is detected using a known subject recognition technique. In the example of FIG. 5, the object detection unit 34a detects a person 61a, a car 62a, a bag 63a, a mountain 64a, a cloud 65a, and a cloud 66a as subject elements.
  • the setting unit 34b divides the live view image screen into regions including the subject elements.
  • The region including the person 61a is referred to as region 61, the region including the automobile 62a as region 62, the region including the bag 63a as region 63, the region including the mountain 64a as region 64, the region including the cloud 65a as region 65, and the region including the cloud 66a as region 66.
  • the control unit 34 causes the display unit 35 to display a setting screen as illustrated in FIG. In FIG. 6, a live view image 60a is displayed, and an imaging condition setting screen 70 is displayed on the right side of the live view image 60a.
  • the setting screen 70 lists frame rate, shutter speed (TV), and gain (ISO) in order from the top as an example of setting items for imaging conditions.
  • The frame rate is the number of frames per second of the live view image acquired by the camera 1 or of a moving image recorded by the camera 1.
  • Gain is ISO sensitivity.
  • the setting items for the imaging conditions may be added as appropriate in addition to those illustrated in FIG. When all the setting items do not fit in the setting screen 70, other setting items may be displayed by scrolling the setting items up and down.
  • The control unit 34 sets, as the target for setting (changing) the imaging condition, the region selected by the user from among the regions divided by the setting unit 34b. For example, in the camera 1 capable of touch operation, the user taps, on the display surface of the display unit 35 on which the live view image 60a is displayed, the display position of the main subject whose imaging condition is to be set (changed). For example, when the display position of the person 61a is tapped, the control unit 34 sets the area 61 including the person 61a in the live view image 60a as the target area for setting (changing) the imaging condition and displays the area 61 with its outline emphasized.
  • The area 61 displayed with its outline emphasized in the live view image 60a indicates the area that is the target for setting (changing) the imaging condition.
  • The control unit 34 displays on the screen the currently set shutter speed value for the highlighted area (area 61) (reference numeral 68).
  • the imaging condition may be set (changed) by operating a button or the like constituting the operation member 36.
  • In accordance with the tap operation, the setting unit 34b increases or decreases the shutter speed display 68 from the current set value, and sends an instruction to the imaging unit 32 (FIG. 1) to change the imaging condition of the unit regions 131 (FIG. 3) of the image sensor 32a corresponding to the displayed area (area 61) accordingly.
  • the decision icon 72 is an operation icon for confirming the set imaging condition.
  • the setting unit 34b performs the setting (change) of the frame rate and gain (ISO) in the same manner as the setting (change) of the shutter speed (TV).
  • The setting unit 34b may also set the imaging condition based on a determination by the control unit 34 rather than on a user operation. For example, when overexposure or underexposure occurs in an area including the subject element with the maximum or minimum luminance in the image, the setting unit 34b may set imaging conditions that eliminate the overexposure or underexposure based on the determination of the control unit 34. For areas that are not highlighted (areas other than the area 61), the currently set imaging conditions are maintained.
  • Instead of emphasizing the outline of the target area, the control unit 34 may display the entire target area brightly, increase the contrast of the entire target area, or display the entire target area blinking.
  • the target area may be surrounded by a frame.
  • the display of the frame surrounding the target area may be a double frame or a single frame, and the display mode such as the line type, color, and brightness of the surrounding frame may be appropriately changed.
  • the control unit 34 may display an indication of an area for which an imaging condition is set, such as an arrow, in the vicinity of the target area.
  • the control unit 34 may darkly display a region other than the target region for which the imaging condition is set (changed), or may display a low contrast other than the target region.
  • When the release button constituting the operation member 36, or a display element for instructing the start of imaging, is operated, the control unit 34 causes imaging to be performed under the imaging conditions set for each of the divided areas.
  • the image processing unit 33 performs image processing on the image data acquired by the imaging unit 32. As described above, the image processing can be performed under different image processing conditions for each region.
  • the recording unit 37 that receives an instruction from the control unit 34 records the image data after the image processing on a recording medium including a memory card (not shown). Thereby, a series of imaging processes is completed.
  • As described above, the camera 1 is configured so that the imaging condition can be set (changed) for an area selected by the user or an area determined by the control unit 34.
  • the control unit 34 performs the following data selection process as necessary.
  • When predetermined image processing is performed on image data obtained with different imaging conditions applied to the divided areas, the image processing unit 33 (selection unit 33b) performs a data selection process on the image data located in the vicinity of a boundary between areas as preprocessing for the image processing.
  • The predetermined image processing is processing that calculates the image data of a target position to be processed in the image by referring to the image data at a plurality of reference positions around the target position; for example, pixel defect correction processing, color interpolation processing, contour enhancement processing, and noise reduction processing are applicable.
  • the data selection process is performed to alleviate the uncomfortable feeling that appears in the image after image processing due to the difference in imaging conditions between the divided areas.
  • Specifically, when the plurality of reference positions around the target position contain both image data to which the same imaging condition as that of the target position is applied and image data to which a different imaging condition is applied, it is preferable to calculate the image data of the target position by referring to the image data at the reference positions to which the same imaging condition is applied, rather than by referring directly to the image data at the reference positions to which the different imaging condition is applied. The selection unit 33b therefore selects the data used for the image processing as follows.
  • FIG. 7A is a diagram illustrating a region 80 near the boundary between the region 61 and the region 64 in the live view image 60a.
  • the first imaging condition is set in an area 61 including at least a person and the second imaging condition is set in an area 64 including a mountain.
  • FIG. 7B is an enlarged view of a region 80 near the boundary of FIG.
  • Image data from pixels on the image sensor 32a corresponding to the area 61, for which the first imaging condition is set, is shown in white, and image data from pixels corresponding to the area 64, for which the second imaging condition is set, is shown shaded.
  • In FIG. 7B, the target pixel P is located in the region 61, in the vicinity of the boundary 81 between the region 61 and the region 64 (the boundary portion).
  • Pixels around the target pixel P (eight pixels in this example) included in a predetermined range 90 (for example, 3 × 3 pixels) centered on the target pixel P are set as reference pixels.
  • FIG. 7C is an enlarged view of the target pixel P and the reference pixel.
  • The position of the target pixel P is the target position, and the positions of the reference pixels surrounding the target pixel P are the reference positions.
  • the image processing unit 33 (generation unit 33c) can also perform image processing by referring to the image data of the reference pixel as it is without performing the data selection processing.
  • the selection unit 33b selects the image data of the first imaging condition used for image processing from the image data of the reference pixel as in the following (Example 1) to (Example 3).
  • the generation unit 33c performs image processing for calculating the image data of the target pixel P with reference to the image data of the reference pixel after selecting the image data.
  • the data output from the pixels indicated by the white background is image data under the first imaging condition
  • the data output from the pixels indicated by the oblique lines is image data under the second imaging condition.
  • the image data output from the pixels indicated by the oblique lines is not used for the image processing.
  • (Example 1) Suppose that the first imaging condition and the second imaging condition differ only in ISO sensitivity, the ISO sensitivity of the first imaging condition being 100. In this case, the image processing unit 33 (selection unit 33b) selects, from the image data of the reference pixels, the image data of the first imaging condition for use in the image processing; that is, image data of the reference pixels captured under the second imaging condition, which differs from the first imaging condition, is not used for the image processing.
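  • A minimal sketch of the selection in (Example 1), equally applicable to (Example 2) below: reference pixels captured under a different imaging condition from the target pixel P are simply excluded before the image processing. The names ReferencePixel and select_reference_data are hypothetical.

      from dataclasses import dataclass

      @dataclass
      class ReferencePixel:
          value: float        # photoelectric conversion signal (image data)
          condition_id: int   # identifier of the imaging condition applied to this pixel

      def select_reference_data(target_condition_id, reference_pixels):
          """Sketch of selection unit 33b: keep only reference pixels captured under
          the same imaging condition as the target pixel P; the rest are not used."""
          return [p.value for p in reference_pixels
                  if p.condition_id == target_condition_id]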
  • (Example 2) Suppose that the first imaging condition and the second imaging condition differ only in shutter speed, the shutter speed of the first imaging condition being 1/1000 second. In this case, the image processing unit 33 (selection unit 33b) selects, from the image data of the reference pixels, the image data of the first imaging condition for use in the image processing; that is, image data of the reference pixels captured under the second imaging condition, which differs from the first imaging condition, is not used for the image processing.
  • (Example 3) Suppose that the first imaging condition and the second imaging condition differ only in frame rate (with the same charge accumulation time), the frame rate of the first imaging condition being 30 fps and the frame rate of the second imaging condition being 60 fps. In this case, the image processing unit 33 (selection unit 33b) selects, from the image data of the reference pixels captured under the second imaging condition (60 fps), the image data of frame images whose acquisition timing is close to that of the frame images acquired under the first imaging condition (30 fps); that is, image data of frame images whose acquisition timing differs from that of the frame images of the first imaging condition (30 fps) is not used for the image processing.
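  • For (Example 3), a sketch of the timing-based selection, assuming each candidate frame carries its acquisition timestamp (all names hypothetical):

      def select_closest_frame(reference_frames, target_timestamp):
          """Sketch for the frame-rate case: among the 60 fps frames of the second
          imaging condition, use the one whose acquisition timing is closest to the
          30 fps frame containing the target pixel. `reference_frames` is a list of
          (timestamp, image_data) pairs."""
          return min(reference_frames, key=lambda frame: abs(frame[0] - target_timestamp))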
  • On the other hand, when the imaging condition applied to the target pixel P and the imaging conditions applied to all the reference pixels around the target pixel P are the same, the image processing unit 33 (selection unit 33b) selects the image data of all the reference pixels: the image data of all the reference pixels is referred to as it is in the image processing that calculates the image data of the target pixel P. Note that, as described above, imaging conditions with only a slight difference (for example, an apex value difference of 0.3 or less) may be regarded as the same.
  • the pixel defect correction process is one of image processes performed during imaging.
  • the image sensor 100 which is a solid-state image sensor, may produce pixel defects in the manufacturing process or after manufacturing, and output abnormal level image data. Therefore, the image processing unit 33 (the generation unit 33c) corrects the image data output from the pixel in which the pixel defect has occurred, thereby making the image data in the pixel position in which the pixel defect has occurred inconspicuous.
  • The image processing unit 33 (generation unit 33c) sets a pixel at a pixel defect position recorded in advance in a nonvolatile memory (not shown) as the target pixel P (processing target pixel) in the image of one frame, and sets the pixels around the target pixel P (eight pixels in this example) included in a predetermined range 90 (for example, 3 × 3 pixels) centered on the target pixel P (FIG. 7C) as reference pixels.
  • The image processing unit 33 (generation unit 33c) calculates the maximum value and the minimum value of the image data in the reference pixels, and when the image data output from the target pixel P exceeds the maximum value or falls below the minimum value, performs Max/Min filter processing that replaces the image data output from the target pixel P with that maximum value or minimum value. Such processing is performed for all pixel defects whose position information is recorded in the nonvolatile memory.
  • In the present embodiment, when the reference pixels include a pixel to which a second imaging condition different from the first imaging condition applied to the target pixel P is applied, the image processing unit 33 (selection unit 33b) selects, from the image data of the reference pixels, the image data to which the first imaging condition is applied. The image processing unit 33 (generation unit 33c) then performs the Max/Min filter processing described above with reference to the selected image data. Note that the pixel defect correction may instead be performed by taking the average of the selected image data.
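  • A minimal sketch of the Max/Min filter processing described above, applied to reference values already restricted to the same imaging condition as the target pixel P (function and argument names are hypothetical):

      def correct_defective_pixel(target_value, reference_values):
          """Sketch of the Max/Min filter: if the defective pixel's value lies outside
          the range spanned by the (condition-matched) reference pixels, replace it
          with the exceeded maximum or minimum reference value."""
          if not reference_values:
              return target_value
          hi, lo = max(reference_values), min(reference_values)
          if target_value > hi:
              return hi
          if target_value < lo:
              return lo
          return target_value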
  • color interpolation processing is one of image processing performed at the time of imaging. As illustrated in FIG. 3, in the imaging chip 111 of the imaging device 100, green pixels Gb and Gr, a blue pixel B, and a red pixel R are arranged in a Bayer array.
  • Since image data of color components different from the color component of the color filter F arranged at each pixel position is missing, the image processing unit 33 (generation unit 33c) performs color interpolation processing that generates the image data of the missing color components.
  • FIG. 8A is a diagram illustrating the arrangement of image data output from the image sensor 100. Corresponding to each pixel position, it has one of R, G, and B color components according to the rules of the Bayer array.
  • The image processing unit 33 (generation unit 33c) performing G color interpolation takes the positions of the R color component and the B color component in turn as the target position, and generates G color component image data at the target position by referring to the four G color component image data at the reference positions around the target position. For example, when generating G color component image data at the target position indicated by the thick frame (second row, second column) in FIG. 8B, the four G color component image data G1 to G4 located in the vicinity of the target position are referred to. The image processing unit 33 (generation unit 33c) sets, for example, (aG1 + bG2 + cG3 + dG4)/4 as the G color component image data at the target position (second row, second column).
  • a to d are weighting coefficients provided according to the distance between the reference position and the target position and the image structure.
  • Here, the first imaging condition is applied to the region to the left of and above the thick line, and the second imaging condition is applied to the region to the right of and below the thick line; the first imaging condition and the second imaging condition are different.
  • In FIG. 8B, the G color component image data G1 to G4 are located at the reference positions for image processing of the pixel at the target position (second row, second column).
  • the first imaging condition is applied to the target position (second row, second column).
  • the first imaging condition is applied to the image data G1 to G3.
  • the second imaging condition is applied to the image data G4.
  • the image processing unit 33 selects the image data G1 to G3 to which the first imaging condition is applied from the G color component image data G1 to G4. In this way, the image processing unit 33 (generation unit 33c) calculates the G color component image data at the position of interest (second row and second column) with reference to the selected image data.
  • the image processing unit 33 (generation unit 33c) sets, for example, (a1G1 + b1G2 + c1G3) / 3 as image data of the G color component at the target position (second row, second column).
  • a1 to c1 are weighting coefficients provided according to the distance between the reference position and the target position and the image structure.
  • As shown in FIG. 8C, the image processing unit 33 (generation unit 33c) generates G color component image data at each position of the B color component and the R color component in FIG. 8A, so that G color component image data is obtained at every pixel position.
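  • Purely as an illustrative sketch of the two interpolation cases above (all four reference pixels usable, or only the condition-matched subset), with hypothetical names and the weighting coefficients passed in explicitly:

      def interpolate_g(reference, weights=None):
          """Weighted average of the selected G color component image data, e.g.
          (a*G1 + b*G2 + c*G3 + d*G4)/4 when all four reference pixels share the
          target's imaging condition, or (a1*G1 + b1*G2 + c1*G3)/3 when only G1..G3
          do (G4 having been excluded by the selection unit 33b)."""
          if weights is None:
              weights = [1.0] * len(reference)   # uniform placeholder weights
          return sum(w * g for w, g in zip(weights, reference)) / len(reference)

      # e.g. all reference pixels under the first imaging condition:
      #   g = interpolate_g([G1, G2, G3, G4], [a, b, c, d])
      # only G1..G3 under the first imaging condition:
      #   g = interpolate_g([G1, G2, G3], [a1, b1, c1])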
  • FIG. 9A is a diagram obtained by extracting the R color component image data from FIG. 8A.
  • The image processing unit 33 (generation unit 33c) calculates the image data of the color difference component Cr shown in FIG. 9B based on the G color component image data shown in FIG. 8C and the R color component image data shown in FIG. 9A.
  • To generate the image data of the color difference component Cr at the target position indicated by the thick frame (second row, second column) in FIG. 9B, the image processing unit 33 (generation unit 33c) refers to the image data Cr1 to Cr4 of the four color difference components located in the vicinity of the target position (second row, second column).
  • the image processing unit 33 (generation unit 33c) sets, for example, (eCr1 + fCr2 + gCr3 + hCr4) / 4 as image data of the color difference component Cr at the target position (second row and second column).
  • e to h are weighting coefficients provided according to the distance between the reference position and the target position and the image structure.
  • To generate the image data of the color difference component Cr at the target position indicated by the thick frame (second row, third column) in FIG. 9C, the image processing unit 33 (generation unit 33c) refers to the image data Cr2 and Cr4 to Cr6 of the four color difference components located in the vicinity of the target position (second row, third column).
  • the image processing unit 33 sets, for example, (qCr2 + rCr4 + sCr5 + tCr6) / 4 as image data of the color difference component Cr at the target position (second row, third column).
  • q to t are weighting coefficients provided according to the distance between the reference position and the target position and the image structure.
  • image data of the color difference component Cr is generated for each pixel position.
  • In the example of FIG. 9, the first imaging condition is applied to the regions to the left of and above the thick line, and the second imaging condition is applied to the regions to the right of and below the thick line; in FIGS. 9A to 9C, the first imaging condition and the second imaging condition are different.
  • the position indicated by the thick frame is the target position of the color difference component Cr.
  • the color difference component image data Cr1 to Cr4 in FIG. 9B are reference positions for image processing of the pixel at the target position (second row and second column).
  • the first imaging condition is applied to the target position (second row, second column).
  • the first imaging condition is applied to the image data Cr1, Cr3, and Cr4.
  • the second imaging condition is applied to the image data Cr2. Therefore, the image processing unit 33 (selection unit 33b) selects image data Cr1, Cr3, and Cr4 to which the first imaging condition is applied from the image data Cr1 to Cr4 of the color difference component Cr. Thereafter, the image processing unit 33 (generation unit 33c) calculates the color difference component image data Cr at the target position (second row and second column) with reference to the selected image data.
  • the image processing unit 33 (generation unit 33c) sets, for example, (e1Cr1 + g1Cr3 + h1Cr4) / 3 as image data of the color difference component Cr at the target position (second row and second column). Note that e1, g1, and h1 are weighting coefficients provided according to the distance between the reference position and the target position and the image structure.
  • the position indicated by the thick frame is the target position of the color difference component Cr.
  • the color difference component image data Cr2, Cr4, Cr5, and Cr6 in FIG. 9C are reference positions for image processing of the pixel at the target position (second row and third column).
  • the second imaging condition is applied to the target position (second row, third column).
  • the first imaging condition is applied to the image data Cr4 and Cr5.
  • the second imaging condition is applied to the image data Cr2 and Cr6.
  • Therefore, the image processing unit 33 (selection unit 33b) selects the image data Cr2 and Cr6, to which the second imaging condition is applied, from the image data Cr2 and Cr4 to Cr6 of the color difference component Cr. Thereafter, the image processing unit 33 (generation unit 33c) calculates the image data of the color difference component Cr at the target position (second row, third column) with reference to the selected image data.
  • the image processing unit 33 (generation unit 33c) sets, for example, (g2Cr2 + h2Cr6) / 2 as image data of the color difference component Cr at the target position (second row and third column).
  • g2 and h2 are weighting coefficients provided according to the distance between the reference position and the target position and the image structure.
  • the image processing unit 33 (generation unit 33c) obtains the image data of the color difference component Cr at each pixel position, and then adds the image data of the G color component shown in FIG. 8C in correspondence with each pixel position. Thus, R color component image data can be obtained at each pixel position.
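A minimal sketch of this recovery step follows: once the color difference component Cr has been interpolated at every pixel position, the R color component is obtained by adding the G color component at the same position. The form Cr = R − G is an assumption consistent with this recovery step; the code is illustrative, not the patent's implementation.

```python
import numpy as np

def colour_difference_cr(r_at_r_positions, g_at_r_positions):
    # Cr = R - G at the R pixel positions (assumed form, consistent with the
    # recovery step R = Cr + G described in the text)
    return np.asarray(r_at_r_positions, dtype=float) - np.asarray(g_at_r_positions, dtype=float)

def reconstruct_r(cr_plane, g_plane):
    # after Cr has been interpolated at every pixel position, add the G colour
    # component at the same position to obtain the R colour component
    return np.asarray(cr_plane, dtype=float) + np.asarray(g_plane, dtype=float)
```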
  • FIG. 10A is a diagram obtained by extracting the B color component image data from FIG. 8A.
  • The image processing unit 33 (generation unit 33c) calculates the image data of the color difference component Cb shown in FIG. 10B based on the G color component image data shown in FIG. 8C and the B color component image data shown in FIG. 10A.
  • To generate the image data of the color difference component Cb at the target position indicated by the thick frame (third row, third column) in FIG. 10B, the image processing unit 33 (generation unit 33c) refers to the image data Cb1 to Cb4 of the four color difference components located in the vicinity of the target position (third row, third column).
  • the image processing unit 33 (generation unit 33c) sets, for example, (uCb1 + vCb2 + wCb3 + xCb4) / 4 as the image data of the color difference component Cb at the target position (third row, third column).
  • u to x are weighting coefficients provided according to the distance between the reference position and the target position and the image structure.
  • To generate the image data of the color difference component Cb at the target position indicated by the thick frame (third row, fourth column) in FIG. 10C, for example, the image processing unit 33 (generation unit 33c) refers to the image data Cb2 and Cb4 to Cb6 of the four color difference components located in the vicinity of the target position (third row, fourth column).
  • the image processing unit 33 (generation unit 33c) sets, for example, (yCb2 + zCb4 + ⁇ Cb5 + ⁇ Cb6) / 4 as image data of the color difference component Cb at the target position (third row, fourth column).
  • y, z, ⁇ , and ⁇ are weighting coefficients provided according to the distance between the reference position and the target position and the image structure.
  • image data of the color difference component Cb is generated for each pixel position.
  • In the example of FIG. 10, the first imaging condition is applied to the regions to the left of and above the thick line, and the second imaging condition is applied to the regions to the right of and below the thick line; in FIGS. 10A to 10C, the first imaging condition and the second imaging condition are different.
  • the position indicated by the thick frame is the target position of the color difference component Cb.
  • the color difference component image data Cb1 to Cb4 in FIG. 10B is a reference position for image processing of the pixel at the target position (third row, third column).
  • the second imaging condition is applied to the target position (third row, third column).
  • the first imaging condition is applied to the image data Cb1 and Cb3.
  • the second imaging condition is applied to the image data Cb2 and Cb4. Therefore, the image processing unit 33 (selection unit 33b) selects image data Cb2 and Cb4 to which the second imaging condition is applied from the image data Cb1 to Cb4 of the color difference component Cb. Thereafter, the image processing unit 33 (generation unit 33c) calculates the color difference component image data Cb at the target position (third row, third column) with reference to the selected image data.
  • the image processing unit 33 sets, for example, (v1Cb2 + x1Cb4) / 2 as image data of the color difference component Cb at the target position (third row, third column).
  • v1 and x1 are weighting coefficients provided according to the distance between the reference position and the target position and the image structure.
  • the position indicated by the thick frame (third row, fourth column) is the target position of the color difference component Cb.
  • the color difference component image data Cb2 and Cb4 to Cb6 in FIG. 10C are reference positions for image processing of the pixel at the target position (third row, fourth column).
  • the second imaging condition is applied to the position of interest (third row, fourth column). Further, the second imaging condition is applied to the image data Cb2 and Cb4 to Cb6 at all reference positions.
  • Therefore, the image processing unit 33 (generation unit 33c) refers to the image data Cb2 and Cb4 to Cb6 of the four color difference components located in the vicinity of the target position (third row, fourth column) and calculates the image data of the color difference component Cb at the target position (third row, fourth column).
  • the image processing unit 33 (the generation unit 33c) obtains the image data of the color difference component Cb at each pixel position, and then adds the image data of the G color component shown in FIG. 8C in correspondence with each pixel position. Thus, image data of the B color component can be obtained at each pixel position.
  • the image processing unit 33 (generation unit 33c) performs, for example, a known linear filter calculation using a kernel of a predetermined size centered on the pixel of interest P (processing target pixel) in one frame image.
  • When the kernel size of the sharpening filter, which is an example of a linear filter, is N × N pixels, the position of the target pixel P is the target position and the positions of the (N² − 1) reference pixels surrounding the target pixel P are the reference positions. Note that the kernel size may be N × M pixels.
  • the image processing unit 33 (generation unit 33c) performs a filter process for replacing the image data in the target pixel P with a linear filter calculation result on each horizontal line, for example, from the upper horizontal line to the lower horizontal line of the frame image. This is done while shifting the pixel of interest from left to right.
  • When the reference pixels include a pixel to which a second imaging condition different from the first imaging condition applied to the target pixel P is applied, the image processing unit 33 (selection unit 33b) selects, from the image data at the reference pixels, the image data to which the first imaging condition is applied. Thereafter, the image processing unit 33 (generation unit 33c) performs the linear filter processing described above with reference to the selected image data.
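A minimal Python sketch of this contour enhancement step with data selection follows. Renormalising the kernel over the selected taps is an assumption (the text only states that image data of the second imaging condition is excluded); the function and argument names are illustrative.

```python
import numpy as np

def sharpen_pixel(image, conditions, y, x, kernel):
    """Linear (sharpening) filter over an N x N kernel centred on the target
    pixel P, using only reference pixels whose imaging condition matches P."""
    n = kernel.shape[0]
    r = n // 2
    target_cond = conditions[y, x]
    acc, wsum = 0.0, 0.0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]):
                continue
            if conditions[ny, nx] != target_cond:
                continue                       # data selection pre-processing
            w = kernel[dy + r, dx + r]
            acc += w * image[ny, nx]
            wsum += w
    # renormalise over the selected taps (assumed handling of excluded pixels)
    return acc / wsum if wsum != 0 else image[y, x]
```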
  • the image processing unit 33 (generation unit 33c) performs, for example, a known linear filter calculation using a kernel of a predetermined size centered on the pixel of interest P (processing target pixel) in one frame image.
  • When the kernel size of the smoothing filter, which is an example of a linear filter, is N × N pixels, the position of the target pixel P is the target position and the positions of the (N² − 1) reference pixels surrounding the target pixel P are the reference positions. Note that the kernel size may be N × M pixels.
  • the image processing unit 33 (generation unit 33c) performs a filter process for replacing the image data in the target pixel P with a linear filter calculation result on each horizontal line, for example, from the upper horizontal line to the lower horizontal line of the frame image. This is done while shifting the pixel of interest from left to right.
  • When the reference pixels include a pixel to which a second imaging condition different from the first imaging condition applied to the target pixel P is applied, the image processing unit 33 (selection unit 33b) selects, from the image data at the reference pixels, the image data to which the first imaging condition is applied. Thereafter, the image processing unit 33 (generation unit 33c) performs the linear filter processing described above with reference to the selected image data. As described above, whenever the reference pixels include a pixel to which a second imaging condition different from the first imaging condition applied to the target pixel P is applied, the image processing unit 33 (selection unit 33b) selects the image data to which the first imaging condition is applied from the image data at the reference pixels.
  • the image processing unit 33 (generation unit 33c) refers to the selected image data and performs image processing such as pixel defect correction processing, color interpolation processing, contour enhancement processing, and noise reduction processing.
  • The image processing unit 33 (selection unit 33b) performs the same processing on the image data from the other pixels output from the image sensor (including image data to which the second imaging condition is applied), and generates an image.
  • the generated image is displayed on a display unit such as a display device.
  • the control unit 34 (AF calculation unit 34d) performs focus detection processing using image data corresponding to a predetermined position (focus detection position) on the imaging screen.
  • When different imaging conditions are set between the divided regions and the focus detection position of the AF operation is located at the boundary of the divided regions, the control unit 34 (AF calculation unit 34d) performs data selection processing on the focus detection image data located near the boundary of the regions, as preprocessing of the focus detection processing.
  • the data selection process is performed in order to suppress a decrease in the accuracy of the focus detection process due to the difference in imaging conditions between areas of the imaging screen divided by the setting unit 34b.
  • When the focus detection image data at the focus detection position for detecting the image shift amount (phase difference) in the image is located near the boundary of the divided regions, image data to which different imaging conditions are applied may be mixed in the focus detection image data.
  • Based on the idea that it is preferable to detect the image shift amount (phase difference) using image data to which the same imaging condition is applied, rather than detecting the image shift amount (phase difference) using image data to which different imaging conditions are applied as they are, the data selection processing is performed as follows.
  • The control unit 34 (AF calculation unit 34d) detects the image shift amounts (phase differences) of a plurality of subject images formed by light beams that have passed through different pupil regions of the imaging optical system 31, and thereby calculates the defocus amount of the imaging optical system 31.
  • the control unit 34 (AF calculation unit 34d) moves the focus lens of the imaging optical system 31 to a position where the defocus amount is zero (allowable value or less), that is, a focus position.
  • FIG. 11 is a diagram illustrating the position of the focus detection pixel on the imaging surface of the imaging device 32a.
  • focus detection pixels are discretely arranged along the X-axis direction (horizontal direction) of the imaging chip 111.
  • 15 focus detection pixel lines 160 are provided at a predetermined interval.
  • the focus detection pixels constituting the focus detection pixel line 160 output a photoelectric conversion signal for focus detection.
  • normal imaging pixels are provided at pixel positions other than the focus detection pixel line 160.
  • the imaging pixel outputs a live view image or a photoelectric conversion signal for recording.
  • FIG. 12 is an enlarged view of a part of the focus detection pixel line 160 corresponding to the focus detection position 80A shown in FIG.
  • a red pixel R, a green pixel G (Gb, Gr), and a blue pixel B, a focus detection pixel S1, and a focus detection pixel S2 are illustrated.
  • the red pixel R, the green pixel G (Gb, Gr), and the blue pixel B are arranged according to the rules of the Bayer arrangement described above.
  • the square area illustrated for the red pixel R, the green pixel G (Gb, Gr), and the blue pixel B indicates the light receiving area of the imaging pixel.
  • Each imaging pixel receives a light beam passing through the exit pupil of the imaging optical system 31 (FIG. 1). That is, the red pixel R, the green pixel G (Gb, Gr), and the blue pixel B each have a square-shaped mask opening, and light passing through these mask openings reaches the light-receiving portion of the imaging pixel. .
  • the shapes of the light receiving regions (mask openings) of the red pixel R, the green pixel G (Gb, Gr), and the blue pixel B are not limited to a quadrangle, and may be, for example, a circle.
  • the semicircular region exemplified for the focus detection pixel S1 and the focus detection pixel S2 indicates a light receiving region of the focus detection pixel. That is, the focus detection pixel S1 has a semicircular mask opening on the left side of the pixel position in FIG. 12, and the light passing through the mask opening reaches the light receiving portion of the focus detection pixel S1. On the other hand, the focus detection pixel S2 has a semicircular mask opening on the right side of the pixel position in FIG. 12, and the light passing through the mask opening reaches the light receiving portion of the focus detection pixel S2. As described above, the focus detection pixel S1 and the focus detection pixel S2 respectively receive a pair of light beams passing through different areas of the exit pupil of the imaging optical system 31 (FIG. 1).
  • Note that the position of the focus detection pixel line 160 in the imaging chip 111 is not limited to the position illustrated in FIG. 11, and the number of focus detection pixel lines 160 is not limited to the example of FIG. 11. Further, the shape of the mask openings of the focus detection pixel S1 and the focus detection pixel S2 is not limited to a semicircular shape; for example, it may be a rectangular shape obtained by horizontally dividing the quadrangular light receiving region (mask opening) of the imaging pixel R, the imaging pixel G, and the imaging pixel B.
  • the focus detection pixel line 160 in the imaging chip 111 may be a line in which focus detection pixels are arranged along the Y-axis direction (vertical direction) of the imaging chip 111.
  • An imaging element in which imaging pixels and focus detection pixels are two-dimensionally arranged as shown in FIG. 12 is known, and detailed illustration and description of these pixels are omitted.
  • the focus detection pixels S1 and S2 each receive one of a pair of focus detection light beams.
  • the focus detection pixels may receive both the pair of focus detection light beams.
  • Based on the focus detection photoelectric conversion signals (signal data) output from the focus detection pixel S1 and the focus detection pixel S2, the control unit 34 (AF calculation unit 34d) detects the image shift amount (phase difference) between the pair of images formed by the pair of light beams passing through different areas of the imaging optical system 31 (FIG. 1). Then, the defocus amount is calculated based on the image shift amount (phase difference).
  • Such defocus amount calculation by the pupil division phase difference method is well known in the field of cameras, and thus detailed description thereof is omitted.
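For illustration only, here is a minimal sketch of detecting the image shift amount between the pair of focus detection signal sequences by a sum-of-absolute-differences search. The conversion from the detected shift to a defocus amount (a sensor- and optics-dependent factor) is omitted, as the text notes that the method itself is well known; the function name and search range are assumptions.

```python
import numpy as np

def image_shift(signal_s1, signal_s2, max_shift=10):
    """Find the relative shift between the S1 and S2 focus-detection signal
    sequences (assumed equal length) that minimises the mean absolute difference."""
    s1 = np.asarray(signal_s1, dtype=float)
    s2 = np.asarray(signal_s2, dtype=float)
    best_shift, best_cost = 0, np.inf
    for shift in range(-max_shift, max_shift + 1):
        a = s1[max(0, shift):len(s1) + min(0, shift)]
        b = s2[max(0, -shift):len(s2) + min(0, -shift)]
        cost = np.abs(a - b).mean()
        if cost < best_cost:
            best_cost, best_shift = cost, shift
    return best_shift   # image shift amount (phase difference), in pixels
```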
  • FIG. 13 is an enlarged view of the focus detection position 80A.
  • White pixels indicate that the first imaging condition is set, and shaded pixels indicate that the second imaging condition is set.
  • the position surrounded by the frame 170 corresponds to the focus detection pixel line 160 (FIG. 11).
  • The control unit 34 (AF calculation unit 34d) normally performs the focus detection processing using the focus detection signal data of the focus detection pixels indicated by the frame 170 as they are, without performing data selection processing. However, when focus detection signal data to which the first imaging condition is applied and focus detection signal data to which the second imaging condition is applied are mixed in the focus detection signal data surrounded by the frame 170, the control unit 34 (AF calculation unit 34d) selects, from the focus detection signal data surrounded by the frame 170, the focus detection signal data of the first imaging condition used for the focus detection processing, as in the following (Example 1) to (Example 3). Then, the control unit 34 (AF calculation unit 34d) performs the focus detection processing using the focus detection signal data after the data selection processing.
  • In FIG. 13, the data output from the pixels indicated by the white background is focus detection signal data under the first imaging condition, and the data output from the pixels indicated by the diagonal lines is focus detection signal data under the second imaging condition.
  • the focus detection signal data output from the hatched pixels is not used for the focus detection process.
  • (Example 1) When the first imaging condition and the second imaging condition differ only in ISO sensitivity, with the ISO sensitivity of the first imaging condition being 100 and that of the second imaging condition being a different value, the control unit 34 (AF calculation unit 34d) selects, from the focus detection signal data surrounded by the frame 170, the focus detection signal data of the first imaging condition used for the focus detection processing. That is, of the focus detection signal data surrounded by the frame 170, the focus detection signal data of the second imaging condition, which differs from the first imaging condition, is not used for the focus detection processing.
  • (Example 2) When the first imaging condition and the second imaging condition differ only in shutter speed, with the shutter speed of the first imaging condition being 1/1000 second and that of the second imaging condition being a different value, the control unit 34 (AF calculation unit 34d) selects, from the focus detection signal data surrounded by the frame 170, the focus detection signal data of the first imaging condition used for the focus detection processing. That is, of the focus detection signal data surrounded by the frame 170, the focus detection signal data of the second imaging condition, which differs from the first imaging condition, is not used for the focus detection processing.
  • (Example 3) When the first imaging condition and the second imaging condition differ only in frame rate (the charge accumulation time is the same), with the frame rate of the first imaging condition being 30 fps and the frame rate of the second imaging condition being 60 fps, the control unit 34 (AF calculation unit 34d) selects, from the focus detection signal data surrounded by the frame 170, the focus detection signal data of the first imaging condition used for the focus detection processing. That is, of the focus detection signal data surrounded by the frame 170, the focus detection signal data of the second imaging condition, which is acquired at a timing different from that of the image data of the first imaging condition, is not used for the focus detection processing.
  • control unit 34 does not perform the data selection process when the imaging conditions applied in the focus detection signal data surrounded by the frame 170 are the same. That is, the control unit 34 (AF calculation unit 34d) performs focus detection processing using the focus detection signal data of the focus detection pixels indicated by the frame 170 as they are.
  • the imaging conditions are regarded as the same.
  • In the above description, the example of selecting the focus detection signal data of the first imaging condition from the focus detection signal data surrounded by the frame 170 has been described; however, the focus detection signal data of the second imaging condition may be selected instead.
  • In the above description, the example has been described in which focus detection is performed by designating the area where the first imaging condition is set and the focus detection photoelectric conversion signal of the first imaging condition is selected.
  • An example in which the subject to be focused is located across an area where the first imaging condition is set and an area where the second imaging condition is set will be described.
  • the control unit 34 calculates a first defocus amount from the selected photoelectric conversion signal for focus detection. Further, the control unit 34 (AF calculation unit 34d) selects the focus detection photoelectric conversion signal of the second imaging condition used for the focus detection process from the focus detection photoelectric conversion signal surrounded by the frame 170. Then, the control unit 34 (AF calculation unit 34d) calculates a second defocus amount from the selected focus detection photoelectric conversion signal.
  • the control unit 34 performs a focus detection process using the first defocus amount and the second defocus amount. Specifically, for example, the control unit 34 (AF calculation unit 34d) calculates an average value of the first defocus amount and the second defocus amount, and calculates the moving distance of the lens. Further, the control unit 34 (AF calculation unit 34d) may select a value having a smaller lens moving distance from the first defocus amount and the second defocus amount. The control unit 34 (AF calculation unit 34d) may select a value indicating that the subject is closer to the near side from the first defocus amount and the second defocus amount.
  • Alternatively, the control unit 34 (AF calculation unit 34d) may select the region in which the subject occupies the larger area and select the focus detection photoelectric conversion signal of that region. For example, when the area of the face of the subject to be focused is 70% in the region where the first imaging condition is set and 30% in the region where the second imaging condition is set, the control unit 34 (AF calculation unit 34d) selects the focus detection photoelectric conversion signal of the first imaging condition.
  • the ratio (percentage) to the area described above is an example, and the present invention is not limited to this.
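The alternatives described above for combining the first and second defocus amounts can be sketched as follows. The sign convention for "nearer side" is an assumption, and the function is illustrative rather than the patent's implementation.

```python
def combine_defocus(d1, d2, mode="average"):
    """Combine the first and second defocus amounts for a subject that straddles
    the two regions: average them, take the one with the smaller required lens
    travel, or take the one indicating the nearer side."""
    if mode == "average":
        return (d1 + d2) / 2
    if mode == "smaller_travel":
        return d1 if abs(d1) < abs(d2) else d2
    if mode == "nearer":
        return min(d1, d2)   # assumes the smaller value indicates the nearer side
    raise ValueError(mode)
```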
  • In the above description, the focus detection processing using the pupil division phase difference method has been exemplified, but the contrast detection method, in which the focus lens of the imaging optical system 31 is moved to the in-focus position based on the contrast of the subject image, can be handled in the same manner.
  • the control unit 34 moves the focus lens of the imaging optical system 31 and outputs the image output from the imaging pixel of the imaging element 32a corresponding to the focus detection position at each position of the focus lens. A known focus evaluation value calculation is performed based on the data. Then, the position of the focus lens that maximizes the focus evaluation value is obtained as the focus position.
  • The control unit 34 normally performs the focus evaluation value calculation using the image data output from the imaging pixels corresponding to the focus detection position, without performing data selection processing. However, when the image data corresponding to the focus detection position includes image data to which the first imaging condition is applied and image data to which the second imaging condition is applied, the control unit 34 selects the image data of the first imaging condition or the image data of the second imaging condition from the image data corresponding to the focus detection position. Then, the control unit 34 performs the focus evaluation value calculation using the image data after the data selection processing. As described above, when the subject to be focused is located across the area in which the first imaging condition is set and the area in which the second imaging condition is set, the control unit 34 (AF calculation unit 34d) may select the region in which the subject occupies the larger area and select the focus detection signal data of that region.
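A minimal sketch of a contrast-based focus evaluation value with data selection follows. The specific contrast measure (sum of absolute horizontal differences) is an assumption, since the text refers only to a known focus evaluation value calculation.

```python
import numpy as np

def focus_evaluation_value(region, conditions=None, use_condition=None):
    """Contrast-based focus evaluation value.  If the region mixes imaging
    conditions, only pixels captured under `use_condition` contribute."""
    img = np.asarray(region, dtype=float)
    grad = np.abs(np.diff(img, axis=1))          # horizontal contrast
    if conditions is not None and use_condition is not None:
        mask = (np.asarray(conditions)[:, 1:] == use_condition)
        grad = grad * mask                        # data selection
    return float(grad.sum())

# the focus lens position that maximises this value over a focus sweep is
# taken as the in-focus position
```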
  • FIG. 14A is a diagram illustrating a template image 180 representing an object to be detected.
  • FIG. 14B is a diagram illustrating the live view image 60a and the search range 190.
  • The control unit 34 (object detection unit 34a) detects an object (for example, the bag 63a, which is one of the subject elements in FIG. 5) from the live view image.
  • The control unit 34 (object detection unit 34a) may set the range in which the object is detected to the entire range of the live view image 60a; however, in order to reduce the detection processing, a part of the live view image 60a may be set as the search range 190.
  • When different imaging conditions are set between the divided regions and the search range 190 includes the boundary of the divided regions, the control unit 34 (object detection unit 34a) performs data selection processing on image data located near the boundary of the regions, as preprocessing of the subject detection processing.
  • the data selection process is performed in order to suppress a decrease in accuracy of the subject element detection process due to the difference in imaging conditions between the areas of the imaging screen divided by the setting unit 34b.
  • image data to which different imaging conditions are applied may be mixed in the image data of the search range 190.
  • The control unit 34 (object detection unit 34a) sets the search range 190 in the vicinity of the region including the person 61a. Note that the region including the person 61a may itself be set as the search range.
  • When the search range 190 is not divided by regions with different imaging conditions, the control unit 34 (object detection unit 34a) performs the subject detection processing using the image data constituting the search range 190 as it is, without performing data selection processing. However, if the image data in the search range 190 includes image data to which the first imaging condition is applied and image data to which the second imaging condition is applied, the control unit 34 (object detection unit 34a) selects, from the image data in the search range 190, the image data of the first imaging condition used for the subject detection processing, as in (Example 1) to (Example 3) for the focus detection processing described above.
  • the control unit 34 performs subject detection processing on the area where the first imaging condition is set, using the image data after the data selection processing.
  • the subject detection process is, for example, a process (so-called template matching) in which detection is performed by obtaining the similarity between the template image 180 and the image data of the selected first imaging condition.
  • the control unit 34 selects image data of the second imaging condition used for subject detection processing from the image data in the search range 190. Then, the control unit 34 (the object detection unit 34a) performs subject detection processing similar to the above on the region where the second imaging condition is set, using the image data after the data selection processing.
  • In this way, the control unit 34 (object detection unit 34a) performs subject detection within the search range 190: the subject within the search range 190 can be detected by joining, at their boundary, the subject region detected using the image data of the first imaging condition and the subject region detected using the image data of the second imaging condition.
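A minimal sketch of condition-aware template matching for one candidate position in the search range 190 follows. The similarity measure (negated mean squared difference) and the function name are assumptions, since the text refers only to so-called template matching.

```python
import numpy as np

def match_score(window, template, window_conditions, use_condition):
    """Similarity between the template image and a window of the search range,
    computed only over window pixels captured under `use_condition`."""
    w = np.asarray(window, dtype=float)
    t = np.asarray(template, dtype=float)
    mask = np.asarray(window_conditions) == use_condition   # data selection
    if not mask.any():
        return -np.inf
    diff = (w - t) ** 2
    return -float(diff[mask].mean())   # higher score = better match

# the search repeats this scoring for each candidate position in the search
# range, once per imaging condition, and the detected regions are then joined
```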
  • When the area where the first imaging condition is set occupies a large part of the search range 190, the control unit 34 (object detection unit 34a) may detect the subject using only the image data of the first imaging condition.
  • the fact that the area where the first imaging condition is set occupies a large area means, for example, that the area of the area where the first imaging condition is set is 70% or more. Further, the area of the region where the first imaging condition is set is not limited to 70% or more, and may be 80% or more, or 90% or more. Naturally, the ratio of the area of the region in which the first imaging condition is set is not limited to these and can be changed as appropriate.
  • the data selection process for the image data in the search range 190 described above may be applied to a search range used for detecting a specific subject such as a human face or an area used for determination of an imaging scene.
  • the control unit 34 when detecting the face of a person from the search range 190, the control unit 34 (the object detection unit 34a) selects image data of the first imaging condition used for subject detection processing from the image data of the search range 190. . Then, the control unit 34 (object detection unit 34a) performs a known face detection process on the region where the first imaging condition is set. Further, the control unit 34 (object detection unit 34a) selects image data of the second imaging condition used for subject detection processing from the image data in the search range 190.
  • Then, the control unit 34 (object detection unit 34a) performs a known face detection process on the region where the second imaging condition is set. The control unit 34 (object detection unit 34a) then combines, at their boundary, the face area detected in the area where the first imaging condition is set and the face area detected in the area where the second imaging condition is set, thereby performing face detection from the image data in the search range 190.
  • The data selection processing for the image data in the search range 190 described above is not limited to the search range used in the pattern matching method using a template image; it may be applied similarly to the search range used when detecting a feature amount based on the color or edges of the image.
  • Similarly, in the tracking processing, when image data to which the first imaging condition is applied and image data to which the second imaging condition is applied coexist in the search range set in a frame image acquired later, the control unit 34 selects the image data of the first imaging condition used for the tracking processing from the image data of the search range.
  • Then, the control unit 34 performs the tracking processing on the region where the first imaging condition is set, using the image data after the data selection processing.
  • Further, the image data of the second imaging condition used for the tracking processing is selected from the image data of the search range in the same manner as described above, and the control unit 34 may perform the tracking processing on the region where the second imaging condition is set, using the image data after the data selection processing.
  • The same applies when the control unit 34 detects a motion vector: the image data of the first imaging condition used for the motion vector detection processing is selected from the image data of the region.
  • Then, the control unit 34 detects the motion vector for the region where the first imaging condition is set, using the image data after the data selection processing.
  • Further, the image data of the second imaging condition used for the motion vector detection processing is selected from the image data of the search range, and the control unit 34 may perform the motion vector detection processing on the region where the second imaging condition is set, using the image data after the data selection processing.
  • the control unit 34 may obtain the motion vector of the entire image from the motion vector detected from the area set with the first imaging condition and the motion vector detected from the area set with the second imaging condition. Alternatively, it may be a motion vector for each region.
  • When the control unit 34 (setting unit 34b) divides the area of the imaging screen and sets different imaging conditions between the divided regions, and then newly performs photometry to determine the exposure conditions, the control unit 34 (setting unit 34b) performs data selection processing on image data located near the boundary of the regions, as preprocessing for setting the exposure conditions.
  • the data selection process is performed in order to suppress a decrease in accuracy of the process for determining the exposure condition due to the imaging condition being different between the areas of the imaging screen divided by the setting unit 34b.
  • image data to which different imaging conditions are applied may be mixed in the photometric range image data.
  • When the photometric range is not divided by a plurality of regions having different imaging conditions, the control unit 34 (setting unit 34b) performs the exposure calculation processing using the image data constituting the photometric range as it is, without performing data selection processing. However, if image data to which the first imaging condition is applied and image data to which the second imaging condition is applied are mixed in the image data in the photometric range, the control unit 34 (setting unit 34b) selects, from the image data in the photometric range, the image data of the first imaging condition used for the exposure calculation processing, as in (Example 1) to (Example 3) for the focus detection processing and the subject detection processing described above.
  • Then, the control unit 34 (setting unit 34b) performs the exposure calculation processing on the area where the first imaging condition is set, using the image data after the data selection processing. The control unit 34 (setting unit 34b) also selects the image data of the second imaging condition used for the exposure calculation processing from the image data in the photometric range, and performs the exposure calculation processing on the area where the second imaging condition is set, using the image data after the data selection processing. As described above, when there are a plurality of regions having different imaging conditions in the photometric range, the control unit 34 (setting unit 34b) performs the data selection processing for the photometry of each region and performs the exposure calculation processing using the image data after the data selection processing.
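A minimal sketch of the per-condition photometry implied above: image data in the photometric range is grouped by imaging condition and a simple brightness measure is computed for each group, from which the exposure calculation for the corresponding region can proceed. The brightness measure (mean level) and the function name are assumptions.

```python
import numpy as np

def brightness_per_condition(photometric_data, conditions):
    """Group the image data in the photometric range by imaging condition
    (data selection) and return a mean brightness per condition."""
    data = np.asarray(photometric_data, dtype=float)
    cond = np.asarray(conditions)
    return {c: float(data[cond == c].mean()) for c in np.unique(cond)}
```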
  • Further, when the photometric range is located across the area where the first imaging condition is set and the area where the second imaging condition is set, the control unit 34 (setting unit 34b) may select the region having the larger area, as in the case of the focus detection and subject detection described above.
  • The above applies not only to the photometric range used when performing the exposure calculation processing described above, but also to the photometric (colorimetric) range used when determining the white balance adjustment value, to the photometric range used when determining whether the photographing auxiliary light needs to be emitted by the light source that emits the photographing auxiliary light, and further to the photometric range used when determining the light emission amount of the photographing auxiliary light by that light source.
  • Further, when the readout resolution of the photoelectric conversion signal is made different between the areas obtained by dividing the imaging screen, the same can be applied to the area used for the determination of the imaging scene performed when determining the readout resolution for each area.
  • FIG. 15 is a flowchart for explaining the flow of processing for setting an imaging condition for each area and imaging.
  • The control unit 34 activates a program that executes the processing shown in FIG. 15.
  • step S10 the control unit 34 causes the display unit 35 to start live view display, and proceeds to step S20.
  • the control unit 34 causes the display unit 35 to sequentially display an image obtained by performing predetermined image processing on the image data sequentially output from the imaging unit 32 as a live view image.
  • the same imaging condition is set for the entire imaging chip 111, that is, the entire screen.
  • When a setting is made to perform an AF operation during live view display, the control unit 34 (AF calculation unit 34d) controls the AF operation of focusing on the subject element corresponding to the predetermined focus detection position by means of the focus detection processing.
  • the AF calculation unit 34d performs the focus detection process after performing the data selection process as necessary. If the setting for performing the AF operation is not performed during live view display, the control unit 34 (AF calculation unit 34d) performs the AF operation when the AF operation is instructed later.
  • step S20 the control unit 34 (object detection unit 34a) detects the subject element from the live view image, and proceeds to step S30.
  • the object detection unit 34a performs the subject detection process after performing the data selection process as necessary.
  • step S30 the control unit 34 (setting unit 34b) divides the screen of the live view image into regions including subject elements, and proceeds to step S40.
  • In step S40, the control unit 34 displays the regions on the display unit 35. As illustrated in FIG. 6, the control unit 34 highlights the region that is the target for setting (changing) the imaging condition among the divided regions. In addition, the control unit 34 displays the imaging condition setting screen 70 on the display unit 35 and proceeds to step S50. When the display position of another main subject on the display screen is tapped with the user's finger, the control unit 34 changes the region for which the imaging condition is to be set (changed) to the region including that main subject, and highlights it.
  • step S50 the control unit 34 determines whether an AF operation is necessary.
  • For example, when the focus adjustment state changes due to movement of the subject, when the focus detection position is changed by a user operation, or when execution of an AF operation is instructed by a user operation, the control unit 34 makes an affirmative determination in step S50 and proceeds to step S70.
  • Otherwise, the control unit 34 makes a negative determination in step S50 and proceeds to step S60.
  • step S70 the control unit 34 performs the AF operation and returns to step S40.
  • the AF calculation unit 34d performs a focus detection process that is an AF operation after performing the data selection process as necessary.
  • the control unit 34 that has returned to step S40 repeats the same processing as described above based on the live view image acquired after the AF operation.
  • step S60 the control unit 34 (setting unit 34b) sets an imaging condition for the highlighted area in accordance with a user operation, and proceeds to step S80. Note that the display transition of the display unit 35 and the setting of the imaging conditions according to the user operation in step S60 are as described above.
  • the control unit 34 (setting unit 34b) performs exposure calculation processing after performing the data selection processing as necessary.
  • step S80 the control unit 34 determines whether there is an imaging instruction.
  • When a release button (not shown) constituting the operation member 36 or a display icon for instructing imaging is operated, the control unit 34 makes an affirmative determination in step S80 and proceeds to step S90.
  • the control unit 34 makes a negative determination in step S80 and returns to step S60.
  • step S90 the control unit 34 performs predetermined imaging processing. That is, the imaging control unit 34c controls the imaging element 32a so as to perform imaging under the imaging conditions set for each region, and the process proceeds to step S100.
  • step S100 the control unit 34 (imaging control unit 34c) sends an instruction to the image processing unit 33, performs predetermined image processing on the image data obtained by the imaging, and proceeds to step S110.
  • Image processing includes the pixel defect correction processing, color interpolation processing, contour enhancement processing, and noise reduction processing.
  • At this time, the image processing unit 33 (selection unit 33b) performs the image processing after performing data selection processing on image data located near the boundary of the regions, as necessary.
  • step S110 the control unit 34 sends an instruction to the recording unit 37, records the image data after the image processing on a recording medium (not shown), and proceeds to step S120.
  • step S120 the control unit 34 determines whether an end operation has been performed. When the end operation is performed, the control unit 34 makes a positive determination in step S120 and ends the process illustrated in FIG. When the end operation is not performed, the control unit 34 makes a negative determination in step S120 and returns to step S20. When returning to step S20, the control unit 34 repeats the above-described processing.
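The flow of FIG. 15 can be summarised by the following simplified sketch. The `camera` object and its method names are illustrative placeholders, not part of the patent, and the branch back to step S60 on a negative determination in step S80 is folded into the outer loop.

```python
def capture_loop(camera):
    """Simplified sketch of the flow of FIG. 15 (S10-S120)."""
    camera.start_live_view()                               # S10
    while True:
        elements = camera.detect_subject_elements()        # S20 (with data selection)
        regions = camera.divide_into_regions(elements)     # S30
        camera.display_regions(regions)                    # S40
        while camera.af_needed():                          # S50
            camera.run_af()                                # S70 (with data selection)
            camera.display_regions(regions)                # return to S40
        camera.set_imaging_conditions_by_user()            # S60
        if camera.imaging_instructed():                    # S80
            image = camera.capture_per_region()            # S90
            processed = camera.apply_image_processing(image)  # S100
            camera.record(processed)                       # S110
        if camera.end_requested():                         # S120: end, else back to S20
            break
```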
  • In the above description, the multilayer image sensor 100 is illustrated as the image sensor 32a; however, as long as the imaging conditions can be set for each of a plurality of blocks in the image sensor (imaging chip 111), the image sensor 32a does not necessarily have to be configured as a multilayer image sensor.
  • The camera 1 including the control device includes the selection unit 33b, which selects either the first image data generated by imaging the subject image incident on the first area of the imaging unit 32 under the first imaging condition or the second image data generated by imaging the subject image incident on the second area of the imaging unit 32 under a second imaging condition different from the first imaging condition, and the control unit 34 (setting unit 34b), which sets the shooting conditions based on the selected image data.
  • the setting unit 34b of the camera 1 sets shooting conditions based on a partial area of the imaging unit 32, and the first area and the second area are included in the partial area. Thereby, it is possible to appropriately perform the setting process in regions where the imaging conditions are different.
  • Since the selection unit 33b of the camera 1 selects whichever of the first image data and the second image data corresponds to the larger part of the partial area, the setting processing can be appropriately performed for regions having different imaging conditions.
  • the selection unit 33b of the camera 1 selects all image data when all the image data is the first image data, and selects all images when all the image data is the second image data. When data is selected and the image data includes the first image data and the second image data, the first image data or the second image data is selected. Thereby, it is possible to appropriately perform the setting process in regions where the imaging conditions are different.
  • the setting unit 34b of the camera 1 sets the exposure condition as the shooting condition, it is possible to appropriately set the exposure condition in regions where the shooting conditions are different.
  • the camera 1 includes a control unit 34 that controls the light source that emits the photographing auxiliary light, and the setting unit 34b sets the light emission presence / absence or the light emission amount of the light source controlled by the control unit 34 as the photographing condition. Thereby, it is possible to appropriately perform the setting process in regions where the imaging conditions are different.
  • In the image data selection processing performed for image processing in the first embodiment, when the imaging condition applied to the target pixel P (referred to as the first imaging condition) differs from the imaging condition applied to the reference pixels around the target pixel P, the image processing unit 33 (selection unit 33b) selects, from the image data of the pixels located inside the predetermined range 90, the image data to which the first imaging condition common to the target pixel is applied, and the image processing unit 33 (generation unit 33c) refers to the selected image data.
  • In the second embodiment, the image processing unit 33 (selection unit 33b) additionally selects, as image data to which the first imaging condition common to the target pixel is applied, the image data of pixels located outside the predetermined range 90, thereby increasing the number of data referred to by the image processing unit 33 (generation unit 33c). That is, the image processing unit 33 (selection unit 33b) changes the positions to be selected and selects the image data to which the first imaging condition is applied.
  • FIG. 7D is an enlarged view of the target pixel P and the reference pixel in the second embodiment.
  • In FIG. 7D, the data output from the pixels indicated by the white background in the predetermined range 90 centered on the target pixel P is the image data of the first imaging condition, and the data output from the pixels indicated by the oblique lines is the image data of the second imaging condition.
  • The image processing unit 33 (selection unit 33b) does not select the image data of the pixels (hatched) to which the second imaging condition is applied; this is the same as in the first embodiment.
  • the image processing unit 33 (selection unit 33b) is located on the inner side of the predetermined range 90 and white background to which the first imaging condition is applied, as illustrated in FIG. 7D. Along with the pixel image data, image data of a white background pixel that is located outside the predetermined range 90 and to which the first imaging condition is applied is further selected.
  • the image processing unit 33 (generation unit 33c) refers to the image data selected in this way and performs image processing.
  • The distance from the position of the target pixel P to the position of the image data of the second imaging condition inside the predetermined range 90 is shorter than the distance from the position of the target pixel P to the position of the image data of the first imaging condition located outside the predetermined range 90. In other words, the image processing unit 33 (selection unit 33b) does not select the image data of the second imaging condition whose distance from the position of the target pixel P is short, and can instead select the image data of the first imaging condition whose distance from the position of the target pixel P is long.
  • the image processing unit 33 preferentially selects image data of a pixel at a position closer to the predetermined range 90 than image data of a pixel at a position away from the predetermined range 90.
  • the reason for this is based on the idea that a pixel closer to the predetermined range 90 is more likely to have image information common to the target pixel P than a pixel farther from the predetermined range 90.
  • For example, the image processing unit 33 (selection unit 33b) selects the image data of a white background pixel (that is, a pixel to which the first imaging condition is applied) whose distance from the pixel indicated by the diagonal lines is L or less. The reason for this is based on the idea that image data too far from the predetermined range 90 is unlikely to have image information in common with the target pixel P and is preferably not included.
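A minimal sketch of this second-embodiment selection: same-condition pixels inside the predetermined range 90 are selected, and same-condition pixels outside the range are additionally selected up to a distance limit, nearer pixels first. Measuring the limit from the target pixel P (rather than from each different-condition pixel, as the distance L in the text) and the use of Chebyshev distance are simplifying assumptions.

```python
def select_reference_data(image, conditions, y, x, inner_radius, max_dist):
    """Return reference image data sharing the imaging condition of (y, x),
    preferring pixels inside the predetermined range, then nearer outside pixels."""
    target_cond = conditions[y, x]
    h, w = image.shape
    selected = []
    for ny in range(max(0, y - max_dist), min(h, y + max_dist + 1)):
        for nx in range(max(0, x - max_dist), min(w, x + max_dist + 1)):
            if (ny, nx) == (y, x) or conditions[ny, nx] != target_cond:
                continue                                  # data selection
            dist = max(abs(ny - y), abs(nx - x))          # Chebyshev distance
            inside = dist <= inner_radius                 # inside predetermined range 90
            selected.append((dist, inside, image[ny, nx]))
    # prefer pixels inside the predetermined range, then nearer outside pixels
    selected.sort(key=lambda s: (not s[1], s[0]))
    return [value for _, _, value in selected]
```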
  • Pixel defect correction processing When the same imaging condition is applied to the pixel of interest P and all of the pixels included in the predetermined range 90 centered on the pixel of interest P, the image processing unit 33 (selection unit 33b) All the image data of the pixels located inside the predetermined range 90 are selected. Thereafter, the image processing unit 33 (generation unit 33c) performs Max and Min filter processing with reference to the selected image data. Note that pixel defect correction processing may be performed by taking the average of selected image data.
  • the image processing unit 33 determines that the pixel to which the second imaging condition different from the first imaging condition applied to the target pixel P at the time of imaging is applied is the target pixel. If it is included in the predetermined range 90 centered on P, image data of a pixel that is located inside the predetermined range 90 and to which the first imaging condition is applied is selected. Furthermore, image data of a white background pixel that is located outside the predetermined range 90 and to which the first imaging condition is applied is selected. The image processing unit 33 (generation unit 33c) performs the Max and Min filter processing described above with reference to the image data selected in this way. Note that pixel defect correction processing may be performed by taking the average of selected image data.
  • the image processing unit 33 performs such processing for all pixel defects whose position information is recorded in the nonvolatile memory.
  • At the reference position corresponding to the G color component image data G4 indicated by the oblique lines, a second imaging condition different from the first imaging condition applied to the target position (second row, second column) is applied.
  • the image processing unit 33 selects image data G1 to G3 to which the first imaging condition is applied, from the G color component image data G1 to G4. Further, the image processing unit 33 (selection unit 33b) selects the G color component image data G6 that is located near the reference position corresponding to the data G4 and to which the first imaging condition is applied.
  • the image processing unit 33 changes the position where the image data is selected with respect to the first embodiment, and selects the image data to which the first imaging condition is applied.
  • If the second imaging condition is also applied at the position of the data G6, data to which the first imaging condition is applied may be selected from image data at a position near the data G6.
  • the image processing unit 33 calculates the G color component image data at the position of interest (second row and second column) with reference to the image data selected in this way.
  • the image processing unit 33 (generation unit 33c) sets, for example, (a2G1 + b2G2 + c2G3 + d2G6) / 4 as the G color component image data at the target position (second row, second column).
  • a2, b2, c2, and d2 are weighting coefficients provided in accordance with the distance between the reference position and the target position and the image structure.
  • The image processing unit 33 (generation unit 33c) generates G color component image data at each position of the B color component and the R color component in FIG. 8A, so that G color component image data is obtained at each pixel position.
  • R color interpolation The R color interpolation of the second embodiment will be described.
  • As in the first embodiment, the first imaging condition is applied to the regions to the left of and above the thick line, and the second imaging condition is applied to the regions to the right of and below the thick line.
  • At the reference position corresponding to the image data Cr2 of the color difference component Cr indicated by the diagonal lines, a second imaging condition different from the first imaging condition applied to the target position indicated by the thick frame (second row, second column) is applied. Therefore, the image processing unit 33 (selection unit 33b) selects the image data Cr1, Cr3, and Cr4, to which the first imaging condition is applied, from the color difference component image data Cr1 to Cr4. Further, the image processing unit 33 (selection unit 33b) selects the image data Cr15 (or Cr16) of the color difference component Cr, which is located in the vicinity of the reference position corresponding to the data Cr2 and to which the first imaging condition is applied.
  • the image processing unit 33 changes the position where the image data is selected with respect to the first embodiment, and selects the image data to which the first imaging condition is applied. If the second imaging condition is also applied at the position of the data Cr15 or Cr16, data to which the first imaging condition is applied may be selected from image data at positions near the data Cr15 or Cr16. .
  • the image processing unit 33 calculates the color difference component image data at the position of interest (second row and second column) with reference to the image data thus selected.
  • the image processing unit 33 (generating unit 33c) sets, for example, (e3Cr1 + f3Cr15 + g3Cr3 + h3Cr4) / 4 as image data of the color difference component Cr at the target position (second row and second column).
  • e3, f3, g3, and h3 are weighting coefficients provided according to the distance between the reference position and the target position and the image structure.
  • the image processing unit 33 selects the image data Cr2 and Cr6 to which the second imaging condition is applied, from the color difference component image data Cr2, Cr4 to Cr6.
• Further, the image processing unit 33 selects the image data Cr8 and Cr7 of the color difference component Cr that are located near the reference positions corresponding to the data Cr4 and Cr5 and to which the second imaging condition is applied. That is, the image processing unit 33 (selection unit 33b) changes the positions from which the image data is selected relative to the first embodiment, and selects image data to which the second imaging condition is applied. If the first imaging condition is also applied at the positions of the data Cr8 and Cr7, data to which the second imaging condition is applied may be selected from image data at positions near the data Cr8 and Cr7.
  • the image processing unit 33 calculates the image data of the color difference component Cr at the position of interest (second row and third column) with reference to the image data thus selected.
• The image processing unit 33 (generation unit 33c) uses, for example, (q3Cr2 + r3Cr8 + s3Cr7 + t3Cr6) / 4 as the image data of the color difference component Cr at the target position.
  • q3, r3, s3, and t3 are weighting coefficients provided according to the distance between the reference position and the target position and the image structure.
  • the image processing unit 33 (generation unit 33c) obtains the image data of the color difference component Cr at each pixel position, and then adds the image data of the G color component shown in FIG. 8C in correspondence with each pixel position. Thus, image data of the R color component is obtained at each pixel position.
• At the reference positions corresponding to the data Cb1 and Cb3, a first imaging condition different from the second imaging condition applied at the position of interest is applied. Therefore, the image processing unit 33 (selection unit 33b) selects the image data Cb2 and Cb4, to which the second imaging condition is applied, from the color difference component image data Cb1 to Cb4. Furthermore, the image processing unit 33 (selection unit 33b) selects the image data Cb16 and Cb17 of the color difference component Cb that are located near the reference positions corresponding to the data Cb1 and Cb3 and to which the second imaging condition is applied. That is, the image processing unit 33 (selection unit 33b) changes the positions from which the image data is selected relative to the first embodiment, and selects image data to which the second imaging condition is applied.
  • the image processing unit 33 calculates the color difference component image data Cb at the target position (third row, third column) with reference to the image data selected in this way.
  • the image processing unit 33 (generation unit 33c) sets, for example, (u3Cb16 + v3Cb2 + w3Cb4 + x3Cb17) / 4 as the image data of the color difference component Cb at the target position (third row, third column).
  • u3, v3, w3, and x3 are weighting coefficients provided according to the distance between the reference position and the target position and the image structure.
• The same second imaging condition as that at the target position (third row, fourth column) is applied at the reference positions corresponding to the image data Cb2 and Cb4 to Cb6 of the four color difference components located in the vicinity of the target position.
  • the image processing unit 33 (generation unit 33c) calculates the image data of the color difference component Cb at the target position with reference to the image data Cb2 and Cb4 to Cb6 of the four color difference components located in the vicinity of the target position.
  • the image processing unit 33 (the generation unit 33c) obtains the image data of the color difference component Cb at each pixel position, and then adds the image data of the G color component shown in FIG. 8C in correspondence with each pixel position. Thus, image data of the B color component is obtained at each pixel position.
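• As a minimal sketch of the final step above, under the common assumption that the color differences are defined as Cr = R − G and Cb = B − G, the R and B planes can be recovered by adding the interpolated G plane to the interpolated Cr and Cb planes at every pixel position. The array names are hypothetical.

```python
import numpy as np

def recover_rb(cr_plane, cb_plane, g_plane):
    """Recover R and B by adding the G plane to the color-difference planes."""
    r_plane = cr_plane + g_plane   # R = Cr + G at each pixel position
    b_plane = cb_plane + g_plane   # B = Cb + G at each pixel position
    return r_plane, b_plane

g  = np.full((4, 4), 100.0)
cr = np.full((4, 4), 20.0)
cb = np.full((4, 4), -15.0)
r, b = recover_rb(cr, cb, g)
print(r[0, 0], b[0, 0])   # 120.0 85.0
```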
• As described above, the selection unit 33b selects image data of pixels, other than the reference pixels, that are located in an area for which the same imaging condition as that at the position of the target pixel is set. Thereby, for example, the data referred to by the generation unit 33c for generating the third image data can be increased, and image processing can be performed appropriately.
  • FIGS. 16A to 16C are diagrams illustrating the arrangement of the first area and the second area on the imaging surface of the imaging element 32a.
  • the first region is composed of even columns
  • the second region is composed of odd columns. That is, the imaging surface is divided into even columns and odd columns.
  • the first area is composed of odd rows
  • the second area is composed of even rows. That is, the imaging surface is divided into odd rows and even rows.
  • the first area is composed of blocks of even rows in odd columns and blocks of odd rows in even columns.
• The second region is composed of blocks of even rows in even columns and blocks of odd rows in odd columns. That is, the imaging surface is divided into a checkered pattern.
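• The three arrangements above can be expressed as boolean masks over the block grid of the imaging surface, as in the minimal sketch below (True marks the first region). The 0-indexed parity convention used here is an assumption for illustration; only the division patterns (by column, by row, and checkered) come from the description above.

```python
import numpy as np

def region_masks(rows, cols):
    """Return first-region masks for column, row, and checkered divisions."""
    r = np.arange(rows)[:, None]
    c = np.arange(cols)[None, :]
    by_column    = (c % 2 == 1)          # division into alternating columns
    by_row       = (r % 2 == 0)          # division into alternating rows
    checkerboard = ((r + c) % 2 == 0)    # checkered division of blocks
    return by_column, by_row, checkerboard

cols_mask, rows_mask, checker_mask = region_masks(4, 4)
print(checker_mask.astype(int))
```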
• A first image based on the photoelectric conversion signals read from the first region and a second image based on the photoelectric conversion signals read from the second region are respectively generated from the image sensor 32a that has captured one frame.
  • the first image and the second image are captured at the same angle of view and include a common subject image.
• The control unit 34 uses the first image for display and the second image for detection. Specifically, the control unit 34 causes the display unit 35 to display the first image as a live view image. Further, the control unit 34 causes the object detection unit 34a to perform subject detection processing using the second image, causes the AF calculation unit 34d to perform focus detection processing using the second image, and causes the setting unit 34b to perform exposure calculation processing using the second image.
  • the imaging condition set in the first area for capturing the first image is referred to as the first imaging condition
  • the imaging condition set in the second area for capturing the second image is referred to as the second imaging condition.
  • the control unit 34 may make the first imaging condition different from the second imaging condition.
  • the control unit 34 sets the first imaging condition to a condition suitable for display by the display unit 35.
  • the first imaging condition is the same for the entire first area of the imaging screen.
  • the control unit 34 sets the second imaging condition to a condition suitable for the focus detection process, the subject detection process, and the exposure calculation process.
  • the second imaging condition is also made the same for the entire second area of the imaging screen.
  • the control unit 34 may change the second imaging condition set in the second area for each frame.
  • the second imaging condition of the first frame is a condition suitable for the focus detection process
  • the second imaging condition of the second frame is a condition suitable for the subject detection process
• The second imaging condition of the third frame is a condition suitable for the exposure calculation process.
  • the second imaging condition in each frame is the same in the entire second area of the imaging screen.
  • control unit 34 may change the first imaging condition set in the first region.
  • the control unit 34 (setting unit 34b) sets different first imaging conditions for each region including the subject element divided by the setting unit 34b.
  • the control unit 34 makes the second imaging condition the same for the entire second area of the imaging screen.
• The control unit 34 sets the second imaging condition to a condition suitable for the focus detection process, the subject detection process, and the exposure calculation process. However, if the conditions suitable for the focus detection process, the subject detection process, and the exposure calculation process differ, the imaging condition set in the second area may be changed for each frame.
• The control unit 34 may make the first imaging condition the same for the entire first area of the imaging screen while making the second imaging condition set for the second area differ within the imaging screen. For example, a different second imaging condition is set for each region including a subject element divided by the setting unit 34b. Even in this case, if the conditions suitable for the focus detection process, the subject detection process, and the exposure calculation process differ, the imaging condition set in the second region may be changed for each frame.
  • control unit 34 varies the first imaging condition set in the first area on the imaging screen, and varies the second imaging condition set on the second area on the imaging screen.
  • the setting unit 34b sets different first imaging conditions for each region including the subject element divided, and the setting unit 34b sets different second imaging conditions for each region including the subject element divided.
  • the area ratio between the first region and the second region may be different.
• Based on an operation by the user or a determination by the control unit 34, the control unit 34 sets the ratio of the first region higher than that of the second region, sets the first region and the second region equally, or sets the ratio of the first region lower than that of the second region, as illustrated in FIGS. 16(a) to 16(c).
• Thereby, the first image can be made higher in definition than the second image, the resolutions of the first image and the second image can be made equal, or the second image can be made higher in definition than the first image.
  • Modification 2 In the above embodiment, an example has been described in which the control unit 34 (setting unit 34b) detects a subject element based on a live view image and divides the screen of the live view image into regions including the subject element.
  • the control unit 34 may divide the region based on the output signal from the photometric sensor when the photometric sensor is provided separately from the image sensor 32a.
  • the control unit 34 divides the foreground and the background based on the output signal from the photometric sensor.
• The live view image acquired by the image sensor 32b is divided into a foreground area corresponding to the area determined as the foreground from the output signal of the photometric sensor and a background area corresponding to the area determined as the background from the output signal of the photometric sensor.
• The control unit 34 further arranges the first area and the second area, as illustrated in FIGS. 16A to 16C, at the position of the imaging surface of the imaging element 32a corresponding to the foreground area. On the other hand, the control unit 34 arranges only the first area at the position of the imaging surface of the imaging element 32a corresponding to the background area. The control unit 34 uses the first image for display and the second image for detection.
  • the region of the live view image acquired by the image pickup device 32b can be divided by using the output signal from the photometric sensor.
  • a first image for display and a second image for detection can be obtained for the foreground area, and only a first image for display can be obtained for the background area.
  • the image processing unit 33 performs contrast adjustment processing so as to alleviate the discontinuity of the image based on the difference between the first imaging condition and the second imaging condition. That is, the generation unit 33c relaxes the discontinuity of the image based on the difference between the first imaging condition and the second imaging condition by changing the gradation curve (gamma curve).
  • the generation unit 33c compresses the value of the image data of the second imaging condition in the image data at the reference position to 1/8 by laying down the gradation curve.
• Conversely, the generation unit 33c may expand the value of the image data of the first imaging condition, among the image data at the target position and the image data at the reference position, by a factor of 8 by steepening the gradation curve.
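• A minimal sketch of this gradation-curve adjustment is given below, with the curve reduced to a simple linear gain for clarity; the factor of 8 comes from the example above, while the function names and the linear mapping are assumptions for illustration.

```python
import numpy as np

def lay_down_curve(values, factor=8.0):
    """Compress second-imaging-condition data, e.g. to 1/8 of its value."""
    return np.asarray(values, dtype=float) / factor

def steepen_curve(values, factor=8.0):
    """Expand first-imaging-condition data, e.g. by a factor of 8."""
    return np.asarray(values, dtype=float) * factor

second_condition_ref = np.array([800.0, 640.0, 720.0])
print(lay_down_curve(second_condition_ref))   # [100.  80.  90.]
```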
• According to Modification 3, as in the above-described embodiment, image processing can be performed appropriately on image data generated in regions with different imaging conditions. For example, discontinuity and unnaturalness appearing in the processed image due to a difference in imaging conditions at the boundary between regions can be suppressed.
  • the image processing unit 33 prevents the outline of the subject element from being damaged in the above-described image processing (for example, noise reduction processing).
  • smoothing filter processing is employed when noise reduction is performed.
• In that case, the boundary of the subject element may be blurred as a side effect of the noise reduction.
• Therefore, the image processing unit 33 compensates for the blurring of the boundary of the subject element by performing a contrast adjustment process in addition to the noise reduction process.
  • the image processing unit 33 sets a curve that draws an S shape as a density conversion (gradation conversion) curve (so-called S-shaped conversion).
• The image processing unit 33 (generation unit 33c) performs contrast adjustment using this S-shaped conversion, stretching the gradation ranges of the bright data and the dark data to increase their respective numbers of gradations, while compressing the intermediate-gradation image data to reduce its number of gradations. As a result, the amount of image data of medium brightness decreases, and the amount of data classified as either bright or dark increases.
• As a result, blurring of the boundary of the subject element can be compensated for by sharpening the contrast of the image.
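• A minimal sketch of such an S-shaped density (gradation) conversion is shown below; the logistic curve and its gain are illustrative assumptions and not the conversion curve of the embodiment.

```python
import numpy as np

def s_curve(values, gain=8.0, max_value=255.0):
    """S-shaped tone conversion: stretch bright/dark tones, compress mid tones."""
    x = np.asarray(values, dtype=float) / max_value
    lo = 1.0 / (1.0 + np.exp(gain * 0.5))     # curve value at x = 0
    hi = 1.0 / (1.0 + np.exp(-gain * 0.5))    # curve value at x = 1
    y = 1.0 / (1.0 + np.exp(-gain * (x - 0.5)))
    return (y - lo) / (hi - lo) * max_value   # rescale to the full output range

data = np.array([10, 64, 128, 192, 245], dtype=float)
print(np.round(s_curve(data)))
```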
  • the image processing unit 33 changes the white balance adjustment gain so as to alleviate the discontinuity of the image based on the difference between the first imaging condition and the second imaging condition.
  • the imaging condition applied at the time of imaging at the target position (referred to as the first imaging condition) is different from the imaging condition applied at the time of imaging at the reference position around the target position (referred to as the second imaging condition).
• The image processing unit 33 (generation unit 33c) changes the white balance adjustment gain so that the white balance of the image data of the second imaging condition, among the image data at the reference position, is brought closer to the white balance of the image data acquired under the first imaging condition.
• Conversely, the image processing unit 33 (generation unit 33c) may change the white balance adjustment gain so that the white balance of the image data of the first imaging condition, among the image data at the reference position, and of the image data at the target position is brought closer to the white balance of the image data acquired under the second imaging condition.
• By aligning the white balance adjustment gain applied to image data generated in regions with different imaging conditions to the adjustment gain of one of those regions, the discontinuity of the image based on the difference between the first imaging condition and the second imaging condition can be reduced.
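• The sketch below illustrates one way to realize this: data captured under the second imaging condition is re-balanced with the gains of the first imaging condition. The gain values and the per-channel layout are assumptions for illustration.

```python
import numpy as np

def apply_wb(rgb, gains):
    """Apply per-channel white-balance gains to an (..., 3) RGB array."""
    return np.asarray(rgb, dtype=float) * np.asarray(gains, dtype=float)

first_condition_gains  = np.array([1.9, 1.0, 1.4])   # R, G, B gains
second_condition_gains = np.array([2.3, 1.0, 1.1])

reference_rgb = np.array([[120.0, 130.0, 110.0]])    # captured under condition 2
# Undo the second-condition gains and re-apply the first-condition gains.
matched = apply_wb(reference_rgb / second_condition_gains, first_condition_gains)
print(np.round(matched, 1))
```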
  • a plurality of image processing units 33 may be provided, and image processing may be performed in parallel. For example, image processing is performed on the image data captured in the region B of the imaging unit 32 while performing image processing on the image data captured in the region A of the imaging unit 32.
• The plurality of image processing units 33 may perform the same image processing or different image processing. That is, the same parameters may be applied to the image data of region A and region B to perform the same image processing, or different parameters may be applied to the image data of region A and region B to perform different image processing.
• When the first imaging condition is applied to certain image data and the second imaging condition is applied to other image data, one image processing unit may perform image processing on the image data to which the first imaging condition is applied, and another image processing unit may perform image processing on the image data to which the second imaging condition is applied.
  • the number of image processing units is not limited to the above two, and for example, the same number as the number of imaging conditions that can be set may be provided. That is, each image processing unit takes charge of image processing for each region to which different imaging conditions are applied. According to the modified example 6, it is possible to proceed in parallel with imaging under different imaging conditions for each area and image processing for image data of an image obtained for each area.
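• A minimal sketch of this parallel arrangement is shown below, with one worker standing in for each image processing unit; the processing function and parameter names are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def process_region(image_data, params):
    """Stand-in for one image processing unit (e.g. a simple gain adjustment)."""
    return np.asarray(image_data, dtype=float) * params["gain"]

region_a = np.ones((4, 4)) * 100.0   # image data captured in region A
region_b = np.ones((4, 4)) * 50.0    # image data captured in region B

with ThreadPoolExecutor(max_workers=2) as pool:
    # Different parameters per region; pass the same dict to both workers to
    # apply identical processing instead.
    fut_a = pool.submit(process_region, region_a, {"gain": 1.0})
    fut_b = pool.submit(process_region, region_b, {"gain": 2.0})
    out_a, out_b = fut_a.result(), fut_b.result()

print(out_a[0, 0], out_b[0, 0])   # 100.0 100.0
```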
  • the camera 1 in which the imaging unit 32 and the control unit 34 are configured as a single electronic device has been described as an example.
• The imaging unit 32 and the control unit 34 may be provided separately, and an imaging system 1B in which the imaging unit 32 is controlled from the control unit 34 via communication may be configured.
• An example in which the imaging device 1001 including the imaging unit 32 is controlled from the control device 1002 including the control unit 34 will be described with reference to FIG. 17.
  • FIG. 17 is a block diagram illustrating the configuration of an imaging system 1B according to Modification 8.
• The imaging system 1B includes an imaging device 1001 and a display device 1002.
  • the imaging device 1001 includes a first communication unit 1003 in addition to the imaging optical system 31 and the imaging unit 32 described in the above embodiment.
  • the display device 1002 includes a second communication unit 1004 in addition to the image processing unit 33, the control unit 34, the display unit 35, the operation member 36, and the recording unit 37 described in the above embodiment.
  • the first communication unit 1003 and the second communication unit 1004 can perform bidirectional image data communication using, for example, a well-known wireless communication technology or optical communication technology. Note that the imaging device 1001 and the display device 1002 may be connected by a wired cable, and the first communication unit 1003 and the second communication unit 1004 may perform bidirectional image data communication.
  • the control unit 34 controls the imaging unit 32 by performing data communication via the second communication unit 1004 and the first communication unit 1003. For example, by transmitting and receiving predetermined control data between the imaging device 1001 and the display device 1002, the display device 1002 divides the screen into a plurality of regions based on the images as described above, or the divided regions. A different imaging condition is set for each area, or a photoelectric conversion signal photoelectrically converted in each area is read out.
• Since the live view image acquired on the imaging device 1001 side and transmitted to the display device 1002 is displayed on the display unit 35 of the display device 1002, the user can perform remote control from the display device 1002 located away from the imaging device 1001.
  • the display device 1002 can be configured by a high-function mobile phone 250 such as a smartphone, for example.
  • the imaging device 1001 can be configured by an electronic device including the above-described stacked imaging element 100.
• A part of the object detection unit 34a, the setting unit 34b, the imaging control unit 34c, and the AF calculation unit 34d may be provided in the imaging device 1001.
• The program can be supplied to a mobile device such as the camera 1, the high-function mobile phone 250, or a tablet terminal as described above, for example by transmission from the personal computer 205 storing the program, as illustrated in FIG. 18, using infrared communication or short-range wireless communication.
• The program may be supplied to the personal computer 205 by setting a recording medium 204 such as a CD-ROM storing the program in the personal computer 205, or may be loaded via a communication line 201 such as a network. When the program passes through the communication line 201, it is stored in the storage device 203 of the server 202 connected to the communication line.
  • the program can be directly transmitted to the mobile device via a wireless LAN access point (not shown) connected to the communication line 201.
  • a recording medium 204B such as a memory card storing the program may be set in the mobile device.
  • the program can be supplied as various forms of computer program products, such as provision via a recording medium or a communication line.
• This embodiment differs from the first embodiment in that, instead of providing the image processing unit 33 of the first embodiment, the imaging unit 32A includes an image processing unit 32c having the same function as the image processing unit 33 of the first embodiment.
  • FIG. 19 is a block diagram illustrating the configuration of a camera 1C according to the third embodiment.
• The camera 1C includes an imaging optical system 31, an imaging unit 32A, a control unit 34, a display unit 35, an operation member 36, and a recording unit 37.
  • the imaging unit 32A further includes an image processing unit 32c having the same function as the image processing unit 33 of the first embodiment.
  • the image processing unit 32 c includes an input unit 321, a selection unit 322, and a generation unit 323.
  • Image data from the image sensor 32 a is input to the input unit 321.
  • the selection unit 322 performs preprocessing on the input image data.
  • the preprocessing performed by the selection unit 322 is the same as the preprocessing performed by the selection unit 33b in the first embodiment.
  • the generation unit 323 performs image processing on the input image data and the pre-processed image data to generate an image.
  • the image processing performed by the generation unit 323 is the same as the image processing performed by the generation unit 33c in the first embodiment.
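• The structure of the image processing unit 32c described above (input, selection as preprocessing, generation) can be sketched as below; the class and method names are illustrative assumptions, and the stand-in processing is a simple average rather than the actual image processing of the embodiment.

```python
class ImageProcessingUnit32c:
    """Sketch of an input -> selection -> generation pipeline."""

    def __init__(self, target_condition):
        self.target_condition = target_condition

    def input(self, samples):
        # samples: list of (value, imaging_condition) pairs from the sensor
        return list(samples)

    def select(self, samples):
        # preprocessing: keep only data captured under the target condition
        return [v for v, cond in samples if cond == self.target_condition]

    def generate(self, selected):
        # stand-in image processing: simple average of the selected data
        return sum(selected) / len(selected) if selected else None

unit = ImageProcessingUnit32c(target_condition=1)
raw = [(100, 1), (240, 2), (104, 1), (98, 1)]
print(unit.generate(unit.select(unit.input(raw))))   # 100.666...
```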
  • FIG. 20 is a diagram schematically showing the correspondence between each block and a plurality of selection units 322 in the present embodiment.
  • one square of the imaging chip 111 represented by a rectangle represents one block 111a.
  • one square of an image processing chip 114 described later represented by a rectangle represents one selection unit 322.
  • the selection unit 322 is provided corresponding to each block 111a.
  • the selection unit 322 is provided for each block, which is the minimum unit of the area where the imaging condition can be changed on the imaging surface.
  • the hatched block 111a and the hatched selection unit 322 have a correspondence relationship.
  • the hatched selection unit 322 performs preprocessing on the image data from the pixels included in the hatched block 111a.
  • Each selection unit 322 performs preprocessing on image data from pixels included in the corresponding block 111a.
• Thereby, the preprocessing of the image data can be performed in parallel by the plurality of selection units 322, so that the processing burden on each selection unit 322 can be reduced and an appropriate image can be quickly generated from the image data generated in each of the regions with different imaging conditions.
  • the block 111a may be referred to as a block 111a to which the pixel belongs.
  • the block 111a may be referred to as a unit section, and a plurality of blocks 111a, that is, a plurality of unit sections may be referred to as a composite section.
  • FIG. 21 is a cross-sectional view of the multilayer imaging element 100A.
  • the multilayer imaging element 100A further includes an image processing chip 114 that performs the above-described preprocessing and image processing in addition to the backside illumination imaging chip 111, the signal processing chip 112, and the memory chip 113. That is, the above-described image processing unit 32c is provided in the image processing chip 114.
  • the imaging chip 111, the signal processing chip 112, the memory chip 113, and the image processing chip 114 are stacked, and are electrically connected to each other by a conductive bump 109 such as Cu.
  • a plurality of bumps 109 are arranged on the mutually facing surfaces of the memory chip 113 and the image processing chip 114.
  • the bumps 109 are aligned with each other, and the memory chip 113 and the image processing chip 114 are pressurized, so that the aligned bumps 109 are joined and electrically connected.
• <Data selection process> As in the first embodiment, in the second embodiment, after the imaging screen is divided into regions by the setting unit 34b, the imaging conditions can be set (changed) for the region selected by the user or the region determined by the control unit 34.
  • the control unit 34 causes the selection unit 322 to perform the following data selection processing as necessary.
• The selection unit 322 selects all the image data of the plurality of reference pixels and outputs them to the generation unit 323.
  • the generation unit 323 performs image processing using image data of a plurality of reference pixels.
  • the imaging condition applied in the target pixel P is set as the first imaging condition
  • the imaging conditions applied to a part of the reference pixels are the first imaging conditions
  • the imaging conditions applied to the remaining reference pixels are the second imaging conditions.
• In this case, the selection unit 322 corresponding to the block 111a to which the reference pixel to which the first imaging condition is applied belongs, and the selection unit 322 corresponding to the block 111a to which the reference pixel to which the second imaging condition is applied belongs, perform data selection processing on the image data of the reference pixels as in (Example 1) to (Example 3) below.
  • the generation unit 323 performs image processing for calculating the image data of the target pixel P with reference to the image data of the reference pixel after the data selection processing.
  • Example 1 For example, it is assumed that only the ISO sensitivity differs between the first imaging condition and the second imaging condition, the ISO sensitivity of the first imaging condition is 100, and the ISO sensitivity of the second imaging condition is 800.
  • the selection unit 322 corresponding to the block 111a to which the reference pixel to which the first imaging condition is applied belongs selects image data of the first imaging condition.
  • the selection unit 322 corresponding to the block 111a to which the reference pixel to which the second imaging condition is applied belongs does not select the image data of the second imaging condition. That is, image data with a second imaging condition different from the first imaging condition is not used for image processing.
  • Example 2 For example, only the shutter speed is different between the first imaging condition and the second imaging condition, the shutter speed of the first imaging condition is 1/1000 second, and the shutter speed of the second imaging condition is 1/100 second. And in this case, the selection unit 322 corresponding to the block 111a to which the reference pixel to which the first imaging condition is applied belongs selects image data of the first imaging condition. However, the selection unit 322 corresponding to the block 111a to which the reference pixel to which the second imaging condition is applied belongs does not select the image data of the second imaging condition. That is, image data with a second imaging condition different from the first imaging condition is not used for image processing.
  • the frame rate of the first imaging condition is 30 fps
  • the frame rate of the second imaging condition is 60 fps.
  • the selection unit 322 corresponding to the block 111a to which the reference pixel to which the first imaging condition is applied belongs selects the image data of the pixel of the first imaging condition.
• On the other hand, the selection unit 322 corresponding to the block 111a to which the reference pixel to which the second imaging condition is applied belongs selects, from the image data of the second imaging condition (60 fps) among the image data of the reference pixels, the image data of frame images acquired at timing close to that of the frame images acquired under the first imaging condition (30 fps). That is, among the image data of the reference pixels, the image data of frame images whose acquisition timing is close to that of the frame images of the first imaging condition (30 fps) is used for image processing, and the image data of frame images whose acquisition timing differs from that of the frame images of the first imaging condition (30 fps) is not used for image processing.
• The same applies when the imaging condition applied at the target pixel P is the second imaging condition and the imaging condition applied at the reference pixels around the target pixel P is the first imaging condition. That is, in this case as well, the selection unit 322 corresponding to the block 111a to which the reference pixel to which the first imaging condition is applied belongs, and the selection unit 322 corresponding to the block 111a to which the reference pixel to which the second imaging condition is applied belongs, perform the data selection processing on the image data of the reference pixels as in (Example 1) to (Example 3) described above.
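• The selection rule of (Example 1) to (Example 3) can be sketched as below: reference samples captured under a different imaging condition are dropped, except that, when only the frame rate differs, samples from frames acquired at nearly the same timing as the target-condition frames are kept. The field names and the timing tolerance are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class RefSample:
    value: float
    condition: int          # 1 = first imaging condition, 2 = second
    frame_time: float = 0.0 # capture time of the frame the sample belongs to

def select_reference_data(samples, target_condition, target_frame_times=None,
                          tolerance=1e-3):
    selected = []
    for s in samples:
        if s.condition == target_condition:
            selected.append(s.value)                 # Examples 1 and 2
        elif target_frame_times is not None:
            # Example 3: keep samples from frames acquired at (nearly) the
            # same timing as the target-condition frames.
            if any(abs(s.frame_time - t) <= tolerance for t in target_frame_times):
                selected.append(s.value)
    return selected

samples = [RefSample(100, 1, 0.0), RefSample(220, 2, 1 / 60), RefSample(210, 2, 0.0)]
print(select_reference_data(samples, target_condition=1, target_frame_times=[0.0]))
# [100, 210]
```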
• FIG. 22 schematically illustrates the processing of image data (hereinafter referred to as first image data) from each pixel included in a partial area of the imaging surface to which the first imaging condition is applied (hereinafter referred to as first region 141), and of image data (hereinafter referred to as second image data) from each pixel included in a partial area of the imaging surface to which the second imaging condition is applied (hereinafter referred to as second region 142). FIG. 22 illustrates the case where the imaging condition applied at the pixel of interest P is the first imaging condition in (Example 1) and (Example 2) above.
• The first image data captured under the first imaging condition is output from each pixel included in the first region 141, and the second image data captured under the second imaging condition is output from each pixel included in the second region 142.
  • the first image data is output to the selection unit 322 corresponding to the block 111a to which the pixel that generated the first image data belongs among the plurality of selection units 322 provided in the processing chip 114.
  • the plurality of selection units 322 respectively corresponding to the plurality of blocks 111a to which the pixels that generate the respective first image data belong are referred to as first selection units 151.
  • the second image data is output to the selection unit 322 corresponding to the block 111a to which each pixel that generated the second image data belongs among the plurality of selection units 322 provided in the processing chip 114.
  • the plurality of selection units 322 respectively corresponding to the plurality of blocks 111a to which the pixels that generate the respective second image data belong are referred to as second selection units 152.
• The first selection unit 151 selects the image data of the target pixel P and the image data of the reference pixels imaged under the first imaging condition, and outputs them to the generation unit 323.
• In the present embodiment, the selection unit 322 selects image data within the same block, but it may use image data of another block imaged under the first imaging condition. In that case, the selection unit 322 to which the target pixel P is input and the selection unit 322 of the other block imaged under the first imaging condition may transmit and receive information 181 about the first imaging condition necessary for the data selection processing.
  • the second selection unit 152 does not select the image data of the reference pixel imaged under the second imaging condition, and does not output the image data of the reference pixel imaged under the second imaging condition to the generation unit 323.
  • the second selection unit 152 receives information 181 about the first imaging condition necessary for the data selection process from the first selection unit 151, for example.
• The second selection unit 152 selects the image data of the target pixel P and the image data of the reference pixels captured under the second imaging condition, and outputs them to the generation unit 323.
  • the first selection unit 151 does not select the image data of the reference pixel imaged under the first imaging condition, and does not output the image data of the reference pixel imaged under the first imaging condition to the generation unit 323.
  • the first selection unit 151 receives information about the second imaging condition necessary for the data selection process from the second selection unit 152, for example.
• The generation unit 323 performs image processing such as pixel defect correction processing, color interpolation processing, edge enhancement processing, and noise reduction processing based on the image data from the first selection unit 151 and the second selection unit 152, and outputs the image data after the image processing.
• As in the first embodiment, the control unit 34 (AF calculation unit 34d) performs focus detection processing using image data corresponding to a predetermined position (focus detection position) on the imaging screen. When different imaging conditions are set for the divided areas and the focus detection position of the AF operation is located at the boundary between divided areas, that is, when the focus detection position straddles the first area and the second area, the control unit 34 (AF calculation unit 34d) in the present embodiment causes the selection unit 322 to perform the data selection processing as described below.
• The selection unit 322 selects all the focus detection signal data from the pixels in the frame 170 and outputs the selected signal data to the generation unit 323.
  • the control unit 34 (AF calculation unit 34d) performs focus detection processing using signal data for focus detection by the focus detection pixels indicated by a frame 170.
• On the other hand, when focus detection signal data to which the first imaging condition is applied and focus detection signal data to which the second imaging condition is applied are mixed in the focus detection signal data from the pixels in the frame 170, the control unit 34 (AF calculation unit 34d) causes the selection unit 322 corresponding to the block 111a to which each pixel in the frame 170 belongs to perform the data selection processing as in the following (Example 1) to (Example 3). Then, the control unit 34 (AF calculation unit 34d) performs focus detection processing using the focus detection signal data after the data selection processing.
  • Example 1 For example, it is assumed that only the ISO sensitivity differs between the first imaging condition and the second imaging condition, the ISO sensitivity of the first imaging condition is 100, and the ISO sensitivity of the second imaging condition is 800.
  • the selection unit 322 corresponding to the block 111a to which the pixel to which the first imaging condition is applied belongs selects signal data for focus detection under the first imaging condition. Then, the selection unit 322 corresponding to the block 111a to which the pixel to which the second imaging condition is applied belongs does not select the focus detection signal data of the second imaging condition. That is, out of the focus detection signal data from the pixels in the frame 170, the focus detection signal data of the first imaging condition is used for focus detection processing, and the image data of the second imaging condition different from the first imaging condition is used. Not used for focus detection processing.
  • Example 2 For example, only the shutter speed is different between the first imaging condition and the second imaging condition, the shutter speed of the first imaging condition is 1/1000 second, and the shutter speed of the second imaging condition is 1/100 second.
• In this case, the selection unit 322 corresponding to the block 111a to which the pixel to which the first imaging condition is applied belongs selects the focus detection signal data of the first imaging condition, and the selection unit 322 corresponding to the block 111a to which the pixel to which the second imaging condition is applied belongs does not select the focus detection signal data of the second imaging condition. That is, out of the focus detection signal data from the pixels in the frame 170, the focus detection signal data of the first imaging condition is used for the focus detection processing, and the focus detection signal data of the second imaging condition different from the first imaging condition is not used for the focus detection processing.
  • the frame rate of the first imaging condition is 30 fps
  • the frame rate of the second imaging condition is 60 fps.
  • the selection unit 322 corresponding to the block 111a to which the pixel to which the first imaging condition is applied belongs selects signal data for focus detection of the pixel under the first imaging condition.
• On the other hand, the selection unit 322 corresponding to the block 111a to which the pixel to which the second imaging condition is applied belongs selects, from the signal data of the second imaging condition (60 fps), the focus detection signal data of frame images acquired at timing close to that of the frame images acquired under the first imaging condition (30 fps). That is, of the focus detection signal data under the second imaging condition (60 fps), the focus detection signal data of frame images whose acquisition timing is close to that of the frame images under the first imaging condition (30 fps) is used for the focus detection processing, and the focus detection signal data of frame images whose acquisition timing differs from that of the frame images under the first imaging condition (30 fps) is not used for the focus detection processing.
• Imaging conditions that differ only slightly are regarded as the same imaging condition.
• In (Example 1) above, the case where the focus detection signal data of the first imaging condition is selected from the focus detection signal data surrounded by the frame 170 has been described; however, the focus detection signal data of the second imaging condition may instead be selected from the focus detection signal data surrounded by the frame 170.
• When the focus detection position straddles the first and second regions and the area of the first region is larger than the area of the second region, it is desirable to select the image data of the first imaging condition; conversely, when the area of the second region is larger than the area of the first region, it is desirable to select the image data of the second imaging condition.
  • FIG. 23 is a diagram schematically illustrating processing of the first signal data and the second signal data according to the focus detection processing.
• FIG. 23 illustrates the case where the focus detection signal data under the first imaging condition and the focus detection signal data under the second imaging condition are both selected from the signal data generated in the region surrounded by the frame 170. From each pixel included in the first region 141, first signal data for focus detection imaged under the first imaging condition is output, and from each pixel included in the second region 142, second signal data for focus detection imaged under the second imaging condition is output. The first signal data from the first region 141 is output to the first selection unit 151.
  • the second signal data from the second region 142 is output to the second selection unit 152.
  • the first processing unit 151 selects the first signal data of the first imaging condition and outputs it to the AF calculation unit 34d.
  • the second processing unit 152 selects the second signal data of the second imaging condition and outputs it to the AF calculation unit 34d.
• The AF calculation unit 34d calculates a first defocus amount from the first signal data from the first processing unit 151, and calculates a second defocus amount from the second signal data from the second processing unit 152. Then, the AF calculation unit 34d outputs a drive signal for moving the focus lens of the imaging optical system 31 to the in-focus position using the first defocus amount and the second defocus amount.
• On the other hand, when only the focus detection signal data under the first imaging condition is selected from the signal data in the region surrounded by the frame 170 and the focus detection signal data under the second imaging condition is not selected, the first signal data and the second signal data are processed as follows.
  • the first processing unit 151 selects the first signal data of the first imaging condition and outputs the first signal data to the generation unit 323.
  • the second processing unit 152 does not select the second signal data of the second imaging condition, and does not output the second signal data of the reference pixel imaged under the second imaging condition to the generation unit 323.
  • the second processing unit 152 receives, for example, information 181 about the first imaging condition necessary for the data selection process from the first processing unit 151.
• The AF calculation unit 34d performs focus detection processing based on the first signal data from the first processing unit 151, and outputs a drive signal for moving the focus lens of the imaging optical system 31 to the in-focus position based on the calculation result.
• Next, a case will be described in which the subject to be focused is located across the area where the first imaging condition is set and the area where the second imaging condition is set. In this case, the selection unit 322 corresponding to the block 111a to which the pixel to which the first imaging condition is applied belongs selects the first signal data for focus detection under the first imaging condition, and the selection unit 322 corresponding to the block 111a to which the pixel to which the second imaging condition is applied belongs selects the second signal data for focus detection under the second imaging condition. Then, the control unit 34 (AF calculation unit 34d) calculates a first defocus amount from the selected first signal data for focus detection.
  • control unit 34 calculates a second defocus amount from the selected second signal data for focus detection. Then, the control unit 34 (AF calculation unit 34d) performs a focus detection process using the first defocus amount and the second defocus amount. Specifically, for example, the control unit 34 (AF calculation unit 34d) calculates the average of the first defocus amount and the second defocus amount, and calculates the moving distance of the lens. Further, the control unit 34 (AF calculation unit 34d) may select a value having a smaller lens moving distance from the first defocus amount and the second defocus amount. The control unit 34 (AF calculation unit 34d) may select a value indicating that the subject is closer to the near side from the first defocus amount and the second defocus amount.
• Alternatively, the control unit 34 (AF calculation unit 34d) may select the region having the larger area and use the photoelectric conversion signals for focus detection from that region. For example, when 70% of the area of the face of the subject to be focused lies in the region where the first imaging condition is set and 30% lies in the second region, the control unit 34 (AF calculation unit 34d) selects the photoelectric conversion signals for focus detection under the first imaging condition.
  • the ratio (percentage) to the area described above is an example, and the present invention is not limited to this.
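• The defocus-selection strategies above can be sketched as follows; the sign convention for the closer subject and the area threshold are assumptions for illustration.

```python
def combine_defocus(d1, d2, strategy="average", area_ratio_1=0.5):
    """Combine or choose between two defocus amounts, one per region."""
    if strategy == "average":
        return (d1 + d2) / 2.0
    if strategy == "smaller_movement":
        return d1 if abs(d1) < abs(d2) else d2
    if strategy == "closer_subject":
        # assumption: a more positive defocus amount means a closer subject
        return max(d1, d2)
    if strategy == "larger_area":
        return d1 if area_ratio_1 >= 0.5 else d2
    raise ValueError(strategy)

print(combine_defocus(+0.8, -0.2, "average"))            # 0.3
print(combine_defocus(+0.8, -0.2, "smaller_movement"))   # -0.2
print(combine_defocus(+0.8, -0.2, "larger_area", 0.7))   # 0.8 (70% in region 1)
```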
• The selection unit 322 selects all the image data from the pixels in the search range 190 and outputs them to the generation unit 323.
  • the control unit 34 (AF calculation unit 34d) performs subject detection processing using image data from pixels in the search range 190.
  • FIG. 24 is a diagram schematically illustrating processing of the first image data and the second image data related to the subject detection processing.
  • the control unit 34 performs subject detection processing using the image data after the data selection processing.
• The control unit 34 causes the selection unit 322 (second selection unit 152) corresponding to the block 111a to which the pixel to which the second imaging condition is applied belongs to select, from the image data in the search range 190, the image data of the second imaging condition used for the subject detection processing.
• The control unit 34 then performs subject detection processing using the image data after the data selection processing. The control unit 34 (object detection unit 34a) can detect the subject within the search range 190 by matching the boundary between the subject area detected from the image data of the first imaging condition and the subject area detected from the image data of the second imaging condition.
• The control unit 34 (object detection unit 34a) causes the selection unit 322 (first selection unit 151) corresponding to the block 111a to which the pixel to which the first imaging condition is applied belongs to select, from the image data in the search range 190, the image data of the first imaging condition used for the subject detection processing. Then, the control unit 34 (object detection unit 34a) performs subject detection processing using the image data after the data selection processing.
• Further, the control unit 34 causes the selection unit 322 (second selection unit 152) corresponding to the block 111a to which the pixel to which the second imaging condition is applied belongs to select, as the image data of the second imaging condition (60 fps) used for the subject detection processing, only the image data of frame images acquired at timing close to that of the frame images acquired under the first imaging condition (30 fps). The control unit 34 (object detection unit 34a) then performs subject detection processing using the image data after the data selection processing, and can detect the subject within the search range 190 by matching the boundary between the subject area detected from the image data of the first imaging condition and the subject area detected from the image data of the second imaging condition.
• Note that the image data of the first imaging condition may be selected while the image data of the second imaging condition is not selected; conversely, the image data of the second imaging condition may be selected while the image data of the first imaging condition is not selected.
• The selection unit 322 selects all the image data from the pixels constituting the photometric range and outputs them to the generation unit 323.
  • the control unit 34 (setting unit 34b) performs exposure calculation processing using image data from the pixels constituting the photometric range.
• When image data to which the first imaging condition is applied and image data to which the second imaging condition is applied are mixed in the image data of the photometric range, the control unit 34 (object detection unit 34a) causes the selection unit 322 corresponding to the block 111a to which the pixel to which the first imaging condition is applied belongs to select, from the image data of the photometric range, the image data of the first imaging condition used for the exposure calculation processing, as in (a) of the subject detection processing described above.
• Further, the control unit 34 (object detection unit 34a) causes the selection unit 322 corresponding to the block 111a to which the pixel to which the second imaging condition is applied belongs to select, from the image data of the photometric range, the image data of the second imaging condition used for the exposure calculation processing.
  • FIG. 25 is a diagram schematically illustrating processing of the first image data and the second image data related to setting of imaging conditions such as exposure calculation processing.
• The control unit 34 (setting unit 34b) performs exposure calculation processing for each of the area where the first imaging condition is set and the area where the second imaging condition is set, using the image data after the data selection processing. That is, the control unit 34 (setting unit 34b) causes the data selection processing to be performed for the photometry of each region, and performs the exposure calculation processing using the image data after the data selection processing.
• In the case where only the frame rate differs between the first imaging condition and the second imaging condition, as in (Example 3) of the focus detection processing described above, the control unit 34 causes the selection unit 322 corresponding to the block 111a to which the pixel to which the first imaging condition is applied belongs to select, from the image data of the photometric range, the image data of the first imaging condition used for the exposure calculation processing, as in (b) of the subject detection processing.
• Further, the control unit 34 causes the selection unit 322 corresponding to the block 111a to which the pixel to which the second imaging condition is applied belongs to perform the data selection processing as in (Example 3) of the focus detection processing described above.
• Then, the control unit 34 (setting unit 34b) performs the exposure calculation processing using the image data after the data selection processing, as in (a) described above.
• Note that when the area of the first region is larger than the area of the second region, the image data of the first imaging condition may be selected and the image data of the second imaging condition may not be selected. Conversely, when the area of the second region is larger than the area of the first region, the image data of the second imaging condition may be selected and the image data of the first imaging condition may not be selected.
• The camera 1C includes an image sensor 32a that can capture images while changing the imaging condition for each unit section of the imaging surface, and that generates first image data from a first region composed of at least one unit section imaged under the first imaging condition and second image data from a second region composed of at least one unit section imaged under a second imaging condition different from the first imaging condition.
• The camera 1C also includes a plurality of selection units 322 that are provided corresponding to each unit section, or to each composite section having a plurality of unit sections, and that select or do not select the image data from the corresponding unit section or from the unit sections in the corresponding composite section.
  • the imaging element 32a is provided in the backside illumination type imaging chip 111.
  • the plurality of selection units 322 are provided in the image processing chip 114. As a result, the data selection processing of the image data can be performed in parallel by the plurality of selection units 322, so that the processing burden on the selection unit 322 can be reduced.
• The camera 1C includes a generation unit 323 that generates an image based on the selected image data selected by the selection unit 322.
• The camera 1C includes an image sensor 32a that can capture images while changing the imaging condition for each unit section of the imaging surface, and that generates first image data from a first region composed of at least one unit section that captures, under the first imaging condition, a light image incident through the imaging optical system, and second image data from a second region composed of at least one unit section that captures the incident light image under a second imaging condition different from the first imaging condition.
  • the camera 1C is provided corresponding to each unit section or each composite section having a plurality of unit sections, and a plurality of selections that select or do not select image data from the corresponding unit section or the unit section in the corresponding composite section Part 322.
• The camera 1C includes an AF calculation unit 34d that detects information for moving the imaging optical system based on the selected image data.
  • the imaging element 32a is provided in the backside illumination type imaging chip 111.
• The plurality of selection units 322 are provided in the image processing chip 114.
  • the data selection processing of the image data can be performed in parallel by the plurality of selection units 322, so that the processing burden on the selection unit 322 can be reduced and the preprocessing by the plurality of selection units 322 can be performed in a short time by parallel processing.
  • the time until the start of the focus detection process in the AF calculation unit 34d can be shortened, which contributes to speeding up of the focus detection process.
• The camera 1C includes an image sensor 32a that can capture images while changing the imaging condition for each unit section of the imaging surface, and that generates first image data from a first region composed of at least one unit section that captures, under the first imaging condition, a subject image incident through the imaging optical system, and second image data from a second region composed of at least one unit section that captures the incident subject image under a second imaging condition different from the first imaging condition.
• The camera 1C also includes a plurality of selection units 322 that are provided corresponding to each unit section, or to each composite section having a plurality of unit sections, and that select or do not select the image data from the corresponding unit section or from the unit sections in the corresponding composite section.
• The camera 1C includes an object detection unit 34a that detects a target object from the subject image based on the selected image data.
  • the imaging element 32a is provided in the backside illumination type imaging chip 111.
• The plurality of selection units 322 are provided in the image processing chip 114.
  • the data selection processing of the image data can be performed in parallel by the plurality of selection units 322, so that the processing burden on the selection unit 322 can be reduced and the preprocessing by the plurality of selection units 322 can be performed in a short time by parallel processing.
  • the time until the start of the subject detection process in the object detection unit 34a can be shortened, which contributes to the speedup of the subject detection process.
• The camera 1C includes an image sensor 32a that can capture images while changing the imaging condition for each unit section of the imaging surface, and that generates first image data from a first region composed of at least one unit section that captures, under the first imaging condition, a light image incident through the imaging optical system, and second image data from a second region composed of at least one unit section that captures the incident light image under a second imaging condition different from the first imaging condition.
• The camera 1C also includes a plurality of selection units 322 that are provided corresponding to each unit section, or to each composite section having a plurality of unit sections, and that select or do not select the image data from the corresponding unit section or from the unit sections in the corresponding composite section.
  • the camera 1C includes a setting unit 34b that sets shooting conditions based on the selected image data selected.
  • the imaging element 32a is provided in the backside illumination type imaging chip 111.
  • the plurality of selection units 322 are provided in the image processing chip 114.
  • because the data selection processing of the image data can be performed in parallel by the plurality of selection units 322, the processing burden on each selection unit 322 is reduced and the preprocessing is completed in a short time.
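  • As a rough illustration of the parallel preprocessing described above, the following Python sketch assigns one "selection unit" to each block of the sensor output so that the blocks can be filtered concurrently before focus detection, subject detection, or exposure calculation. This is only a sketch of the idea, not the patent's implementation; the names select_block, preprocess_frame and BLOCK, and the per-block condition map, are illustrative assumptions.

    # Minimal sketch: one selection unit per block, all blocks filtered in parallel.
    from concurrent.futures import ThreadPoolExecutor
    import numpy as np

    BLOCK = 64  # assumed size of a unit section, in pixels

    def select_block(block, block_condition, reference_condition):
        """Select the block's data only if it was captured under the reference condition."""
        return block if block_condition == reference_condition else None

    def preprocess_frame(frame, condition_map, reference_condition):
        """Mimic many selection units working in parallel, one per block."""
        h, w = frame.shape
        origins = [(y, x) for y in range(0, h, BLOCK) for x in range(0, w, BLOCK)]
        def work(origin):
            y, x = origin
            return origin, select_block(frame[y:y + BLOCK, x:x + BLOCK],
                                        condition_map[y // BLOCK, x // BLOCK],
                                        reference_condition)
        with ThreadPoolExecutor() as pool:
            return dict(pool.map(work, origins))

    # usage: keep only the blocks captured under imaging condition 1
    frame = np.random.randint(0, 4096, (256, 256), dtype=np.uint16)
    condition_map = np.random.randint(1, 3, (4, 4))   # condition ID per block
    selected = preprocess_frame(frame, condition_map, reference_condition=1)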
  • Modification of the third embodiment: the following modifications are also within the scope of the present invention, and one or more of them can be combined with the above-described embodiment.
  • Modification 10: as in Modification 1 of the first and second embodiments, the first region and the second region are arranged on the imaging surface of the image sensor 32a as shown in FIGS. 16(a) to 16(c); the processing of the first image data and the second image data in these arrangements is described below.
  • also in this modification, as in Modification 1, in any of the cases shown in FIGS. 16(a) to 16(c), a first image based on the image signal read from the first region and a second image based on the image signal read from the second region are generated from the pixel signals read from the image sensor 32a that has captured one frame.
  • the control unit 34 uses the first image for display and the second image for detection. It is assumed that a first imaging condition is set for the first area for capturing the first image, and a second imaging condition different from the first imaging condition is set for the second area for capturing the second image.
  • FIG. 26 is a diagram schematically illustrating processing of the first image data and the second image data.
  • from each pixel included in the first area 141, the first image data captured under the first imaging condition is output, and from each pixel included in the second area 142, the second image data and the second signal data captured under the second imaging condition are output.
  • the first image data from the first area 141 is output to the first selection unit 151.
  • the second image data and the second signal data from the second region 142 are output to the second selection unit 152.
  • the first selection unit 151 selects all the first image data from each pixel in the first area.
  • since the second imaging condition is the same for the entire second area of the imaging screen, the second selection unit 152 selects all the second image data from each pixel in the second area. Since the first imaging condition and the second imaging condition are different, the second selection unit 152 does not select the second image data as data for image processing of the image data in the first area. In addition, the second selection unit 152 receives information 181 about the first imaging condition from the first selection unit 151, for example.
  • the generation unit 323 performs image processing such as pixel defect correction processing, color interpolation processing, contour enhancement processing, and noise reduction processing based on the first image data from the first selection unit 151, and outputs the image data after the image processing.
  • the object detection unit 34a performs a process of detecting a subject element based on the second image data from the second selection unit 152, and outputs a detection result.
  • the setting unit 34b performs an imaging condition calculation process such as an exposure calculation process based on the second image data from the second selection unit 152, divides the imaging screen of the imaging unit 32 into a plurality of regions including the detected subject elements based on the calculation result, and resets the imaging conditions for the plurality of regions.
  • the AF calculation unit 34d performs focus detection processing based on the second signal data from the second selection unit 152, and outputs a drive signal for moving the focus lens of the imaging optical system 31 to the in-focus position based on the calculation result.
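  • A minimal sketch (with assumed names, not the patent's code) of the FIG. 26 data flow described above: the first area feeds the display image processing, the second area feeds subject detection, exposure calculation and focus detection, and data captured under the second imaging condition is never mixed into the image processing of the first area.

    import numpy as np

    def first_selection_unit(first_image):
        # first imaging condition is uniform over the first area: select everything
        return first_image

    def second_selection_unit(second_image, second_signal):
        # second imaging condition is uniform over the second area: select everything,
        # but only for detection / exposure / AF use, never for first-area image processing
        return {"detection": second_image, "exposure": second_image, "af": second_signal}

    def generation_unit(image):
        # stand-in for defect correction, color interpolation, contour enhancement, NR
        return np.clip(image, 0, 4095)

    first_image = np.random.randint(0, 4096, (120, 160))    # from first area 141
    second_image = np.random.randint(0, 4096, (40, 160))    # from second area 142
    second_signal = np.random.randint(0, 4096, (40, 160))   # focus-detection signal data

    display_image = generation_unit(first_selection_unit(first_image))
    detection_inputs = second_selection_unit(second_image, second_signal)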
  • the first imaging condition varies depending on the area of the imaging screen, that is, the first imaging condition varies depending on the partial area in the first area, and the second imaging condition is the same throughout the second area of the imaging screen.
  • FIG. 27 is a diagram schematically illustrating processing of the first image data, the second image data, and the second signal data.
  • from each pixel included in the first area 141, the first image data captured under the first imaging condition is output, and from each pixel included in the second area 142, the second image data and the second signal data captured under the second imaging condition are output.
  • the first image data from the first area 141 is output to the first selection unit 151.
  • the second image data from the second region 142 is output to the second selection unit 152.
  • the first imaging condition varies depending on the area of the imaging screen. That is, the first imaging condition varies depending on the partial area in the first area.
  • the first selection unit 151 selects only the first image data of a certain imaging condition from the first image data from each pixel of the first region, and does not select the first image data of another imaging condition.
  • the second selection unit 152 selects all the second image data from each pixel in the second area. Since the first imaging condition and the second imaging condition are different, the second selection unit 152 does not select the second image data as data for image processing of the image data in the first area.
  • the second selection unit 152 receives information 181 about the first imaging condition from the first selection unit 151, for example.
  • the generation unit 323 performs image processing such as pixel defect correction processing, color interpolation processing, contour enhancement processing, and noise reduction processing based on the part of the first image data selected by the first selection unit 151, and outputs the image data after the image processing.
  • the object detection unit 34a performs a process of detecting a subject element based on the second image data from the second selection unit 152, and outputs a detection result.
  • the setting unit 34b performs an imaging condition calculation process such as an exposure calculation process based on the second image data from the second selection unit 152, divides the imaging screen of the imaging unit 32 into a plurality of regions including the detected subject elements based on the calculation result, and resets the imaging conditions for the plurality of regions.
  • the AF calculation unit 34d performs focus detection processing based on the second signal data from the second selection unit 152, and outputs a drive signal for moving the focus lens of the imaging optical system 31 to the in-focus position based on the calculation result.
  • FIG. 28 is a diagram schematically showing processing of the first image data and the second image data.
  • first image data captured under the same first imaging condition over the entire first area of the imaging screen is output, and from each pixel included in the second area 142, second image data captured under second imaging conditions that differ depending on the area of the imaging screen is output.
  • the first image data from the first area 141 is output to the first selection unit 151.
  • the second image data and the second signal data from the second region 142 are output to the second selection unit 152.
  • the first selection unit 151 selects all the first image data from each pixel in the first area.
  • the second imaging condition varies depending on the area of the imaging screen. That is, the second imaging condition varies depending on the partial area in the second area.
  • the second selection unit 152 selects only the second image data under a certain imaging condition from the second image data from each pixel in the second region, and does not select the second image data under other imaging conditions. Since the first imaging condition and the second imaging condition are different, the second selection unit 152 does not select the second image data as data for image processing of the image data in the first area.
  • the second selection unit 152 receives information 181 about the first imaging condition from the first selection unit 151, for example.
  • the generation unit 323 performs image processing such as pixel defect correction processing, color interpolation processing, contour enhancement processing, and noise reduction processing based on the first image data from the first selection unit 151, and outputs the image data after the image processing.
  • the object detection unit 34a performs processing for detecting a subject element based on part of the second image data selected by the second selection unit 152, and outputs a detection result.
  • the setting unit 34b performs an imaging condition calculation process such as an exposure calculation process based on the part of the second image data selected by the second selection unit 152, divides the imaging screen of the imaging unit 32 into a plurality of regions including the detected subject elements based on the calculation result, and resets the imaging conditions for the plurality of regions.
  • the AF calculation unit 34d performs focus detection processing based on the part of the second signal data selected by the second selection unit 152, and outputs a drive signal for moving the focus lens of the imaging optical system 31 to the in-focus position based on the calculation result.
  • FIG. 29 is a diagram schematically illustrating processing of the first image data, the second image data, and the second signal data.
  • from each pixel included in the first area 141, first image data captured under a first imaging condition that varies depending on the area of the imaging screen is output, and from each pixel included in the second area 142, second image data captured under a second imaging condition that varies depending on the area of the imaging screen is output.
  • the first image data from the first area 141 is output to the first selection unit 151.
  • the second image data and the second signal data from the second region 142 are output to the second selection unit 152.
  • the first imaging condition varies depending on the area of the imaging screen. That is, the first imaging condition varies depending on the partial area in the first area.
  • the first selection unit 151 selects only the first image data of a certain imaging condition from the first image data from each pixel of the first region, and does not select the first image data of another imaging condition.
  • the second imaging condition varies depending on the area of the imaging screen. That is, the second imaging condition varies depending on the partial area in the second area.
  • the second selection unit 152 selects only the second image data under a certain imaging condition from the second image data from each pixel in the second region, and does not select the second image data under other imaging conditions.
  • since the first imaging condition and the second imaging condition are different, the second selection unit 152 does not select the second image data as data for image processing of the image data in the first area. In addition, the second selection unit 152 receives information 181 about the first imaging condition from the first selection unit 151, for example.
  • the generation unit 323 performs image processing such as pixel defect correction processing, color interpolation processing, contour enhancement processing, and noise reduction processing based on the part of the first image data selected by the first selection unit 151, and outputs the image data after the image processing.
  • the object detection unit 34a performs processing for detecting a subject element based on part of the second image data selected by the second selection unit 152, and outputs a detection result.
  • the setting unit 34b performs an imaging condition calculation process such as an exposure calculation process based on the part of the second image data selected by the second selection unit 152, divides the imaging screen of the imaging unit 32 into a plurality of regions including the detected subject elements based on the calculation result, and resets the imaging conditions for the plurality of regions.
  • the AF calculation unit 34d performs focus detection processing based on the part of the second signal data selected by the second selection unit 152, and outputs a drive signal for moving the focus lens of the imaging optical system 31 to the in-focus position based on the calculation result.
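  • When a region mixes several imaging conditions (the cases of FIGS. 27 to 29 above), a selection unit keeps only the pixels captured under the one condition chosen for the current process and discards the rest. The following is a minimal sketch of that filtering under assumed names; the per-pixel condition map is an illustrative assumption.

    import numpy as np

    def select_by_condition(pixels, condition_ids, wanted_condition):
        """Return only the pixel values captured under wanted_condition."""
        mask = (condition_ids == wanted_condition)
        return pixels[mask], mask

    region = np.random.randint(0, 4096, (8, 8))
    condition_ids = np.random.choice([1, 2], size=(8, 8))  # per-pixel imaging condition

    selected_values, mask = select_by_condition(region, condition_ids, wanted_condition=1)
    # selected_values then feeds image processing (first region) or detection /
    # exposure / AF (second region); pixels under the other condition are not selected.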
  • one of the selection units 322 corresponds to one of the blocks 111a (unit sections).
  • one of the selection units 322 may correspond to one of the composite blocks (composite sections) having a plurality of blocks 111a (unit sections).
  • the selection unit 322 sequentially performs data selection processing on image data from pixels belonging to the plurality of blocks 111a included in the composite block. Even when a plurality of selection units 322 are provided corresponding to composite blocks each having a plurality of blocks 111a, the data selection processing of the image data can still be performed in parallel by the plurality of selection units 322, so that the processing burden on each selection unit 322 can be reduced and an appropriate image can be generated in a short time from image data generated in regions with different imaging conditions.
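  • The composite-block variant can be pictured as follows: one selection unit serves a composite block and walks through its unit blocks sequentially, while the selection units of different composite blocks still run in parallel. This is a sketch under assumed names, not the patent's implementation.

    from concurrent.futures import ThreadPoolExecutor

    def composite_selection_unit(unit_blocks, conditions, reference_condition):
        # sequential pass over the unit blocks belonging to this composite block
        kept = []
        for block, condition in zip(unit_blocks, conditions):
            if condition == reference_condition:
                kept.append(block)
        return kept

    def run_all(composite_blocks, reference_condition):
        # one selection unit per composite block; the units run concurrently
        with ThreadPoolExecutor() as pool:
            return list(pool.map(
                lambda cb: composite_selection_unit(cb["blocks"], cb["conditions"],
                                                    reference_condition),
                composite_blocks))

    composite_blocks = [
        {"blocks": [[1, 2], [3, 4]], "conditions": [1, 2]},
        {"blocks": [[5, 6], [7, 8]], "conditions": [1, 1]},
    ]
    print(run_all(composite_blocks, reference_condition=1))
    # -> [[[1, 2]], [[5, 6], [7, 8]]]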
  • the generation unit 323 is provided inside the imaging unit 32A.
  • the generation unit 323 may be provided outside the imaging unit 32A. Even if the generation unit 323 is provided outside the imaging unit 32A, the same operational effects as the above-described operational effects can be obtained.
  • in addition to the backside illumination imaging chip 111, the signal processing chip 112, and the memory chip 113, the multilayer imaging element 100A further includes an image processing chip 114 that performs the above-described preprocessing and image processing.
  • instead of providing the image processing chip 114 in the multilayer imaging element 100A, the corresponding image processing unit may be provided in the signal processing chip 112.
  • An imaging apparatus comprising: an imaging element having an imaging region for imaging a subject; a setting unit that sets the imaging condition of the imaging region; a selection unit that selects, from among the pixels included in the imaging region, pixels to be used for interpolation; and a control unit that controls the setting of the imaging condition by the setting unit, the control unit controlling the setting of the imaging condition set in the imaging region using a signal interpolated by signals output from the pixels selected by the selection unit.
  • the imaging element includes a first imaging area for imaging a subject and a second imaging area for imaging the subject; the setting unit sets an imaging condition for the first imaging area and an imaging condition for the second imaging area; the selection unit selects pixels from among the pixels included in the first imaging area and the pixels included in the second imaging area; and the control unit controls the setting of the imaging condition set in the first imaging area using a signal interpolated by signals output from the pixels selected by the selection unit.
  • the selection unit selects the pixels to be used for the interpolation from at least one of the first imaging region and the second imaging region, based on the imaging condition of the first imaging region and the imaging condition of the second imaging region set by the setting unit.
  • the selection unit selects a pixel in the second imaging region as the pixel to be used according to the imaging condition of the second imaging region set by the setting unit. .
  • the selection unit selects pixels included in the second imaging region when the setting unit sets the first imaging condition in both the first imaging region and the second imaging region.
  • the selection unit selects pixels in the second imaging region as the pixels to be used for the interpolation, based on a value related to exposure according to the imaging condition of the second imaging region set by the setting unit.
  • the selection unit selects pixels in the second imaging region as the pixels to be used for the interpolation, based on a value related to exposure according to the imaging condition of the first imaging region set by the setting unit and a value related to exposure according to the imaging condition of the second imaging region.
  • the selection unit selects pixels in the second imaging region as the pixels to be used for the interpolation when the difference between the number of exposure stages according to the imaging condition of the first imaging region set by the setting unit and the number of exposure stages according to the imaging condition of the second imaging region is 0.3 or less.
  • the selection unit selects pixels included in the first imaging region when the setting unit sets a first imaging condition in the first imaging region and a second imaging condition in the second imaging region.
  • the setting unit sets a first imaging condition in the first imaging region, and sets a second imaging condition in the second imaging region.
  • the selection unit selects, for interpolation of a first pixel in the first imaging region, a third pixel in the first imaging region whose distance from the first pixel is longer than the distance between the first pixel and a second pixel in the second imaging region.
  • the selection unit has different numbers of pixels to be selected depending on the imaging conditions set in the second imaging region by the setting unit.
  • the selection unit sets a first imaging condition in the first imaging region and a second imaging condition in the second imaging region by the setting unit.
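  • Two of the selection criteria listed above lend themselves to a short worked sketch: the rule that second-region pixels may be used for interpolation only when the exposure stages differ by 0.3 or less, and the rule that a farther first-region pixel may be chosen instead of a nearby second-region pixel. The 0.3 threshold comes from the text; the function names and coordinates are illustrative assumptions.

    import math

    EXPOSURE_STAGE_THRESHOLD = 0.3  # maximum allowed difference in exposure stages

    def may_use_second_region(stages_first, stages_second):
        """Allow second-region pixels for interpolating a first-region pixel only if
        the exposure stages of the two regions differ by 0.3 or less."""
        return abs(stages_first - stages_second) <= EXPOSURE_STAGE_THRESHOLD

    def pick_first_region_pixel(first_pixel, second_pixel, candidates_in_first_region):
        """When second-region pixels cannot be used, pick a first-region pixel even if
        it is farther from the pixel being interpolated than the nearby second-region
        pixel; among such pixels, take the closest one."""
        d_second = math.dist(first_pixel, second_pixel)
        farther = [p for p in candidates_in_first_region
                   if math.dist(first_pixel, p) > d_second]
        return min(farther, key=lambda p: math.dist(first_pixel, p)) if farther else None

    print(may_use_second_region(10.0, 10.2))                          # True (0.2 <= 0.3)
    print(pick_first_region_pixel((0, 0), (0, 1), [(0, 2), (3, 0)]))  # (0, 2)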
  • An imaging apparatus comprising: an imaging element having a first imaging area set to image a subject under a first imaging condition and a second imaging area set to image a subject under a second imaging condition different from the first imaging condition; and a control unit that controls the setting of the imaging condition set in the first imaging area using a signal interpolated by signals output from pixels.
  • An imaging apparatus comprising: an imaging element having a first imaging area set to image a subject under a first imaging condition and a second imaging area set to image a subject under a second imaging condition different from the first imaging condition; a selection unit that selects pixels; and a control unit that controls the setting of the imaging condition set in the first imaging area using a signal interpolated by signals output from the pixels selected by the selection unit.
  • An imaging apparatus comprising: an imaging element having a first imaging area for imaging a subject, a second imaging area for imaging the subject, and a third imaging area for imaging the subject; a setting unit that sets the imaging condition of the first imaging area to a first imaging condition, sets the imaging condition of the second imaging area to a second imaging condition different from the first imaging condition, and sets the imaging condition of the third imaging area to an imaging condition different from the first imaging condition and the second imaging condition; a selection unit that selects, from among the pixels included in the third imaging area, pixels to be used for interpolation of a pixel included in the first imaging area; and a control unit that controls the setting of the imaging condition set in the first imaging area by the setting unit, using a signal interpolated by signals output from the pixels selected by the selection unit.
  • An imaging apparatus comprising: an imaging element having a first imaging area for imaging a subject, a second imaging area for imaging the subject, and a third imaging area for imaging the subject; a setting unit that sets the imaging condition of the second imaging area to an imaging condition different from the imaging condition of the first imaging area; and a control unit that controls the setting of the imaging condition of the first imaging area.
  • An imaging apparatus comprising: an imaging element having an imaging region for imaging a subject; a setting unit that sets the imaging condition of the imaging region; and a control unit that controls the setting of the imaging condition by the setting unit, the control unit controlling the setting of the imaging condition set in the imaging region using a signal interpolated by signals output from pixels that are included in the imaging region and selected as pixels to be used for interpolation, wherein at least some of the selected pixels differ depending on the imaging condition set by the setting unit.
  • a control unit that controls setting of an imaging condition includes a pixel included in the first imaging region and a pixel included in the second imaging region.
  • An imaging apparatus comprising: an imaging element having an imaging region for imaging a subject; a setting unit that sets the imaging condition of the imaging region; and a control unit that controls the setting of the imaging condition by the setting unit, the control unit controlling the setting of the imaging condition set in the imaging region using a signal whose noise has been reduced by signals output from pixels that are included in the imaging region and selected as pixels that output signals used for noise reduction, wherein at least some of the selected pixels differ depending on the imaging condition set by the setting unit.
  • (22) An imaging apparatus comprising: an imaging element having a first imaging region set to image a subject under a first imaging condition, a second imaging region set to image a subject under a second imaging condition different from the first imaging condition, and a third imaging region set to image a subject under a third imaging condition different from the second imaging condition; a selection unit that selects, from among the pixels included in the second imaging region and the pixels included in the third imaging region, pixels that output signals used to reduce noise of a pixel included in the first imaging region; and a control unit that controls the setting of the imaging condition set in the first imaging region using a signal whose noise has been reduced by the signals output from the pixels selected by the selection unit.
  • (23) An imaging apparatus comprising: an imaging element having a first imaging area set to image a subject under a first imaging condition and a second imaging area set to image a subject under a second imaging condition different from the first imaging condition; and a control unit that controls the setting of the imaging condition set in the first imaging area using a signal whose noise has been reduced by signals output from pixels selected, from among the pixels included in the first imaging area and the pixels included in the second imaging area, as pixels that output signals used to reduce noise of a pixel included in the first imaging area.
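  • A minimal sketch of the noise-reduction selection in (22) and (23) above: the neighbours used to smooth a first-region pixel are taken only from pixels whose imaging condition is treated as compatible with the first imaging condition. Which conditions count as compatible is an assumed policy here, as are the function and variable names.

    import numpy as np

    def reduce_noise(pixel_value, neighbours, neighbour_conditions, compatible_conditions):
        """Average the pixel with the neighbours captured under a compatible condition."""
        usable = [v for v, c in zip(neighbours, neighbour_conditions)
                  if c in compatible_conditions]
        return float(np.mean([pixel_value] + usable)) if usable else float(pixel_value)

    # neighbours taken from the second and third imaging regions; only condition 3 is
    # treated as compatible with the first imaging condition in this example
    value = reduce_noise(pixel_value=100,
                         neighbours=[110, 90, 250],
                         neighbour_conditions=[3, 3, 2],
                         compatible_conditions={1, 3})
    print(value)  # 100.0 -> (100 + 110 + 90) / 3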
  • An imaging apparatus comprising: an imaging element having an imaging region for imaging a subject; a setting unit that sets the imaging condition of the imaging region; and a control unit that controls the setting of the imaging condition by the setting unit, the control unit controlling the setting of the imaging condition set in the imaging region using a signal that has been image-processed with signals output from pixels selected as pixels to be used for the image processing, wherein at least some of the selected pixels differ depending on the imaging condition set by the setting unit.
  • A control device comprising: a selection unit that selects signals to be used for interpolation from among the signals output from the pixels included in the imaging region of the imaging element; and a control unit that controls the setting of the imaging condition set in the imaging region using a signal interpolated by the signals selected by the selection unit, wherein at least some of the pixels selected by the selection unit differ depending on the imaging condition set in the imaging region.
  • the above-described embodiments and modifications also include the following devices.
  • (1) A control device comprising: a selection unit that selects at least one of first image data generated by imaging light incident on a first region of an imaging unit under a first imaging condition and second image data generated by imaging light incident on a second region of the imaging unit under a second imaging condition different from the first imaging condition; and a setting unit that sets the imaging condition based on the selected image data.
  • (2) In the control device according to (1), the setting unit sets the imaging condition based on a partial region of the imaging unit that includes the first region and the second region.
  • (3) In the control device according to (2), the selection unit selects the image data output from whichever of the first region and the second region occupies the larger area within the partial region.
  • the selection unit selects the first image data and the second image data, and an imaging condition for the first region is set based on the first image data while an imaging condition for the second region is set based on the second image data.
  • the setting unit sets an exposure condition as the photographing condition.
  • the control device as described in (1) to (4) includes a light emission control unit that controls light emission of the light source, and the setting unit sets, as the photographing condition, whether or not the light source controlled by the light emission control unit emits light, or the amount of light to be emitted.
  • An imaging apparatus comprising: an image sensor having a first region that images light incident through the imaging optical system under a first imaging condition to generate first image data, and a second region that images the incident light under a second imaging condition different from the first imaging condition to generate second image data; a selection unit that selects at least one of the first image data and the second image data generated by the image sensor; and a setting unit that sets a shooting condition based on the selected image data.
  • the setting unit sets the imaging condition based on a partial region of the imaging unit including the first region and the second region.
  • the selection unit selects the image data output from whichever of the first region and the second region occupies the larger area within the partial region.
  • the selection unit selects the first image data and the second image data, and an imaging condition for the first region is set based on the first image data while an imaging condition for the second region is set based on the second image data.
  • the setting unit sets an exposure condition as the imaging condition.
  • the imaging device as described in (7) to (10) includes a light emission control unit that controls light emission of a light source, and the setting unit sets, as the photographing condition, whether or not the light source controlled by the light emission control unit emits light, or the amount of light to be emitted.
  • the imaging element can capture an image while changing the imaging condition for each unit region of the imaging surface, generating first image data from a first region composed of at least one unit region that images the light incident through the optical system under a first imaging condition and second image data from a second region composed of at least one unit region that images the incident light under a second imaging condition different from the first imaging condition; the selection unit is provided for each unit region or for each composite region having a plurality of unit regions, and selects image data from the corresponding unit region or from the unit regions in the corresponding composite region.
  • the imaging element is provided on a first semiconductor substrate, and the plurality of data selection units are provided on a second semiconductor substrate.
  • the imaging device according to (14) wherein the first semiconductor substrate and the second semiconductor substrate are stacked.
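  • The area-based rule in (3) and (9) above can be pictured with the following sketch: within the partial region used for setting the imaging condition (for example, a metering window), the image data of whichever of the first and second regions occupies the larger area is selected. The helper names and the region-ID map are illustrative assumptions.

    import numpy as np

    def select_for_setting(partial_pixels, partial_region_ids):
        """partial_region_ids marks each pixel of the partial region as 1 (first region)
        or 2 (second region); keep the data of the region with the larger pixel count."""
        count_first = int(np.sum(partial_region_ids == 1))
        count_second = int(np.sum(partial_region_ids == 2))
        chosen = 1 if count_first >= count_second else 2
        return partial_pixels[partial_region_ids == chosen], chosen

    partial_pixels = np.random.randint(0, 4096, (6, 6))
    partial_region_ids = np.ones((6, 6), dtype=int)
    partial_region_ids[:, 4:] = 2      # the second region covers the last two columns
    data, chosen = select_for_setting(partial_pixels, partial_region_ids)
    # chosen == 1: the imaging condition is then set using first-region data only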

Abstract

An imaging device according to the invention comprises an imaging element having an imaging region for imaging a subject, a setting unit for setting the imaging condition of the imaging region, a selection unit that selects, from among the pixels included in the imaging region, the pixels to be used for interpolation, and a control unit that controls the setting of imaging conditions by the setting unit. Using a signal interpolated by means of the signal output from the pixels selected by the selection unit, the control unit controls the setting of the imaging condition set in the imaging region. In the selection unit, at least some of the pixels selected on the basis of the imaging condition set by the setting unit are different.
PCT/JP2016/078316 2015-09-30 2016-09-26 Dispositif de commande et dispositif d'imagerie WO2017057295A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2017543273A JP6589989B2 (ja) 2015-09-30 2016-09-26 撮像装置

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015195285 2015-09-30
JP2015-195285 2015-09-30

Publications (1)

Publication Number Publication Date
WO2017057295A1 true WO2017057295A1 (fr) 2017-04-06

Family

ID=58423467

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/078316 WO2017057295A1 (fr) 2015-09-30 2016-09-26 Dispositif de commande et dispositif d'imagerie

Country Status (2)

Country Link
JP (1) JP6589989B2 (fr)
WO (1) WO2017057295A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012256964A (ja) * 2011-06-07 2012-12-27 Nikon Corp 撮像装置
JP2014179911A (ja) * 2013-03-15 2014-09-25 Nikon Corp 撮像装置
JP2015041971A (ja) * 2013-08-23 2015-03-02 株式会社ニコン 撮像素子および撮像装置
JP2015056710A (ja) * 2013-09-11 2015-03-23 キヤノン株式会社 撮像装置、およびその制御方法、プログラム、記憶媒体

Also Published As

Publication number Publication date
JPWO2017057295A1 (ja) 2018-07-19
JP6589989B2 (ja) 2019-10-16

Similar Documents

Publication Publication Date Title
JP7052850B2 (ja) 撮像装置
WO2017057279A1 (fr) Dispositif de formation d'image, dispositif de traitement d'image et dispositif d'affichage
WO2017170723A1 (fr) Dispositif de capture d'image, dispositif de traitement d'images et appareil électronique
WO2017170716A1 (fr) Dispositif de capture d'image, dispositif de traitement d'image et appareil électronique
WO2017057492A1 (fr) Dispositif d'imagerie et dispositif de traitement d'image
JP2018160830A (ja) 撮像装置
JPWO2017170717A1 (ja) 撮像装置、焦点調節装置、および電子機器
JP6589989B2 (ja) 撮像装置
JP6604385B2 (ja) 撮像装置
JP6589988B2 (ja) 撮像装置
WO2017057494A1 (fr) Dispositif d'imagerie et dispositif de traitement d'images
JP6521085B2 (ja) 撮像装置、および制御装置
WO2017057267A1 (fr) Dispositif d'imagerie et dispositif de détection de mise au point
WO2017057280A1 (fr) Dispositif d'imagerie et dispositif de détection de sujet
WO2017170719A1 (fr) Dispositif de capture d'image, et appareil électronique
WO2017057493A1 (fr) Dispositif d'imagerie et dispositif de traitement d'image
WO2017057495A1 (fr) Dispositif d'imagerie et dispositif de traitement d'image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16851466

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2017543273

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16851466

Country of ref document: EP

Kind code of ref document: A1