WO2015001646A1 - Electronic device, electronic device control method, and control program
- Publication number: WO2015001646A1 (international application PCT/JP2013/068381)
- Authority: WIPO (PCT)
- Prior art keywords: unit, image, imaging, region, area
Classifications
- H01L27/14634: Imager structures; assemblies, i.e. hybrid structures (semiconductor devices sensitive to light)
- H04N5/265: Studio circuits, mixing
- H04N5/2624: Studio circuits for obtaining an image composed of whole input images, e.g. splitscreen
- H04N23/62: Control of camera parameters via user interfaces
- H04N25/40: Extracting pixel data from image sensors by controlling scanning circuits
- H04N25/42: Extracting pixel data by switching between different modes of operation using different resolutions or aspect ratios
- H04N25/44: Extracting pixel data by partially reading an SSIS array
- H04N25/535: Control of the integration time, using differing integration times for different sensor regions, by dynamic region selection
- H04N25/583: Control of the dynamic range involving two or more exposures acquired simultaneously with different integration times
- H04N25/75: Circuitry for providing, modifying or processing image signals from the pixel array
- H04N25/767: Addressed sensors, horizontal readout lines, multiplexers or registers
- H04N25/78: Readout circuits for addressed sensors, e.g. output amplifiers or A/D converters
- H04N25/79: Arrangements of circuitry divided between different or multiple substrates, chips or circuit boards, e.g. stacked image sensors
Definitions
- the present invention relates to an electronic device, an electronic device control method, and a control program.
- an electronic device including an imaging element in which a back-illuminated imaging chip and a signal processing chip are stacked (hereinafter, this imaging element is referred to as a multilayer imaging element) has been proposed (for example, see Patent Document 1).
- in the multilayer imaging element, the back-illuminated imaging chip and the signal processing chip are laminated and connected via micro bumps for each block unit including a plurality of pixels.
- An object of an aspect of the present invention is to generate a plurality of types of images of the same subject.
- an electronic apparatus according to one aspect includes a drive control unit that drives and controls the imaging device, a dividing unit that divides the imaging region of the imaging device into at least a first region and a second region, and an image generation unit that generates, for the same subject, a first image obtained by imaging the first region and a second image obtained by imaging the second region.
- a method for controlling an electronic device having an imaging unit includes dividing the imaging region of the imaging device of the imaging unit into at least a first region and a second region, and generating, for the same subject, a first image obtained by imaging the first region and a second image obtained by imaging the second region.
- a control program causes a computer to execute a division process for dividing the imaging area of the imaging element of the imaging unit into at least a first area and a second area, and an image generation process for generating, for the same subject, a first image obtained by imaging the first area and a second image obtained by imaging the second area.
- FIG. 1 is a cross-sectional view of a multilayer image sensor.
- the multilayer image sensor 100 is described in Japanese Patent Application No. 2012-139026 filed earlier by the applicant of the present application.
- the imaging device 100 includes an imaging chip 113 that outputs a pixel signal corresponding to incident light, a signal processing chip 111 that processes the pixel signal, and a memory chip 112 that stores the pixel signal.
- the imaging chip 113, the signal processing chip 111, and the memory chip 112 are stacked, and are electrically connected to each other by a conductive bump 109 such as Cu.
- incident light is incident mainly in the positive direction of the Z axis indicated by the white arrow.
- the surface on the side where incident light is incident is referred to as a back surface.
- the left direction of the paper orthogonal to the Z axis is the X axis plus direction
- the front side of the paper orthogonal to the Z axis and X axis is the Y axis plus direction.
- in subsequent figures, the coordinate axes are displayed so that the orientation of each figure can be understood with reference to the coordinate axes of FIG. 1.
- the imaging chip 113 is a back-illuminated MOS image sensor.
- the PD layer 106 is disposed on the back side of the wiring layer 108.
- the PD layer 106 includes a plurality of photodiodes (Photodiode; hereinafter referred to as PD) 104 that are two-dimensionally arranged and accumulate electric charges according to incident light, and a transistor 105 provided corresponding to each PD 104.
- a color filter 102 is provided on the incident light incident side of the PD layer 106 via a passivation film 103.
- the color filter 102 is a filter that passes a specific wavelength region of visible light.
- the color filter 102 has a plurality of types that transmit different wavelength regions, and has a specific arrangement corresponding to each of the PDs 104. The arrangement of the color filter 102 will be described later.
- a set of the color filter 102, the PD 104, and the transistor 105 forms one pixel.
- a microlens 101 is provided on the incident light incident side of the color filter 102 corresponding to each pixel.
- the microlens 101 condenses incident light toward the corresponding PD 104.
- the wiring layer 108 includes a wiring 107 that transmits a pixel signal from the PD layer 106 to the signal processing chip 111.
- the wiring 107 may be multilayer, and a passive element and an active element may be provided.
- a plurality of bumps 109 are disposed on the surface of the wiring layer 108. The plurality of bumps 109 are aligned with the plurality of bumps 109 provided on the opposing surface of the signal processing chip 111. Then, by pressing the imaging chip 113 and the signal processing chip 111, the aligned bumps 109 are joined and electrically connected.
- a plurality of bumps 109 are arranged on the mutually facing surfaces of the signal processing chip 111 and the memory chip 112. These bumps 109 are aligned with each other. Then, when the signal processing chip 111 and the memory chip 112 are pressurized or the like, the aligned bumps 109 are joined and electrically connected.
- the bonding between the bumps 109 is not limited to Cu bump bonding by solid phase diffusion, and micro bump bonding by solder melting may be employed. Further, for example, about one bump 109 may be provided for one unit group described later. Therefore, the size of the bump 109 may be larger than the pitch of the PD 104. In addition, a bump larger than the bump 109 corresponding to the pixel region may be provided in a peripheral region other than the pixel region where the pixels are arranged (the pixel region 113A shown in FIG. 2).
- the signal processing chip 111 has a TSV (Through-Silicon Via) 110 that connects circuits provided on the front and back surfaces to each other.
- the TSV 110 is provided in the peripheral area.
- the TSV 110 may be provided in the peripheral area of the imaging chip 113 or the memory chip 112.
- FIG. 2 is a diagram for explaining a pixel array and a unit group of the imaging chip.
- FIG. 2 particularly shows a state where the imaging chip 113 is observed from the back side.
- An area where pixels are arranged in the imaging chip 113 is referred to as a pixel area 113A.
- in the pixel area 113A, 20 million or more pixels are arranged in a matrix.
- sixteen adjacent pixels (4 pixels × 4 pixels) form one unit group 131.
- the grid lines in FIG. 2 indicate a concept in which adjacent pixels are grouped to form a unit group 131.
- the number of pixels forming the unit group 131 is not limited to this, and may be about 1000, for example, 32 pixels ⁇ 64 pixels, or more or less.
- the unit group 131 includes four so-called Bayer arrays arranged two vertically and two horizontally, each composed of four pixels: green pixels Gb and Gr, a blue pixel B, and a red pixel R.
- the green pixel is a pixel having a green filter as the color filter 102, and receives light in the green wavelength band of incident light.
- the blue pixel is a pixel having a blue filter as the color filter 102 and receives light in the blue wavelength band.
- the red pixel is a pixel having a red filter as the color filter 102 and receives light in the red wavelength band.
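- as an illustration, the unit group can be pictured as four 2 × 2 Bayer tiles. The following minimal Python sketch is not part of the patent; the tile orientation (Gb/B over R/Gr) is an assumption, since the text only states that each tile holds Gb, Gr, B, and R pixels:

```python
# Illustrative sketch: the color-filter layout of one 4x4 unit group,
# tiled from a 2x2 Bayer pattern. The orientation (Gb/B over R/Gr) is
# an assumption for illustration.
BAYER_TILE = [["Gb", "B"],
              ["R", "Gr"]]

def unit_group_layout(tiles_per_side=2):
    """Tile the 2x2 Bayer pattern into a square unit group."""
    size = 2 * tiles_per_side
    return [[BAYER_TILE[r % 2][c % 2] for c in range(size)]
            for r in range(size)]

for row in unit_group_layout():
    print(" ".join(f"{p:2s}" for p in row))
```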
- FIG. 3 is a circuit diagram corresponding to the unit group of the imaging chip.
- a rectangle surrounded by a dotted line typically represents a circuit corresponding to one pixel. Note that at least some of the transistors described below correspond to the transistor 105 in FIG.
- the unit group 131 is formed of 16 pixels.
- the 16 PDs 104 corresponding to the respective pixels are each connected to the transfer transistor 302.
- the gate of each transfer transistor 302 is connected to a TX wiring 307 to which a transfer pulse is supplied.
- the TX wiring 307 is commonly connected to the 16 transfer transistors 302.
- the drain of each transfer transistor 302 is connected to the source of the corresponding reset transistor 303, and the so-called floating diffusion FD (charge detection unit) between the drain of the transfer transistor 302 and the source of the reset transistor 303 is connected to the gate of the amplification transistor 304.
- the drain of each reset transistor 303 is connected to a Vdd wiring 310 to which a power supply voltage is supplied.
- the gate of each reset transistor 303 is connected to a reset wiring 306 to which a reset pulse is supplied.
- the reset wiring 306 is commonly connected to the 16 reset transistors 303.
- the drain of each amplification transistor 304 is connected to the Vdd wiring 310 to which a power supply voltage is supplied.
- the source of each amplification transistor 304 is connected to the drain of each corresponding selection transistor 305.
- the gate of each selection transistor 305 is connected to a decoder wiring 308 to which a selection pulse is supplied.
- the decoder wiring 308 is provided independently for each of the 16 selection transistors 305.
- the source of each selection transistor 305 is connected to a common output wiring 309.
- the load current source 311 supplies current to the output wiring 309; that is, the circuit that drives the output wiring 309 operates as a source follower. Note that the load current source 311 may be provided on the imaging chip 113 side or on the signal processing chip 111 side.
- when a reset pulse is applied to the reset transistor 303 through the reset wiring 306 and, at the same time, a transfer pulse is applied to the transfer transistor 302 through the TX wiring 307, the potentials of the PD 104 and the floating diffusion FD are reset.
- the PD 104 converts the incident light received into charges and accumulates them. Thereafter, when the transfer pulse is applied again without the reset pulse being applied, the charge accumulated in the PD 104 is transferred to the floating diffusion FD. As a result, the potential of the floating diffusion FD changes from the reset potential to the signal potential after charge accumulation.
- when a selection pulse is applied to the selection transistor 305 through the decoder wiring 308, the change in the signal potential of the floating diffusion FD is transmitted to the output wiring 309 through the amplification transistor 304 and the selection transistor 305.
- the reset wiring 306 and the TX wiring 307 are common to the 16 pixels forming the unit group 131. That is, the reset pulse and the transfer pulse are simultaneously applied to all 16 pixels. Therefore, all the pixels forming the unit group 131 start charge accumulation at the same timing and end charge accumulation at the same timing. However, the pixel signal corresponding to the accumulated charge is selectively output to the output wiring 309 by sequentially applying the selection pulse to each selection transistor 305. In addition, the reset wiring 306, the TX wiring 307, and the output wiring 309 are provided separately for each unit group 131.
- the charge accumulation time can be controlled for each unit group 131.
- pixel signals with different charge accumulation times can be output between the unit groups 131. More specifically, while one unit group 131 performs charge accumulation once, another unit group 131 can repeat charge accumulation several times and output a pixel signal each time, so that frames for a moving image can be output at a different frame rate for each unit group 131.
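- the frame-rate consequence can be sketched in a few lines of Python; this is a hypothetical illustration, not from the patent, and the interval and accumulation counts are assumptions:

```python
# Hypothetical sketch: per-unit-group frame rates resulting from
# different numbers of charge accumulations per capture interval.
# A unit group that reads out once per accumulation emits one frame
# per accumulation.
def frames_per_second(interval_s, accumulations_per_interval):
    """Frames per second emitted by a unit group that reads out once
    per accumulation."""
    return accumulations_per_interval / interval_s

interval = 1 / 30                        # assumed base capture interval
print(frames_per_second(interval, 1))    # group A: 30.0 fps
print(frames_per_second(interval, 4))    # group B: 120.0 fps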
- FIG. 4 is a block diagram showing a functional configuration of the image sensor.
- the analog multiplexer 411 sequentially selects the 16 PDs 104 that form the unit group 131. Then, the multiplexer 411 outputs each pixel signal of the 16 PDs 104 to the output wiring 309 provided corresponding to the unit group 131.
- the multiplexer 411 is formed on the imaging chip 113 together with the PD 104.
- the analog pixel signal output via the multiplexer 411 is amplified by the amplifier 412 formed in the signal processing chip 111, and is then processed by the signal processing circuit 413, also formed in the signal processing chip 111, which performs correlated double sampling (CDS) and analog-to-digital (A/D) conversion. The correlated double sampling reduces the noise of the pixel signal. The A/D-converted pixel signal is transferred to the demultiplexer 414 and stored in the pixel memory 415 corresponding to each pixel. The demultiplexer 414 and the pixel memory 415 are formed in the memory chip 112.
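- numerically, correlated double sampling amounts to subtracting the reset-level sample from the post-accumulation sample before quantization; a minimal sketch follows, in which all values and names are illustrative assumptions, not from the patent:

```python
import numpy as np

def cds_and_adc(reset_sample, signal_sample, bits=12, full_scale=1.0):
    """Correlated double sampling followed by N-bit A/D conversion.
    Subtracting the reset sample cancels offset noise common to both
    samples, which is why CDS reduces pixel-signal noise."""
    diff = np.clip(signal_sample - reset_sample, 0.0, full_scale)
    return np.round(diff / full_scale * (2 ** bits - 1)).astype(int)

reset = np.array([0.012, 0.011, 0.013])   # reset-level samples (assumed volts)
signal = np.array([0.512, 0.261, 0.013])  # post-accumulation samples
print(cds_and_adc(reset, signal))         # 12-bit digital codes
```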
- the arithmetic circuit 416 processes the pixel signal stored in the pixel memory 415 and passes it to the subsequent image processing unit.
- the arithmetic circuit 416 may be provided in the signal processing chip 111 or may be provided in the memory chip 112. Note that FIG. 4 shows connections for one unit group 131, but actually these exist for each unit group 131 and operate in parallel. However, the arithmetic circuit 416 may not exist for each unit group 131. For example, one arithmetic circuit 416 may perform sequential processing while sequentially referring to the values in the pixel memory 415 corresponding to each unit group 131.
- the output wiring 309 is provided corresponding to each of the unit groups 131.
- the image sensor 100 has an image pickup chip 113, a signal processing chip 111, and a memory chip 112 stacked. For this reason, by using the electrical connection between the chips using the bumps 109 for the output wirings 309, the wirings can be routed without enlarging each chip in the surface direction.
- the pixel region 113A of the image sensor 100 is divided into a plurality of blocks.
- the plurality of blocks are defined to include at least one unit group 131 per block.
- pixels included in each block are controlled by different control parameters. That is, pixel signals having different control parameters are acquired for a pixel group included in a block and a pixel group included in another block.
- the control parameter include a charge accumulation time or accumulation count, a frame rate, a gain, a thinning rate, the number of addition rows or addition columns to which pixel signals are added, the number of digitization bits, and the like.
- the control parameter may be a parameter in image processing after obtaining an image signal from a pixel.
- the charge accumulation time refers to the time from when the PD 104 starts accumulating charge until it ends.
- the number of times of charge accumulation refers to the number of times the PD 104 accumulates charges per unit time.
- the frame rate is a value representing the number of frames processed (displayed or recorded) per unit time in a moving image.
- the unit of the frame rate is expressed by fps (Frames Per Second). The higher the frame rate, the smoother the movement of the subject (that is, the object to be imaged) in the moving image.
- the gain means the gain factor (amplification factor) of the amplifier 412, and relates to the ISO sensitivity.
- ISO sensitivity is a photographic film standard established by ISO and represents how much light the photographic film can record.
- ISO sensitivity is also used when expressing the sensitivity of the image sensor 100.
- the ISO sensitivity is a value representing the ability of the image sensor 100 to capture light.
- Increasing the gain improves the ISO sensitivity. For example, when the gain is doubled, the electrical signal (pixel signal) is also doubled, and the brightness is appropriate even when the amount of incident light is half.
- when the gain is increased, the noise included in the electrical signal is also amplified, so that noise increases.
- the thinning rate refers to the ratio of the number of pixels from which pixel signals are not read to the total number of pixels in a predetermined area. For example, a thinning rate of 0 for a predetermined area means that pixel signals are read from all pixels in that area, and a thinning rate of 0.5 means that pixel signals are read from half of the pixels in that area. Specifically, when the unit group 131 is a Bayer array, pixels from which pixel signals are read and pixels from which they are not read are set alternately in the vertical direction, every other Bayer-array unit, that is, every two pixels (two rows).
- the resolution of the image is reduced when the pixel signal readout is thinned out.
- since 20 million or more pixels are arranged in the image sensor 100, even if thinning is performed at a rate of 0.5, an image can still be displayed with 10 million or more pixels; for this reason, the reduction in resolution is considered not to be a concern for the user (photographer).
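- the two-row (one Bayer unit) thinning rule can be sketched as follows; this is a minimal illustration under assumed names, not the patent's implementation:

```python
import numpy as np

def thin_rows_bayer(frame, thinning_rate):
    """Read rows in two-row bands (one Bayer unit each), skipping bands
    so that the fraction of rows not read approximates thinning_rate.
    A rate of 0 reads every row; 0.5 reads every other band."""
    if thinning_rate == 0.0:
        return frame
    keep_every = int(round(1 / (1 - thinning_rate)))
    bands = frame.reshape(-1, 2, frame.shape[1])  # two-row Bayer bands
    return bands[::keep_every].reshape(-1, frame.shape[1])

frame = np.arange(8 * 4).reshape(8, 4)
print(thin_rows_bayer(frame, 0.5).shape)  # (4, 4): half the rows read
```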
- the number of added rows refers to the number of vertical pixels (number of rows) to be added when pixel signals of pixels adjacent in the vertical direction are added.
- the number of added columns refers to the number of horizontal pixels (number of columns) to be added when pixel signals of pixels adjacent in the horizontal direction are added.
- Such addition processing is performed in the arithmetic circuit 416, for example.
- the arithmetic circuit 416 performs the process of adding the pixel signals of a predetermined number of pixels adjacent in the vertical direction or the horizontal direction, thereby obtaining the same effect as the process of reading out the pixel signals by thinning out at a predetermined thinning rate.
- the average value may be calculated by dividing the added value by the number of rows or columns added by the arithmetic circuit 416.
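- the addition (or averaging) over adjacent rows and columns can be sketched with a block reshape; the following is an illustrative stand-in for the arithmetic circuit 416, with assumed names:

```python
import numpy as np

def add_adjacent(frame, rows, cols, average=False):
    """Sum (or average) pixel signals over rows x cols blocks of
    adjacent pixels, mimicking the addition described for the
    arithmetic circuit 416."""
    h, w = frame.shape
    blocks = frame[:h - h % rows, :w - w % cols]
    blocks = blocks.reshape(h // rows, rows, w // cols, cols)
    out = blocks.sum(axis=(1, 3))
    return out / (rows * cols) if average else out

frame = np.ones((8, 8))
print(add_adjacent(frame, 2, 2))        # 4x4 array of sums (4.0 each)
print(add_adjacent(frame, 2, 2, True))  # 4x4 array of means (1.0 each)
```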
- the digitization bit number refers to the bit number when the signal processing circuit 413 converts an analog signal into a digital signal in A / D conversion. As the number of bits of the digital signal increases, brightness, color change, and the like are expressed in more detail.
- the accumulation condition refers to a condition related to charge accumulation in the image sensor 100.
- the accumulation condition refers to the charge accumulation time or number of accumulations, the frame rate, and the gain among the control parameters described above. Since the frame rate can change according to the charge accumulation time and the number of times of accumulation, the frame rate is included in the accumulation condition. Further, the amount of light for proper exposure changes according to the gain, and the charge accumulation time or number of times of accumulation can also change according to the amount of light for proper exposure. For this reason, the gain is included in the accumulation condition.
- the imaging condition is a condition related to imaging of the subject.
- the imaging condition refers to a control parameter including the above accumulation condition.
- the imaging conditions include not only control parameters for controlling the image sensor 100 (for example, the charge accumulation time or accumulation count, the frame rate, and the gain), but also control parameters for controlling the reading of signals from the image sensor 100 (for example, the thinning rate) and control parameters for processing signals from the image sensor 100 (for example, the number of addition rows or addition columns to which pixel signals are added, the number of digitization bits, and parameters used by the image processing unit 30 described later for image processing).
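- the parameter taxonomy above can be summarized as a per-block record; a hypothetical sketch follows, in which the field names and default values are assumptions, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class ImagingConditions:
    """Hypothetical per-block imaging conditions. The accumulation
    conditions proper are the accumulation time/count, frame rate and
    gain; the remaining fields control readout and signal processing."""
    accumulation_time_s: float = 1 / 60  # charge accumulation time
    accumulation_count: int = 1          # accumulations per unit time
    frame_rate_fps: float = 30.0
    gain: float = 1.0                    # amplifier 412 gain factor
    thinning_rate: float = 0.0           # fraction of pixels not read
    addition_rows: int = 1               # adjacent rows added together
    addition_cols: int = 1               # adjacent columns added together
    digitization_bits: int = 12          # A/D resolution

# e.g. one block tuned for a fast-moving subject, another left at defaults
fast = ImagingConditions(accumulation_time_s=1/1000, gain=4.0,
                         frame_rate_fps=120.0)
background = ImagingConditions()
```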
- FIG. 5 is a block diagram showing the configuration of the electronic device according to the first embodiment.
- the electronic device 1 illustrated in FIG. 5 is configured by devices such as a digital camera, a smartphone, a mobile phone, and a personal computer having an imaging function, for example.
- the electronic device 1 includes a lens unit 10, an imaging unit 20, an image processing unit 30, a work memory 40, a display unit 50, an operation unit 55, a recording unit 60, and a system control unit 70.
- the lens unit 10 is an imaging optical system composed of a plurality of lens groups.
- the lens unit 10 guides the light flux from the subject to the imaging unit 20.
- the lens unit 10 may be integrated with the electronic device 1 or may be an interchangeable lens that can be attached to and detached from the electronic device 1. Further, the lens unit 10 may have a built-in focus lens or a zoom lens.
- the imaging unit 20 includes an imaging device 100 and a driving unit 21.
- the drive unit 21 is a control circuit that controls driving of the image sensor 100 in accordance with an instruction from the system control unit 70.
- the drive unit 21 controls the charge accumulation time or accumulation count as a control parameter by controlling the timing (or timing cycle) at which the reset pulse and the transfer pulse are applied to the reset transistor 303 and the transfer transistor 302, respectively.
- the drive unit 21 controls the frame rate by controlling the timing (or timing cycle) at which the reset pulse, the transfer pulse, and the selection pulse are applied to the reset transistor 303, the transfer transistor 302, and the selection transistor 305, respectively.
- the drive unit 21 controls the thinning rate by setting pixels to which the reset pulse, the transfer pulse, and the selection pulse are applied.
- the drive unit 21 controls the ISO sensitivity of the image sensor 100 by controlling the gain (also referred to as gain factor or amplification factor) of the amplifier 412.
- the drive unit 21 sends an instruction to the arithmetic circuit 416 to set the number of added rows or the number of added columns to which pixel signals are added.
- the drive unit 21 sets the number of bits for digitization by sending an instruction to the signal processing circuit 413.
- the drive unit 21 performs block setting in the pixel area (imaging area) 113A of the image sensor 100. In this way, the drive unit 21 functions as an image sensor control unit that causes the image sensor 100 to capture an image under different imaging conditions for each of a plurality of blocks and to output pixel signals.
- the system control unit 70 instructs the drive unit 21 on the block position, shape, range, and the like.
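- the division of labor just described can be pictured as a small per-block control interface; the sketch below is purely illustrative, and the class and method names are assumptions, not the patent's API:

```python
class DriveUnitSketch:
    """Illustrative stand-in for the drive unit 21: records, per block,
    the position/shape/range instructed by the system control unit and
    the conditions under which that block is driven."""
    def __init__(self):
        self.blocks = {}

    def set_block(self, block_id, unit_groups):
        """Define a block as a set of unit-group coordinates."""
        self.blocks[block_id] = {"unit_groups": unit_groups}

    def set_conditions(self, block_id, **conditions):
        """Attach imaging conditions (accumulation time, gain, thinning
        rate, ...); pulse timings would then be derived from these."""
        self.blocks[block_id].update(conditions)

drive = DriveUnitSketch()
drive.set_block("region1", [(0, 0), (0, 2)])  # e.g. odd-column unit groups
drive.set_conditions("region1", accumulation_time_s=1/500, gain=2.0)
```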
- the imaging unit 20 delivers the pixel signals from the image sensor 100 to the image processing unit 30.
- the image processing unit 30 uses the work memory 40 as a work space, performs various image processing on the RAW data including the pixel signals of the respective pixels, and generates image data.
- the image processing unit 30 includes a first image processing unit 30A and a second image processing unit 30B. When the load of image processing is high, processing is assigned to each of the first image processing unit 30A and the second image processing unit 30B. Then, the first image processing unit 30A and the second image processing unit 30B each execute the assigned processing in parallel.
- the system control unit 70 (specifically, the dividing unit 71 shown in FIG. 7) divides the pixel region (imaging region) 113A of the image sensor 100 into at least a first region and a second region. Further, the system control unit 70 (specifically, the drive control unit 72 shown in FIG. 7) drives and controls the image sensor 100 so as to perform imaging under different imaging conditions in the first region and the second region.
- the first image processing unit 30A executes image processing of a signal from the first region.
- the second image processing unit 30B executes image processing of signals from the second area.
- the pixel region (imaging region) 113A of the image sensor 100 is not limited to being divided into two regions (a first region and a second region); it can be divided into a plurality of regions such as a first region, a second region, a third region, and so on.
- the image processing for the plurality of regions is appropriately assigned to the first image processing unit 30A and the second image processing unit 30B.
- the image processing assignment may be determined in advance according to the number or range of divided areas, or may be determined based on the number or range of areas divided by the system control unit 70.
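- the parallel assignment can be sketched with a two-worker pool; this is a hypothetical illustration, and the function and pool names are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def process_region(raw_region):
    """Stand-in for one image processing unit: turn one region's RAW
    pixel signals into image data (details elided)."""
    return {"source": raw_region, "processed": True}

def process_regions_in_parallel(raw_regions):
    """One worker per region, mirroring the parallel operation of the
    first and second image processing units."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        return list(pool.map(process_region, raw_regions))

print(process_regions_in_parallel(["raw_region_1", "raw_region_2"]))
```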
- the image processing unit 30 executes various image processing. For example, the image processing unit 30 generates an RGB image signal by performing color signal processing (color tone correction) on a signal obtained by the Bayer array. The image processing unit 30 performs image processing such as white balance adjustment, sharpness adjustment, gamma correction, and gradation adjustment on the RGB image signal. Further, the image processing unit 30 performs a process of compressing in a predetermined compression format (JPEG format, MPEG format, etc.) as necessary. The image processing unit 30 outputs the generated image data to the recording unit 60. In addition, the image processing unit 30 outputs the generated image data to the display unit 50.
- the image processing unit 30 performs processing for detecting a main subject from image data in addition to the processing described above.
- the “main subject” refers to a subject that is an object to be imaged, a subject that is noticed by the user (photographer), or a subject that is estimated to be noticed by the user.
- the number of main subjects is not limited to one in the image data, and there may be a plurality of main subjects (see, for example, FIG. 14).
- Parameters that are referred to when the image processing unit 30 performs image processing are also included in the control parameters (imaging conditions). For example, parameters such as color signal processing (tone correction), white balance adjustment, gradation adjustment, and compression rate are included in the control parameters.
- the signal read from the image sensor 100 changes according to the charge accumulation time, and the parameters referred to when performing image processing also change according to the change in the signal.
- the image processing unit 30 sets different control parameters for each block, and executes image processing such as color signal processing based on these control parameters.
- the image processing unit 30 extracts frames at predetermined timings from among a plurality of frames obtained in time series from the imaging unit 20. Alternatively, the image processing unit 30 discards frames at a predetermined timing among a plurality of frames obtained in time series from the imaging unit 20. Thereby, since the amount of data can be reduced, it is possible to reduce the load of subsequent processing. Further, the image processing unit 30 calculates one or a plurality of frames to be interpolated between the frames based on a plurality of frames obtained in time series from the imaging unit 20. Then, the image processing unit 30 adds the calculated one or more frames between the frames. As a result, a moving image with smoother motion can be reproduced during moving image reproduction.
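- the frame extraction and interpolation steps can be sketched as follows; this is a minimal illustration using simple linear interpolation, since the patent does not specify the interpolation method:

```python
import numpy as np

def decimate(frames, keep_every=2):
    """Discard frames at a fixed timing to reduce the data amount."""
    return frames[::keep_every]

def interpolate(frames):
    """Insert one linearly interpolated frame between each adjacent
    pair, a simple stand-in for whatever interpolation the image
    processing unit 30 performs."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        out.extend([a, (a + b) / 2.0])
    out.append(frames[-1])
    return out

frames = [np.full((2, 2), v, dtype=float) for v in (0, 10, 20)]
print(len(decimate(frames)))     # 2 of 3 frames kept
print(len(interpolate(frames)))  # 5 frames after interpolation
```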
- although the drive unit 21 is configured to control the thinning rate in the present embodiment, the configuration is not limited to this.
- for example, the drive unit 21 may read pixel signals from all pixels, and the image processing unit 30 or the arithmetic circuit 416 may control the thinning rate by discarding predetermined pixel signals from among the read pixel signals.
- the work memory 40 temporarily stores image data and the like when image processing by the image processing unit 30 is performed.
- the display unit 50 is configured by a liquid crystal display panel, for example. As shown in FIG. 5, the display unit 50 includes a first display unit 51, a first touch panel 52, a second display unit 53, and a second touch panel 54.
- the first display unit 51 displays images (still images, moving images, live view images) captured by the imaging unit 20 and various types of information.
- the first touch panel 52 is formed on the display screen of the first display unit 51.
- the first touch panel 52 outputs a signal indicating a position touched by the user to the system control unit 70 when the user selects an image (a thumbnail image to be described later: see FIG. 13).
- the second display unit 53 displays images (still images, moving images, live view images) captured by the imaging unit 20 and various types of information.
- the second touch panel 54 is formed on the display screen of the second display unit 53.
- the second touch panel 54 outputs a signal indicating a position touched by the user to the system control unit 70 when the user selects an image or the like.
- the operation unit 55 includes a release switch, a moving image switch, and various other operation switches operated by the user.
- the operation unit 55 outputs a signal corresponding to the operation by the user to the system control unit 70.
- the recording unit 60 has two card slots into which two storage media (a first recording medium 61 and a second recording medium 62) such as a memory card can be mounted.
- the recording unit 60 stores the image data and various data generated by the image processing unit 30 in the recording media (first recording medium 61 and second recording medium 62) mounted in the card slot.
- the first image processing unit 30A and the second image processing unit 30B execute in parallel the image processing based on the signal from the first region and the image processing based on the signal from the second region, respectively.
- the first recording medium 61 stores image data based on a signal from the first area in response to an operation of a release switch or a moving image switch. Further, the second recording medium 62 stores image data based on the signal from the second area in accordance with the operation of the release switch or the moving image switch.
- the recording unit 60 has an internal memory. The recording unit 60 can also record the image data and various data generated by the image processing unit 30 in the internal memory.
- the system control unit 70 controls the overall processing and operation of the electronic device 1.
- the system control unit 70 includes a CPU (Central Processing Unit) 70A.
- the system control unit 70 divides the imaging surface (pixel area 113A) of the imaging device 100 (imaging chip 113) into a plurality of blocks, and acquires images with different charge accumulation times (or charge accumulation counts), frame rates, and gains among the blocks. For this purpose, the system control unit 70 instructs the drive unit 21 on the block positions, shapes, and ranges, and on the accumulation conditions for each block.
- the system control unit 70 also causes images to be acquired with different thinning rates, different numbers of addition rows or addition columns to which pixel signals are added, and different numbers of digitization bits between blocks.
- the system control unit 70 instructs the drive unit 21 on the imaging conditions for each block (the thinning rate, the number of added rows or columns to which pixel signals are added, and the number of digitization bits). Further, the image processing unit 30 executes image processing under imaging conditions (control parameters such as color signal processing, white balance adjustment, gradation adjustment, and compression rate) that are different between blocks. For this reason, the system control unit 70 instructs the image processing unit 30 on the imaging conditions for each block (control parameters such as color signal processing, white balance adjustment, gradation adjustment, and compression rate).
- the system control unit 70 causes the recording unit 60 to record the image data generated in the image processing unit 30. The system control unit 70 also displays images by outputting the image data generated in the image processing unit 30 to the display unit 50 (one or both of the first display unit 51 and the second display unit 53). Alternatively, the system control unit 70 reads out image data recorded in the recording unit 60 and outputs it to the display unit 50 for display. The images displayed on the display unit 50 include still images, moving images, and live view images.
- the live view image is an image displayed on the display unit 50 by sequentially outputting the image data generated by the image processing unit 30 to the display unit 50.
- the live view image is used for the user to confirm the image of the subject imaged by the imaging unit 20.
- the live view image is also called a through image or a preview image.
- FIG. 6 is a diagram illustrating an appearance of a digital camera which is an example of an electronic device.
- FIG. 6 shows an external appearance of the electronic device (digital camera) 1 when viewed from the back.
- the first display unit 51 is a display panel having a square display screen.
- the first display unit 51 is provided on the back surface of the electronic device 1.
- the first touch panel 52 is formed on the display screen of the first display unit 51.
- the second display unit 53 is a display panel having a square display screen. An end portion of the second display portion 53 is rotatably connected by a hinge (not shown) provided on the back surface of the electronic device 1 and below the first display portion 51. As the second display unit 53 rotates with the hinge as a fulcrum, the first display unit 51 is opened and closed by the second display unit 53.
- the release switch 55a is a switch that is pressed by the user when capturing a still image.
- the mode dial 55b is a dial rotated by the user when setting various scene modes such as portrait, landscape, and night view.
- the moving image switch 55c is a switch pressed by the user when capturing a moving image.
- a multi-selector 55d is provided on the back surface of the electronic device 1 and on the side of the first display unit 51.
- the multi-selector 55d includes up/down/left/right arrow keys and an OK switch with which the user selects items on a menu (a menu for setting the shooting mode) displayed on the first display unit 51 or the second display unit 53.
- the operation unit 55 includes a release switch 55a, a mode dial 55b, a moving image switch 55c, and a multi selector 55d. Note that the operation unit 55 may include other switches.
- FIG. 7 is a functional block diagram of the image processing unit and the system control unit shown in FIG.
- the first image processing unit 30A includes an image generation unit 31A and a detection unit 32A.
- the image generation unit 31A generates image data by performing various types of image processing on the RAW data including the pixel signals of the pixels in the first region output from the imaging unit 20.
- the detection unit 32A detects the main subject from the image data generated by the image generation unit 31A. In the present embodiment, the detection unit 32A compares a plurality of image data obtained in time series from the live view image generated by the image generation unit 31A, and detects a moving subject (moving subject) as a main subject.
- the detection unit 32A detects a main subject using a face detection function as described in, for example, Japanese Patent Application Laid-Open No. 2010-16621 (US2010 / 0002940). In addition to the face detection, the detection unit 32A detects a human body included in the image data as a main subject as described in, for example, Japanese Patent Application Laid-Open No. 2010-16621 (US2010 / 0002940).
- the second image processing unit 30B includes an image generation unit 31B.
- the image generation unit 31B generates image data by performing various types of image processing on the RAW data including the pixel signals of the pixels in the second region output from the imaging unit 20.
- in the present embodiment, the second image processing unit 30B does not include a detection unit, but it may include one.
- alternatively, the first image processing unit 30A may omit the detection unit 32A, and the second image processing unit 30B may include the detection unit instead.
- the image generation unit 31A and the image generation unit 31B may be collectively referred to as an image generation unit 31.
- the system control unit 70 includes a dividing unit 71, a drive control unit 72, and a display control unit 73.
- the dividing unit 71 divides the pixel area (imaging area) 113A of the imaging element 100 into a plurality of areas in units of blocks.
- the dividing unit 71 divides the pixel region 113A into a plurality of regions based on a predetermined arrangement pattern in each block of the pixel region 113A (see FIGS. 8A to 8D).
- the drive control unit 72 sets imaging conditions for a plurality of areas. Further, the drive control unit 72 performs drive control of the image sensor 100 in accordance with the operation of the release switch 55a and the moving image switch 55c by the user.
- the drive control unit 72 also performs drive control of the image sensor 100 when a live view image is captured (that is, after the shooting operation starts following power-on).
- the display control unit 73 outputs the image data generated by the image generation unit 31 to the display unit 50, and displays images (still images, moving images, and live view images) on one or both of the first display unit 51 and the second display unit 53.
- the dividing unit 71, the drive control unit 72, and the display control unit 73 are realized by the CPU 70A executing processing based on the control program.
- FIG. 8 is a diagram showing an arrangement pattern in each block.
- FIG. 8A shows a first arrangement pattern in each block.
- FIG. 8B shows a second arrangement pattern in each block.
- FIG. 8C shows a third arrangement pattern in each block.
- FIG. 8D shows a fourth arrangement pattern in each block.
- the first arrangement pattern shown in FIG. 8A is an arrangement pattern in which the pixel region 113A is divided into two parts, a first region and a second region.
- in the first arrangement pattern, the first region is composed of blocks in odd-numbered columns (2m-1) and the second region is composed of blocks in even-numbered columns (2m), where m is a positive integer. That is, the blocks in the pixel region 113A are divided into odd columns and even columns.
- the second arrangement pattern shown in FIG. 8B is also an arrangement pattern in which the pixel area 113A is divided into two parts, the first area and the second area.
- in the second arrangement pattern, the first region is composed of blocks in odd-numbered rows (2n-1) and the second region is composed of blocks in even-numbered rows (2n), where n is a positive integer. That is, the blocks in the pixel region 113A are divided into odd rows and even rows.
- the third array pattern shown in FIG. 8C is also an array pattern in which the pixel region 113A is divided into two parts, the first region and the second region.
- in the third arrangement pattern, the first region is composed of blocks in odd rows (2n-1) within odd columns (2m-1) and blocks in even rows (2n) within even columns (2m).
- the second region is composed of blocks in odd rows (2n-1) within even columns (2m) and blocks in even rows (2n) within odd columns (2m-1). That is, the blocks in the pixel region 113A are divided into a checkered pattern.
- in this case, the first area and the second area are each composed not of contiguous blocks but of discrete blocks.
- the fourth arrangement pattern shown in FIG. 8D is an arrangement pattern in which the pixel area 113A is divided into three parts, a first area, a second area, and a third area.
- the first area is composed of blocks in the columns (3m-2) two before each column that is a multiple of 3, the second area is composed of blocks in the columns (3m-1) one before each column that is a multiple of 3, and the third area is composed of blocks in the columns (3m) that are multiples of 3.
- in FIG. 8, a small number of blocks are shown in the pixel region 113A to make the arrangement of blocks in each region easy to see; however, a larger number of blocks than shown in FIG. 8 may be set.
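- the four arrangement patterns reduce to parity rules on block row/column indices; the sketch below uses 0-based indices, whereas the text counts from 1, so "odd-numbered column (2m-1)" becomes col % 2 == 0:

```python
def region_of_block(row, col, pattern):
    """Region index (1-based) of a block under the arrangement
    patterns of FIG. 8, with 0-based block coordinates."""
    if pattern == 1:              # FIG. 8(A): split by column parity
        return 1 if col % 2 == 0 else 2
    if pattern == 2:              # FIG. 8(B): split by row parity
        return 1 if row % 2 == 0 else 2
    if pattern == 3:              # FIG. 8(C): checkered pattern
        return 1 if (row + col) % 2 == 0 else 2
    if pattern == 4:              # FIG. 8(D): three-way split by column
        return col % 3 + 1
    raise ValueError("unknown pattern")

# e.g. the checkered pattern over a 4x4 grid of blocks
print([[region_of_block(r, c, 3) for c in range(4)] for r in range(4)])
```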
- FIG. 9 is a flowchart for explaining a photographing operation executed by the system control unit.
- FIG. 10 is a flowchart for explaining the array pattern setting process.
- the system control unit 70 starts a photographing operation.
- the display control unit 73 displays the live view image captured by the imaging unit 20 on the first display unit 51, and displays a menu for setting the shooting mode on the second display unit 53.
- the drive control unit 72 drives and controls the image sensor 100 so as to capture an image at a low frame rate.
- the display control unit 73 may display the live view image on the second display unit 53 and display the menu on the first display unit 51. In addition, the display control unit 73 may display the live view image and the menu on the same display unit (the first display unit 51 or the second display unit 53).
- the user operates the multi-selector 55d and selects a shooting mode by selecting a menu displayed on the second display unit 53.
- the dividing unit 71 confirms the shooting mode selected by the operation of the multi selector 55d by the user (step S1).
- a still image mode for shooting a still image and a moving image mode for shooting a movie are provided as shooting modes.
- a first still image mode and a second still image mode are provided as still image modes.
- a first moving image mode and a second moving image mode are provided as moving image modes.
- the first still image mode is a shooting mode in which the image sensor 100 captures a still image of the subject with the pixel region 113A as one region, without the dividing unit 71 dividing the pixel region (imaging region) 113A of the imaging device 100.
- the first still image mode is a normal still image shooting mode that is generally performed.
- the second still image mode is a shooting mode in which the dividing unit 71 divides the pixel region 113A into a plurality of regions, and the image sensor 100 captures a still image of the same subject in each of the plurality of regions.
- processing for continuously capturing still images of the same subject in each of the plurality of regions can be executed in parallel.
- the second still image mode can perform higher-speed continuous shooting than the first still image mode.
- the second still image mode is also referred to as a high-speed continuous shooting mode or a still image-still image mixed mode.
- the first moving image mode is a shooting mode in which the image sensor 100 captures a moving image of the subject with the pixel region 113A as one region, without the dividing unit 71 dividing the pixel region (imaging region) 113A of the imaging device 100.
- the first moving image mode is a normal moving image shooting mode that is generally performed.
- the second moving image mode is a shooting mode in which the dividing unit 71 divides the pixel region 113A into a plurality of regions, and the image sensor 100 captures a still image of the subject in one of the plurality of regions while capturing a moving image of the subject in another of the plurality of regions.
- the second moving image mode is also referred to as a still image-moving image simultaneous shooting mode or a still image-moving image mixed mode.
- a position corresponding to the menu on the second touch panel 54 may be touched instead of the operation of the multi-selector 55d.
- the dividing unit 71 determines whether or not the shooting mode selected by the user is the still image mode (step S2). When determining that the shooting mode is the still image mode, the dividing unit 71 determines whether or not the still image mode is the first still image mode (step S3). If so, the dividing unit 71 sets the shooting mode to the first still image mode (step S4); otherwise, that is, when the still image mode is the second still image mode, the dividing unit 71 sets the shooting mode to the second still image mode (step S5).
- the dividing unit 71 executes the array pattern setting process shown in FIG.
- the dividing unit 71 instructs the image processing unit 30 (first image processing unit 30A) to detect the main subject (step S21).
- the detection unit 32A detects a moving subject and a non-moving subject (subject that has not moved) by comparing a plurality of image data obtained in time series from the live view image. Then, the detection unit 32A outputs the detection result to the system control unit 70 together with the image data.
- the dividing unit 71 confirms the presence / absence of the main subject based on the detection result of the detecting unit 32A. Then, the dividing unit 71 sets an area corresponding to the main subject and the shooting mode in the pixel area 113A (step S22).
- the dividing unit 71 does not divide the pixel region 113A into a plurality of regions when the shooting mode is the first still image mode. That is, the dividing unit 71 sets all the pixel areas 113A as one area. At this time, the dividing unit 71 outputs an instruction signal instructing the driving unit 21 to set all the pixel regions 113A as one region.
- otherwise, the dividing unit 71 selects one of the arrangement patterns shown in FIGS. 8(A) to 8(D). The dividing unit 71 then confirms whether or not the main subject is a moving subject based on the detection result of the detection unit 32A. When the main subject is not a moving subject but a non-moving subject, the dividing unit 71 sets the first region and the second region according to the third arrangement pattern shown in FIG. 8(C). When the main subject is a moving subject, the dividing unit 71 confirms the moving direction of the moving subject.
- when the moving direction of the moving subject is mainly vertical, for example, when the main subject is a child sliding down a slide or a waterfall, the dividing unit 71 sets the first area and the second area according to the first arrangement pattern shown in FIG. 8(A).
- when the moving direction of the moving subject is mainly horizontal, for example, when the main subject is a running person or when panning is performed, the dividing unit 71 sets the first area and the second area according to the second arrangement pattern shown in FIG. 8(B).
- the dividing unit 71 sets the first area, the second area, and the third area according to the fourth arrangement pattern shown in FIG. 8(D).
- In step S22, the dividing unit 71 also outputs to the drive unit 21 an instruction signal indicating, for example, the block positions of each region (the first and second regions, or the first to third regions).
- FIG. 11 is a diagram illustrating a setting example of the second arrangement pattern when the second still image mode is executed.
- In FIG. 11, each block is illustrated larger than actual size so that the arrangement of the blocks can be seen easily; in practice, blocks smaller than those shown in FIG. 11 are set in the pixel region 113A.
- the detection unit 32A detects, as main subjects (moving subjects), people O1 and O2 who are playing soccer, and a soccer ball O3. Based on the detection result of the detection unit 32A, the dividing unit 71 determines that the main subjects O1 to O3 are moving subjects and the moving direction of the moving subject is mainly in the left-right direction.
- Accordingly, the dividing unit 71 sets the first region and the second region according to the second arrangement pattern shown in FIG. 8(B). At this time, the dividing unit 71 divides the pixel region 113A so that the first region and the second region include the main subjects O1 to O3.
- Next, the drive control unit 72 sets the imaging conditions of the regions set in step S22 (in the example illustrated in FIG. 11, the first region and the second region) based on the detection result of the detection unit 32A (step S23). Specifically, the drive control unit 72 outputs to the drive unit 21 an instruction signal indicating the imaging conditions (charge accumulation time, gain, etc.) corresponding to the main subject. The drive control unit 72 also outputs to the image processing unit 30 an instruction signal indicating the imaging conditions (parameters such as color signal processing, white balance adjustment, gradation adjustment, and compression rate) corresponding to the main subject.
- For example, when a moving subject is detected by the detection unit 32A, the drive control unit 72 increases the gain (ISO sensitivity) and shortens the charge accumulation time (that is, raises the shutter speed). Conversely, when no moving subject is detected by the detection unit 32A, the drive control unit 72 lowers the gain and lengthens the charge accumulation time.
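- As a rough sketch of this rule (the concrete gain and accumulation-time values below are assumptions for illustration only, not values from the patent):

```python
# A minimal sketch of the condition rule above; numbers are illustrative.
def still_imaging_conditions(moving_subject_detected):
    if moving_subject_detected:
        # Freeze motion: high gain, short accumulation (fast shutter).
        return {"iso_gain": 1600, "accumulation_ms": 2.0}
    # Static scene: favor low noise over shutter speed.
    return {"iso_gain": 200, "accumulation_ms": 16.0}
```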
- the drive control unit 72 determines whether or not the user has operated the release switch 55a (a full-press operation following a half-press operation) (step S6). If it is determined that the release switch 55a has been operated, the drive control unit 72 causes the imaging unit 20 to perform imaging in the still image mode (first still image mode or second still image mode) (step S7).
- FIG. 12 is a timing chart showing charge accumulation timings in the first still image mode and the second still image mode.
- FIG. 12(A) shows the charge accumulation timing in the first still image mode (normal continuous shooting mode). In the first still image mode, the entire pixel region 113A is set as one region. The drive control unit 72 outputs an instruction signal to the drive unit 21 so that still image capturing is repeatedly performed on the entire pixel region 113A while the release switch 55a is being operated (full-press operation).
- the driving unit 21 starts charge accumulation in the pixel region 113A at time t1, and ends charge accumulation in the pixel region 113A at time t3.
- the drive unit 21 reads a pixel signal from each pixel in the pixel region 113A, and resets the electric charge accumulated in each pixel. Thereafter, the driving unit 21 starts charge accumulation in the pixel region 113A at time t4, and ends charge accumulation in the pixel region 113A at time t7. The drive unit 21 repeatedly executes such drive control of the image sensor 100 while the release switch 55a is being operated.
- the drive unit 21 continuously performs four imaging operations from time t1 to t15.
- the time from time t1 to t3, the time from time t4 to t7, the time from time t8 to t11, and the time from time t12 to t15 are the charge accumulation time (exposure time).
- This charge accumulation time (exposure time) is set in the imaging condition setting process in step S23.
- the pixel signal of each pixel read from the image sensor 100 is amplified by the amplifier 412 with the gain instructed from the dividing unit 71 and then output to the image processing unit 30.
- the image generation unit 31 (for example, the image generation unit 31A) checks parameters used in image processing such as color signal processing based on the instruction signal that instructs the imaging condition output from the division unit 71. Then, the image generation unit 31 generates image data by performing various image processing on the RAW data including the pixel signal of each pixel based on the parameters.
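- The read-out path just described can be sketched as follows; the patent specifies no concrete arithmetic, so the bit depth, gamma step, and function name here are illustrative assumptions:

```python
# Hedged sketch of the read-out path: amplify each pixel signal with the
# instructed gain (the amplifier 412's role), then develop the RAW values
# with the region's processing parameters. Assumes 10-bit RAW input.
def develop(raw_pixels, gain, white_balance=1.0, gamma=2.2):
    amplified = [min(1023, v * gain * white_balance) for v in raw_pixels]
    return [round(255 * (v / 1023) ** (1 / gamma)) for v in amplified]
```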
- In the second still image mode shown in FIG. 12(B), the dividing unit 71 sets, for example, a first area and a second area.
- The drive control unit 72 outputs an instruction signal to the drive unit 21 so that, while the release switch 55a is being operated (full-press operation), still image capturing is repeatedly executed in the first area and still image capturing is also repeatedly executed in the second area.
- the driving unit 21 starts charge accumulation in the first region at time t1, and ends charge accumulation in the first region at time t3.
- the drive unit 21 reads a pixel signal from each pixel in the first region, and resets the charge accumulated in each pixel. Thereafter, the drive unit 21 starts charge accumulation in the first region at time t4 and ends charge accumulation in the first region at time t7.
- the drive unit 21 repeatedly executes such drive control of the image sensor 100 while the release switch 55a is being operated.
- the drive unit 21 starts charge accumulation in the second region at time t2, and ends charge accumulation in the second region at time t5.
- the drive unit 21 reads a pixel signal from each pixel in the second region, and resets the charge accumulated in each pixel. Thereafter, the drive unit 21 starts charge accumulation in the second region at time t6, and ends charge accumulation in the second region at time t9.
- the drive unit 21 repeatedly executes such drive control of the image sensor 100 while the release switch 55a is being operated.
- the drive unit 21 continuously performs four imaging operations from time t1 to t15 in the first region. In parallel with the four times of imaging in the first area, the drive unit 21 continuously executes four times of imaging in the second area from time t2 to t16. Therefore, the drive unit 21 continuously executes imaging eight times from time t1 to time t16 in the first region and the second region.
- the time from time t1 to t3, the time from time t4 to t7, the time from time t8 to t11, and the time from time t12 to t15 are the charge accumulation time (exposure time) in the first region.
- This charge accumulation time (exposure time) is set in the imaging condition setting process in step S23.
- the time from time t2 to t5, the time from time t6 to t9, the time from time t10 to t13, and the time from time t14 to t16 are the charge accumulation time (exposure time) in the second region.
- This charge accumulation time (exposure time) is also set in the imaging condition setting process in step S23.
- the pixel signal of each pixel read from the first region of the image sensor 100 is amplified by the amplifier 412 with the gain instructed from the dividing unit 71 and then output to the image processing unit 30.
- the image generation unit 31A confirms parameters used in image processing such as color signal processing based on the instruction signal that instructs the imaging condition of the first region output from the division unit 71. Then, the image generation unit 31A generates image data of the first region by performing various image processing on the RAW data including the pixel signals of the pixels of the first region based on the parameters.
- the pixel signal of each pixel read from the second area of the image sensor 100 is amplified by the amplifier 412 with the gain instructed from the dividing unit 71 and then output to the image processing unit 30.
- the image generation unit 31B confirms parameters used in image processing such as color signal processing based on the instruction signal instructing the imaging condition of the second region output from the dividing unit 71. Then, the image generation unit 31B generates image data of the second region by performing various image processing on the RAW data including the pixel signals of the pixels of the second region based on the parameters. Further, the image generation unit 31 (the image generation unit 31A or 31B) combines the image data of the first area and the image data of the second area.
- As described above, in the second still image mode, the drive control unit 72 controls the drive of the image sensor 100 by causing the drive unit 21 to shift the imaging start timing of the first area relative to the imaging start timing of the second area. Accordingly, if, for example, 30 frames of still images can be captured per second in the first still image mode shown in FIG. 12(A), then nearly 60 frames, almost twice as many, can be captured per second in the second still image mode shown in FIG. 12(B). In this case, since the pixel region 113A is divided into the first area and the second area, the number of pixels per still image is halved; however, even half of 20 million pixels yields a 10-million-pixel still image, which is considered to secure sufficient image quality for the user.
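- A small sketch may help picture this staggered schedule; the millisecond values are illustrative, not taken from the patent:

```python
# Sketch of the staggered schedule of FIG. 12(B): two regions expose in
# parallel with shifted start times, roughly doubling frames per second.
def exposure_windows(start_ms, period_ms, exposure_ms, shots):
    return [(start_ms + i * period_ms, start_ms + i * period_ms + exposure_ms)
            for i in range(shots)]

region1 = exposure_windows(0.0, 33.3, 16.6, 4)    # starts at t1, t4, t8, t12
region2 = exposure_windows(16.6, 33.3, 16.6, 4)   # starts at t2, t6, t10, t14
# Interleaving both streams yields 8 stills over an interval in which the
# undivided sensor of FIG. 12(A) produced 4.
```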
- When the dividing unit 71 sets the first to third regions in the pixel region 113A according to the fourth arrangement pattern shown in FIG. 8(D), higher-speed continuous shooting can be realized than when the first and second regions are set. For example, assuming that 30 frames of still images can be captured per second in the first still image mode, setting three regions in the pixel region 113A in the second still image mode allows nearly 90 frames, almost three times as many, to be captured per second.
- The image sensor 100 is expected to gain ever higher pixel counts in the future. For this reason, even if the pixel region 113A is divided into three and the number of pixels per still image becomes one third, the increased continuous shooting speed is considered more important to the user than the reduced resolution.
- the display control unit 73 displays the still image on the first display unit 51 or the second display unit 53 by outputting the image data generated by the image processing unit 30 to the display unit 50 (step S8).
- For example, the display control unit 73 causes the first display unit 51 to display still images. When a plurality of still images have been captured at high speed, the display control unit 73 displays them on the first display unit 51 as thumbnail images and enlarges the thumbnail image selected by the user on the second display unit 53.
- FIG. 13 is a diagram illustrating a display example when still images are displayed on the first display unit and the second display unit.
- the display control unit 73 displays eight thumbnail images (still images) 511 to 518 side by side on the first display unit 51.
- the thumbnail image 511 is a still image obtained by the first imaging in FIG. 12(B), and the thumbnail images 512 to 518 are still images obtained by the second to eighth imaging in FIG. 12(B), respectively.
- the eight touch areas 511a to 518a are formed so as to overlap the eight thumbnail images 511 to 518, respectively.
- the touch areas 511a to 518a output detection signals indicating the pressed positions (which touch area) to the system control unit 70.
- the display control unit 73 enlarges and displays the thumbnail image corresponding to the touch area pressed by the user on the second display unit 53.
- For example, when the user presses the touch area 513a, the thumbnail image 513 corresponding to the touch area 513a is enlarged and displayed on the second display unit 53.
- When the dividing unit 71 determines in step S2 that the shooting mode is not the still image mode, that is, that the shooting mode is the moving image mode, it determines whether or not the moving image mode is the first moving image mode (step S10).
- When determining that the moving image mode is the first moving image mode, the dividing unit 71 performs a process of setting the shooting mode to the first moving image mode (step S11).
- On the other hand, when determining that the moving image mode is not the first moving image mode, that is, that it is the second moving image mode, the dividing unit 71 performs a process of setting the shooting mode to the second moving image mode (step S12).
- the dividing unit 71 executes the arrangement pattern setting process shown in FIG. 10.
- the dividing unit 71 instructs the image processing unit 30 (first image processing unit 30A) to detect the main subject (step S21).
- the detection unit 32A detects a moving subject and a non-moving subject by comparing a plurality of image data obtained in time series from the live view image.
- the detection unit 32A also recognizes a face in the image data from the eyes, mouth, skin color, and the like, and detects the face as a main subject.
- the detection unit 32A detects a human body (person) included in the image data as a main subject in addition to face detection.
- the detection unit 32A outputs the detection result to the system control unit 70 together with the image data.
- the dividing unit 71 confirms the presence / absence of the main subject based on the detection result of the detecting unit 32A. Then, the dividing unit 71 sets an area corresponding to the main subject and the shooting mode in the pixel area 113A (step S22).
- When the shooting mode is the first moving image mode, the dividing unit 71 does not divide the pixel region 113A into a plurality of regions, but sets the entire pixel region 113A as one region. At this time, the dividing unit 71 outputs to the drive unit 21 an instruction signal instructing that the entire pixel region 113A be set as one region.
- When the shooting mode is the second moving image mode, the dividing unit 71 selects one of the arrangement patterns shown in FIGS. 8(A) to 8(D). The dividing unit 71 then confirms, based on the detection result of the detection unit 32A, whether or not the main subject is a moving subject. When the main subject is not a moving subject but a non-moving subject, the dividing unit 71 sets the first region and the second region according to the third arrangement pattern shown in FIG. 8(C). When the main subject is a moving subject, the dividing unit 71 confirms the moving direction of the moving subject. When the moving direction of the moving subject is mainly the vertical direction, the dividing unit 71 sets the first area and the second area according to the first arrangement pattern shown in FIG. 8(A).
- When the moving direction of the moving subject is mainly the left-right direction, the dividing unit 71 sets the first area and the second area according to the second arrangement pattern shown in FIG. 8(B). Further, when the moving speed of the moving subject in the vertical direction is fast, the dividing unit 71 sets the first area, the second area, and the third area according to the fourth arrangement pattern shown in FIG. 8(D). In step S22, the dividing unit 71 outputs to the drive unit 21 an instruction signal indicating, for example, the block positions of each region (the first and second regions, or the first to third regions).
- FIG. 14 is a diagram illustrating a setting example of the second arrangement pattern when the second moving image mode is executed.
- In FIG. 14, each block is illustrated larger than actual size so that the arrangement of the blocks can be seen easily; in practice, blocks smaller than those shown in FIG. 14 are set in the pixel region 113A.
- the detection unit 32A detects, as main subjects (moving subjects), people O1 and O2 who are playing soccer, and a soccer ball O3. Further, the detection unit 32A detects the persons O1 and O2 included in the image data as main subjects.
- Based on the detection result of the detection unit 32A, the dividing unit 71 determines that the main subjects O1 to O3 are moving subjects and that their moving direction is mainly the left-right direction. As a result, the dividing unit 71 sets the first region and the second region according to the second arrangement pattern shown in FIG. 8(B). The dividing unit 71 also determines that the main subjects O1 and O2 are people. As a result, within the second region, the dividing unit 71 sets the regions 200 and 201 surrounding the main subjects O1 and O2 as the second region A, and sets the region other than the regions 200 and 201 as the second region B.
- Next, the drive control unit 72 sets the imaging conditions of the regions set in step S22 (in the example illustrated in FIG. 14, the first region, the second region A, and the second region B) based on the detection result of the detection unit 32A (step S23). Specifically, the drive control unit 72 outputs to the drive unit 21 an instruction signal indicating the imaging conditions (frame rate, gain, etc.) corresponding to the main subject. The drive control unit 72 also outputs to the image processing unit 30 an instruction signal indicating the imaging conditions (parameters such as color signal processing, white balance adjustment, gradation adjustment, and compression rate) corresponding to the main subject.
- For example, when a moving subject is detected by the detection unit 32A, the drive control unit 72 increases the gain (ISO sensitivity) of the first region and shortens the charge accumulation time of the first region. When no moving subject is detected by the detection unit 32A, the drive control unit 72 lowers the gain of the first region and lengthens the charge accumulation time of the first region. In addition, when a moving subject is detected by the detection unit 32A, the drive control unit 72 raises the frame rate of the moving-subject region (the regions 200 and 201 surrounding the main subjects O1 and O2, that is, the second region A) and sets the frame rate of the non-moving-subject region (the region other than the regions 200 and 201, that is, the second region B) lower than that of the second region A.
- the drive control unit 72 determines whether or not the moving image switch 55c has been operated by the user (step S13). When it is determined that the operation of the moving image switch 55c has been performed, the drive control unit 72 causes the imaging unit 20 to perform image capturing in the moving image mode (first moving image mode or second moving image mode) (step S14). Note that imaging in the first moving image mode is the same as normal moving image shooting, and thus detailed description thereof is omitted.
- FIG. 15 is a timing chart showing the charge accumulation timing in the second moving image mode.
- In the second moving image mode (still image-moving image mixed mode), the drive control unit 72 outputs an instruction signal to the drive unit 21 so that, while the moving image switch 55c is being operated, still image capturing is repeatedly performed in the first region and moving images are captured in the second region A and the second region B.
- Specifically, the drive unit 21 causes a still image to be captured with the charge accumulation time T1 for each pixel in the first region while the moving image switch 55c is being operated.
- the driving unit 21 causes a moving image to be captured for each pixel in the second region A in the charge accumulation time T2A.
- the driving unit 21 causes a moving image to be captured with a charge accumulation time T2B that is longer than the charge accumulation time T2A for each pixel in the second region B.
- the frame rate changes according to the charge accumulation time. For this reason, the frame rate of the moving image is different between when the imaging is performed during the charge accumulation time T2A and when the imaging is performed during the charge accumulation time T2B.
- For example, the frame rate corresponding to the charge accumulation time T2A in the second region A is 60 fps, and the frame rate corresponding to the charge accumulation time T2B in the second region B is 30 fps.
- the charge accumulation time and the frame rate are set in the imaging condition setting process in step S23.
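- Using the frame rates stated above (60 fps for the second region A, 30 fps for the second region B), the mixed-mode schedule can be sketched as follows; the one-second window is an arbitrary example:

```python
# Sketch of the mixed-mode schedule of FIG. 15 with the stated frame rates.
def frame_times(fps, duration_s=1.0):
    return [i / fps for i in range(int(fps * duration_s))]

times_2a = frame_times(60)   # 60 frames of the moving subjects per second
times_2b = frame_times(30)   # 30 frames of the static surroundings per second
# Region 1 is read out separately for still images during the same interval.
```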
- the pixel signal of each pixel read from the first region of the image sensor 100 is amplified by the amplifier 412 with the gain instructed from the dividing unit 71 and then output to the image processing unit 30.
- the image generation unit 31A confirms parameters used in image processing such as color signal processing based on the instruction signal that instructs the imaging condition of the first region output from the division unit 71. Then, the image generation unit 31A generates image data of the first region by performing various image processing on the RAW data including the pixel signals of the pixels of the first region based on the parameters.
- the pixel signal of each pixel read from the second area A of the image sensor 100 is amplified by the amplifier 412 with the gain instructed from the dividing unit 71 and then output to the image processing unit 30.
- the image generation unit 31B confirms parameters used in image processing such as color signal processing based on the instruction signal instructing the imaging condition of the second region A output from the division unit 71. Then, the image generation unit 31B generates image data of the second area A by performing various image processing on the RAW data including the pixel signals of the pixels of the second area A based on the parameters.
- the pixel signal of each pixel read from the second area B of the image sensor 100 is amplified by the amplifier 412 with the gain instructed from the dividing unit 71 and then output to the image processing unit 30.
- the image generation unit 31B confirms parameters used in image processing such as color signal processing based on the instruction signal instructing the imaging condition of the second region B output from the division unit 71. Then, the image generation unit 31B generates image data of the second region B by performing various image processing on the RAW data including the pixel signals of the pixels of the second region B based on the parameters.
- the image generation unit 31 (the image generation unit 31A or 31B) combines the image data of the second area A and the image data of the second area B. Further, the image generation unit 31 (the image generation unit 31A or 31B) combines the image data of the first region, the image data of the second region A, and the image data of the second region B.
- the display control unit 73 causes the first display unit 51 to display a moving image by outputting the moving image data generated by the image processing unit 30 to the display unit 50 (step S8). Further, the display control unit 73 causes the second display unit 53 to display a still image by outputting the still image data generated by the image processing unit 30 to the display unit 50 (step S8).
- FIG. 16 is a diagram illustrating a display example when a moving image is displayed on the first display unit and a still image is displayed on the second display unit.
- The display control unit 73 displays, on the first display unit 51, the moving image in which the second region A and the second region B generated by the image generation unit 31 are combined (a moving image in which two persons are playing soccer). The display control unit 73 also displays, on the second display unit 53, the still image of the first region generated by the image generation unit 31.
- As described above, in the first embodiment, the electronic device 1 includes the drive control unit 72 that drives and controls the image sensor 100, the dividing unit 71 that divides the imaging region 113A of the image sensor 100 into at least a first region and a second region, and the image generation unit 31 that generates, for the same subject, a first image obtained by imaging the first area and a second image obtained by imaging the second area. According to such a configuration, a plurality of types of images (a plurality of still images, a still image and a moving image, and so on) can be generated for the same subject. Accordingly, a plurality of types of images can be generated according to the subject and the shooting situation, and the usability of the electronic device 1 including the image sensor 100 is improved.
- the drive control unit 72 drives and controls the imaging device 100 by changing the imaging start timing of the first area and the imaging start timing of the second area. According to such a configuration, a plurality of types of images can be generated for the same subject at various timings. In addition, many images can be generated per unit time. Therefore, the user can take an image without missing the opportunity of taking an image.
- Since the drive control unit 72 performs imaging of the second area during imaging of the first area, imaging of the first area and imaging of the second area can be executed in parallel, and images whose exposure times partially overlap can be captured for the same subject. Therefore, the same subject can be imaged at timings that could not be captured before.
- the drive control unit 72 varies at least one of the frame rate, the gain, and the exposure time as the imaging conditions of the first area and the second area of the image sensor 100. According to such a configuration, the user can acquire a plurality of types of images captured under different imaging conditions.
- Since the image generation unit 31 generates a still image based on at least one of the imaging of the first region and the imaging of the second region, a plurality of types of still images can be generated for the same subject. Since the image generation unit 31 generates a moving image based on either the imaging of the first region or the imaging of the second region, a still image and a moving image can be generated for the same subject. In addition, since the image generation unit 31 corrects the first image and the second image with at least one of white balance, gradation, and color tone correction made different, the user can acquire a plurality of types of images processed with different parameters.
- Since the dividing unit 71 forms the first region from a plurality of discrete regions (a plurality of discrete blocks), the resolution is not reduced in only part of the image. Further, since the dividing unit 71 variably divides the first area and the second area, the regions can be divided according to various situations such as the shooting mode and the type of subject.
- Since the detection unit 32A that detects a main subject from the image generated by the image generation unit 31 is provided, and the dividing unit 71 performs the division so that the first region and the second region include the main subject, an image of the first area and an image of the second area can each be generated for the main subject.
- the display control unit 73 that displays the image generated by the image generation unit 31 on the display unit 50 is provided, the user can check the image displayed on the display unit 50.
- Since the image sensor 100 has a structure in which a back-illuminated imaging chip and a signal processing chip are stacked, the volume required to house the image sensor 100 can be reduced. Further, since the drive control of the image sensor 100 can be performed based on instructions from the system control unit 70, the burden on the system control unit 70 is reduced, and mounting the image sensor 100 in the electronic device 1 is facilitated.
- the display unit 50 may be provided outside the electronic device.
- each of the system control unit 70 and the display unit 50 is provided with a communication unit that transmits and receives signals (image data, control signals, and the like) by wire or wirelessly.
- the image processing unit 30 and the system control unit 70 may be configured integrally.
- a system control unit having one or a plurality of CPUs performs the processing based on the control program, thereby assuming the functions of the image processing unit 30 and the system control unit 70.
- the image processing unit 30 includes the two image processing units 30A and 30B, the image processing unit 30 may include only one image processing unit.
- the arrangement pattern of each block is set so that the area of the first region and the area of the second region are the same. That is, the arrangement pattern of each block is set so that the number of pixels in the first area is the same as the number of pixels in the second area.
- the arrangement pattern of each block is set so that the area of the first region, the area of the second region, and the area of the third region are the same. That is, the arrangement pattern of each block is set so that the number of pixels in the first region, the number of pixels in the second region, and the number of pixels in the third region are the same.
- the arrangement pattern of each block may be set so that the area (number of pixels) of each region is different.
- FIG. 17 is a diagram showing a fifth arrangement pattern in each block.
- the fifth array pattern shown in FIG. 17 is an array pattern in which the pixel region 113A is divided into two parts, a first region and a second region.
- In the fifth arrangement pattern, the second area is composed of the blocks in columns that are multiples of 3 (columns 3m), and the first area is composed of the other blocks, that is, the blocks in columns that are not multiples of 3 (columns 3m-2 and 3m-1).
- the ratio of the area of the first region to the area of the second region is 2: 1.
- FIG. 18 is a diagram showing a sixth arrangement pattern in each block.
- the sixth arrangement pattern shown in FIG. 18 is also an arrangement pattern in which the pixel area 113A is divided into two areas, the first area and the second area.
- In the sixth arrangement pattern, the second area is composed of the blocks in the row (4n-1) one row before each row that is a multiple of 4 in the odd-numbered columns (2m-1), and the blocks in the row (4n-3) three rows before each row that is a multiple of 4 in the even-numbered columns (2m); the first area is composed of the blocks other than the second area.
- the ratio of the area of the first region to the area of the second region is 3: 1.
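- The block-to-region mappings of the fifth and sixth arrangement patterns follow directly from the column and row rules above; this sketch, with 1-indexed block coordinates, is an illustration rather than the patent's definition:

```python
# Sketch of the block-to-region mapping for the fifth and sixth patterns.
def region_fifth(col):
    return 2 if col % 3 == 0 else 1          # every third column: second area

def region_sixth(row, col):
    odd_col_hit = (col % 2 == 1 and row % 4 == 3)   # rows 4n-1 in odd columns
    even_col_hit = (col % 2 == 0 and row % 4 == 1)  # rows 4n-3 in even columns
    return 2 if (odd_col_hit or even_col_hit) else 1

# Counting over any 3-column strip or 4x2 tile confirms the stated ratios:
# 2:1 for the fifth pattern and 3:1 for the sixth.
```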
- In step S22 of FIG. 10, when the dividing unit 71 determines that the shooting mode selected by the user is the second moving image mode, it sets the first area and the second area according to the fifth arrangement pattern shown in FIG. 17 or the sixth arrangement pattern shown in FIG. 18. When the dividing unit 71 determines, based on the detection result of the detection unit 32A, that the main subject is a non-moving subject, it sets the first region and the second region according to the sixth arrangement pattern shown in FIG. 18. When the dividing unit 71 determines, based on the detection result of the detection unit 32A, that the main subject is a moving subject and that its moving direction is mainly the vertical direction, it sets the first area and the second area according to the fifth arrangement pattern shown in FIG. 17.
- In step S14, the drive control unit 72 outputs an instruction signal to the drive unit 21 so as to capture still images in the first area and a moving image in the second area.
- With the fifth arrangement pattern, the number of pixels in the still image is twice that in the moving image; that is, the resolution of the still image is twice that of the moving image.
- With the sixth arrangement pattern, the number of pixels in the still image is three times that in the moving image; that is, the resolution of the still image is three times that of the moving image. This reflects the fact that finer image quality is demanded of still images than of moving images.
- In a moving image, the subject is moving, so a drop in image quality is less noticeable than in a still image. For this reason, more area is allocated to still images than to moving images. If the pixel region 113A has 20 million pixels, even when the number of moving image pixels (the number of pixels in the second region) is reduced to 1/3 or 1/4, 6.6 million or 5 million pixels are still secured. Such a pixel count is comparable to that of a commercially available video camera.
- In the first embodiment described above, as shown in FIG. 12(B), the imaging start timing of the first area and the imaging start timing of the second area are made different in the second still image mode. In contrast, in the second embodiment, the imaging start timing of the first area and the imaging start timing of the second area are the same, while the exposure time (that is, the charge accumulation time) of the first area and the exposure time of the second area are made different.
- FIG. 19 is a timing chart showing the charge accumulation timing in the second embodiment.
- In the second embodiment as well, a first area and a second area are set in the pixel region 113A.
- the drive unit 21 repeatedly captures a still image at the charge accumulation time (exposure time) T11 for each pixel in the first region while the release switch 55a is being operated.
- the drive unit 21 repeatedly captures a still image for each pixel in the second region at the charge accumulation time (exposure time) T12.
- the imaging start timing of the first area and the imaging start timing of the second area are the same.
- the charge accumulation time T11 in the first region is different from the charge accumulation time T12 in the second region. That is, the charge accumulation time T12 is set to be longer than the charge accumulation time T11.
- the charge accumulation time is set in the imaging condition setting process in step S23.
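- A minimal sketch of this timing, with illustrative millisecond values (the patent gives no concrete numbers):

```python
# Second embodiment's timing: both regions start each exposure at the same
# instant, but region 2 accumulates longer (T12 > T11). Values illustrative.
T11_MS, T12_MS = 4.0, 16.0

def simultaneous_pairs(start_times_ms):
    return [((t, t + T11_MS), (t, t + T12_MS)) for t in start_times_ms]

pairs = simultaneous_pairs([0.0, 33.3, 66.6])
# Each pair is a short and a long exposure of the same scene over
# overlapping intervals -- the raw material for an HDR-style combination.
```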
- the pixel signal of each pixel read from the first area of the image sensor 100 is amplified by the amplifier 412 with the gain instructed from the dividing unit 71 and then output to the image processing unit 30.
- the image generation unit 31A confirms parameters used in image processing such as color signal processing based on the instruction signal that instructs the imaging condition of the first region output from the division unit 71. Then, the image generation unit 31A generates image data of the first region by performing various image processing on the RAW data including the pixel signals of the pixels of the first region based on the parameters.
- the pixel signal of each pixel read from the second area of the image sensor 100 is amplified by the amplifier 412 with the gain instructed from the dividing unit 71 and then output to the image processing unit 30.
- the image generation unit 31B confirms parameters used in image processing such as color signal processing based on the instruction signal instructing the imaging condition of the second region output from the dividing unit 71. Then, the image generation unit 31B generates image data of the second region by performing various image processing on the RAW data including the pixel signals of the pixels of the second region based on the parameters. Further, the image generation unit 31 (the image generation unit 31A or 31B) combines the image data of the first area and the image data of the second area.
- FIG. 20 is a diagram illustrating a display example when still images are displayed on the first display unit and the second display unit in the second embodiment.
- the display control unit 73 displays the still image of the first region (a person imaged at night) in the left region 53L of the display screen of the second display unit 53, and displays the still image of the second region in the right region 53R of the display screen of the second display unit 53.
- the display control unit 73 displays a still image obtained by combining the still image in the first area and the still image in the second area in the central area 51G of the first display unit 51.
- HDR (High Dynamic Range) imaging is widely known.
- HDR imaging a plurality of images are captured while changing imaging conditions (for example, exposure), and the images are combined to generate an image with less overexposure and underexposure.
- In conventional HDR, however, two images with different imaging conditions are captured at different times, so the subject may move or the user (photographer) may move the electronic device 1 between the captures. In that case, the plurality of images are no longer images of the same subject, and combining them is difficult.
- In contrast, in the second embodiment, two images with different imaging conditions can be captured at the same time (or substantially the same time). The configuration of the second embodiment can therefore solve this problem of conventional HDR imaging.
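- As an illustration of how such a simultaneously captured pair could be combined, here is a minimal exposure-normalized merge; the weighting scheme is our own choice for the sketch, not a method specified in the patent:

```python
# Minimal HDR-style merge of two simultaneous exposures.
# Assumes uint8 arrays of identical shape for short_img and long_img.
import numpy as np

def merge_hdr(short_img, long_img, t_short, t_long):
    short_lin = short_img.astype(np.float32) / t_short   # normalize by exposure
    long_lin = long_img.astype(np.float32) / t_long
    # Trust the short exposure where the long one approaches saturation.
    w = np.clip(long_img.astype(np.float32) / 255.0, 0.0, 1.0)
    return (1.0 - w) * long_lin + w * short_lin
```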
- the HDR mode is configured to be selectable according to the operation of the multi selector 55d by the user.
- FIG. 21 is a block diagram illustrating configurations of an imaging apparatus and an electronic apparatus according to the third embodiment.
- the imaging device 1A is a device that captures an image of a subject.
- the imaging apparatus 1A includes a lens unit 10, an imaging unit 20, an image processing unit 30, a work memory 40, an operation unit 55, a recording unit 60, and a first system control unit 75.
- The configurations of the lens unit 10, the imaging unit 20, the image processing unit 30, the work memory 40, the operation unit 55, and the recording unit 60 are the same as those illustrated in FIG. Accordingly, the same components are denoted by the same reference numerals, and redundant description is omitted.
- the electronic device 1B is a device that displays images (still images, moving images, live view images).
- the electronic apparatus 1B includes a display unit 50 and a second system control unit (control unit) 76.
- the configuration of the display unit 50 in the electronic device 1B is the same as the configuration illustrated in FIG. Accordingly, the same components are denoted by the same reference numerals, and redundant description is omitted.
- the first system control unit 75 has a first communication unit 75A.
- the second system control unit 76 has a second communication unit 76B.
- the first communication unit 75A and the second communication unit 76B transmit and receive signals to each other in a wired or wireless manner.
- Among the configurations shown in FIG. 7, the first system control unit 75 has, for example, the configurations corresponding to the dividing unit 71 and the drive control unit 72.
- Among the configurations shown in FIG. 7, the second system control unit 76 has, for example, only the configuration corresponding to the display control unit 73.
- The configuration shown in FIG. 7 (the dividing unit 71, the drive control unit 72, and the display control unit 73) may be provided in either the first system control unit 75 or the second system control unit 76. The entire configuration shown in FIG. 7 may be provided in the first system control unit 75 or in the second system control unit 76, or a part of the configuration shown in FIG. 7 may be provided in the first system control unit 75 and the remainder in the second system control unit 76.
- The imaging apparatus 1A is constituted by, for example, a digital camera, a smartphone, a mobile phone, or a personal computer having imaging and communication functions, and the electronic device 1B is constituted by, for example, a portable terminal such as a smartphone, a mobile phone, or a portable personal computer having a communication function.
- the first system control unit 75 shown in FIG. 21 is realized by a CPU (not shown) executing processing based on a control program. Further, the second system control unit 76 shown in FIG. 21 is realized by a CPU (not shown) executing processing based on a control program.
- the image processing unit 30 and the first system control unit 75 may be configured integrally.
- a system control unit having one or more CPUs performs the processing based on the control program, thereby taking on the function of the image processing unit 30 and the function of the first system control unit 75.
- As long as the electronic device 1 includes the imaging unit 20, the image processing unit 30 including the image generation unit 31, and the system control unit 70 including the dividing unit 71 and the drive control unit 72, the lens unit 10 and the recording unit 60 need not be provided. That is, these configurations may be separate from the electronic device 1.
- the lens unit 10 and the recording unit 60 may be configured separately in the imaging apparatus 1A.
- The arrangement of the color filters 102 is a Bayer arrangement, but an arrangement other than this may be used. Further, the unit group 131 only needs to include at least one pixel, and each block also only needs to include at least one pixel. Therefore, imaging can be executed under different imaging conditions for each pixel.
- part or all of the configuration of the drive unit 21 may be mounted on the imaging chip 113, or part or all of the configuration may be mounted on the signal processing chip 111. Further, a part of the configuration of the image processing unit 30 may be mounted on the imaging chip 113 or the signal processing chip 111. Further, a part of the configuration of the system control unit 70 may be mounted on the imaging chip 113 or the signal processing chip 111.
- The gain, the charge accumulation time (exposure time, shutter speed), and the frame rate are all described as changeable imaging conditions, but it suffices if at least one of these is changeable.
- Although the case where the imaging conditions are set automatically has been described, they may instead be set according to the user's operation of the operation unit 55.
- the block arrangement patterns are illustrated in FIGS. 8A to 8D, FIGS. 17 and 18, but are not limited to these arrangement patterns.
- the user may select the block arrangement pattern by operating the operation unit 55 or the like.
- the place where the still image or the moving image captured by the imaging unit 20 is displayed may be the first display unit 51 or the second display unit 53.
- Although the case where the size of the block area is set in advance has been described, the user may set the size of the block area.
- The dividing unit 71 recognizes the subject based on the live view image and sets the regions; instead, when the release switch 55a or the moving image switch 55c is pressed halfway, the dividing unit 71 may recognize the subject based on the image at that time and set the regions.
- Panning is a shooting technique that expresses the sense of speed of a moving subject by capturing the moving subject without blur while blurring the background (the non-moving subjects).
- For example, a panning image in which the background flows is captured in the first area with a long charge accumulation time (exposure time), while an image in which the moving subject does not blur is captured in the second area with a charge accumulation time shorter than that of the first area. The image generation unit 31 (or the user) can then combine the image of the first area and the image of the second area into a single panning image.
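- A minimal sketch of this composition, assuming a hypothetical subject mask from the detection unit (the patent does not specify how the two images are blended):

```python
# Panning composition sketch: take the sharp subject from the short-exposure
# second area and the flowing background from the long-exposure first area.
# Assumes HxWx3 uint8 images and an HxW boolean subject mask.
import numpy as np

def compose_panning(long_exp_img, short_exp_img, subject_mask):
    # subject_mask: True inside the detected moving subject, False elsewhere
    return np.where(subject_mask[..., None], short_exp_img, long_exp_img)
```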
Description
FIG. 1 is a cross-sectional view of a stacked image sensor. This stacked image sensor 100 is the one described in Japanese Patent Application No. 2012-139026 previously filed by the applicant of the present application. The image sensor 100 includes an imaging chip 113 that outputs pixel signals corresponding to incident light, a signal processing chip 111 that processes the pixel signals, and a memory chip 112 that stores the pixel signals. The imaging chip 113, the signal processing chip 111, and the memory chip 112 are stacked and electrically connected to one another by conductive bumps 109 made of Cu or the like.
In the third embodiment, the electronic device 1 of the first embodiment described above is separated into an imaging apparatus 1A and an electronic device 1B.
Claims (16)
- 1. An electronic apparatus comprising: a drive control unit that drives and controls an image sensor; a dividing unit that divides an imaging region of the image sensor into at least a first region and a second region; and an image generation unit that generates, for the same subject, a first image obtained by imaging the first region and a second image obtained by imaging the second region.
- 2. The electronic apparatus according to claim 1, wherein the drive control unit drives and controls the image sensor with the imaging start timing of the first region and the imaging start timing of the second region made different.
- 3. The electronic apparatus according to claim 1 or 2, wherein the drive control unit performs imaging of the second region during imaging of the first region.
- 4. The electronic apparatus according to any one of claims 1 to 3, wherein the drive control unit makes at least one of the frame rate, the gain, and the exposure time different as the imaging conditions of the first region and the second region of the image sensor.
- 5. The electronic apparatus according to any one of claims 1 to 4, wherein the image generation unit generates a still image based on at least one of the imaging of the first region and the imaging of the second region.
- 6. The electronic apparatus according to any one of claims 1 to 5, wherein the image generation unit generates a moving image based on either the imaging of the first region or the imaging of the second region.
- 7. The electronic apparatus according to any one of claims 1 to 6, wherein the image generation unit combines the first image and the second image.
- 8. The electronic apparatus according to any one of claims 1 to 7, wherein the image generation unit performs correction on the first image and the second image with at least one of white balance, gradation, and color tone correction made different.
- 9. The electronic apparatus according to any one of claims 1 to 8, wherein the dividing unit makes the number of pixels of the first region different from the number of pixels of the second region.
- 10. The electronic apparatus according to any one of claims 1 to 9, wherein the dividing unit forms the first region from a plurality of discrete regions.
- 11. The electronic apparatus according to any one of claims 1 to 10, wherein the dividing unit variably divides the first region and the second region.
- 12. The electronic apparatus according to any one of claims 1 to 11, further comprising a detection unit that detects a main subject from the image generated by the image generation unit, wherein the dividing unit performs the division so that the first region and the second region include the main subject.
- 13. The electronic apparatus according to any one of claims 1 to 12, further comprising a display control unit that causes a display unit to display the image generated by the image generation unit.
- 14. The electronic apparatus according to any one of claims 1 to 13, wherein the image sensor has a structure in which a back-illuminated imaging chip and a signal processing chip are stacked.
- 15. A method for controlling an electronic apparatus having an imaging unit, the method comprising: dividing an imaging region of an image sensor of the imaging unit into at least a first region and a second region; and generating, for the same subject, a first image obtained by imaging the first region and a second image obtained by imaging the second region.
- 16. A control program that causes a control device of an electronic apparatus having an image sensor to execute: a dividing process of dividing an imaging region of the image sensor of the imaging unit into at least a first region and a second region; and an image generation process of generating, for the same subject, a first image obtained by imaging the first region and a second image obtained by imaging the second region.
Priority Applications (12)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP13888636.1A EP3018893A4 (en) | 2013-07-04 | 2013-07-04 | ELECTRONIC APPARATUS, CONTROL METHOD, AND CONTROL PROGRAM FOR ELECTRONIC APPARATUS |
CN201811274381.7A CN109256404B (zh) | 2013-07-04 | 2013-07-04 | 摄像元件以及电子设备 |
US14/900,802 US10142563B2 (en) | 2013-07-04 | 2013-07-04 | Electronic apparatus, method for controlling electronic apparatus, and control program |
CN201380077710.8A CN105324985B (zh) | 2013-07-04 | 2013-07-04 | 电子设备及摄像元件 |
JP2015524974A JP6372488B2 (ja) | 2013-07-04 | 2013-07-04 | 電子機器 |
PCT/JP2013/068381 WO2015001646A1 (ja) | 2013-07-04 | 2013-07-04 | 電子機器、電子機器の制御方法、及び制御プログラム |
CN201811271609.7A CN109743514B (zh) | 2013-07-04 | 2013-07-04 | 摄像元件以及摄像装置 |
EP19181971.3A EP3588940B1 (en) | 2013-07-04 | 2013-07-04 | Electronic apparatus, method for controlling electronic apparatus, and control program |
CN201811271635.XA CN109742094B (zh) | 2013-07-04 | 2013-07-04 | 摄像元件以及电子设备 |
US16/177,653 US10841512B2 (en) | 2013-07-04 | 2018-11-01 | Electronic apparatus, method for controlling electronic apparatus, and control program |
US17/020,920 US11637970B2 (en) | 2013-07-04 | 2020-09-15 | Electronic apparatus, method for controlling electronic apparatus, and control program |
US18/118,850 US20230231966A1 (en) | 2013-07-04 | 2023-03-08 | Electronic apparatus, method for controlling electronic apparatus, and control program |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2013/068381 WO2015001646A1 (ja) | 2013-07-04 | 2013-07-04 | 電子機器、電子機器の制御方法、及び制御プログラム |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/900,802 A-371-Of-International US10142563B2 (en) | 2013-07-04 | 2013-07-04 | Electronic apparatus, method for controlling electronic apparatus, and control program |
US16/177,653 Continuation US10841512B2 (en) | 2013-07-04 | 2018-11-01 | Electronic apparatus, method for controlling electronic apparatus, and control program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015001646A1 (ja) | 2015-01-08 |
Family
ID=52143265
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/068381 WO2015001646A1 (ja) | 2013-07-04 | 2013-07-04 | 電子機器、電子機器の制御方法、及び制御プログラム |
Country Status (5)
Country | Link |
---|---|
US (4) | US10142563B2 (ja) |
EP (2) | EP3588940B1 (ja) |
JP (1) | JP6372488B2 (ja) |
CN (4) | CN109256404B (ja) |
WO (1) | WO2015001646A1 (ja) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015190021A1 (ja) * | 2014-06-10 | 2015-12-17 | ソニー株式会社 | 撮像制御装置、撮像装置、撮像システムおよび撮像制御方法 |
JP2016192606A (ja) * | 2015-03-30 | 2016-11-10 | 株式会社ニコン | 電子機器、およびプログラム |
JP2017022641A (ja) * | 2015-07-14 | 2017-01-26 | キヤノン株式会社 | 画像処理装置、撮像装置および撮像制御プログラム |
JP2017183940A (ja) * | 2016-03-29 | 2017-10-05 | 本田技研工業株式会社 | 画像処理装置、画像処理方法及び画像処理プログラム |
JP2018516013A (ja) * | 2015-05-19 | 2018-06-14 | マジック リープ, インコーポレイテッドMagic Leap,Inc. | セミグローバルシャッタイメージャ |
CN109196855A (zh) * | 2016-03-31 | 2019-01-11 | 株式会社尼康 | 摄像装置、图像处理装置及电子设备 |
JP2019057240A (ja) * | 2017-09-22 | 2019-04-11 | 株式会社デンソーウェーブ | 撮像装置 |
US11483467B2 (en) | 2016-03-31 | 2022-10-25 | Nikon Corporation | Imaging device, image processing device, and electronic apparatus |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109256404B (zh) | 2013-07-04 | 2023-08-15 | 株式会社尼康 | 摄像元件以及电子设备 |
CN105474627B (zh) * | 2013-08-12 | 2019-06-04 | 株式会社尼康 | 电子设备 |
JP6261430B2 (ja) * | 2014-04-01 | 2018-01-17 | キヤノン株式会社 | 撮像装置及び画像処理システム |
JP5866674B1 (ja) * | 2014-07-29 | 2016-02-17 | パナソニックIpマネジメント株式会社 | 撮像装置 |
US9571741B1 (en) * | 2015-10-08 | 2017-02-14 | Gopro, Inc. | Smart shutter in low light |
WO2017149932A1 (ja) * | 2016-03-03 | 2017-09-08 | ソニー株式会社 | 医療用画像処理装置、システム、方法及びプログラム |
JP2017169021A (ja) * | 2016-03-16 | 2017-09-21 | ソニー株式会社 | 撮像装置、撮像方法および撮像プログラム |
US20190141263A1 (en) * | 2016-06-09 | 2019-05-09 | Sony Corporation | Control device and control method |
CN107633795B (zh) * | 2016-08-19 | 2019-11-08 | 京东方科技集团股份有限公司 | 显示装置和显示面板的驱动方法 |
CN107091800A (zh) * | 2017-06-06 | 2017-08-25 | 深圳小孚医疗科技有限公司 | 用于显微成像粒子分析的聚焦系统和聚焦方法 |
US11302439B2 (en) * | 2017-06-27 | 2022-04-12 | Sony Corporation | Medical image processing apparatus, medical image processing method, and computing device |
DE112018003456T5 (de) | 2017-07-07 | 2020-03-19 | Semiconductor Energy Laboratory Co., Ltd. | Anzeigesystem und Betriebsverfahren des Anzeigesystems |
US10462361B2 (en) * | 2017-09-26 | 2019-10-29 | Rosemount Aerospace Inc. | Seeker with dynamic resolution imaging |
TW201915818A (zh) * | 2017-10-05 | 2019-04-16 | 香港商印芯科技股份有限公司 | 光學識別模組 |
US11317038B2 (en) | 2017-12-19 | 2022-04-26 | SmartSens Technology (HK) Co., Ltd. | Pixel unit with a design for half row reading, an imaging apparatus including the same, and an imaging method thereof |
KR102648747B1 (ko) | 2019-01-18 | 2024-03-20 | 삼성전자주식회사 | Hdr 이미지를 생성하기 위한 이미징 시스템 및 그것의 동작 방법 |
JP7309451B2 (ja) * | 2019-05-24 | 2023-07-18 | キヤノン株式会社 | 撮像装置および制御方法 |
KR20210053377A (ko) * | 2019-11-01 | 2021-05-12 | 삼성전자주식회사 | 이미지 센서 및 이미지 신호 처리기를 포함하는 이미지 장치, 및 이미지 센서의 동작 방법 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11355650A (ja) * | 1998-06-09 | 1999-12-24 | Nikon Corp | 画像取り込み装置 |
JP2000228747A (ja) * | 1998-12-03 | 2000-08-15 | Olympus Optical Co Ltd | 画像処理装置 |
JP2006049361A (ja) | 2004-07-30 | 2006-02-16 | Sony Corp | 半導体モジュール及びmos型固体撮像装置 |
JP2006197192A (ja) * | 2005-01-13 | 2006-07-27 | Sony Corp | 撮像装置及び撮像結果の処理方法 |
US20100002940A1 (en) | 2008-07-03 | 2010-01-07 | Sony Corporation | Image data processing apparatus and image data processing method |
JP2010021697A (ja) * | 2008-07-09 | 2010-01-28 | Sony Corp | 撮像素子、カメラ、撮像素子の制御方法、並びにプログラム |
JP2010239277A (ja) * | 2009-03-30 | 2010-10-21 | Fujifilm Corp | 撮像装置および撮像方法 |
JP2012139026A (ja) | 2010-12-27 | 2012-07-19 | Alpine Electronics Inc | 消費電力制御システム |
Family Cites Families (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6529640B1 (en) | 1998-06-09 | 2003-03-04 | Nikon Corporation | Image processing apparatus |
JP4245699B2 (ja) * | 1998-09-16 | 2009-03-25 | オリンパス株式会社 | 撮像装置 |
US6825884B1 (en) | 1998-12-03 | 2004-11-30 | Olympus Corporation | Imaging processing apparatus for generating a wide dynamic range image |
JP3993541B2 (ja) | 2003-07-30 | 2007-10-17 | 真人 佐々木 | 2次元マクロセル制御イメージセンサ、撮像素子、及び撮像方法 |
US7446812B2 (en) * | 2004-01-13 | 2008-11-04 | Micron Technology, Inc. | Wide dynamic range operations for imaging |
JP4449565B2 (ja) * | 2004-05-12 | 2010-04-14 | ソニー株式会社 | 物理量分布検知の半導体装置 |
JP2006245527A (ja) * | 2005-02-07 | 2006-09-14 | Fuji Photo Film Co Ltd | 固体撮像素子 |
TW201101476A (en) | 2005-06-02 | 2011-01-01 | Sony Corp | Semiconductor image sensor module and method of manufacturing the same |
WO2007134473A1 (de) * | 2006-05-22 | 2007-11-29 | Waeny Martin | Bildaufnehmer mit lokaler adaptiver belichtungsregelung |
JP2008118378A (ja) * | 2006-11-02 | 2008-05-22 | Canon Inc | 撮影装置及びその駆動方法 |
JP5141559B2 (ja) | 2006-12-18 | 2013-02-13 | ソニー株式会社 | 撮像装置及び方法、記録装置及び方法、再生装置及び方法 |
WO2008078227A1 (en) * | 2006-12-21 | 2008-07-03 | Koninklijke Philips Electronics N.V. | A device for and a method of processing audio data |
JP5260979B2 (ja) * | 2007-05-02 | 2013-08-14 | キヤノン株式会社 | 撮像システム、信号処理回路、及び信号処理方法 |
US8542315B2 (en) | 2007-11-27 | 2013-09-24 | Broadcom Corporation | Method and apparatus for expanded dynamic range imaging |
US8390710B2 (en) * | 2007-12-19 | 2013-03-05 | Canon Kabushiki Kaisha | Image pickup system, method for driving image pickup elements, and recording medium |
JP5219778B2 (ja) * | 2008-12-18 | 2013-06-26 | キヤノン株式会社 | 撮像装置及びその制御方法 |
CN101442617B (zh) * | 2008-12-23 | 2014-01-08 | 北京中星微电子有限公司 | 一种分块曝光的方法及其装置 |
JP2010178197A (ja) * | 2009-01-30 | 2010-08-12 | Panasonic Corp | 固体撮像装置の駆動方法、固体撮像装置およびカメラ |
KR100989126B1 (ko) * | 2009-02-05 | 2010-10-20 | 삼성모바일디스플레이주식회사 | 전자 영상 기기 및 그 구동 방법 |
US8179466B2 (en) * | 2009-03-11 | 2012-05-15 | Eastman Kodak Company | Capture of video with motion-speed determination and variable capture rate |
KR101133733B1 (ko) * | 2009-12-10 | 2012-04-09 | 삼성전자주식회사 | 전자 셔터를 이용한 다단계 노출 방법 및 이를 이용한 촬영 장치 |
JP5651976B2 (ja) * | 2010-03-26 | 2015-01-14 | ソニー株式会社 | 固体撮像素子およびその製造方法、並びに電子機器 |
JP2012060394A (ja) * | 2010-09-08 | 2012-03-22 | Canon Inc | 画像取得装置 |
US20140192238A1 (en) * | 2010-10-24 | 2014-07-10 | Linx Computational Imaging Ltd. | System and Method for Imaging and Image Processing |
WO2012127772A1 (ja) * | 2011-03-24 | 2012-09-27 | パナソニック株式会社 | 固体撮像素子および当該素子を備える撮像装置 |
JP5947507B2 (ja) | 2011-09-01 | 2016-07-06 | キヤノン株式会社 | 撮像装置及びその制御方法 |
RU2018130065A (ru) | 2012-03-30 | 2019-03-15 | Никон Корпорейшн | Модуль формирования изображений, устройство формирования изображений и управляющая программа для формирования изображений |
JPWO2013164915A1 (ja) | 2012-05-02 | 2015-12-24 | 株式会社ニコン | 撮像装置 |
WO2014141663A1 (ja) | 2013-03-14 | 2014-09-18 | 株式会社ニコン | 撮像ユニット、撮像装置および撮像制御プログラム |
CN109256404B (zh) * | 2013-07-04 | 2023-08-15 | 株式会社尼康 | 摄像元件以及电子设备 |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11355650A (ja) * | 1998-06-09 | 1999-12-24 | Nikon Corp | 画像取り込み装置 |
JP2000228747A (ja) * | 1998-12-03 | 2000-08-15 | Olympus Optical Co Ltd | 画像処理装置 |
JP2006049361A (ja) | 2004-07-30 | 2006-02-16 | Sony Corp | 半導体モジュール及びmos型固体撮像装置 |
JP2006197192A (ja) * | 2005-01-13 | 2006-07-27 | Sony Corp | 撮像装置及び撮像結果の処理方法 |
US20100002940A1 (en) | 2008-07-03 | 2010-01-07 | Sony Corporation | Image data processing apparatus and image data processing method |
JP2010016621A (ja) | 2008-07-03 | 2010-01-21 | Sony Corp | Image data processing apparatus, image data processing method, program, and recording medium |
JP2010021697A (ja) * | 2008-07-09 | 2010-01-28 | Sony Corp | Imaging element, camera, imaging element control method, and program |
JP2010239277A (ja) * | 2009-03-30 | 2010-10-21 | Fujifilm Corp | Imaging apparatus and imaging method |
JP2012139026A (ja) | 2010-12-27 | 2012-07-19 | Alpine Electronics Inc | Power consumption control system |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10623669B2 (en) | 2014-06-10 | 2020-04-14 | Sony Corporation | Image capturing control apparatus, image capturing apparatus, image capturing system and image capturing control method |
JPWO2015190021A1 (ja) * | 2014-06-10 | 2017-04-20 | Sony Corporation | Imaging control device, imaging device, imaging system, and imaging control method |
WO2015190021A1 (ja) * | 2014-06-10 | 2015-12-17 | Sony Corporation | Imaging control device, imaging device, imaging system, and imaging control method |
JP2016192606A (ja) * | 2015-03-30 | 2016-11-10 | Nikon Corporation | Electronic device and program |
US11800074B2 (en) * | 2015-03-30 | 2023-10-24 | Nikon Corporation | Electronic device and computer program product |
CN112954181A (zh) * | 2015-03-30 | 2021-06-11 | Nikon Corporation | Electronic device |
US20200162662A1 (en) * | 2015-03-30 | 2020-05-21 | Nikon Corporation | Electronic device and computer program product |
JP2018516013A (ja) * | 2015-05-19 | 2018-06-14 | Magic Leap, Inc. | Semi-global shutter imager |
US11272127B2 (en) | 2015-05-19 | 2022-03-08 | Magic Leap, Inc. | Semi-global shutter imager |
JP7438251B2 (ja) | 2015-05-19 | 2024-02-26 | Magic Leap, Inc. | Semi-global shutter imager |
JP7128872B2 (ja) | 2015-05-19 | 2022-08-31 | Magic Leap, Inc. | Semi-global shutter imager |
JP2021013186A (ja) * | 2015-05-19 | 2021-02-04 | Magic Leap, Inc. | Semi-global shutter imager |
US11019287B2 (en) | 2015-05-19 | 2021-05-25 | Magic Leap, Inc. | Semi-global shutter imager |
JP2022060543A (ja) | 2015-05-19 | 2022-04-14 | Magic Leap, Inc. | Semi-global shutter imager |
JP2017022641A (ja) * | 2015-07-14 | 2017-01-26 | Canon Inc | Image processing device, imaging device, and imaging control program |
JP2017183940A (ja) * | 2016-03-29 | 2017-10-05 | Honda Motor Co Ltd | Image processing device, image processing method, and image processing program |
EP3439282A4 (en) * | 2016-03-31 | 2019-11-13 | Nikon Corporation | IMAGE CAPTURE DEVICE, IMAGE PROCESSING DEVICE, AND ELECTRONIC APPARATUS |
CN109196855A (zh) * | 2016-03-31 | 2019-01-11 | Nikon Corporation | Imaging device, image processing device, and electronic apparatus |
US11483467B2 (en) | 2016-03-31 | 2022-10-25 | Nikon Corporation | Imaging device, image processing device, and electronic apparatus |
JP7372034B2 (ja) | 2016-03-31 | 2023-10-31 | Nikon Corporation | Imaging device and image processing device |
US12028610B2 (en) | 2016-03-31 | 2024-07-02 | Nikon Corporation | Imaging device, image processing device, and electronic apparatus |
JP2019057240A (ja) * | 2017-09-22 | 2019-04-11 | Denso Wave Inc | Imaging device |
Also Published As
Publication number | Publication date |
---|---|
EP3588940A1 (en) | 2020-01-01 |
CN109742094B (zh) | 2024-05-31 |
EP3588940B1 (en) | 2023-03-22 |
US20190089909A1 (en) | 2019-03-21 |
US10841512B2 (en) | 2020-11-17 |
US10142563B2 (en) | 2018-11-27 |
CN105324985B (zh) | 2018-11-23 |
US11637970B2 (en) | 2023-04-25 |
CN109743514B (zh) | 2022-01-28 |
JP6372488B2 (ja) | 2018-08-15 |
CN109256404B (zh) | 2023-08-15 |
CN109742094A (zh) | 2019-05-10 |
EP3018893A1 (en) | 2016-05-11 |
US20230231966A1 (en) | 2023-07-20 |
JPWO2015001646A1 (ja) | 2017-02-23 |
EP3018893A4 (en) | 2016-11-30 |
US20200412978A1 (en) | 2020-12-31 |
CN105324985A (zh) | 2016-02-10 |
US20160142645A1 (en) | 2016-05-19 |
CN109743514A (zh) | 2019-05-10 |
CN109256404A (zh) | 2019-01-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6372488B2 (ja) | | Electronic apparatus |
JP7544116B2 (ja) | | Imaging element and imaging apparatus |
US11206353B2 (en) | | Electronic apparatus, method for controlling electronic apparatus, and control program for setting image-capture conditions of image sensor |
JP6075393B2 (ja) | | Electronic apparatus and control program |
JP6375607B2 (ja) | | Electronic apparatus, method for controlling electronic apparatus, and control program |
JP6561428B2 (ja) | | Electronic apparatus, control method, and control program |
JP6409883B2 (ja) | | Electronic apparatus and control program |
JP7176591B2 (ja) | | Electronic apparatus |
JP2018117375A (ja) | | Electronic apparatus, method for controlling electronic apparatus, and control program |
JP2018148590A (ja) | | Electronic apparatus and imaging element |
JP2020171054A (ja) | | Electronic apparatus |
JP7283488B2 (ja) | | Imaging element and electronic apparatus |
JP6822507B2 (ja) | | Electronic apparatus |
JP2016158294A (ja) | | Electronic apparatus, method for controlling electronic apparatus, and control program |
JP2018207504A (ja) | | Electronic apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 201380077710.8; Country of ref document: CN |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13888636; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2015524974; Country of ref document: JP; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 14900802; Country of ref document: US |
| WWE | Wipo information: entry into national phase | Ref document number: 2013888636; Country of ref document: EP |
| NENP | Non-entry into the national phase | Ref country code: DE |