WO2018124050A1 - Imaging device, camera, and imaging method - Google Patents

Imaging device, camera, and imaging method

Info

Publication number
WO2018124050A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel signal
correction
processing unit
image processing
Prior art date
Application number
PCT/JP2017/046594
Other languages
French (fr)
Japanese (ja)
Inventor
Hisato Yoshimatsu
Original Assignee
Panasonic IP Management Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic IP Management Co., Ltd.
Publication of WO2018124050A1


Classifications

    • H: ELECTRICITY
    • H01: ELECTRIC ELEMENTS
    • H01L: SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L27/00: Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L27/14: Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L27/144: Devices controlled by radiation
    • H01L27/146: Imager structures
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40: Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70: SSIS architectures; Circuits associated therewith
    • H04N25/76: Addressed sensors, e.g. MOS or CMOS sensors

Definitions

  • the present disclosure relates to an imaging apparatus, a camera including the imaging apparatus, and an imaging method thereof.
  • an imaging apparatus that captures an image using an image sensor is known (see, for example, Patent Document 1).
  • the present disclosure provides an imaging apparatus, a camera, and an imaging method in which the image processing speed is increased as compared with the conventional art.
  • an imaging device according to one aspect of the present disclosure includes: a solid-state imaging device having a plurality of pixels arranged in a matrix and capable of nondestructive readout; an image processing unit that generates a captured image by correcting a first pixel signal acquired from the solid-state imaging device after the exposure is completed; and a memory that stores the first pixel signal corrected by the image processing unit. The correction is performed based on a second pixel signal acquired from the solid-state imaging device by nondestructive readout during exposure.
  • a camera includes the above-described imaging device and a lens that collects external light on the imaging device.
  • the imaging method acquires a first pixel signal, after completion of exposure, from a solid-state imaging device that includes a plurality of pixels arranged in a matrix and capable of nondestructive readout, and generates a captured image by correcting the first pixel signal based on a second pixel signal acquired from the solid-state imaging device by nondestructive readout during exposure.
  • the speed of image processing is increased.
  • FIG. 1 is a block diagram illustrating an overall configuration of an imaging apparatus according to an embodiment.
  • FIG. 2 is a diagram illustrating an example of a circuit configuration of a pixel according to the embodiment.
  • FIG. 3 is a functional block diagram of a camera in which the imaging device according to the embodiment is built.
  • FIG. 4A is a flowchart illustrating the operation of the imaging apparatus according to the embodiment.
  • FIG. 4B is a flowchart illustrating the operation of the imaging apparatus according to the conventional example.
  • FIG. 5 is a diagram for explaining the flow of image processing according to the embodiment.
  • FIG. 6 is a diagram illustrating an image generated by the image processing according to the embodiment.
  • FIG. 7 is an external view of a camera in which the imaging device according to the embodiment is built.
  • FIG. 1 is a block diagram showing an overall configuration of an imaging apparatus 10 according to the present embodiment.
  • the imaging apparatus 10 shown in the figure includes a solid-state imaging device 100, a signal processing unit 300, a display unit 500 (see FIG. 3), and an operation unit 600 (see FIG. 3).
  • the solid-state imaging device 100 includes a pixel array unit 110, a column AD conversion unit 120, a row scanning unit 130, a column scanning unit 140, and a drive control unit 150.
  • a column signal line 160 is arranged for each pixel column
  • a scanning line 170 is arranged for each pixel row.
  • the display unit 500 and the operation unit 600 included in the imaging device 10 are not illustrated.
  • the pixel array unit 110 is an imaging unit in which a plurality of pixels 210 are arranged in a matrix.
  • the column AD conversion (analog-to-digital conversion) unit 120 converts the analog signal input from each column signal line 160 into a digital value corresponding to the amount of light received by the pixel 210, and acquires, holds, and outputs that digital value.
  • the row scanning unit 130 has a function of controlling the reset operation, charge accumulation operation, and readout operation of the pixels 210 in units of rows.
  • the column scanning unit 140 sequentially outputs the digital values for one row held in the column AD conversion unit 120 to the row signal line 180, so that they reach the signal processing unit 300.
  • the drive control unit 150 controls each unit by supplying various control signals to the row scanning unit 130 and the column scanning unit 140.
  • the drive control unit 150 supplies various control signals to the row scanning unit 130 and the column scanning unit 140 based on a control signal from the signal processing unit 300.
  • FIG. 2 is a diagram illustrating an example of a circuit configuration of the pixel 210 according to this embodiment.
  • the pixel 210 includes a photoelectric conversion element 211, a reset transistor 212, an amplification transistor 213, a selection transistor 214, and a charge storage unit 215.
  • the photoelectric conversion element 211 is a photoelectric conversion unit that photoelectrically converts received light into signal charges (pixel charges).
  • the photoelectric conversion element 211 includes an upper electrode 211a, a lower electrode 211b, and a photoelectric conversion film 211c sandwiched between both electrodes.
  • the photoelectric conversion film 211c is a film made of a photoelectric conversion material that generates an electric charge according to received light.
  • the photoelectric conversion film 211c is made of an organic photoelectric conversion film containing organic molecules having a high light absorption function.
  • the photoelectric conversion element 211 according to the present embodiment is an organic photoelectric conversion element having an organic photoelectric conversion film
  • the solid-state imaging element 100 is an organic sensor using the organic photoelectric conversion element. Note that the organic photoelectric conversion film is formed across the plurality of pixels 210. Each of the plurality of pixels 210 has an organic photoelectric conversion film.
  • the thickness of the photoelectric conversion film 211c is, for example, about 500 nm. Moreover, the photoelectric conversion film 211c is formed using, for example, a vacuum deposition method.
  • the organic molecule has a high light absorption function over the entire visible light wavelength range from about 400 nm to about 700 nm.
  • the photoelectric conversion element 211 included in the pixel 210 according to the present embodiment is not limited to the organic photoelectric conversion film described above, and may be, for example, a photodiode formed of an inorganic material.
  • the upper electrode 211a is an electrode facing the lower electrode 211b, and is formed on the photoelectric conversion film 211c so as to cover the photoelectric conversion film 211c. That is, the upper electrode 211a is formed across a plurality of pixels 210.
  • the upper electrode 211a is made of a transparent conductive material (for example, ITO: indium tin oxide) in order to allow light to enter the photoelectric conversion film 211c.
  • the lower electrode 211b is an electrode for taking out the electrons or holes generated in the photoelectric conversion film 211c between it and the opposing upper electrode 211a.
  • the lower electrode 211b is formed for each pixel 210.
  • the lower electrode 211b is made of, for example, Ti, TiN, Ta, Mo, or the like.
  • the charge accumulating unit 215 is connected to the photoelectric conversion element 211 and accumulates signal charges taken out via the lower electrode 211b.
  • the reset transistor 212 has a drain supplied with a reset voltage VRST and a source connected to the charge storage unit 215, and resets (initializes) the potential of the charge storage unit 215. Specifically, when a predetermined voltage is supplied from the row scanning unit 130 to the gate of the reset transistor 212 via the reset scanning line 170A (turning it on), the reset transistor 212 resets the potential of the charge storage unit 215. When the supply of the predetermined voltage is stopped, signal charges accumulate in the charge accumulation unit 215 (exposure starts).
  • the amplification transistor 213 has a gate connected to the charge storage unit 215, a power supply voltage VDD supplied to the drain, and outputs a pixel signal corresponding to the amount of signal charge stored in the charge storage unit 215.
  • the selection transistor 214 has a drain connected to the source of the amplification transistor 213, a source connected to the column signal line 160, and determines the timing for outputting the pixel signal from the amplification transistor 213. Specifically, a pixel signal is output from the amplification transistor 213 by supplying a predetermined voltage from the row scanning unit 130 to the gate of the selection transistor 214 via the selection scanning line 170B.
  • the pixel 210 having the above configuration can perform nondestructive readout.
  • the non-destructive reading means reading out a pixel signal corresponding to the amount of charge without destroying the charge (signal charge) accumulated in the charge accumulation unit 215 during exposure. Note that “during exposure” is used to mean any timing within the exposure time.
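The difference between nondestructive and normal (destructive) readout can be sketched with a toy pixel model; the class, its methods, and the numbers below are illustrative inventions, not part of the disclosure, and the physics is idealized:

```python
class Pixel:
    """Toy model of the pixel in FIG. 2; idealized, with invented numbers."""
    def __init__(self):
        self.charge = 0.0  # charge held in the charge accumulation unit 215

    def expose(self, light, seconds):
        # photoelectric conversion keeps adding charge; nothing is cleared
        self.charge += light * seconds

    def read_nondestructive(self):
        # source-follower readout leaves the stored charge untouched
        return self.charge

    def read_and_reset(self):
        # normal (destructive) readout: the reset transistor clears the node
        value = self.charge
        self.charge = 0.0
        return value

p = Pixel()
p.expose(light=10.0, seconds=0.5)
mid = p.read_nondestructive()    # second pixel signal, taken during exposure
p.expose(light=10.0, seconds=0.5)
final = p.read_and_reset()       # first pixel signal, taken after exposure ends
```

Because `read_nondestructive` does not touch the stored charge, the mid-exposure value can be taken at any timing within the exposure without affecting the final signal.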
  • the column AD conversion unit 120 includes an AD converter 121 provided for each column signal line 160.
  • the AD converter 121 is, for example, a 14-bit AD converter.
  • the AD converter 121 digitally converts an analog pixel signal output from the pixel 210 using a ramp method, and outputs a digital value corresponding to the amount of light received by the pixel 210.
  • the AD converter 121 includes a comparator and an up / down counter (not shown).
  • the ramp-type AD conversion is AD conversion using a ramp wave: a ramp wave whose voltage rises at a constant slope is started, the time from the start point until the voltage of the ramp wave coincides with the voltage of the input signal is measured, and the measured time is output as a digital value.
  • the comparator compares the voltage of the column signal with the voltage of the reference signal input as a ramp wave, and outputs a signal indicating the timing at which the voltage of the reference signal matches the voltage of the column signal.
  • the up/down counter counts down (or up) during the period from when the reference signal is input to the comparator until the reference signal reaches the voltage of the column signal indicating the reference component. It then counts up (or down) during the period until the reference signal reaches the voltage of the column signal indicating the signal component, so that the counter finally holds a digital value corresponding to the difference obtained by subtracting the reference component from the signal component of the column signal.
  • the digital values held in the up / down counters are sequentially output to the row signal line 180 and output to the signal processing unit 300 via an output circuit (not shown, but an output buffer or the like).
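The down-then-up counting described above can be sketched as follows; the integer millivolt units and step size are illustrative assumptions, not values from the disclosure:

```python
def ramp_adc_cds(reset_mv, signal_mv, step_mv=1):
    """Ramp-type AD conversion with an up/down counter: count down while the
    ramp climbs to the reference (reset) component, then count up while it
    climbs to the signal component. The count left over is proportional to
    signal minus reference, i.e. the reset level is subtracted automatically.
    Units (integer millivolts) and the step size are illustrative."""
    count, v = 0, 0
    while v < reset_mv:        # first ramp: reference component
        count -= 1
        v += step_mv
    v = 0
    while v < signal_mv:       # second ramp: signal component
        count += 1
        v += step_mv
    return count
```

With a 14-bit converter like the AD converter 121 the usable count range would span 0 to 16383; here the range is arbitrary.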
  • the drive control unit 150 controls the row scanning unit 130 and the column scanning unit 140 to control the reset operation, charge accumulation operation, and readout operation of the pixels 210, as well as the output of digital signals from the AD converters 121 to the signal processing unit 300.
  • for readout, the drive control unit 150 controls the row scanning unit 130 to sequentially apply a predetermined voltage to the selection scanning lines 170B, causing the pixels to output analog pixel signals.
  • the drive control unit 150 also controls the column scanning unit 140 to sequentially output pixel signals (digital values) held in the AD converter 121 to the signal processing unit 300.
  • FIG. 3 is a functional block diagram of the camera 1 including the imaging device 10 and the lens 400 according to the present embodiment.
  • the camera 1 shown in the figure includes a solid-state imaging device 100, a signal processing unit 300, a lens 400, a display unit 500, and an operation unit 600.
  • the signal processing unit 300 includes a control unit 310, an image processing unit 320, and a memory 330.
  • the light that has passed through the lens 400 enters the solid-state imaging device 100.
  • the signal processing unit 300 drives the solid-state image sensor 100 and captures a pixel signal (digital value) from the solid-state image sensor 100.
  • the image processing unit 320 captures a pixel signal from the solid-state imaging device 100.
  • the image processing unit 320 performs predetermined signal processing on the pixel signal acquired from the solid-state imaging device 100 to generate a captured image.
  • the generated captured image is stored in the memory 330.
  • the generated captured image is output to the display unit 500.
  • the pixel signal (digital value) is an example of the first pixel signal.
  • the captured image is, for example in the case of a camera, the image that is displayed to the user or recorded.
  • control unit 310 reads a program from the memory 330 and executes the read program.
  • the control unit 310 is realized by a processor.
  • the processor functions as the control unit 310 when the software program stored in the memory 330 is executed by the processor.
  • the control unit 310 is realized by reading the software program from the nonvolatile memory to the volatile memory and executing the read software program by the processor.
  • the control unit 310 controls the drive control unit 150.
  • the control unit 310 may control other units as well.
  • the control unit 310 may perform control according to input received via the operation unit 600.
  • the control unit 310 may control the lens 400 (specifically, a motor that controls the position of the lens 400) to adjust the focus of the subject.
  • the image processing unit 320 generates a captured image by performing predetermined correction on the first pixel signal acquired from the solid-state imaging device 100 after the exposure is completed. Specifically, the image processing unit 320 performs a predetermined correction on the first pixel signal based on the second pixel signal acquired from the solid-state imaging device 100 by nondestructive readout during exposure, thereby obtaining a captured image. Generate.
  • the predetermined correction is a correction for which the correction information (for example, a correction value applied to the first pixel signal) cannot be obtained without using image data for one frame (the second pixel signal for one frame); examples include gradation correction, image blur correction, and white balance correction.
  • the present embodiment is characterized in that the above-described correction is performed on the first pixel signal based on the analysis result of the second pixel signal.
  • the image processing unit 320 reads a program from the memory 330 and executes the read program.
  • the image processing unit 320 is realized by a processor.
  • the processor functions as the image processing unit 320 when the software program stored in the memory 330 is executed by the processor.
  • the image processing unit 320 is realized by reading a software program from a nonvolatile memory into a volatile memory and executing the read software program by a processor. Note that the image processing unit 320 may perform the above correction under the control of the control unit 310.
  • control unit 310 and the image processing unit 320 may be realized by a microcomputer.
  • each of the control unit 310 and the image processing unit 320 includes, for example, a nonvolatile memory in which its operation program is stored, a volatile memory serving as a temporary storage area for executing the program, an input/output port, and a processor that executes the program.
  • in the gradation correction, the first pixel signal is corrected so that whiteout (overexposure) and blackout are less likely to occur. Specifically, overexposed pixels are corrected to be darker, and blacked-out pixels are corrected to be brighter. Other pixels are also corrected as necessary, so that an image is generated in which whiteout and blackout are reduced and the brightness balance is adjusted as a whole.
  • for this correction, the image processing unit 320 uses the second pixel signal (image data generated from the second pixel signal) acquired by nondestructive readout.
  • specifically, a blurred image is generated from the second pixel signal for one frame, and the first pixel signal is corrected by an arithmetic operation between the generated blurred image and the first pixel signal.
  • the first pixel signal is corrected for each pixel 210.
  • the correction value for the first pixel signal is obtained by an operation between the brightness of the first pixel signal of one pixel 210 and the brightness of the pixel of the blurred image corresponding to that first pixel signal.
  • a captured image is generated by correcting the first pixel signal with the correction value.
  • the blurred image is an example of a correction image.
  • in other words, the blurred image and the first pixel signal are operated on, and a captured image is generated by correcting the first pixel signal from the operation result.
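A minimal sketch of such a per-pixel operation between the first pixel signal and the blurred image follows. The disclosure does not give the actual formula; the retinex-style gain used here, and the `target`, `strength`, and `eps` parameters, are assumptions for illustration:

```python
def tone_correct(first, blur, target=0.5, strength=1.0, eps=1e-3):
    """Illustrative per-pixel gradation correction: each first-pixel value is
    scaled by a gain computed from the co-located blurred-image brightness, so
    dark areas are lifted and bright areas are pulled down. The gain formula
    is a common retinex-style choice, not one given in the disclosure."""
    out = []
    for f, b in zip(first, blur):
        gain = (target / (b + eps)) ** strength
        out.append(min(max(f * gain, 0.0), 1.0))  # clip to [0, 1] full scale
    return out

first = [0.05, 0.90]   # a blacked-out pixel and a near-whiteout pixel
blur  = [0.10, 0.85]   # local brightness taken from the blurred image
out = tone_correct(first, blur)
```

The dark pixel is brightened and the bright pixel is darkened, matching the behavior described for the gradation correction.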
  • in the image blur correction, a motion vector is acquired from an image (an example of a correction image) generated from the second pixel signal acquired by nondestructive readout and an image generated from, for example, the first or second pixel signal of the previous frame, and the first pixel signal is corrected according to the acquired motion vector.
  • specifically, the direction and amount of image blur are identified from the motion vector, and the image generated from the first pixel signal is translated in accordance with that direction and amount to perform the correction.
  • more specifically, the image generated from the first pixel signal is translated so that the motion vector becomes small.
  • the signal processing unit 300 calculates one motion vector for one frame image.
  • alternatively, the one-frame image may be divided into a plurality of windows (areas smaller than the one-frame image), a motion vector acquired for each window, and the motion vector for the one-frame image obtained from the motion vectors of the windows.
  • acquiring a motion vector using a correction image is an example of image analysis.
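One common way to obtain a per-window motion vector is exhaustive block matching, sketched below. The disclosure does not specify the matching method, so the sum-of-absolute-differences criterion and the search range are assumptions:

```python
def window_motion_vector(prev, curr, search=2):
    """Exhaustive block matching on one window: returns the (dy, dx) shift of
    `prev` that best matches `curr` by sum of absolute differences (SAD).
    Wrap-around indexing keeps the toy example simple; a real implementation
    would handle borders explicitly."""
    h, w = len(curr), len(curr[0])
    best, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sad = 0.0
            for y in range(h):
                for x in range(w):
                    sad += abs(prev[(y - dy) % h][(x - dx) % w] - curr[y][x])
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

prev = [[0.0] * 8 for _ in range(8)]; prev[2][2] = 1.0
curr = [[0.0] * 8 for _ in range(8)]; curr[3][4] = 1.0  # feature moved down 1, right 2
v = window_motion_vector(prev, curr)
```

The per-window vectors could then be combined (for example by a median) into the single motion vector for the one-frame image mentioned above.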
  • in the white balance correction, the white balance of the first pixel signal is corrected from the analysis result of the second pixel signal, in the same manner as described above.
  • the white balance is corrected after the offset is corrected (correcting the OB (optical black) step introduced when the analog signal is converted into the digital signal), and the second pixel signal acquired by nondestructive readout may also be used in the offset correction. Note that image blur correction, white balance correction, and the like need not use a blurred image, and can be performed using an image for one frame acquired by nondestructive readout.
  • correction image or image analysis using the generated correction image is an example of image processing.
  • the image data (image) generated from the second pixel signal acquired by nondestructive readout during exposure is generally darker than the captured image generated from the first pixel signal acquired after the exposure is completed. Therefore, the brightness of the image data may be adjusted by amplifying the second pixel signal (applying a gain to the brightness) before generating the correction image; specifically, an adjustment that increases the brightness may be performed.
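One simple gain for this brightness adjustment is the ratio of the full exposure time to the elapsed time at the moment of nondestructive readout; this particular formula is an assumption, since the disclosure only says the brightness may be increased:

```python
def amplify_second_signal(second, elapsed_s, total_s):
    """Brightness (gain) adjustment for the mid-exposure second pixel signal.
    The gain, total exposure time over elapsed time at readout, is an assumed
    model: charge accumulates roughly linearly with exposure time."""
    gain = total_s / elapsed_s
    return [min(v * gain, 1.0) for v in second]  # clip at full scale

# nondestructive readout at 20 ms of a 40 ms exposure -> gain of 2
adjusted = amplify_second_signal([0.1, 0.3, 0.6], elapsed_s=0.020, total_s=0.040)
```

As the text notes, a brighter second pixel signal also reduces the relative influence of noise when the gain is applied.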
  • the memory 330 functions as a work memory for the image processing unit 320.
  • the memory 330 temporarily stores image data processed by the image processing unit 320, image data (first pixel signal) input from the solid-state imaging device 100 before being processed by the image processing unit 320, and the like.
  • the memory 330 has a nonvolatile memory and a volatile memory.
  • the non-volatile memory stores a program corresponding to processing executed by the imaging apparatus 10, and captured image data.
  • the volatile memory is used as a work memory when the processor performs processing, for example.
  • the nonvolatile memory can be realized by a semiconductor memory such as a ferroelectric memory or a flash memory, for example.
  • the volatile memory can be realized by a semiconductor memory such as DRAM (Dynamic Random Access Memory).
  • the memory 330 only needs to be included in the imaging apparatus 10 and may not be included in the signal processing unit 300.
  • the non-volatile memory may be an external memory that is detachably connected to the imaging device 10.
  • the external memory is realized by a memory card such as an SD card (registered trademark), for example.
  • image data captured by the imaging device 10 is stored in the external memory.
  • the display unit 500 is a display device that displays the captured image generated by the signal processing unit 300, and is a liquid crystal monitor, for example.
  • the display unit 500 can display various setting information. For example, the display unit 500 can display shooting conditions (aperture, ISO sensitivity, etc.) at the time of shooting.
  • the operation unit 600 is an input unit that receives input from the user, and is, for example, a release button or a touch panel.
  • the touch panel is bonded to the liquid crystal monitor, and accepts imaging instructions from the user, changes to imaging conditions, and the like.
  • the imaging device 10 may include an interface (not shown) for performing communication between an external circuit and the solid-state imaging device 100 or the signal processing unit 300.
  • the interface is, for example, a communication port made of a semiconductor integrated circuit.
  • FIG. 4A is a flowchart showing the operation of the imaging apparatus 10 according to the present embodiment.
  • FIG. 4B is a flowchart illustrating the operation of the imaging apparatus according to the conventional example.
  • the conventional example shows an example in which correction is performed using a pixel signal (first pixel signal) acquired after the exposure is completed. That is, in the conventional example, correction is performed without performing nondestructive reading.
  • FIG. 4A and FIG. 4B are examples of flowcharts showing the operation in the case where gradation correction is performed.
  • the solid-state imaging device 100 starts exposure under the control of the control unit 310 (S1). Thereby, charges corresponding to the amount of received light are accumulated in each of the plurality of pixels 210. Specifically, charges corresponding to the amount of light received by the photoelectric conversion element 211 are accumulated in the charge accumulation unit 215.
  • control unit 310 performs control to execute nondestructive readout, which reads the charges accumulated in the charge accumulation unit 215 of each pixel 210 during the exposure without destroying them. More specifically, the control unit 310 controls the drive control unit 150 so that the column AD conversion unit 120 performs AD conversion for each pixel row, converting the accumulated charge (pixel signal) into a digital value (the second pixel signal). The converted digital values are sequentially output to the signal processing unit 300 by the column scanning unit 140. That is, the signal processing unit 300 acquires the second pixel signal by nondestructive readout (S2). Note that the potential of the charge storage unit 215 is not reset by the reset transistor 212 after nondestructive readout is performed.
  • the image processing unit 320 generates a blurred image (an example of a correction image) from the acquired second pixel signal (S3).
  • the image processing unit 320 may generate a blurred image having a lower resolution than an image generated from the second pixel signal or a captured image. For example, when the resolution of the image generated from the second pixel signal or the captured image is 5000 ⁇ 4000, the resolution of the blurred image is 50 ⁇ 40 or the like.
  • the blurred image is generated by accumulating the second pixel signals for one frame, reducing the resulting image, and then enlarging it.
  • in the reduction, the pixel values of a plurality of adjacent pixels 210 are integrated to calculate one pixel value, and a reduced image is generated.
  • the resolution of the reduced image is the same as that of the blurred image, and is 50 ⁇ 40, for example.
  • the reduced image is enlarged.
  • the reduced image is enlarged to, for example, the image size before reduction.
  • the resolution of the enlarged image (that is, the blurred image) is the same as the resolution of the reduced image, and is 50 ⁇ 40, for example.
  • the blurred image has, for example, an image size that is the same as the image size before the reduction, and a resolution that is lower than the image before the reduction.
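The reduce-then-enlarge procedure above can be sketched with block averaging followed by pixel repetition; the 4 x 4 frame and the reduction factor below are toy values (the disclosure's example reduces 5000 x 4000 to 50 x 40):

```python
def make_blur_image(frame, factor):
    """Reduce-then-enlarge blurred image: average each factor x factor block
    (the reduction, integrating adjacent pixel values into one), then repeat
    each averaged value back out to the original image size (the enlargement).
    The result has the original size but the reduced resolution."""
    h, w = len(frame), len(frame[0])
    # reduction: integrate adjacent pixel values into one pixel value
    small = [[sum(frame[by * factor + i][bx * factor + j]
                  for i in range(factor) for j in range(factor)) / factor ** 2
              for bx in range(w // factor)]
             for by in range(h // factor)]
    # enlargement: back to the original image size at the reduced resolution
    return [[small[y // factor][x // factor] for x in range(w)] for y in range(h)]

frame = [[float(4 * r + c) for c in range(4)] for r in range(4)]
blur = make_blur_image(frame, 2)
```

As noted above, a low-pass filter over the second-pixel-signal image would be an alternative way to obtain the same kind of blurred image.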
  • the generated blurred image may be held by the image processing unit 320 or stored in the memory 330, for example. That is, the image processing unit 320 may store the generated blurred image in the memory 330, or may store the blurred image in the memory when the image processing unit 320 includes a memory (not shown). In the following description, an example in which the image processing unit 320 stores a blurred image in the memory 330 will be described. Note that the blurred image may be generated by performing low-pass filter processing on an image generated by the acquired second pixel signal.
  • the exposure in the solid-state imaging device 100 ends (S4). That is, in the imaging apparatus 10 according to the present embodiment, generation of a blurred image in the image processing unit 320 is completed before the exposure of the solid-state imaging device 100 is completed.
  • the control unit 310 may control the timing of nondestructive readout based on the exposure time and the time (first time) required from acquisition of the second pixel signal until generation of the blurred image is completed (S2 and S3). For example, the control unit 310 may cause the solid-state imaging device 100 to perform nondestructive readout at a time that is the first time before the time when the exposure ends.
  • in this way, the image processing unit 320 can increase the brightness of the second pixel signal acquired by nondestructive readout and generate the blurred image before the end of exposure. That is, the image processing unit 320 can correct the first pixel signals, which are sequentially acquired for each pixel 210 after the exposure is completed, as soon as they are acquired. Further, when the brightness indicated by the second pixel signal is high, the influence of noise when gain is applied to the second pixel signal can be reduced.
  • the first time can be specified before the exposure is started from the processing capability of the signal processing unit 300 and the number of pixels of the pixel array unit 110.
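Under that assumption, the first time and the readout start time might be estimated as follows; the throughput figure is invented for illustration, since the disclosure only says the first time can be determined from the processing capability and the pixel count:

```python
def nondestructive_readout_start(exposure_end_s, pixels, pixels_per_second):
    """When to trigger nondestructive readout: the 'first time' back from the
    end of exposure, so the blurred image is finished exactly when exposure
    finishes. The linear pixels-per-second throughput model is an assumed
    simplification of the signal processing unit's capability."""
    first_time = pixels / pixels_per_second  # readout + blur-generation time
    return exposure_end_s - first_time

# 5000 x 4000 pixels at an assumed 1 Gpixel/s -> first_time = 20 ms
start = nondestructive_readout_start(exposure_end_s=0.040,
                                     pixels=5000 * 4000,
                                     pixels_per_second=1_000_000_000)
```

Because every quantity is known before exposure starts, the trigger time can be scheduled in advance, as the text states.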
  • in step S3, a correction image may be generated and image analysis of the correction image performed.
  • in that case, the control unit 310 may control the timing of nondestructive readout based on the exposure time and the time (second time) required from acquisition of the second pixel signal until generation and analysis of the correction image are completed (S2 and S3). For example, the control unit 310 may cause the solid-state imaging device 100 to perform nondestructive readout at a time that is the second time before the time when the exposure ends.
  • the control unit 310 controls the drive control unit 150 after the exposure is completed, so that the column AD conversion unit 120 performs AD conversion for each pixel column and converts the accumulated charge into a corresponding digital value.
  • the converted digital values are sequentially output to the signal processing unit 300 by the column scanning unit 140. That is, the signal processing unit 300 acquires the first pixel signal (S5). More specifically, the signal processing unit 300 sequentially acquires a first pixel signal for each of the plurality of pixels 210.
  • the reading of the first pixel signal is performed by, for example, normal reading.
  • the normal reading is destructive reading, and is a reading in which the accumulated charge is destroyed after the pixel signal is read out (the potential of the charge accumulation unit 215 is reset by turning on the reset transistor 212).
  • the image processing unit 320 uses the blurred image stored in the memory 330 to generate a captured image by correcting the first pixel signal from the blurred image and the first pixel signal (S6).
  • in step S6, correction for noise such as streaking in the image and light amount adjustment for the pixels 210 at the periphery of the lens 400 may be performed together. This shortens the time required for correction compared with performing each correction separately.
  • the image processing unit 320 displays the captured image on the display unit 500 included in the imaging device 10, or stores the captured image in an external storage medium via a storage unit (not shown; for example, an external memory) or an interface (not shown) included in the camera 1.
  • the solid-state imaging device starts exposure (S1).
  • in the conventional method, the signal processing unit does not perform image processing during the exposure (does not perform readout or the like), and the exposure ends (S4). At this point, no blurred image has been generated yet.
  • the signal processing unit acquires the first pixel signal after the exposure is completed (S5).
  • the signal processing unit generates an intermediate image by performing correction for noise generated in the solid-state imaging device and light amount adjustment for pixels around the lens on the first pixel signal (S11). These processes are performed before storing in the memory.
  • the intermediate image is stored in the memory (S12).
  • the intermediate image is stored in a DRAM included in the memory. That is, in the conventional method, it takes time to store the intermediate image in the memory. Furthermore, a memory for storing the intermediate image is required, and therefore the memory capacity needs to be increased. For example, in order to store an intermediate image, an extra memory capable of storing image data for one frame is required.
  • a blurred image is generated using the intermediate image (S13).
  • an intermediate image is used as an image for generating a blurred image.
  • the method for generating the blurred image is the same as the method for generating the blurred image according to the present embodiment (see S3); however, gain correction is not performed on the first pixel signal. Since step S13 is performed after the exposure is completed, the conventional method takes time for the processing after the end of exposure.
  • the intermediate image is read from the memory (S14), and the captured image is generated by correcting the intermediate image using the blurred image generated in step S13 (S15). That is, in the conventional method, it takes time to read the intermediate image from the memory.
  • as described above, the imaging apparatus 10 can perform image processing using an image acquired by nondestructive readout during exposure, so that the time required for image processing can be shortened compared with a conventional imaging apparatus.
  • FIG. 5 is a diagram for comparing the flow of image processing.
  • FIG. 5A is a diagram for explaining the flow of image processing of an imaging apparatus according to a conventional example.
  • FIG. 5B is a diagram for explaining the flow of image processing of the imaging apparatus 10 according to the present embodiment.
  • in the conventional method, the signal processing unit does not perform image processing from the start of exposure to the end of exposure.
  • the signal processing unit 300 acquires the second pixel signal by performing nondestructive readout during exposure (S2). Then, a blurred image is generated using the second pixel signal (S3).
  • FIG. 5B shows an example in which the signal processing unit 300 performs nondestructive reading and image processing so that generation of a blurred image is completed by the end of exposure (S4).
  • the signal processing unit 300 can complete the image processing in step S13 in the conventional method during exposure by performing nondestructive reading.
  • even if the exposure time is short and the process of step S3 is not completed by the end of the exposure, a part of the process of step S3 can be performed during the exposure, so it is still possible to shorten the time from the end of exposure to the end of image processing. In other words, it suffices that the nondestructive readout is performed before the exposure is completed.
  • in the conventional method, image processing is started after completion of exposure.
  • an intermediate image is generated from the first pixel signal acquired after the exposure is completed (S11).
  • This is processing performed before storing the first pixel signal in the memory, and includes correction for noise and light amount adjustment for pixels around the lens as described above.
  • the processing performed before storing in the memory may be other than the above.
  • the generated intermediate image is stored in the memory (S12).
  • the intermediate image is an image for one frame.
  • the image processing unit generates a blurred image using the intermediate image (S13). That is, the image processing unit generates a blurred image for one frame.
  • the image processing unit generates a captured image from the intermediate image stored in the memory and the blurred image generated in step S13 (S15).
  • in the conventional method, since the signal processing unit starts image processing after the end of exposure, it takes time from the end of exposure to the end of image processing; specifically, steps S11 to S15 take time. Therefore, the time required from the start of exposure to the end of image processing (imaging time Ta) is also long. Further, a memory for storing the intermediate image is required.
  • the image processing performed after the end of exposure is generation of a captured image.
  • the image processing unit 320 generates a captured image from the first pixel signal acquired after the exposure is completed and the blurred image generated in step S3 (S6).
  • the imaging time Tb of the imaging apparatus 10 according to the present embodiment can also be shortened (Tb < Ta). Specifically, the imaging time can be shortened by the shortened time T (Ta - Tb) shown in the drawing.
  • the image processing unit 320 sequentially acquires the first pixel signal for each pixel 210 after the exposure is completed. For example, a calculation is performed between the sequentially acquired first pixel signal for each pixel 210 and the pixel of the blurred image corresponding to that pixel 210, and the first pixel signal is corrected according to the calculation result. That is, it is possible to sequentially acquire the first pixel signal for each pixel 210 and sequentially perform the correction.
  • the first pixel signals that are sequentially corrected are sequentially stored in the memory 330, for example. As a result, the time for storing the uncorrected first pixel signal in the memory 330 and reading it back for correction can be omitted, so that the imaging time Tb can be further shortened. In addition, since capacity for storing the uncorrected first pixel signal is not necessary, the capacity of the memory 330 can be reduced.
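The sequential flow described above, correcting each first pixel signal as it arrives and storing only the corrected value in the memory 330, might be sketched as follows (generator-based; all names and the subtraction used as the correction are assumptions):

```python
def stream_corrected(pixel_stream, blurred_lookup):
    """Correct each first pixel signal as soon as it is read out, so the
    uncorrected frame never has to be buffered in memory.

    pixel_stream: iterable of (pixel_index, raw_value) in readout order.
    blurred_lookup: callable mapping pixel_index to the value of the
    corresponding pixel of the blurred image built during exposure.
    """
    for idx, raw in pixel_stream:
        yield idx, raw - blurred_lookup(idx)  # illustrative correction

# Corrected values are written to the memory as they are produced.
memory_330 = dict(stream_corrected([(0, 10), (1, 12)], lambda i: 2))
```

Only corrected values ever reach `memory_330`, so no buffer for the uncorrected frame is needed, which is why the memory capacity can be reduced.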
  • when the readout method is a rolling shutter, the first pixel signal acquired by the conventional method and the first pixel signal acquired by the method of the present embodiment are the same signal.
  • the readout method is not limited to the rolling shutter.
  • a global shutter may be used.
  • since the shutter function can be realized by adjusting the voltage applied to the organic photoelectric conversion film, a global shutter can be realized without adding an element such as a memory. Further, by causing all the organic photoelectric conversion films to function as shutters during the readout period of the global shutter, it is possible to suppress the accumulation of charges in the charge accumulation unit 215 during that period. That is, by using the organic photoelectric conversion film, rolling distortion can be reduced by the global shutter without adding an element. Note that readout by nondestructive reading and readout after completion of exposure are performed by the same method.
  • FIG. 6 is a diagram for comparing images generated by image processing.
  • FIG. 6A is a diagram for explaining images generated by the image processing of an imaging apparatus according to a conventional example.
  • FIG. 6B is a diagram for explaining an image generated by the image processing of the imaging apparatus 10 according to the present embodiment.
  • FIG. 6B shows an example in which generation of a blurred image ends during exposure.
  • the conventional method cannot generate an image (for example, an intermediate image or a blurred image) for correcting the first pixel signal during exposure; that is, image processing cannot be performed during exposure.
  • after the end of exposure (S4), the signal processing unit generates an intermediate image P1 subjected to the above correction from the first pixel signal.
  • the intermediate image P1 is an image for one frame. Note that the intermediate image P1 shows a case where an image that is dark overall (because the actual subject is dark) is captured, and this is represented by dot-like hatching.
  • the blurred image P2 is generated from the intermediate image P1.
  • the blurred image is also an image for one frame.
  • a captured image P3 is generated from the intermediate image P1 and the blurred image P2. That is, in the conventional method, it is necessary to generate three images.
  • FIG. 6A shows an example in which the captured image P3 is generated by correcting the intermediate image P1 so that it becomes slightly brighter. The correction is not limited to this.
  • in the present embodiment, image processing is performed during exposure.
  • the signal processing unit 300 generates a blurred image P12 from the second pixel signal acquired by nondestructive readout (S2) during exposure. Since the nondestructive readout is performed partway through the exposure, the blurred image P12 generated using the second pixel signal acquired at that time is a dark image as a whole.
  • the blurred image P12 is an image that is darker than the captured image P13 generated using the first pixel signal acquired after the end of exposure. Further, the blurred image P12 is darker than the blurred image P2 generated from the intermediate image P1 shown in FIG. 6A. Therefore, the image processing unit 320 may adjust the gain of the second pixel signal and generate the blurred image P12 using the gain-adjusted second pixel signal. As a result, a blurred image P12 that is substantially the same as the blurred image P2 in the conventional example can be obtained.
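One plausible gain choice (our assumption; the disclosure does not specify the formula) is the ratio of the full exposure time to the elapsed time at the nondestructive readout, under a linear charge-accumulation assumption:

```python
def adjust_gain(second_signal, elapsed_ms, full_exposure_ms):
    """Brighten the second pixel signal, read nondestructively after
    elapsed_ms of a full_exposure_ms exposure, so that the blurred image
    P12 built from it approximates the end-of-exposure brightness of P2.
    Assumes the accumulated charge grows roughly linearly with time.
    """
    gain = full_exposure_ms / elapsed_ms
    return [v * gain for v in second_signal]

# Readout halfway through the exposure doubles the signal.
```

Real scenes and sensors deviate from linearity, so a practical implementation would likely refine this gain, but the sketch shows why the gain-adjusted P12 approaches the brightness of the conventional P2.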
  • a captured image P13 is generated from the blurred image P12 generated by adjusting the gain and the first pixel signal acquired after the exposure is completed.
  • the captured image P13 generated in this manner and the captured image P3 generated by the conventional method are substantially the same image.
  • by adjusting the gain of the second pixel signal, the accuracy of the captured image P13 can be improved.
  • Examples of the camera 1 in which the imaging apparatus 10 is built in include a digital still camera 1A shown in FIG. 7A and a digital video camera 1B shown in FIG. 7B.
  • when the imaging device 10 according to the present embodiment is built into the camera shown in FIG. 7A or 7B, the time from when the user issues an imaging instruction via the operation unit 600 as described above (after the shutter is released) until the captured image is displayed on the display unit 500 can be shortened. Further, when continuously capturing images, the time from when the shutter is released until the next shutter can be released can be shortened.
  • as described above, the imaging device 10 includes the solid-state imaging device 100 having a plurality of pixels arranged in a matrix and capable of nondestructive readout, and an image processing unit 320 that generates a captured image P13 by correcting the first pixel signal acquired from the solid-state imaging device 100 after completion of exposure.
  • the image processing unit 320 performs correction based on the second pixel signal acquired from the solid-state imaging device 100 by nondestructive reading during exposure.
  • the imaging apparatus 10 acquires the second pixel signal during exposure by nondestructive readout. Therefore, a process for correcting the first pixel signal using the second pixel signal (for example, generation of a correction image for correcting the first pixel signal, or analysis of the correction image) can be performed during exposure.
  • for example, the image processing unit 320 performs image processing such as generating a correction image used for the correction from the second pixel signal, or generating the correction image and analyzing it. Then, the control unit 310 controls the timing at which the solid-state imaging device 100 performs nondestructive readout so that the image processing in the image processing unit 320 ends before the exposure ends.
  • thereby, a blurred image (an example of a correction image) P12 is generated using the second pixel signal acquired by nondestructive readout before the exposure is completed, so that the signal of each pixel 210 can be corrected sequentially.
  • when the correction is image blur correction or white balance correction, a correction image is generated using the second pixel signal acquired by nondestructive readout, and image analysis of the correction image can be performed before the exposure is completed.
  • accordingly, when the first pixel signal acquired after the exposure is completed is acquired sequentially for each pixel 210, each signal can be corrected sequentially. Therefore, the imaging time Tb can be further shortened.
  • further, a memory 330 is provided. The image processing unit 320 performs the correction corresponding to each pixel 210 on the first pixel signal sequentially obtained for each pixel 210, and stores the corrected first pixel signal in the memory 330.
  • the correction can be performed without temporarily storing the first pixel signal in the memory 330, so that the time for storing and reading the first pixel signal before the correction in the memory 330 can be saved.
  • the capacity of the memory 330 can be reduced.
  • the correction image is a blurred image P12 whose resolution is lower than that of the captured image P13. Then, the image processing unit 320 generates the blurred image P12 from the second pixel signal to which the gain correction for the brightness is added.
  • since the processing amount in the signal processing unit 300 can be reduced by using the blurred image P12, the processing time in the signal processing unit 300 can be shortened. Furthermore, the blurred image P12 generated by applying gain correction to the second pixel signal is an image having substantially the same brightness as the blurred image P2 generated in the conventional example. As a result, the first pixel signal can be corrected with higher accuracy.
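A low-resolution blurred image of the kind described can be obtained, for example, by block-averaging the pixel signal; the 2x2 block size below is an arbitrary illustrative choice, not taken from the disclosure:

```python
def blurred_image(frame, width, block=2):
    """Downsample a flat, row-major frame by averaging block x block
    pixels, yielding a low-resolution blurred (correction) image.
    Assumes width and height are multiples of block.
    """
    height = len(frame) // width
    out = []
    for by in range(0, height, block):
        row = []
        for bx in range(0, width, block):
            vals = [frame[(by + y) * width + (bx + x)]
                    for y in range(block) for x in range(block)]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out
```

The blurred image has 1/(block*block) as many samples as the captured image, which is what reduces the processing amount in the signal processing unit 300.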
  • the correction is gradation correction, image blur correction, or white balance correction.
  • each of the plurality of pixels 210 has an organic photoelectric conversion film.
  • since the shutter function can be realized by adjusting the voltage applied to the organic photoelectric conversion film, a global shutter can be realized without adding an element such as a memory. Therefore, even when the subject is moving, image data with less distortion can be acquired.
  • the camera 1 includes the imaging device 10 described above.
  • the imaging method acquires a first pixel signal, after the exposure is completed, from the solid-state imaging device 100 having a plurality of pixels 210 capable of nondestructive readout, and generates a captured image P13 by correcting the first pixel signal based on the second pixel signal acquired from the solid-state imaging device 100 by nondestructive readout during exposure.
  • since the second pixel signal can be acquired during exposure by nondestructive readout, a blurred image can be generated using the second pixel signal during exposure.
  • the time from the end of exposure to the end of image processing can be shortened. In other words, the time from the start of exposure until the end of image processing (imaging time) can be shortened.
  • the image processing unit 320 may use a pixel-mixed signal acquired from the solid-state imaging device 100 as a blurred image.
  • the image processing unit 320 performs the gradation correction using the blurred image P12.
  • the image processing unit 320 may acquire a histogram for the brightness of the image from the second pixel signal, and use the result as a correction signal to correct the gradation of the first pixel signal.
  • the correction of the gradation of the first pixel signal according to a brightness histogram generated from the second pixel signal acquired from the solid-state imaging device 100 is one example of image processing performed by the image processing unit 320.
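As one concrete (assumed) realization of this histogram-based gradation correction, a lookup table can be built from a brightness histogram of the second pixel signal, in the style of histogram equalization, and then applied to the first pixel signal after exposure ends:

```python
def equalization_lut(second_signal, levels=256):
    """Build a gradation-correction lookup table from a brightness
    histogram of the second pixel signal. Histogram equalization is used
    here as one concrete example; the disclosure does not fix the method.
    """
    hist = [0] * levels
    for v in second_signal:
        hist[v] += 1
    total, cum, lut = len(second_signal), 0, []
    for count in hist:
        cum += count
        lut.append(round(cum / total * (levels - 1)))
    return lut

def apply_lut(first_signal, lut):
    """Apply the table to the first pixel signal after exposure ends."""
    return [lut[v] for v in first_signal]
```

Because the table is built from the second pixel signal during exposure, only the cheap table lookup remains to be done after exposure ends.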
  • the present invention is not limited to this.
  • alternatively, the first pixel signal for one frame may be temporarily stored in the memory 330, and the image processing unit 320 may read the first pixel signal from the memory 330 for each pixel 210 and generate the captured image by correcting the read first pixel signal using the blurred image.
  • nondestructive readout may be performed a plurality of times during exposure. For example, when image blur correction is performed, two nondestructive readouts may be performed during exposure, a motion vector may be acquired from the image data based on each pixel signal, and the first pixel signal may be corrected. Further, in order to reduce the influence of subject blurring and flash light during the exposure period, the image processing unit 320 may, for example, select, from a plurality of images acquired by nondestructive readout, images less affected by subject blurring and flash light as correction images.
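A toy sketch of the motion-vector step (one-dimensional only, minimizing a mean absolute difference between two nondestructive readouts; a real device would search two-dimensional blocks, and none of these names come from the disclosure):

```python
def global_motion_1d(img_a, img_b, max_shift=3):
    """Estimate a 1-D global motion between two nondestructive readouts
    by minimizing the mean absolute difference over candidate shifts.
    A real device would search 2-D blocks; this is a toy illustration.
    """
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(img_a[i], img_b[i + s]) for i in range(len(img_a))
                 if 0 <= i + s < len(img_b)]
        if not pairs:
            continue
        cost = sum(abs(a - b) for a, b in pairs) / len(pairs)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

# A scene that moved by one sample between the two readouts is recovered
# as a shift of 1.
```

The recovered shift could then parameterize the blur kernel used to correct the first pixel signal.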
  • the camera 1 including the imaging device 10 according to the present disclosure has been described, but the application is not limited thereto.
  • Various electronic devices incorporating the imaging device 10 according to the present disclosure are also included in the present disclosure.
  • each component (functional block) in the imaging device 10 may be individually formed as one chip by a semiconductor device such as an IC (Integrated Circuit) or an LSI (Large Scale Integration), or a part or all of the components may be integrated into one chip.
  • the method of circuit integration is not limited to LSI, and implementation using a dedicated circuit or a general-purpose processor is also possible.
  • an FPGA (Field Programmable Gate Array) or a reconfigurable processor that can reconfigure the connection and setting of circuit cells inside the LSI may be used.
  • if integrated circuit technology that replaces LSI appears as a result of progress in semiconductor technology or another derived technology, the functional blocks may be integrated using that technology. Application of biotechnology is also a possibility.
  • all or part of the various processes described above may be realized by hardware such as an electronic circuit or may be realized by using software.
  • the processing by software is realized by a processor included in the imaging apparatus 10 executing a program stored in a memory.
  • the program may be recorded on a recording medium and distributed or circulated. For example, by installing the distributed program in a device having another processor and causing the processor to execute the program, it is possible to cause that device to perform each of the above processes.
  • the camera 1 has been described with respect to the example including the lens 400 that allows light from the outside to enter the solid-state imaging device 100.
  • the lens 400 may be a lens that can be attached to and detached from the camera 1.
  • the camera 1 may not include the lens 400.
  • the lens 400 collects light from the outside and makes it incident on the solid-state imaging device 100.
  • the present disclosure can be widely used for imaging devices that capture images.

Abstract

An imaging device (10) is provided with a solid-state imaging element (100) having a plurality of pixels (210) which are arranged in matrix form and can be read out nondestructively, an image processing unit (320) for correcting a first pixel signal acquired from the solid-state imaging element (100) after the termination of exposure and thereby generating a captured image (P13), and a memory (330) for storing the first pixel signal corrected by the image processing unit (320). The image processing unit (320) makes the correction on the basis of a second pixel signal acquired from the solid-state imaging element (100) by nondestructive readout during exposure.

Description

Imaging apparatus, camera, and imaging method
The present disclosure relates to an imaging apparatus, a camera including the imaging apparatus, and an imaging method thereof.
Conventionally, an imaging apparatus that captures an image using an image sensor is known (see, for example, Patent Document 1).
JP 2008-042180 A
Various kinds of image processing are performed in an imaging apparatus, and it is desirable for the imaging apparatus that the speed of this image processing be high.
Therefore, the present disclosure provides an imaging apparatus, a camera, and an imaging method in which the speed of image processing is higher than before.
In order to achieve the above object, an imaging device according to one aspect of the present disclosure includes: a solid-state imaging device having a plurality of pixels arranged in a matrix and capable of nondestructive readout; an image processing unit that generates a captured image by correcting a first pixel signal acquired from the solid-state imaging device after completion of exposure; and a memory that stores the first pixel signal corrected by the image processing unit, wherein the image processing unit performs the correction based on a second pixel signal acquired from the solid-state imaging device by nondestructive readout during exposure.
In addition, a camera according to one aspect of the present disclosure includes the above-described imaging device and a lens that collects external light onto the imaging device.
In addition, an imaging method according to one aspect of the present disclosure acquires a first pixel signal, after completion of exposure, from a solid-state imaging device having a plurality of pixels arranged in a matrix and capable of nondestructive readout, and generates a captured image by correcting the first pixel signal based on a second pixel signal acquired from the solid-state imaging device by nondestructive readout during exposure.
According to the imaging apparatus, camera, and imaging method of the present disclosure, the speed of image processing is increased.
FIG. 1 is a block diagram illustrating the overall configuration of an imaging apparatus according to an embodiment.
FIG. 2 is a diagram illustrating an example of the circuit configuration of a pixel according to the embodiment.
FIG. 3 is a functional block diagram of a camera in which the imaging device according to the embodiment is built.
FIG. 4A is a flowchart illustrating the operation of the imaging apparatus according to the embodiment.
FIG. 4B is a flowchart illustrating the operation of an imaging apparatus according to a conventional example.
FIG. 5 is a diagram for explaining the flow of image processing according to the embodiment.
FIG. 6 is a diagram illustrating images generated by the image processing according to the embodiment.
FIG. 7 is an external view of a camera in which the imaging device according to the embodiment is built.
Hereinafter, the imaging apparatus, camera, and imaging method of the present disclosure will be described in detail with reference to the drawings. Note that each of the embodiments described below shows a preferred specific example of the present disclosure. Therefore, the numerical values, shapes, materials, components, arrangement and connection forms of the components, steps, order of steps, and the like shown in the following embodiments are merely examples and are not intended to limit the present disclosure. Accordingly, among the components in the following embodiments, components not described in the independent claims indicating the highest-level concept of the present disclosure are described as optional components.
Note that the accompanying drawings and the following description are provided so that those skilled in the art can fully understand the present disclosure, and they are not intended to limit the claimed subject matter. Each figure is a schematic diagram and is not necessarily illustrated strictly.
(Embodiment)
Hereinafter, the embodiment will be described with reference to FIGS. 1 to 7.
[1. Overall configuration of imaging apparatus]
First, the overall configuration of the imaging apparatus according to the present embodiment will be described with reference to FIG. 1. FIG. 1 is a block diagram showing the overall configuration of an imaging apparatus 10 according to the present embodiment. The imaging apparatus 10 shown in the figure includes a solid-state imaging device 100, a signal processing unit 300, a display unit 500 (see FIG. 3), and an operation unit 600 (see FIG. 3). Further, the solid-state imaging device 100 includes a pixel array unit 110, a column AD conversion unit 120, a row scanning unit 130, a column scanning unit 140, and a drive control unit 150. In the pixel array unit 110 and its peripheral region, a column signal line 160 is arranged for each pixel column, and a scanning line 170 is arranged for each pixel row. In FIG. 1, the display unit 500 and the operation unit 600 included in the imaging apparatus 10 are not illustrated.
The pixel array unit 110 is an imaging unit in which a plurality of pixels 210 are arranged in a matrix.
The column AD conversion (analog/digital converter) unit 120 is a conversion unit that digitally converts the signal (analog signal) input from each column signal line 160, thereby acquiring, holding, and outputting a digital value corresponding to the amount of light received by each pixel 210.
The row scanning unit 130 has a function of controlling the reset operation, charge accumulation operation, and readout operation of the pixels 210 in units of rows.
The column scanning unit 140 causes the digital values for one row held in the column AD conversion unit 120 to be sequentially output to the row signal line 180, thereby outputting them to the signal processing unit 300.
The drive control unit 150 controls the row scanning unit 130 and the column scanning unit 140 by supplying various control signals to them. For example, the drive control unit 150 supplies these control signals based on a control signal from the signal processing unit 300.
[2. Configuration of solid-state imaging device]
Next, the configuration of the solid-state imaging device 100 will be described in detail with reference to FIG. 2, referring also to FIG. 1.
[2-1. Pixel]
First, the pixel 210 will be described with reference to FIG. 2. FIG. 2 is a diagram illustrating an example of the circuit configuration of the pixel 210 according to the present embodiment.
The pixel 210 includes a photoelectric conversion element 211, a reset transistor 212, an amplification transistor 213, a selection transistor 214, and a charge storage unit 215.
The photoelectric conversion element 211 is a photoelectric conversion unit that photoelectrically converts received light into signal charge (pixel charge). Specifically, the photoelectric conversion element 211 includes an upper electrode 211a, a lower electrode 211b, and a photoelectric conversion film 211c sandwiched between the two electrodes. The photoelectric conversion film 211c is a film made of a photoelectric conversion material that generates charge according to received light; in the present embodiment, it is an organic photoelectric conversion film containing organic molecules having a high light absorption function. In other words, the photoelectric conversion element 211 according to the present embodiment is an organic photoelectric conversion element having an organic photoelectric conversion film, and the solid-state imaging device 100 is an organic sensor using the organic photoelectric conversion element. Note that the organic photoelectric conversion film is formed across the plurality of pixels 210, and each of the plurality of pixels 210 has the organic photoelectric conversion film.
The thickness of the photoelectric conversion film 211c is, for example, about 500 nm, and the photoelectric conversion film 211c is formed using, for example, a vacuum deposition method. The organic molecules have a high light absorption function over the entire visible light range from a wavelength of about 400 nm to about 700 nm.
Note that the photoelectric conversion element 211 included in the pixel 210 according to the present embodiment is not limited to the organic photoelectric conversion film described above, and may be, for example, a photodiode formed of an inorganic material.
The upper electrode 211a is an electrode facing the lower electrode 211b, and is formed on the photoelectric conversion film 211c so as to cover it. That is, the upper electrode 211a is formed across the plurality of pixels 210. The upper electrode 211a is made of a transparent conductive material (for example, ITO) so that light can enter the photoelectric conversion film 211c.
 The lower electrode 211b is an electrode for extracting the electrons or holes generated in the photoelectric conversion film 211c located between it and the opposing upper electrode 211a. The lower electrode 211b is formed for each pixel 210 and is made of, for example, Ti, TiN, Ta, or Mo.
 The charge accumulation unit 215 is connected to the photoelectric conversion element 211 and accumulates the signal charge extracted via the lower electrode 211b.
 The reset transistor 212 has its drain supplied with a reset voltage VRST and its source connected to the charge accumulation unit 215, and resets (initializes) the potential of the charge accumulation unit 215. Specifically, when a predetermined voltage is supplied from the row scanning unit 130 to the gate of the reset transistor 212 via the reset scanning line 170A (turning the transistor on), the reset transistor 212 resets the potential of the charge accumulation unit 215. When the supply of the predetermined voltage is stopped, signal charge is accumulated in the charge accumulation unit 215 (exposure is started).
 The amplification transistor 213 has its gate connected to the charge accumulation unit 215 and its drain supplied with a power supply voltage VDD, and outputs a pixel signal corresponding to the amount of signal charge accumulated in the charge accumulation unit 215.
 The selection transistor 214 has its drain connected to the source of the amplification transistor 213 and its source connected to the column signal line 160, and determines the timing at which the pixel signal is output from the amplification transistor 213. Specifically, when a predetermined voltage is supplied from the row scanning unit 130 to the gate of the selection transistor 214 via the selection scanning line 170B, the pixel signal is output from the amplification transistor 213.
 The pixel 210 having the above configuration is capable of nondestructive readout. Here, nondestructive readout means reading out, during exposure, a pixel signal corresponding to the amount of charge (signal charge) accumulated in the charge accumulation unit 215 without destroying that charge. Note that "during exposure" is used here to mean any timing within the exposure time.
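The nondestructive readout described above can be illustrated with a minimal software sketch (the class and method names are illustrative, not part of the patent): a read that preserves the accumulated charge corresponds to nondestructive readout, while a read that resets the charge corresponds to normal (destructive) readout.

```python
class Pixel:
    """Toy model of the pixel 210: charge accumulates during exposure and
    can be read nondestructively (charge kept) or destructively
    (charge reset, as in normal readout)."""

    def __init__(self):
        self.charge = 0          # models the charge accumulation unit 215

    def expose(self, photons):
        self.charge += photons   # photoelectric conversion during exposure

    def read_nondestructive(self):
        return self.charge       # second pixel signal; charge is preserved

    def read_destructive(self):
        # Normal readout: the reset transistor is turned on afterwards.
        signal, self.charge = self.charge, 0
        return signal

p = Pixel()
p.expose(300)                     # partway through the exposure
mid = p.read_nondestructive()     # second pixel signal (S2)
p.expose(700)                     # exposure continues on the same charge
final = p.read_destructive()      # first pixel signal after exposure (S5)
print(mid, final)                 # 300 1000
```

Because the nondestructive read does not disturb the charge, the final (first) pixel signal reflects the full exposure, including the light accumulated before the mid-exposure read.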
[2-2. Other configurations]
 Referring again to FIG. 1, the column AD conversion unit 120 includes an AD converter 121 provided for each column signal line 160. The AD converter 121 is, for example, a 14-bit AD converter. The AD converter 121 digitally converts the analog pixel signal output from the pixel 210 by, for example, a ramp method, and outputs a digital value corresponding to the amount of light received by the pixel 210. The AD converter 121 includes a comparator and an up/down counter (not shown).
 Here, ramp-method AD conversion is AD conversion using a ramp wave: when an analog input signal is input, a ramp wave whose voltage rises at a constant slope is started, the time from that start until the voltages of the two signals (the input signal and the ramp wave) coincide is measured, and the measured time is output as a digital value. The comparator compares the voltage of the column signal with the voltage of a reference signal input as the ramp wave, and outputs a signal indicating the timing at which the voltage of the reference signal matches the voltage of the column signal.
 The up/down counter counts down (or up) during the period from when the reference signal is input to the comparator until the reference signal reaches the voltage of the column signal representing the reference component, and then counts up (or down) during the period from when the reference signal is input until it reaches the voltage of the column signal representing the signal component. In this way, the counter finally holds a digital value corresponding to the difference obtained by subtracting the reference component from the signal component of the column signal.
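The down-then-up counting can be sketched as a simplified software model of the ramp-method conversion (the function name, the discrete ramp, and the 14-bit full scale are illustrative assumptions): the down-count against the reference (reset) level cancels out of the final count, leaving only the signal-minus-reference difference.

```python
def ramp_adc_cds(reset_level, signal_level, full_scale=16383):
    """Digitize (signal - reference) by up/down counting against a ramp.

    The counter counts DOWN while the ramp climbs to the reference level,
    then counts UP while the ramp climbs to the signal level, so the
    final count equals the signal component minus the reference component.
    """
    count = 0
    # Phase 1: down-count until the ramp reaches the reference level.
    for ramp in range(full_scale + 1):
        if ramp >= reset_level:
            break
        count -= 1
    # Phase 2: up-count until the ramp reaches the signal level.
    for ramp in range(full_scale + 1):
        if ramp >= signal_level:
            break
        count += 1
    return count

# The reference offset (here 120) cancels out of the result.
print(ramp_adc_cds(120, 1144))  # 1024
```

This two-phase counting is why the converter can output the offset-corrected value directly, without a separate subtraction step.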
 The digital values held in the respective up/down counters are sequentially output to the row signal line 180 and are passed to the signal processing unit 300 via an output circuit (not shown; for example, an output buffer).
 The drive control unit 150 controls the row scanning unit 130 and the column scanning unit 140, thereby controlling the reset, charge accumulation, and readout operations in the pixels 210, as well as the operation of outputting the digital signals from the AD converters 121 to the signal processing unit 300.
 For example, upon receiving a readout instruction from the signal processing unit 300, the drive control unit 150 controls the row scanning unit 130 to sequentially apply a predetermined voltage to the selection scanning lines 170B, causing the (analog) pixel signals to be output. The drive control unit 150 also controls the column scanning unit 140 to sequentially output the pixel signals (digital values) held in the AD converters 121 to the signal processing unit 300.
[3. Configuration of the signal processing unit]
 Next, the signal processing unit 300 will be described with reference to FIG. 3. FIG. 3 is a functional block diagram of the camera 1 including the imaging device 10 and the lens 400 according to the present embodiment. The camera 1 shown in the figure includes the solid-state imaging element 100, the signal processing unit 300, the lens 400, a display unit 500, and an operation unit 600. The signal processing unit 300 includes a control unit 310, an image processing unit 320, and a memory 330.
 Light that has passed through the lens 400 enters the solid-state imaging element 100. The signal processing unit 300 drives the solid-state imaging element 100 and captures the pixel signals (digital values) from it. For example, the control unit 310 controls the drive control unit 150 so that the image processing unit 320 captures the pixel signals from the solid-state imaging element 100. The image processing unit 320 performs predetermined signal processing on the pixel signals acquired from the solid-state imaging element 100 to generate a captured image. The generated captured image is stored in the memory 330 and is also output to the display unit 500. The pixel signal (digital value) here is an example of a first pixel signal. In the case of a camera, for example, the captured image is the image that is displayed to the user or recorded.
 The control unit 310, for example, reads a program from the memory 330 and executes the read program. The control unit 310 is realized by a processor: when the software program stored in the memory 330 is executed by the processor, the processor functions as the control unit 310. For example, the control unit 310 is realized by reading the software program from a nonvolatile memory into a volatile memory and having the processor execute it. Although the above description covers control of the drive control unit 150, the control unit 310 may also control the other units. For example, when the operation unit 600 receives an input from the user, the control unit 310 may perform control according to that input; when the operation unit 600 receives an imaging instruction from the user, the control unit 310 may control the lens 400 (specifically, a motor that controls the position of the lens 400) to adjust the focus on the subject.
 The image processing unit 320 generates the captured image by applying a predetermined correction to the first pixel signal acquired from the solid-state imaging element 100 after the end of exposure. Specifically, the image processing unit 320 applies the predetermined correction to the first pixel signal based on a second pixel signal acquired from the solid-state imaging element 100 by nondestructive readout during exposure. The predetermined correction is a correction whose parameters (for example, the correction values applied to the first pixel signal) cannot be obtained without one frame of image data (one frame of the second pixel signal); examples are gradation correction, image blur correction, and white balance correction. The present embodiment is characterized in that this correction is applied to the first pixel signal based on the result of analyzing the second pixel signal.
 The image processing unit 320, for example, reads a program from the memory 330 and executes the read program. The image processing unit 320 is realized by a processor: when the software program stored in the memory 330 is executed by the processor, the processor functions as the image processing unit 320. For example, the image processing unit 320 is realized by reading the software program from a nonvolatile memory into a volatile memory and having the processor execute it. Note that the image processing unit 320 may perform the above correction under the control of the control unit 310.
 Note that the control unit 310 and the image processing unit 320 may be realized by a microcomputer. In that case, each of the control unit 310 and the image processing unit 320 includes a nonvolatile memory storing an operation program, a volatile memory serving as a temporary storage area for executing the program, input/output ports, a processor that executes the program, and the like.
 In gradation correction, the first pixel signal is corrected so that blown-out highlights and crushed shadows are less likely to occur when imaging a scene with a large difference between light and dark, such as a backlit scene. Specifically, blown-out pixels are corrected to be darker and crushed pixels are corrected to be brighter. In gradation correction, the other pixels are also corrected as appropriate, so an image is generated in which blown-out highlights and crushed shadows are reduced and the overall brightness balance is adjusted. To perform such correction, the image processing unit 320 uses the second pixel signal acquired by nondestructive readout (the image data generated from the second pixel signal). Specifically, a blurred image is generated from one frame of the second pixel signal, and the first pixel signal is corrected by a computation involving the generated blurred image and the first pixel signal. The correction of the first pixel signal is performed for each pixel 210: a correction value for the first pixel signal of a given pixel 210 is obtained by a computation involving the brightness of that first pixel signal and the brightness of the corresponding pixel of the blurred image, and the captured image is generated by correcting the first pixel signal of each pixel 210 with its correction value. The blurred image is an example of a correction image.
 Similarly, even in a scene with a small difference between light and dark, the blurred image and the first pixel signal are used in a computation, and the captured image is generated by correcting the first pixel signal based on the result.
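The patent does not specify the exact computation between the blurred image and the first pixel signal, but one common form of such local tone mapping derives a per-pixel gain from the local brightness in the blurred image. The sketch below is only an illustration of that idea; the function name, target level, and exponent are assumptions, not values from the patent.

```python
def tone_correct(first, blur, target=0.5, strength=0.7, eps=1e-6):
    """Correct each pixel of `first` using the local brightness in `blur`.

    Pixels whose neighborhood is dark in the blurred image get a gain > 1
    (lifting crushed shadows); bright neighborhoods get a gain < 1
    (taming blown-out highlights). Pixel values are normalized to [0, 1].
    """
    corrected = []
    for p, b in zip(first, blur):
        gain = (target / max(b, eps)) ** strength  # per-pixel correction value
        corrected.append(min(p * gain, 1.0))
    return corrected
```

For example, a pixel in a dark region (value 0.1, local brightness 0.1) is lifted, while a pixel in a bright region (value 0.9, local brightness 0.9) is pulled down, matching the behavior described above.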
 In image blur correction, a motion vector is obtained from an image generated from the second pixel signal acquired by nondestructive readout (an example of the correction image) and an image generated from the first or second pixel signal of, for example, the immediately preceding frame, and the first pixel signal is corrected according to the obtained motion vector. For example, the direction and amount of image blur are identified from the motion vector, and the correction is performed by translating the image generated from the first pixel signal according to that direction and amount; specifically, the image generated from the first pixel signal is translated so that the motion vector becomes small. The signal processing unit 300 calculates one motion vector per one-frame image. For example, the one-frame image is divided into a plurality of windows (small regions smaller than the one-frame image), a motion vector is obtained for each window, and the motion vector of the one-frame image is obtained from the motion vectors of the windows. Note that obtaining a motion vector using the correction image is an example of image analysis.
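A minimal sketch of the window-based motion vector estimation described above, assuming exhaustive block matching with a sum-of-absolute-differences criterion and a simple average over windows (the patent does not specify the matching criterion or the combining rule; these are illustrative choices, and all names are hypothetical):

```python
def window_motion_vector(prev, curr, search=2):
    """Estimate the (dy, dx) shift of one window by block matching.

    `prev` and `curr` are 2-D lists of pixel values for the same window in
    two frames; the shift minimizing the mean absolute difference wins.
    """
    h, w = len(curr), len(curr[0])
    best, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sad, n = 0, 0
            for y in range(h):
                for x in range(w):
                    py, px = y + dy, x + dx
                    if 0 <= py < h and 0 <= px < w:
                        sad += abs(curr[y][x] - prev[py][px])
                        n += 1
            score = sad / n
            if best is None or score < best:
                best, best_vec = score, (dy, dx)
    return best_vec

def frame_motion_vector(window_vectors):
    """Combine the per-window vectors into one frame vector (simple average)."""
    ys = [v[0] for v in window_vectors]
    xs = [v[1] for v in window_vectors]
    return (sum(ys) / len(ys), sum(xs) / len(xs))
```

The frame-level vector then gives the direction and amount by which the image generated from the first pixel signal is translated.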
 In white balance correction as well, the white balance of the first pixel signal is corrected based on the result of analyzing the second pixel signal, as described above. For example, when white balance correction is performed after offset correction (correction of the OB step arising when the analog signal is converted into a digital signal), the second pixel signal acquired by nondestructive readout may also be used for the offset correction. Note that image blur correction, white balance correction, and the like do not have to use a blurred image; they can be performed using one frame of the image acquired by nondestructive readout.
 Note that generating the correction image, and performing image analysis using the generated correction image, are examples of image processing.
 The image data (image) generated from the second pixel signal acquired by nondestructive readout during exposure is darker overall than the captured image generated from the first pixel signal acquired after the end of exposure. Therefore, before generating the correction image from the second pixel signal, the brightness of the image data may be adjusted by amplifying the second pixel signal (applying a gain correction to the brightness); specifically, an adjustment that increases the brightness may be performed.
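This gain adjustment can be sketched under the assumption that charge accumulates roughly linearly with time, so a readout taken partway through the exposure is amplified by the ratio of the full exposure time to the elapsed time. The linear model, the function names, and the clipping value are assumptions for illustration, not specified in the patent.

```python
def brightness_gain(exposure_time_ms, elapsed_time_ms):
    """Gain to apply to the second pixel signal before building the
    correction image, assuming charge accumulates linearly in time."""
    return exposure_time_ms / elapsed_time_ms

def amplify(second_signal, gain, full_scale=16383):
    """Apply the gain to each pixel value, clipping to the ADC full scale."""
    return [min(int(v * gain), full_scale) for v in second_signal]

# Nondestructive readout at 40 ms into a 100 ms exposure: gain of 2.5.
g = brightness_gain(100, 40)
print(amplify([100, 4000, 8000], g))  # [250, 10000, 16383]
```

Note the trade-off mentioned later in the text: the later the readout, the smaller the required gain and the less the amplified noise.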
 The memory 330 functions as a work memory for the image processing unit 320. The memory 330 temporarily stores, for example, the image data processed by the image processing unit 320 and the image data (the first pixel signal) input from the solid-state imaging element 100 before being processed by the image processing unit 320.
 The memory 330 includes a nonvolatile memory and a volatile memory. The nonvolatile memory stores programs corresponding to the processing executed by the imaging device 10, captured image data, and the like. The volatile memory is used, for example, as a work memory when the processor performs processing. The nonvolatile memory can be realized by a semiconductor memory such as a ferroelectric memory or a flash memory, and the volatile memory can be realized by a semiconductor memory such as a DRAM (Dynamic Random Access Memory). Note that the memory 330 only needs to be included in the imaging device 10 and does not have to be included in the signal processing unit 300. The nonvolatile memory may also be an external memory detachably connected to the imaging device 10; the external memory is realized by a memory card such as an SD card (registered trademark) and stores, for example, the image data captured by the imaging device 10.
 The display unit 500 is a display device, such as a liquid crystal monitor, that displays the captured image generated by the signal processing unit 300. The display unit 500 can also display various kinds of setting information; for example, it can display the shooting conditions at the time of shooting (aperture, ISO sensitivity, and so on).
 The operation unit 600 is an input unit, such as a release button or a touch panel, that receives input from the user. The touch panel is bonded to the liquid crystal monitor, for example. The operation unit 600 receives an imaging instruction, a change of shooting conditions, and the like from the user.
 Note that the imaging device 10 may include an interface (not shown) for communication between an external circuit and the solid-state imaging element 100 or the signal processing unit 300. The interface is, for example, a communication port formed of a semiconductor integrated circuit.
[4. Processing of the imaging device]
 Next, the processing of the imaging device 10 will be described.
[4-1. Flow of processing of the imaging device]
 The flow of processing of the imaging device 10 will be described with reference to FIGS. 4A and 4B, in comparison with a conventional example. FIG. 4A is a flowchart showing the operation of the imaging device 10 according to the present embodiment, and FIG. 4B is a flowchart showing the operation of an imaging device according to the conventional example. The conventional example performs correction using only the pixel signal (first pixel signal) acquired after the end of exposure; that is, it performs correction without performing nondestructive readout. FIGS. 4A and 4B are examples of flowcharts for the case where gradation correction is performed.
 First, the solid-state imaging element 100 starts exposure, for example, under the control of the control unit 310 (S1). As a result, charge corresponding to the amount of received light is accumulated in each of the plurality of pixels 210; specifically, charge corresponding to the amount of light received by the photoelectric conversion element 211 is accumulated in the charge accumulation unit 215.
 Then, the control unit 310 performs control to execute nondestructive readout, in which the charge accumulated in the charge accumulation unit 215 of each pixel 210 during exposure is read out without being destroyed. More specifically, the control unit 310 controls the drive control unit 150 so that the column AD conversion unit 120 performs AD conversion for each pixel row, converting the accumulated charge (pixel signal) into a digital value (second pixel signal) corresponding to that accumulated charge. The converted digital values are sequentially output to the signal processing unit 300 by the column scanning unit 140. That is, the signal processing unit 300 acquires the second pixel signal by nondestructive readout (S2). Note that after the nondestructive readout, the potential of the charge accumulation unit 215 is not reset by the reset transistor 212.
 The image processing unit 320 generates a blurred image (an example of the correction image) from the acquired second pixel signal (S3). For example, the image processing unit 320 may generate a blurred image whose resolution is lower than that of the image generated from the second pixel signal or of the captured image; when the resolution of the image generated from the second pixel signal or of the captured image is 5000 × 4000, the resolution of the blurred image is, for example, 50 × 40. The blurred image is generated by integrating one frame of the second pixel signal to reduce the image and then enlarging that image. Specifically, a reduced image is first generated by integrating the pixel values of a plurality of neighboring pixels 210 in one frame of the second pixel signal into a single pixel value. The resolution of the reduced image is the same as that of the blurred image, for example 50 × 40. The reduced image is then enlarged, for example to the image size before reduction. The resolution of the enlarged image (that is, of the blurred image) remains the same as that of the reduced image, for example 50 × 40. The blurred image is thus, for example, an image whose size is the same as before reduction but whose resolution is lower than that of the image before reduction.
 This makes it possible to reduce the capacity of the memory 330 required to temporarily hold the blurred image. For the purpose of gradation correction, correction is possible even if the resolution of the blurred image is only about 50 × 40. The generated blurred image may be held by the image processing unit 320 or stored in the memory 330. That is, the image processing unit 320 may store the generated blurred image in the memory 330, or, when the image processing unit 320 has a built-in memory (not shown), in that built-in memory. In the following description, an example in which the image processing unit 320 stores the blurred image in the memory 330 is described. Note that the blurred image may also be generated by applying low-pass filtering to the image generated from the acquired second pixel signal.
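The reduce-then-enlarge generation of the blurred image described above can be sketched as follows, assuming square averaging blocks and pixel-repetition enlargement (the block size, integer averaging, and function name are illustrative choices):

```python
def make_blur_image(frame, block):
    """Reduce `frame` by averaging block x block pixel groups, then
    enlarge back to the original size by pixel repetition. The result
    keeps the original image size but has the reduced resolution."""
    h, w = len(frame), len(frame[0])
    rh, rw = h // block, w // block
    # Reduction: integrate neighboring pixel values into one value.
    small = [[0] * rw for _ in range(rh)]
    for ry in range(rh):
        for rx in range(rw):
            total = sum(frame[ry * block + y][rx * block + x]
                        for y in range(block) for x in range(block))
            small[ry][rx] = total // (block * block)
    # Enlargement back to the pre-reduction image size.
    return [[small[y // block][x // block] for x in range(w)]
            for y in range(h)]
```

Only the small intermediate image (for example 50 × 40 values for a 5000 × 4000 frame with a block of 100) needs to be held, which is the memory saving mentioned above.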
 Here, the exposure in the solid-state imaging element 100 ends (S4). That is, in the imaging device 10 according to the present embodiment, the generation of the blurred image by the image processing unit 320 is completed before the exposure of the solid-state imaging element 100 ends. In other words, the control unit 310 may control the timing at which nondestructive readout is executed, based on the exposure time and on the time required from the acquisition of the second pixel signal until the generation of the blurred image is completed (S2 and S3) (a first time). For example, the control unit 310 may cause the solid-state imaging element 100 to execute nondestructive readout at a time that precedes the end of exposure by the first time. This allows the image processing unit 320 to make the second pixel signal acquired by nondestructive readout as bright as possible while still completing the blurred image by the end of exposure. That is, the image processing unit 320 can correct each first pixel signal, acquired sequentially for each pixel 210 after the end of exposure, at the moment that first pixel signal is acquired. In addition, because the second pixel signal is brighter (the brightness information indicated by the second pixel signal is larger), the influence of noise when a gain is applied to the second pixel signal can be reduced. Note that the first time can be determined before exposure starts, from the processing capability of the signal processing unit 300, the number of pixels of the pixel array unit 110, and so on.
 When the correction is image blur correction, in step S3 the correction image is generated and image analysis of the correction image is performed. In that case, the control unit 310 may control the timing at which nondestructive readout is executed, based on the exposure time and on the time required from the acquisition of the second pixel signal until the generation and analysis of the correction image are completed (S2 and S3) (a second time). For example, the control unit 310 may cause the solid-state imaging element 100 to execute nondestructive readout at a time that precedes the end of exposure by the second time.
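The timing rule above amounts to simple arithmetic: trigger the nondestructive readout at the exposure end time minus the first time (or the second time, for blur correction). A minimal sketch with hypothetical names, adding a clamp for the case where the preparation time would exceed the exposure time:

```python
def nondestructive_readout_time(exposure_start_ms, exposure_time_ms,
                                prep_time_ms):
    """Time at which to trigger nondestructive readout so that the
    correction-image work (S2 and S3) finishes when exposure ends.

    `prep_time_ms` is the first time (blurred-image generation) or the
    second time (generation plus analysis), determined in advance from
    the processing capability and the pixel count.
    """
    exposure_end = exposure_start_ms + exposure_time_ms
    t = exposure_end - prep_time_ms
    # Clamp: readout cannot be scheduled before the exposure starts.
    return max(t, exposure_start_ms)

print(nondestructive_readout_time(0, 100, 30))  # 70
```

Triggering at this latest feasible moment maximizes the accumulated charge (brightness) of the second pixel signal, as described above.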
 Next, the control unit 310 controls the drive control unit 150 after the end of exposure so that the column AD conversion unit 120 performs AD conversion for each pixel row, converting the accumulated charge into a digital value corresponding to that accumulated charge. The converted digital values are sequentially output to the signal processing unit 300 by the column scanning unit 140. That is, the signal processing unit 300 acquires the first pixel signal (S5); more specifically, it sequentially acquires the first pixel signal of each of the plurality of pixels 210. Note that the first pixel signal is read out by, for example, normal readout. Normal readout is destructive readout, in which the accumulated charge is destroyed after the pixel signal is read out (the potential of the charge accumulation unit 215 is reset by turning on the reset transistor 212).
 Using the blurred image stored in the memory 330, the image processing unit 320 generates the captured image by correcting the first pixel signal based on the blurred image and the first pixel signal (S6). When correcting the first pixel signal, correction for noise generated in the solid-state imaging element 100 (such as streaks appearing in the image) and light amount adjustment for the pixels 210 at the periphery of the lens 400 may be performed at the same time. This shortens the time required for correction compared with performing each correction separately.
 Then, the image processing unit 320, for example, displays the captured image on the display unit 500 of the imaging device 10, or stores the captured image in a storage unit of the camera 1 (not shown; for example, an external memory) or in an external storage medium via an interface (not shown).
 Next, the operation of the imaging device according to the conventional example will be described with reference to FIG. 4B. Operations identical to those of the imaging device 10 according to the present embodiment are given the same reference signs as in FIG. 4A, and their description may be omitted.
 First, the solid-state imaging device starts exposure (S1). The signal processing unit performs no image processing during exposure (no readout or the like), and the exposure ends (S4). At this point, no blurred image has been generated.
 The signal processing unit acquires the first pixel signal after the exposure ends (S5). The signal processing unit generates an intermediate image by applying, to the first pixel signal, correction for noise generated in the solid-state imaging device, light-amount adjustment for the pixels at the periphery of the lens, and so on (S11). These processes are performed before the data is stored in the memory. The intermediate image is then stored in the memory (S12), for example in a DRAM included in the memory. In other words, the conventional method requires time to store the intermediate image in the memory. Furthermore, a memory for storing the intermediate image is required, so the memory capacity must be increased; for example, an extra memory capable of holding one frame of image data is needed for the intermediate image.
 Next, a blurred image is generated using the intermediate image (S13). Conventionally, the intermediate image is used as the image from which the blurred image is generated. The method of generating the blurred image is the same as the blurred-image generation method of the present embodiment (see S3). Note that, for example, gain correction is not applied to the first pixel signal. Since step S13 is performed after the exposure ends, the conventional method requires time for the processing that follows the exposure.
 Further, the intermediate image is read from the memory (S14), and the captured image is generated by correcting the intermediate image using the blurred image generated in step S13 (S15). That is, the conventional method also requires time to read the intermediate image from the memory.
 As described above, with the conventional method, image processing starts only after the exposure ends, so the image processing takes a long time. Specifically, compared with the imaging device 10 according to the present embodiment, the post-exposure image processing takes longer by the time required for steps S11 to S14. In contrast, the imaging device 10 according to the present embodiment can perform image processing using an image acquired by nondestructive readout during exposure, and can therefore shorten the time required for image processing compared with the conventional imaging device.
 [4-2. Comparison of Processing by the Signal Processing Unit]
 Next, the image processing performed by the signal processing unit of the imaging device will be described in more detail with reference to FIGS. 5 and 6.
 First, the image processing performed by the signal processing unit 300 will be described with reference to FIG. 5. FIG. 5 is a diagram for comparing the flow of image processing. FIG. 5(a) illustrates the flow of image processing in the imaging device according to the conventional example, and FIG. 5(b) illustrates the flow of image processing in the imaging device 10 according to the present embodiment.
 First, the processing performed by the signal processing unit from the start of exposure (S1) to the end of exposure (S4) will be described.
 As shown in FIG. 5(a), in the conventional method the signal processing unit performs no image processing from the start of exposure to the end of exposure.
 As shown in FIG. 5(b), in the present embodiment the signal processing unit 300 acquires the second pixel signal by performing nondestructive readout during exposure (S2), and generates a blurred image using the second pixel signal (S3). FIG. 5(b) shows an example in which the signal processing unit 300 performs the nondestructive readout and image processing so that generation of the blurred image is completed by the end of exposure (S4). By performing nondestructive readout, the signal processing unit 300 can complete, during exposure, the image processing that corresponds to step S13 of the conventional method.
 Even when the exposure time is short and the processing of step S3 is not completed by the end of exposure, part of step S3 can still be performed during exposure, so the time from the end of exposure to the end of image processing can be shortened compared with the conventional method. In other words, the nondestructive readout need only be performed before the exposure ends.
 Next, the processing after the end of exposure will be described.
 In the conventional method, image processing starts after the exposure ends. First, an intermediate image is generated from the first pixel signal acquired after the exposure ends (S11). This is processing performed before the first pixel signal is stored in the memory, and includes, as described above, correction for noise and light-amount adjustment for the pixels at the periphery of the lens. The processing performed before storing in the memory may be other than the above.
 The generated intermediate image is stored in the memory (S12). The intermediate image corresponds to one frame. The image processing unit then generates a blurred image from the intermediate image (S13); that is, the image processing unit generates one frame's worth of blurred image.
 The image processing unit then generates a captured image from the intermediate image stored in the memory and the blurred image generated in step S13 (S15).
 In the conventional method, the signal processing unit starts image processing only after the exposure ends, so time is required from the end of exposure to the end of image processing: specifically, the time to perform steps S11 through S15. Consequently, the time from the start of exposure to the end of image processing (imaging time Ta) is also long. In addition, a memory for storing the intermediate image is required.
 Next, the image processing of the signal processing unit 300 according to the present embodiment will be described.
 As described above, in the present embodiment, generation of the blurred image is completed by the end of exposure. The image processing performed after the exposure ends is therefore only the generation of the captured image. Specifically, the image processing unit 320 generates the captured image from the first pixel signal acquired after the exposure ends and the blurred image generated in step S3 (S6).
 As a result, as shown in FIG. 5, the time from the end of exposure to the end of image processing can be shortened. That is, the imaging time Tb of the imaging device 10 according to the present embodiment is also shortened (Tb < Ta). Specifically, the imaging time is shortened by the time T (= Ta - Tb) shown in the figure.
 After the exposure ends, the image processing unit 320 sequentially acquires the first pixel signal for each pixel 210. For example, it performs an operation between each sequentially acquired first pixel signal and the pixel of the blurred image corresponding to that pixel 210, and corrects the first pixel signal according to the result. In other words, the first pixel signals can be corrected one after another as they are acquired. The sequentially corrected first pixel signals are, for example, stored in the memory 330 one after another. This eliminates, among other things, the time to store the uncorrected first pixel signal in the memory 330 or to read out a stored first pixel signal for correction, so the imaging time Tb can be shortened further. In addition, since no capacity is needed to store the uncorrected first pixel signal, the capacity of the memory 330 can be reduced.
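The streaming correction described above can be sketched as follows. This is a minimal illustration, not the patent's actual circuitry: the stream representation, the lookup function, and the fixed tone target of 128 are assumptions for the example.

```python
def stream_correct(pixel_stream, blur_lookup, store):
    """Correct each first-pixel-signal sample as it arrives from the column
    scanner and append the result directly to memory. No buffer for the
    uncorrected frame is ever allocated.

    pixel_stream : iterable of (pixel_index, raw_value) pairs
    blur_lookup  : function mapping a pixel index to the corresponding
                   blurred-image value (hypothetical helper)
    store        : memory sink receiving corrected values in arrival order
    """
    for idx, value in pixel_stream:
        gain = 128.0 / max(blur_lookup(idx), 1.0)  # per-pixel gain from blur image
        store.append(min(255, round(value * gain)))
```

Because the blurred image is already complete when readout begins, each pixel can be corrected the moment it arrives, which is what removes the store-then-reload round trip of the conventional flow.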
 For example, when the readout method is a rolling shutter, the first pixel signal acquired by the conventional method and the first pixel signal acquired by the method of the present embodiment are the same signal. The readout method is not limited to a rolling shutter; a global shutter may be used, for example. When the solid-state imaging device 100 has an organic photoelectric conversion film, the shutter function can be realized by adjusting the voltage applied to the organic photoelectric conversion film, so a global shutter can be realized without adding elements such as memories. Furthermore, by making all the organic photoelectric conversion films function as a shutter during the period of global-shutter readout, accumulation of charge in the charge accumulation unit 215 during that period can be suppressed. In other words, using an organic photoelectric conversion film makes it possible to realize a global shutter, and thus to reduce rolling distortion, without adding elements. Note that the nondestructive readout and the readout after the end of exposure are performed by the same method.
 Next, the generated images will be described with reference to FIG. 6. FIG. 6 is a diagram for comparing the images generated by the image processing. FIG. 6(a) illustrates the images generated by the image processing of the imaging device according to the conventional example, and FIG. 6(b) illustrates the images generated by the image processing of the imaging device 10 according to the present embodiment. FIG. 6(b) shows an example in which generation of the blurred image is completed during exposure.
 As shown in FIG. 6(a), the conventional method cannot generate an image for correcting the first pixel signal (for example, an intermediate image or a blurred image) during exposure, so no image processing can be performed during exposure. After the end of exposure (S4), the signal processing unit generates, from the first pixel signal, an intermediate image P1 to which the above corrections have been applied. The intermediate image P1 corresponds to one frame. Note that the intermediate image P1 represents a case where an overall dark image (darker than the actual subject) has been captured, which is indicated by dot hatching.
 Next, a blurred image P2 is generated from the generated intermediate image P1. The blurred image also corresponds to one frame.
 Then, a captured image P3 is generated from the intermediate image P1 and the blurred image P2. That is, the conventional method needs to generate three images. FIG. 6(a) shows an example in which the captured image P3 is obtained by correcting the intermediate image P1 to be slightly brighter; the correction is not limited to this.
 As shown in FIG. 6(b), in the present embodiment image processing is performed during exposure. Specifically, the signal processing unit 300 generates a blurred image P12 by nondestructive readout during exposure (S2). Because the nondestructive readout takes place during exposure, the blurred image P12 generated from the second pixel signal acquired at that time is dark overall: it is darker than the captured image P13 generated from the first pixel signal acquired after the end of exposure, and also darker than the blurred image P2 generated from the intermediate image P1 shown in FIG. 6(a). The image processing unit 320 may therefore adjust the gain of the second pixel signal and generate the blurred image P12 from the gain-adjusted second pixel signal. In this way, a blurred image P12 substantially identical to the blurred image P2 of the conventional example can be obtained.
 Next, the captured image P13 is generated from the blurred image P12 generated with the adjusted gain and the first pixel signal acquired after the end of exposure. The resulting captured image P13 is substantially identical to the captured image P3 generated by the conventional method. Adjusting the gain of the second pixel signal improves the accuracy of the captured image P13.
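One natural way to choose the gain, assuming charge accumulates roughly linearly during exposure, is the ratio of the full exposure time to the time at which the nondestructive readout occurred. The patent does not specify the gain formula; the linear-accumulation assumption and the function names below are introduced for illustration only.

```python
def exposure_gain(t_read_s, t_total_s):
    """Gain that scales a signal read nondestructively at time t_read up to
    the expected full-exposure brightness, assuming linear charge accumulation."""
    return t_total_s / t_read_s

def adjust_second_signal(samples, t_read_s, t_total_s, full_scale=255):
    """Apply the exposure-ratio gain to the second pixel signal before the
    blurred image is built from it, clipping to the sensor's full scale."""
    g = exposure_gain(t_read_s, t_total_s)
    return [min(full_scale, round(s * g)) for s in samples]
```

For instance, a readout halfway through the exposure would be scaled by 2, bringing the blurred image close to the brightness of the final first pixel signal.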
 [5. Camera]
 Examples of the camera 1 incorporating the above imaging device 10 include the digital still camera 1A shown in FIG. 7(a) and the digital video camera 1B shown in FIG. 7(b). By incorporating the imaging device 10 according to the present embodiment into a camera such as those of FIG. 7(a) or FIG. 7(b), the time from when the user instructs imaging via the operation unit 600 (releases the shutter) until the captured image is displayed on the display unit 500 can be shortened, as described above. In addition, when capturing images continuously, the time from one shutter release to the next can be shortened.
 [6. Effects, etc.]
 As described above, the imaging device 10 according to the present embodiment includes the solid-state imaging device 100, which has a plurality of pixels arranged in a matrix and capable of nondestructive readout, and the image processing unit 320, which generates the captured image P13 by correcting the first pixel signal acquired from the solid-state imaging device 100 after the end of exposure. The image processing unit 320 performs the correction based on the second pixel signal acquired from the solid-state imaging device 100 by nondestructive readout during exposure.
 In the conventional method, the correction is performed based on the first pixel signal acquired after the end of exposure; image processing therefore cannot start until the exposure has ended.
 The imaging device 10 according to the present embodiment acquires the second pixel signal during exposure by nondestructive readout. It can therefore use the second pixel signal during exposure to perform processing for correcting the first pixel signal (for example, generating a correction image for correcting the first pixel signal, or analyzing the correction image). Compared with the conventional method, this shortens the time from the end of exposure to the end of image processing. In other words, the time from the start of exposure to the end of image processing (imaging time Tb) is shortened, and the image processing is sped up.
 The imaging device 10 further includes the control unit 310. The image processing unit 320 performs image processing that generates, from the second pixel signal, a correction image used for the correction, or that generates the correction image and performs image analysis using it. The control unit 310 controls the timing at which the solid-state imaging device 100 performs the nondestructive readout so that the image processing in the image processing unit 320 finishes before the exposure ends.
 Thus, for example, when the correction is tone correction, the blurred image P12 (an example of a correction image) can be generated from the second pixel signal acquired by nondestructive readout before the exposure ends. In other words, for the first pixel signal acquired after the end of exposure, the signal of each pixel 210 can be corrected sequentially at the moment that pixel's signal is acquired. When the correction is image-blur correction or white-balance correction, the correction image can be generated from the second pixel signal acquired by nondestructive readout, and the image analysis of that correction image can be completed, before the exposure ends. Since the result of the image analysis is thus available before the exposure ends, the signal of each pixel 210 can again be corrected sequentially as it is acquired. The imaging time Tb can therefore be shortened further.
 The imaging device 10 further includes the memory 330. The image processing unit 320 applies, to the first pixel signal sequentially acquired for each pixel 210, the correction corresponding to that pixel 210, and stores the corrected first pixel signal in the memory 330.
 This allows the correction to be performed without first storing the first pixel signal in the memory 330, saving the time to store and read out the uncorrected first pixel signal. Moreover, since no memory is needed for the uncorrected first pixel signal, the capacity of the memory 330 can be reduced.
 The correction image is the blurred image P12, whose resolution is lower than that of the captured image P13. The image processing unit 320 generates the blurred image P12 from the second pixel signal to which a gain correction for brightness has been applied.
 Using the blurred image P12 reduces the amount of processing in the signal processing unit 300, shortening its processing time. Furthermore, the blurred image P12 generated by applying the gain correction to the second pixel signal has substantially the same brightness as the blurred image P2 generated in the conventional example, so the first pixel signal can be corrected more accurately.
 The correction is tone correction, image-blur correction, or white-balance correction.
 This shortens the imaging time Tb when tone correction, image-blur correction, or white-balance correction is performed.
 Each of the plurality of pixels 210 has an organic photoelectric conversion film.
 Since the shutter function can be realized by adjusting the voltage applied to the organic photoelectric conversion film, a global shutter can be realized without adding elements such as memories. Even when the subject is moving, image data with less distortion can therefore be acquired.
 The camera 1 according to the present embodiment includes the above imaging device 10.
 This shortens the time from when the user instructs imaging via the operation unit 600 (releases the shutter) until the captured image is displayed on the display unit 500. It also shortens, during continuous shooting, the time from one shutter release to the next.
 The imaging method according to the present embodiment acquires, after the end of exposure, the first pixel signal from the solid-state imaging device 100 having the plurality of pixels 210 capable of nondestructive readout, and generates the captured image P13 by correcting the first pixel signal based on the second pixel signal acquired from the solid-state imaging device 100 by nondestructive readout during exposure.
 Since the second pixel signal can thus be acquired during exposure by nondestructive readout, the blurred image can be generated during exposure using the second pixel signal. Compared with the conventional method, this shortens the time from the end of exposure to the end of image processing; in other words, the time from the start of exposure to the end of image processing (the imaging time) is shortened.
 (Other Embodiments)
 As described above, the embodiment has been described as an illustration of the technology of the present disclosure, and the accompanying drawings and detailed description have been provided for that purpose.
 Accordingly, the components described in the accompanying drawings and the detailed description may include not only components essential for solving the problem but also components that are not essential for solving the problem and are included merely to illustrate the above technology. The mere fact that such non-essential components appear in the accompanying drawings or the detailed description should therefore not lead to the immediate conclusion that they are essential.
 Since the above embodiment is intended to illustrate the technology of the present disclosure, various modifications, substitutions, additions, omissions, and the like can be made within the scope of the claims or their equivalents.
 For example, in the above embodiment, the image processing unit 320 generates the blurred image P12, but the present disclosure is not limited to this. For example, when a solid-state imaging device 100 capable of pixel mixing (binning) is used, the image processing unit 320 may use the pixel-mixed signal acquired from the solid-state imaging device 100 as the blurred image.
 This provides the same effects as the embodiment.
 In the above description, the image processing unit 320 performs tone correction using the blurred image P12, but the present disclosure is not limited to this. For example, the image processing unit 320 may obtain a histogram of image brightness from the second pixel signal and use the result as a correction signal to perform tone correction on the first pixel signal. Correcting the tone of the first pixel signal according to a brightness histogram generated from the second pixel signal acquired from the solid-state imaging device 100 is one example of the image processing performed by the image processing unit 320.
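The histogram-based variant above can be sketched as histogram equalization: build a lookup table from the brightness histogram of the second pixel signal during exposure, then apply it to the first pixel signal as it is read out. The patent does not specify the tone-mapping formula; equalization and the function names below are illustrative assumptions.

```python
def build_lut_from_histogram(samples, levels=256):
    """Build a tone-correction lookup table (LUT) by histogram equalization
    of the second pixel signal, read nondestructively during exposure."""
    hist = [0] * levels
    for s in samples:
        hist[s] += 1
    total = len(samples)
    lut, cum = [], 0
    for h in hist:
        cum += h  # cumulative distribution drives the output level
        lut.append(round((levels - 1) * cum / total))
    return lut

def tone_correct(first, lut):
    """Apply the precomputed LUT to each first-pixel-signal sample."""
    return [lut[p] for p in first]
```

Because the LUT is finished before the exposure ends, `tone_correct` can run per pixel during readout, just as with the blurred-image approach.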
 This provides the same effects as performing tone correction using the blurred image P12.
 In the above description, the first pixel signal is not stored in the memory 330 but is corrected by the image processing unit 320 to generate the captured image; however, the present disclosure is not limited to this. For example, one frame of the first pixel signal may first be stored in the memory 330, and the image processing unit 320 may read the first pixel signal from the memory 330 for each pixel 210 and generate the captured image by correcting the read first pixel signal using the blurred image.
 Even in this case, since the blurred image has already been generated by nondestructive readout, image processing is faster than with the conventional method.
 In the above description, nondestructive readout is performed once during exposure, but the number of nondestructive readouts is not limited to this; nondestructive readout may be performed a plurality of times during exposure. For example, when performing image-blur correction, nondestructive readout may be performed twice during exposure, a motion vector may be obtained from the image data based on the respective pixel signals, and the first pixel signal may be corrected accordingly. Furthermore, in order to reduce the influence of subject blur, flash light, and the like during the exposure period, the image processing unit 320 may, for example, select, from among the plurality of images acquired by nondestructive readout, an image less affected by subject blur or flash light as the correction image.
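One simple way to pick the least-affected readout is a contrast score: subject blur and flash glare both tend to flatten local contrast relative to overall brightness. The patent leaves the selection criterion open; the mean-normalized contrast metric and the function names below are assumptions made for this sketch.

```python
def sharpness_score(row):
    """Mean-normalized sum of adjacent-pixel differences for one readout.
    Blurred or flash-washed readouts score lower than crisp ones."""
    mean = sum(row) / len(row)
    if mean == 0:
        return 0.0
    diffs = sum(abs(a - b) for a, b in zip(row, row[1:]))
    return diffs / (mean * (len(row) - 1))

def pick_correction_image(readouts):
    """Among several nondestructive readouts taken during one exposure,
    choose the one with the highest normalized contrast as the correction image."""
    return max(readouts, key=sharpness_score)
```

Normalizing by the mean matters here because later readouts are inherently brighter; without it, the latest readout would almost always win regardless of blur.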
 In the above description, the camera 1 incorporating the imaging device 10 according to the present disclosure has been described, but the application is not limited to this. Various electronic devices incorporating the imaging device 10 according to the present disclosure are also included in the present disclosure.
Each component (functional block) of the imaging device 10 may be individually integrated into one chip by a semiconductor device such as an IC (Integrated Circuit) or LSI (Large Scale Integration), or some or all of the components may be integrated into one chip. The method of circuit integration is not limited to LSI; it may also be implemented with a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array) that can be programmed after the LSI is manufactured, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used. Furthermore, if circuit-integration technology that replaces LSI emerges through progress in semiconductor technology or another derivative technology, the functional blocks may be integrated using that technology. Application of biotechnology is one such possibility.
All or part of the various processes described above may be implemented by hardware such as electronic circuits, or by software. Processing by software is realized by a processor included in the imaging device 10 executing a program stored in a memory. The program may also be recorded on a recording medium and distributed. For example, by installing the distributed program in a device having another processor and causing that processor to execute the program, the device can be made to perform each of the above processes.
In the above embodiment, an example was described in which the camera 1 includes the lens 400 that causes light from the outside to enter the solid-state imaging element 100; however, the present disclosure is not limited to this. The lens 400 may be, for example, a lens that is attachable to and detachable from the camera 1, in which case the camera 1 need not include the lens 400. The lens 400 condenses light from the outside and causes it to enter the solid-state imaging element 100.
Forms realized by arbitrarily combining the components and functions described in the above embodiments are also included in the scope of the present disclosure.
The present disclosure is widely applicable to imaging devices that capture images.
DESCRIPTION OF REFERENCE SIGNS

1  Camera
1A  Digital still camera
1B  Digital video camera
10  Imaging device
100  Solid-state imaging element
110  Pixel array unit
120  Column AD conversion unit
121  AD converter
130  Row scanning unit
140  Column scanning unit
150  Drive control unit
160  Column signal line
170  Scanning line
170A  Reset scanning line
170B  Selection scanning line
180  Row signal line
210  Pixel
211  Photoelectric conversion element
211a  Upper electrode
211b  Lower electrode
211c  Photoelectric conversion film
212  Reset transistor
213  Amplification transistor
214  Selection transistor
215  Charge accumulation unit
300  Signal processing unit
310  Control unit
320  Image processing unit
330  Memory
400  Lens
500  Display unit
600  Operation unit
P1  Intermediate image
P2, P12  Blurred image
P3, P13  Captured image
Ta, Tb  Imaging time

Claims (8)

  1.  An imaging device comprising:
     a solid-state imaging element having a plurality of pixels arranged in a matrix and capable of non-destructive readout;
     an image processing unit that generates a captured image by performing correction on a first pixel signal acquired from the solid-state imaging element after the end of exposure; and
     a memory that stores the first pixel signal corrected by the image processing unit,
     wherein the image processing unit performs the correction based on a second pixel signal acquired from the solid-state imaging element by the non-destructive readout during the exposure.
  2.  The imaging device according to claim 1, further comprising a control unit,
     wherein the image processing unit performs image processing that generates, from the second pixel signal, a correction image used for the correction, or that generates the correction image and performs image analysis using the generated correction image, and
     the control unit controls a timing at which the solid-state imaging element performs the non-destructive readout so that the image processing in the image processing unit is completed before the exposure ends.
  3.  The imaging device according to claim 2, wherein the image processing unit performs, on the first pixel signal sequentially acquired for each pixel, the correction corresponding to that pixel, and stores the corrected first pixel signal in the memory.
  4.  The imaging device according to claim 2 or 3, wherein the correction image is a blurred image having a lower resolution than the captured image, and
     the image processing unit generates the blurred image from the second pixel signal to which a gain correction for brightness has been applied.
  5.  The imaging device according to any one of claims 1 to 4, wherein the correction is gradation correction, image blur correction, or white balance correction.
  6.  The imaging device according to any one of claims 1 to 5, wherein each of the plurality of pixels includes an organic photoelectric conversion film.
  7.  A camera comprising:
     the imaging device according to any one of claims 1 to 6; and
     a lens that condenses external light onto the imaging device.
  8.  An imaging method comprising:
     acquiring a first pixel signal, after the end of exposure, from a solid-state imaging element having a plurality of pixels arranged in a matrix and capable of non-destructive readout; and
     generating a captured image by correcting the first pixel signal based on a second pixel signal acquired from the solid-state imaging element by non-destructive readout during the exposure.
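The imaging method of claim 8 can be illustrated end to end as a toy simulation. The patent does not prescribe a particular correction; here the second pixel signal is scaled up to full-exposure brightness and used to derive a single global tone gain. All numbers, names, and the gain formula are assumptions introduced for this sketch.

```python
# Minimal end-to-end sketch of the claimed imaging method:
# (1) charge accumulates over the exposure, (2) a non-destructive readout
# mid-exposure yields the second pixel signal, (3) after exposure the
# first pixel signal is read and corrected using the second.

def nondestructive_readout(accumulated):
    """Read the partially accumulated charge without resetting it."""
    return [row[:] for row in accumulated]  # copy; accumulation continues

def capture(scene, exposure_steps=4, readout_step=2, target=100.0):
    h, w = len(scene), len(scene[0])
    acc = [[0.0] * w for _ in range(h)]
    second = None
    for step in range(1, exposure_steps + 1):
        for y in range(h):               # charge accumulates each step
            for x in range(w):
                acc[y][x] += scene[y][x]
        if step == readout_step:         # mid-exposure, non-destructive
            second = nondestructive_readout(acc)
    # Scale the partial readout to full-exposure brightness, then derive
    # one global gain from its mean (a stand-in tone correction).
    gain_scale = exposure_steps / readout_step
    mean = sum(v * gain_scale for row in second for v in row) / (h * w)
    gain = target / mean if mean else 1.0
    first = acc                          # destructive readout after exposure
    return [[v * gain for v in row] for row in first]
```

Because `second` is available before the exposure ends, the gain can be computed in parallel with the remaining exposure, which is the timing advantage the description attributes to non-destructive readout.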
PCT/JP2017/046594 2016-12-27 2017-12-26 Imaging device, camera, and imaging method WO2018124050A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-254492 2016-12-27
JP2016254492A JP2020031257A (en) 2016-12-27 2016-12-27 Imaging apparatus, camera, and imaging method

Publications (1)

Publication Number Publication Date
WO2018124050A1 true WO2018124050A1 (en) 2018-07-05

Family

ID=62709362

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/046594 WO2018124050A1 (en) 2016-12-27 2017-12-26 Imaging device, camera, and imaging method

Country Status (2)

Country Link
JP (1) JP2020031257A (en)
WO (1) WO2018124050A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004344249A (en) * 2003-05-20 2004-12-09 Canon Inc Radiographic apparatus, radiographic method, and program and recording medium for radiography
JP2007281555A (en) * 2006-04-03 2007-10-25 Seiko Epson Corp Imaging apparatus
JP2012049600A (en) * 2010-08-24 2012-03-08 Seiko Epson Corp Image processing apparatus, image processing method and imaging apparatus
JP2016019090A (en) * 2014-07-07 2016-02-01 パナソニックIpマネジメント株式会社 Solid-state image pickup device


Also Published As

Publication number Publication date
JP2020031257A (en) 2020-02-27


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17889145

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17889145

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP