WO2018124050A1 - Imaging device, camera, and imaging method - Google Patents

Imaging device, camera, and imaging method

Info

Publication number
WO2018124050A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel signal
correction
processing unit
image processing
Prior art date
Application number
PCT/JP2017/046594
Other languages
English (en)
Japanese (ja)
Inventor
寿人 吉松
Original Assignee
Panasonic IP Management Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic IP Management Co., Ltd.
Publication of WO2018124050A1

Classifications

    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L27/00Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L27/14Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L27/144Devices controlled by radiation
    • H01L27/146Imager structures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith
    • H04N25/76Addressed sensors, e.g. MOS or CMOS sensors

Definitions

  • the present disclosure relates to an imaging apparatus, a camera including the imaging apparatus, and an imaging method thereof.
  • An imaging apparatus that captures an image using an image sensor is known (see, for example, Patent Document 1).
  • the present disclosure provides an imaging apparatus, a camera, and an imaging method in which the image processing speed is increased as compared with the conventional art.
  • an imaging device according to one aspect of the present disclosure includes: a solid-state imaging device having a plurality of pixels arranged in a matrix and capable of nondestructive readout; an image processing unit that generates a captured image by correcting a first pixel signal acquired from the solid-state imaging device after the exposure is completed; and a memory that stores the first pixel signal corrected by the image processing unit. The correction is performed based on a second pixel signal acquired from the solid-state imaging device by nondestructive readout during exposure.
  • a camera includes the above-described imaging device and a lens that collects external light on the imaging device.
  • the imaging method according to one aspect acquires a first pixel signal, after completion of exposure, from a solid-state imaging device that includes a plurality of pixels arranged in a matrix and capable of nondestructive readout, and generates the captured image by correcting the first pixel signal based on a second pixel signal acquired from the solid-state imaging device by nondestructive readout during exposure.
  • with these configurations, the speed of image processing is increased compared with the conventional art.
  • FIG. 1 is a block diagram illustrating an overall configuration of an imaging apparatus according to an embodiment.
  • FIG. 2 is a diagram illustrating an example of a circuit configuration of a pixel according to the embodiment.
  • FIG. 3 is a functional block diagram of a camera in which the imaging device according to the embodiment is built.
  • FIG. 4A is a flowchart illustrating the operation of the imaging apparatus according to the embodiment.
  • FIG. 4B is a flowchart illustrating the operation of the imaging apparatus according to the conventional example.
  • FIG. 5 is a diagram for explaining the flow of image processing according to the embodiment.
  • FIG. 6 is a diagram illustrating an image generated by the image processing according to the embodiment.
  • FIG. 7 is an external view of a camera in which the imaging device according to the embodiment is built.
  • FIG. 1 is a block diagram showing an overall configuration of an imaging apparatus 10 according to the present embodiment.
  • the imaging apparatus 10 shown in the figure includes a solid-state imaging device 100, a signal processing unit 300, a display unit 500 (see FIG. 3), and an operation unit 600 (see FIG. 3).
  • the solid-state imaging device 100 includes a pixel array unit 110, a column AD conversion unit 120, a row scanning unit 130, a column scanning unit 140, and a drive control unit 150.
  • a column signal line 160 is arranged for each pixel column
  • a scanning line 170 is arranged for each pixel row.
  • the display unit 500 and the operation unit 600 included in the imaging device 10 are not illustrated.
  • the pixel array unit 110 is an imaging unit in which a plurality of pixels 210 are arranged in a matrix.
  • the column AD conversion (analog-to-digital converter) unit 120 is a conversion unit that converts the analog signal input from each column signal line 160 into a digital value corresponding to the amount of light received by the pixel 210, and holds and outputs that value.
  • the row scanning unit 130 has a function of controlling the reset operation, charge accumulation operation, and readout operation of the pixels 210 in units of rows.
  • the column scanning unit 140 sequentially outputs the digital values for one row held in the column AD conversion unit 120 to the row signal line 180, so that they reach the signal processing unit 300.
  • the drive control unit 150 controls each unit by supplying various control signals to the row scanning unit 130 and the column scanning unit 140.
  • the drive control unit 150 supplies various control signals to the row scanning unit 130 and the column scanning unit 140 based on a control signal from the signal processing unit 300.
  • FIG. 2 is a diagram illustrating an example of a circuit configuration of the pixel 210 according to this embodiment.
  • the pixel 210 includes a photoelectric conversion element 211, a reset transistor 212, an amplification transistor 213, a selection transistor 214, and a charge storage unit 215.
  • the photoelectric conversion element 211 is a photoelectric conversion unit that photoelectrically converts received light into signal charges (pixel charges).
  • the photoelectric conversion element 211 includes an upper electrode 211a, a lower electrode 211b, and a photoelectric conversion film 211c sandwiched between both electrodes.
  • the photoelectric conversion film 211c is a film made of a photoelectric conversion material that generates an electric charge according to received light.
  • in the present embodiment, the photoelectric conversion film 211c is made of an organic photoelectric conversion film containing organic molecules having a high light absorption function.
  • the photoelectric conversion element 211 according to the present embodiment is an organic photoelectric conversion element having an organic photoelectric conversion film
  • the solid-state imaging device 100 is an organic sensor using this organic photoelectric conversion element. Note that the organic photoelectric conversion film is formed across the plurality of pixels 210, so each of the plurality of pixels 210 has a portion of the organic photoelectric conversion film.
  • the thickness of the photoelectric conversion film 211c is, for example, about 500 nm. Moreover, the photoelectric conversion film 211c is formed using, for example, a vacuum deposition method.
  • the organic molecule has a high light absorption function over the entire visible light wavelength range from about 400 nm to about 700 nm.
  • the photoelectric conversion element 211 included in the pixel 210 according to the present embodiment is not limited to the organic photoelectric conversion film described above, and may be, for example, a photodiode formed of an inorganic material.
  • the upper electrode 211a is an electrode facing the lower electrode 211b, and is formed on the photoelectric conversion film 211c so as to cover the photoelectric conversion film 211c. That is, the upper electrode 211a is formed across a plurality of pixels 210.
  • the upper electrode 211a is made of a transparent conductive material (for example, ITO: indium tin oxide) so that light can enter the photoelectric conversion film 211c.
  • the lower electrode 211b is an electrode for extracting the electrons or holes generated in the photoelectric conversion film 211c between it and the opposing upper electrode 211a.
  • the lower electrode 211b is formed for each pixel 210.
  • the lower electrode 211b is made of, for example, Ti, TiN, Ta, Mo, or the like.
  • the charge accumulating unit 215 is connected to the photoelectric conversion element 211 and accumulates signal charges taken out via the lower electrode 211b.
  • the reset transistor 212 has its drain supplied with a reset voltage VRST and its source connected to the charge storage unit 215, and resets (initializes) the potential of the charge storage unit 215. Specifically, when the row scanning unit 130 supplies a predetermined voltage to the gate of the reset transistor 212 via the reset scanning line 170A (turning it on), the reset transistor 212 resets the potential of the charge storage unit 215. When the supply of the predetermined voltage is stopped, signal charges accumulate in the charge accumulation unit 215 (exposure starts).
  • the amplification transistor 213 has its gate connected to the charge storage unit 215 and its drain supplied with the power supply voltage VDD, and outputs a pixel signal corresponding to the amount of signal charge stored in the charge storage unit 215.
  • the selection transistor 214 has its drain connected to the source of the amplification transistor 213 and its source connected to the column signal line 160, and determines the timing at which the pixel signal is output from the amplification transistor 213. Specifically, the pixel signal is output from the amplification transistor 213 when the row scanning unit 130 supplies a predetermined voltage to the gate of the selection transistor 214 via the selection scanning line 170B.
  • the pixel 210 having the above configuration can perform nondestructive readout.
  • nondestructive readout means reading out a pixel signal corresponding to the amount of charge without destroying the charge (signal charge) accumulated in the charge accumulation unit 215 during exposure. Note that "during exposure" means any timing within the exposure time.
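As an illustration only (the patent specifies the transistor-level circuit, not code), the difference between nondestructive readout and normal destructive readout of a pixel like the one above can be sketched in Python; the class and its charge model are hypothetical simplifications:

```python
class Pixel:
    """Toy model of the pixel 210: charge accumulates in the charge
    storage node during exposure; a nondestructive read reports the
    charge without clearing it, while a normal (destructive) read
    resets the node afterwards, as the reset transistor would."""

    def __init__(self):
        self.charge = 0.0  # charge storage unit 215 (arbitrary units)

    def expose(self, light_intensity, duration):
        # Photoelectric conversion: charge grows with received light.
        self.charge += light_intensity * duration

    def read_nondestructive(self):
        # Pixel signal proportional to accumulated charge; node untouched.
        return self.charge

    def read_destructive(self):
        # Normal readout: read, then reset the charge storage node.
        value, self.charge = self.charge, 0.0
        return value
```

A nondestructive read taken mid-exposure leaves the stored charge intact for the later full readout, which is the property the imaging apparatus exploits.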
  • the column AD conversion unit 120 includes an AD converter 121 provided for each column signal line 160.
  • the AD converter 121 is, for example, a 14-bit AD converter.
  • the AD converter 121 digitally converts an analog pixel signal output from the pixel 210 using a ramp method, and outputs a digital value corresponding to the amount of light received by the pixel 210.
  • the AD converter 121 includes a comparator and an up / down counter (not shown).
  • the ramp-type AD conversion is AD conversion using a ramp wave: a ramp wave whose voltage rises at a constant slope is started, the time from the start point until the voltages of the two signals (the input signal and the ramp wave) coincide is measured, and the measured time is output as a digital value.
  • the comparator compares the voltage of the column signal with the voltage of the reference signal input as a ramp wave, and outputs a signal indicating the timing at which the voltage of the reference signal matches the voltage of the column signal.
  • the up / down counter counts down (or up) during the period from when the reference signal is input to the comparator until the reference signal reaches the voltage of the column signal indicating the reference component, and then counts up (or down) during the period until the reference signal reaches the voltage of the column signal indicating the signal component. The counter thus finally holds a digital value corresponding to the difference obtained by subtracting the reference component from the signal component of the column signal.
  • the digital values held in the up / down counters are sequentially output to the row signal line 180 and then to the signal processing unit 300 via an output circuit (not shown; for example, an output buffer).
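The comparator-and-counter scheme above can be illustrated with a small simulation; the ramp step size, full-scale count, and voltage values are arbitrary assumptions, not values from the patent:

```python
def ramp_counts(v_in, step=0.001, max_counts=2 ** 14):
    """Count clock cycles until a ramp rising by `step` volts per
    cycle reaches the input voltage (single-slope AD conversion)."""
    n = 0
    while n * step < v_in and n < max_counts - 1:
        n += 1
    return n

def convert_with_cds(v_reset, v_signal):
    """Down-count while converting the reference (reset) level, then
    up-count while converting the signal level: the counter finally
    holds signal minus reference, i.e. the net pixel value."""
    counter = 0
    counter -= ramp_counts(v_reset)   # down-count phase
    counter += ramp_counts(v_signal)  # up-count phase
    return counter
```

Performing the subtraction inside the counter is what lets the circuit output the difference directly, without a separate subtraction stage.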
  • the drive control unit 150 controls the row scanning unit 130 and the column scanning unit 140 so as to control the reset, charge accumulation, and readout operations in the pixels 210, and the output of digital values from the AD converters 121 to the signal processing unit 300.
  • for readout, the drive control unit 150 controls the row scanning unit 130 to sequentially apply a predetermined voltage to the selection scanning lines 170B, so that analog pixel signals are output.
  • the drive control unit 150 also controls the column scanning unit 140 to sequentially output pixel signals (digital values) held in the AD converter 121 to the signal processing unit 300.
  • FIG. 3 is a functional block diagram of the camera 1 including the imaging device 10 and the lens 400 according to the present embodiment.
  • the camera 1 shown in the figure includes a solid-state imaging device 100, a signal processing unit 300, a lens 400, a display unit 500, and an operation unit 600.
  • the signal processing unit 300 includes a control unit 310, an image processing unit 320, and a memory 330.
  • the light that has passed through the lens 400 enters the solid-state imaging device 100.
  • the signal processing unit 300 drives the solid-state imaging device 100 and acquires pixel signals (digital values) from the solid-state imaging device 100.
  • within the signal processing unit 300, the image processing unit 320 acquires the pixel signals from the solid-state imaging device 100.
  • the image processing unit 320 performs predetermined signal processing on the pixel signal acquired from the solid-state imaging device 100 to generate a captured image.
  • the generated captured image is stored in the memory 330.
  • the generated captured image is output to the display unit 500.
  • the pixel signal (digital value) is an example of the first pixel signal.
  • the captured image is, for example in the case of a camera, the image that is displayed to or recorded for the user.
  • the control unit 310 reads a program from the memory 330 and executes the read program.
  • the control unit 310 is realized by a processor.
  • the processor functions as the control unit 310 when the software program stored in the memory 330 is executed by the processor.
  • the control unit 310 is realized by reading the software program from the nonvolatile memory to the volatile memory and executing the read software program by the processor.
  • the control unit 310 controls the drive control unit 150; other units may be controlled as well.
  • for example, when an input is received from the user, the control unit 310 may perform control according to that input.
  • the control unit 310 may control the lens 400 (specifically, a motor that controls the position of the lens 400) to adjust the focus of the subject.
  • the image processing unit 320 generates a captured image by performing predetermined correction on the first pixel signal acquired from the solid-state imaging device 100 after the exposure is completed. Specifically, the image processing unit 320 performs a predetermined correction on the first pixel signal based on the second pixel signal acquired from the solid-state imaging device 100 by nondestructive readout during exposure, thereby obtaining a captured image. Generate.
  • the predetermined correction is a correction for which the information needed (for example, a correction value to be applied to the first pixel signal) cannot be obtained unless image data for one frame (the second pixel signal for one frame) is used, for example, gradation correction, image blur correction, or white balance correction.
  • the present embodiment is characterized in that the above-described correction is performed on the first pixel signal based on the analysis result of the second pixel signal.
  • the image processing unit 320 reads a program from the memory 330 and executes the read program.
  • the image processing unit 320 is realized by a processor.
  • the processor functions as the image processing unit 320 when the software program stored in the memory 330 is executed by the processor.
  • the image processing unit 320 is realized by reading a software program from a nonvolatile memory into a volatile memory and executing the read software program by a processor. Note that the image processing unit 320 may perform the above correction under the control of the control unit 310.
  • the control unit 310 and the image processing unit 320 may be realized by a microcomputer.
  • each of the control unit 310 and the image processing unit 320 includes a nonvolatile memory in which an operation program is stored, a volatile memory serving as a temporary storage area for executing the program, an input / output port, a processor that executes the program, and the like.
  • in gradation correction, the first pixel signal is corrected so that blown-out highlights and crushed shadows are less likely to occur. Specifically, overexposed pixels are corrected to be darker and underexposed pixels to be brighter. Other pixels are also corrected as necessary, so that an image is generated in which blown-out highlights and crushed shadows are reduced and the brightness balance is adjusted as a whole.
  • for this correction, the image processing unit 320 uses the second pixel signal (image data generated from the second pixel signal) acquired by nondestructive readout.
  • specifically, a blurred image is generated from the second pixel signal for one frame, and the first pixel signal is corrected by an arithmetic operation between the generated blurred image and the first pixel signal.
  • the first pixel signal is corrected for each pixel 210: the correction value for the first pixel signal of one pixel 210 is obtained from the brightness of that first pixel signal and the brightness of the corresponding pixel of the blurred image, and the captured image is generated by correcting the first pixel signal with the correction value.
  • the blurred image is an example of a correction image.
  • in other words, an arithmetic operation is performed between the blurred image and the first pixel signal, and the captured image is generated by correcting the first pixel signal from the result of the operation.
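A minimal numpy sketch of an operation of this kind, assuming signals normalised to [0, 1] and an illustrative per-pixel gain rule (the target level and exponent are assumptions; the patent does not specify the formula):

```python
import numpy as np

def gradation_correct(first, blurred, target=0.5, strength=0.7):
    """Derive a per-pixel correction value from the blurred image:
    where the local (blurred) brightness is low the gain exceeds 1
    (shadows lifted); where it is high the gain drops below 1
    (highlights pulled down). Signals are normalised to [0, 1]."""
    local = np.clip(blurred, 1e-3, 1.0)
    gain = (target / local) ** strength  # correction value per pixel
    return np.clip(first * gain, 0.0, 1.0)
```

Because the gain depends only on the low-resolution blurred image, local contrast within each neighbourhood of the first pixel signal is largely preserved.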
  • in image blur correction, a motion vector is acquired from an image (an example of a correction image) generated from the second pixel signal acquired by nondestructive readout and, for example, an image generated from the first or second pixel signal of the previous frame, and the first pixel signal is corrected according to the acquired motion vector.
  • specifically, the direction and amount of image blur are identified from the motion vector, and the correction is performed by translating the image generated from the first pixel signal in accordance with that direction and amount, that is, translating it so that the motion vector becomes small.
  • the signal processing unit 300 calculates one motion vector for one frame image.
  • one frame image is divided into a plurality of windows (areas smaller than the one frame image), a motion vector is acquired for each window, and the motion vector for the one frame image is obtained from the per-window motion vectors.
  • acquiring a motion vector using a correction image is an example of image analysis.
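One common way to obtain per-window motion vectors is exhaustive block matching; the sketch below is a generic illustration of that idea, not the patent's specific algorithm (the search range and sum-of-absolute-differences criterion are assumptions):

```python
import numpy as np

def window_motion_vector(prev_win, cur_win, search=3):
    """Return the (dy, dx) shift, within +/-search pixels, that best
    aligns the previous window with the current one (minimum sum of
    absolute differences). Windows wrap around via np.roll for brevity."""
    best_sad, best = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sad = np.abs(np.roll(prev_win, (dy, dx), axis=(0, 1)) - cur_win).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

def frame_motion_vector(window_vectors):
    """One motion vector for the whole frame: here simply the mean of
    the per-window vectors."""
    return tuple(np.mean(np.asarray(window_vectors, float), axis=0))
```

Averaging per-window vectors into one frame vector is only one possible aggregation; a robust estimator (for example, the median) could equally be used.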
  • in white balance correction, the white balance of the first pixel signal is corrected from the analysis result of the second pixel signal in the same manner as described above.
  • the white balance is corrected after the offset is corrected (correcting the OB step introduced when the analog signal is converted into the digital signal), and the second pixel signal acquired by nondestructive readout may also be used in this offset correction. Note that image blur correction, white balance correction, and the like need not use a blurred image, and can be performed using an image for one frame acquired by nondestructive readout.
  • generating a correction image, and image analysis using the generated correction image, are examples of image processing.
  • the image data (image) generated from the second pixel signal acquired by nondestructive readout during exposure is generally darker than the captured image generated from the first pixel signal acquired after the exposure is completed. Therefore, the brightness of the image data may be adjusted by amplifying the second pixel signal (applying a gain to the brightness) before generating the correction image from the second pixel signal; specifically, an adjustment that increases the brightness may be performed.
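Under the simplifying assumption of a static scene, a second pixel signal read partway through the exposure holds a proportional fraction of the final charge, so the brightness adjustment can be sketched as scaling by the inverse of that fraction (this scaling rule is an assumption, not stated in the patent):

```python
def amplify_second_signal(second, t_read, t_exposure):
    """A nondestructive signal read at time t_read into an exposure of
    length t_exposure holds roughly t_read / t_exposure of the final
    charge, so apply the inverse ratio as a gain to brighten it."""
    if not 0 < t_read <= t_exposure:
        raise ValueError("readout must occur during the exposure")
    return second * (t_exposure / t_read)
```

Reading later in the exposure means a smaller gain, which is why a brighter second pixel signal suffers less amplified noise.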
  • the memory 330 functions as a work memory for the image processing unit 320.
  • the memory 330 temporarily stores image data processed by the image processing unit 320, image data (the first pixel signal) input from the solid-state imaging device 100 before being processed by the image processing unit 320, and the like.
  • the memory 330 has a nonvolatile memory and a volatile memory.
  • the non-volatile memory stores a program corresponding to processing executed by the imaging apparatus 10, and captured image data.
  • the volatile memory is used as a work memory when the processor performs processing, for example.
  • the nonvolatile memory can be realized by a semiconductor memory such as a ferroelectric memory or a flash memory, for example.
  • the volatile memory can be realized by a semiconductor memory such as DRAM (Dynamic Random Access Memory).
  • the memory 330 only needs to be included in the imaging apparatus 10 and may not be included in the signal processing unit 300.
  • the non-volatile memory may be an external memory that is detachably connected to the imaging device 10.
  • the external memory is realized by a memory card such as an SD card (registered trademark), for example.
  • image data captured by the imaging device 10 is stored in the external memory.
  • the display unit 500 is a display device that displays the captured image generated by the signal processing unit 300, and is a liquid crystal monitor, for example.
  • the display unit 500 can display various setting information. For example, the display unit 500 can display shooting conditions (aperture, ISO sensitivity, etc.) at the time of shooting.
  • the operation unit 600 is an input unit that receives input from the user, and is, for example, a release button or a touch panel.
  • the touch panel is, for example, bonded to the liquid crystal monitor, and accepts an imaging instruction from the user, a change in imaging conditions, and the like.
  • the imaging device 10 may include an interface (not shown) for performing communication between an external circuit and the solid-state imaging device 100 or the signal processing unit 300.
  • the interface is, for example, a communication port made of a semiconductor integrated circuit.
  • FIG. 4A is a flowchart showing the operation of the imaging apparatus 10 according to the present embodiment.
  • FIG. 4B is a flowchart illustrating the operation of the imaging apparatus according to the conventional example.
  • the conventional example shows an example in which correction is performed using a pixel signal (first pixel signal) acquired after the exposure is completed. That is, in the conventional example, correction is performed without performing nondestructive reading.
  • FIG. 4A and FIG. 4B are examples of flowcharts showing the operation in the case where gradation correction is performed.
  • the solid-state imaging device 100 starts exposure under the control of the control unit 310 (S1). Thereby, charges corresponding to the amount of received light are accumulated in each of the plurality of pixels 210. Specifically, charges corresponding to the amount of light received by the photoelectric conversion element 211 are accumulated in the charge accumulation unit 215.
  • during the exposure, the control unit 310 performs control to execute nondestructive readout, which reads the charges accumulated in the charge accumulation unit 215 of each pixel 210 without destroying them. More specifically, the control unit 310 controls the drive control unit 150 so that the column AD conversion unit 120 performs AD conversion for each pixel row, converting the accumulated charge (pixel signal) into a digital value (the second pixel signal). The converted digital values are sequentially output to the signal processing unit 300 by the column scanning unit 140; that is, the signal processing unit 300 acquires the second pixel signal by nondestructive readout (S2). Note that the potential of the charge storage unit 215 is not reset by the reset transistor 212 after nondestructive readout is performed.
  • the image processing unit 320 generates a blurred image (an example of a correction image) from the acquired second pixel signal (S3).
  • the image processing unit 320 may generate a blurred image having a lower resolution than an image generated from the second pixel signal or a captured image. For example, when the resolution of the image generated from the second pixel signal or the captured image is 5000 × 4000, the resolution of the blurred image is 50 × 40 or the like.
  • the blurred image is generated by accumulating the second pixel signals for one frame, reducing the image, and further enlarging the image.
  • in the reduction, the pixel values of a plurality of adjacent pixels 210 are integrated to calculate one pixel value, and a reduced image is thereby generated.
  • the resolution of the reduced image is the same as that of the blurred image, and is 50 ⁇ 40, for example.
  • next, the reduced image is enlarged, for example to the image size before reduction.
  • the resolution of the enlarged image (that is, the blurred image) is the same as the resolution of the reduced image, and is 50 × 40, for example.
  • the blurred image has, for example, an image size that is the same as the image size before the reduction, and a resolution that is lower than the image before the reduction.
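The reduce-then-enlarge procedure above can be sketched with numpy; block averaging for the reduction and nearest-neighbour repetition for the enlargement are assumed resampling methods (the patent does not fix them):

```python
import numpy as np

def make_blurred_image(frame, block=100):
    """Integrate each block x block neighbourhood into one pixel value
    (reduction), then enlarge back to the original image size, yielding
    an image whose size matches the original but whose effective
    resolution is lower. A 5000 x 4000 frame with block=100 reduces
    to a 50 x 40 grid."""
    h, w = frame.shape
    assert h % block == 0 and w % block == 0, "illustration assumes exact tiling"
    small = frame.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, block, axis=0), block, axis=1)
```

As noted below, a low-pass filter over the full-resolution image would achieve a comparable result; the reduce-then-enlarge route simply costs far less computation.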
  • the generated blurred image may be held by the image processing unit 320 or stored in the memory 330, for example. That is, the image processing unit 320 may store the generated blurred image in the memory 330, or may store the blurred image in the memory when the image processing unit 320 includes a memory (not shown). In the following description, an example in which the image processing unit 320 stores a blurred image in the memory 330 will be described. Note that the blurred image may be generated by performing low-pass filter processing on an image generated by the acquired second pixel signal.
  • the exposure in the solid-state imaging device 100 ends (S4). That is, in the imaging apparatus 10 according to the present embodiment, generation of a blurred image in the image processing unit 320 is completed before the exposure of the solid-state imaging device 100 is completed.
  • the control unit 310 may control the timing at which nondestructive readout is performed, based on the exposure time and the time (first time) required from acquisition of the second pixel signal until generation of the blurred image is completed (S2 and S3). For example, the control unit 310 may cause the solid-state imaging device 100 to perform nondestructive readout at the time that precedes the end of exposure by the first time.
  • with this timing, the image processing unit 320 can increase the brightness of the second pixel signal acquired by nondestructive readout and generate the blurred image before the end of exposure. That is, the image processing unit 320 can correct each first pixel signal, which is sequentially acquired for each pixel 210 after the exposure is completed, at the time the first pixel signal is acquired. Further, since the second pixel signal read late in the exposure is relatively bright (the brightness information indicated by the second pixel signal is high), the influence of noise when a gain is applied to the second pixel signal can be reduced.
  • the first time can be determined before the exposure is started, from the processing capability of the signal processing unit 300 and the number of pixels of the pixel array unit 110.
  • when image analysis is used, in step S3 a correction image is generated and image analysis of the correction image is performed.
  • in that case, the control unit 310 may control the timing at which nondestructive readout is performed, based on the exposure time and the time (second time) required from acquisition of the second pixel signal until generation and analysis of the correction image are completed (S2 and S3). For example, the control unit 310 may cause the solid-state imaging device 100 to perform nondestructive readout at the time that precedes the end of exposure by the second time.
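The timing control described in the points above amounts to triggering nondestructive readout early enough that the correction-image work finishes by the end of exposure; a sketch, with hypothetical function and parameter names:

```python
def nondestructive_readout_start(exposure_start, exposure_time, processing_time):
    """Latest time at which nondestructive readout can begin so that
    acquiring the second pixel signal and generating (and, if used,
    analysing) the correction image completes by the end of exposure.
    processing_time corresponds to the first time or second time."""
    start = exposure_start + exposure_time - processing_time
    if start < exposure_start:
        raise ValueError("processing time exceeds the exposure time")
    return start
```

Triggering at this latest admissible time also maximises the charge accumulated before the nondestructive read, keeping the second pixel signal as bright as possible.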
  • after the exposure is completed, the control unit 310 controls the drive control unit 150 so that the column AD conversion unit 120 performs AD conversion for each pixel column, converting the accumulated charge into a corresponding digital value.
  • the converted digital values are sequentially output to the signal processing unit 300 by the column scanning unit 140. That is, the signal processing unit 300 acquires the first pixel signal (S5). More specifically, the signal processing unit 300 sequentially acquires a first pixel signal for each of the plurality of pixels 210.
  • the reading of the first pixel signal is performed by, for example, normal reading.
  • the normal reading is destructive reading, and is a reading in which the accumulated charge is destroyed after the pixel signal is read out (the potential of the charge accumulation unit 215 is reset by turning on the reset transistor 212).
  • the image processing unit 320 generates the captured image by correcting the first pixel signal using the blurred image stored in the memory 330 (S6).
  • in step S6, correction for image noise such as streaking and light amount adjustment for the pixels 210 at the periphery of the lens 400 may also be performed together. Thereby, compared with the case where each correction is performed separately, the time required for correction can be shortened.
  • the image processing unit 320 then, for example, displays the captured image on the display unit 500 included in the imaging device 10, or stores the captured image in an external storage medium via a storage unit (not shown; for example, an external memory) or an interface (not shown) included in the camera 1.
  • the solid-state imaging device starts exposure (S1).
  • in the conventional method, the signal processing unit does not perform image processing during the exposure (no readout or the like is performed), and the exposure ends (S4). At this point, no blurred image has been generated.
  • the signal processing unit acquires the first pixel signal after the exposure is completed (S5).
  • the signal processing unit generates an intermediate image by performing correction for noise generated in the solid-state imaging device and light amount adjustment for pixels around the lens on the first pixel signal (S11). These processes are performed before storing in the memory.
  • the intermediate image is stored in the memory (S12).
  • the intermediate image is stored in a DRAM included in the memory. That is, in the conventional method, it takes time to store the intermediate image in the memory. Furthermore, a memory for storing the intermediate image is required, and therefore the memory capacity needs to be increased. For example, in order to store an intermediate image, an extra memory capable of storing image data for one frame is required.
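The extra memory the conventional flow needs for the one-frame intermediate image can be estimated with a small helper; the sensor dimensions and bit depth below are illustrative assumptions, not figures from the patent.

```python
def intermediate_frame_bytes(width, height, bits_per_pixel):
    """Extra DRAM the conventional flow needs to hold the one-frame
    intermediate image (S12) before the blurred image is generated."""
    return width * height * bits_per_pixel // 8

# e.g. a hypothetical 4000 x 3000 sensor at 12 bits per pixel:
extra = intermediate_frame_bytes(4000, 3000, 12)  # 18,000,000 bytes
```

The streaming approach of the present embodiment avoids reserving this capacity, which is the memory saving discussed later for the memory 330.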
  • a blurred image is generated using the intermediate image (S13).
  • an intermediate image is used as an image for generating a blurred image.
  • the method for generating the blurred image is substantially the same as in the present embodiment (see S3), except that, for example, gain correction is not performed on the first pixel signal. Since step S13 is performed after the exposure is completed, the conventional method requires additional time after the exposure ends.
  • the intermediate image is read from the memory (S14), and the captured image is generated by correcting the intermediate image using the blurred image generated in step S13 (S15). That is, in the conventional method, it takes time to read the intermediate image from the memory.
  • as described above, the imaging apparatus 10 can perform image processing using an image acquired by nondestructive readout during exposure, so the time required for image processing can be shortened compared with a conventional imaging apparatus.
  • FIG. 5 is a diagram for comparing the flow of image processing.
  • FIG. 5A is a diagram for explaining the flow of image processing of an imaging apparatus according to a conventional example.
  • FIG. 5B is a diagram for explaining the flow of image processing of the imaging apparatus 10 according to the present embodiment.
  • in the conventional method shown in FIG. 5A, the signal processing unit does not perform image processing from the start of exposure to the end of exposure.
  • the signal processing unit 300 acquires the second pixel signal by performing nondestructive readout during exposure (S2). Then, a blurred image is generated using the second pixel signal (S3).
  • FIG. 5B shows an example in which the signal processing unit 300 performs nondestructive reading and image processing so that generation of a blurred image is completed by the end of exposure (S4).
  • the signal processing unit 300 can complete the image processing in step S13 in the conventional method during exposure by performing nondestructive reading.
  • even if the exposure time is short and the process of step S3 is not completed by the end of the exposure, a part of the process of step S3 can still be performed during the exposure, so the time from the end of exposure to the end of image processing can be shortened. In other words, it suffices that the nondestructive readout is performed before the exposure is completed.
  • in the conventional method (FIG. 5A), image processing is started only after completion of exposure.
  • an intermediate image is generated from the first pixel signal acquired after the exposure is completed (S11).
  • This is processing performed before storing the first pixel signal in the memory, and includes correction for noise and light amount adjustment for pixels around the lens as described above.
  • the processing performed before storing in the memory may be other than the above.
  • the generated intermediate image is stored in the memory (S12).
  • the intermediate image is an image for one frame.
  • the image processing unit generates a blurred image using the intermediate image (S13). That is, the image processing unit generates a blurred image for one frame.
  • the image processing unit generates a captured image from the intermediate image stored in the memory and the blurred image generated in step S13 (S15).
  • in the conventional method, since the signal processing unit starts image processing only after the end of exposure, the time from the end of exposure to the end of image processing is long. Specifically, steps S11 to S15 all take place after the exposure. Therefore, the time required from the start of exposure to the end of image processing (imaging time Ta) is also long. Further, a memory for storing the intermediate image is required.
  • in the present embodiment, on the other hand, the image processing performed after the end of exposure is only the generation of the captured image.
  • the image processing unit 320 generates a captured image from the first pixel signal acquired after the exposure is completed and the blurred image generated in step S3 (S6).
  • the imaging time Tb of the imaging apparatus 10 according to the present embodiment can be shortened accordingly (Tb < Ta). Specifically, the imaging time is reduced by the shortened time T (= Ta - Tb) shown in the drawing.
  • the image processing unit 320 sequentially acquires the first pixel signal for each pixel 210 after the exposure is completed. For example, a calculation is performed between the sequentially acquired first pixel signal of each pixel 210 and the corresponding pixel of the blurred image, and the first pixel signal is corrected according to the calculation result. That is, the first pixel signal can be acquired for each pixel 210 and corrected sequentially.
  • the sequentially corrected first pixel signals are, for example, sequentially stored in the memory 330. As a result, the time for storing the uncorrected first pixel signal in the memory 330 and reading it back for correction can be omitted, so the imaging time Tb can be further shortened. In addition, since no capacity is needed for storing the uncorrected first pixel signal, the capacity of the memory 330 can be reduced.
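The sequential (streaming) correction described above can be sketched as follows; the gain formula derived from the blurred image is a hypothetical placeholder for whatever calculation the image processing unit performs.

```python
def stream_correct(first_pixels, blurred, out_memory):
    """Correct each first pixel signal as it is read out, using the
    corresponding pixel of the already-finished blurred image, and
    store only the corrected value -- the uncorrected frame is never
    written to memory."""
    for i, raw in enumerate(first_pixels):      # sequential readout
        gain = 1.0 + 0.25 * (1.0 - blurred[i] / 255.0)
        out_memory.append(min(int(raw * gain), 255))
    return out_memory

mem = stream_correct([10, 128, 240], [5, 128, 250], [])
```

Only the corrected values land in `out_memory`, which is why no buffer for the raw frame is needed.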
  • note that when the readout method is a rolling shutter, the first pixel signal acquired by the conventional method and the first pixel signal acquired by the method of the present embodiment are the same signal.
  • the readout method is not limited to the rolling shutter.
  • a global shutter may be used.
  • the shutter function can be realized by adjusting the voltage applied to the organic photoelectric conversion film, so a global shutter can be realized without adding an element such as a memory.
  • by causing all the organic photoelectric conversion films to function as shutters during the readout period of the global shutter, the accumulation of charges in the charge accumulation unit 215 during that period can be suppressed. That is, by using the organic photoelectric conversion film, rolling distortion can be reduced by the global shutter without adding an element. Note that the nondestructive readout and the readout after completion of exposure are performed by the same method.
  • FIG. 6 is a diagram for comparing images generated by image processing.
  • FIG. 6A is a diagram for explaining an image generated by the image processing of an imaging apparatus according to a conventional example.
  • FIG. 6B is a diagram for explaining an image generated by the image processing of the imaging apparatus 10 according to the present embodiment.
  • FIG. 6B shows an example in which generation of a blurred image ends during exposure.
  • the conventional method cannot generate an image for correcting the first pixel signal (for example, an intermediate image or a blurred image) during exposure. That is, image processing cannot be performed during exposure.
  • after the end of exposure (S4), the signal processing unit generates an intermediate image P1, subjected to the above correction, from the first pixel signal.
  • the intermediate image P1 is an image for one frame. Note that the intermediate image P1 shows a case where an image that is dark overall (because the actual subject is dark) is captured, and is represented by dot-like hatching.
  • the blurred image P2 is generated from the intermediate image P1.
  • the blurred image is also an image for one frame.
  • a captured image P3 is generated from the intermediate image P1 and the blurred image P2. That is, in the conventional method, it is necessary to generate three images.
  • FIG. 6A shows an example in which the captured image P3 is corrected so that it becomes slightly brighter than the intermediate image P1. The correction is not limited to this.
  • in the present embodiment, image processing is performed during exposure.
  • the signal processing unit 300 generates a blurred image P12 from the second pixel signal acquired by nondestructive readout (S2) during exposure. Since the nondestructive readout is performed partway through the exposure, the blurred image P12 generated from the second pixel signal acquired at that point is a dark image overall.
  • the blurred image P12 is an image that is darker than the captured image P13 generated using the first pixel signal acquired after the end of exposure. Further, the blurred image P12 is an image darker than the blurred image P2 generated from the intermediate image P1 shown in FIG. Therefore, the image processing unit 320 may adjust the gain of the second pixel signal and generate the blurred image P12 using the second pixel signal whose gain has been adjusted. As a result, a blur image P12 that is substantially the same as the blur image P2 in the conventional example can be obtained.
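One plausible way to realize the gain adjustment above is to scale the mid-exposure signal by the ratio of the full exposure time to the elapsed time, assuming charge accumulates roughly linearly with time; this linear model and the function name are assumptions for illustration only.

```python
def gain_corrected_signal(second_pixel, elapsed_ms, full_exposure_ms, full_scale=255):
    """Scale a pixel value read nondestructively at `elapsed_ms` up to
    the brightness it would reach at the end of the full exposure,
    assuming charge accumulates roughly linearly with time."""
    gain = full_exposure_ms / elapsed_ms
    return min(int(second_pixel * gain), full_scale)

# Read halfway through a 100 ms exposure: value 60 is doubled to 120.
v = gain_corrected_signal(60, elapsed_ms=50, full_exposure_ms=100)  # -> 120
```

With such a gain, the blurred image P12 built from the scaled values approaches the brightness of the conventional blurred image P2, as the bullet states.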
  • a captured image P13 is generated from the blurred image P12 generated by adjusting the gain and the first pixel signal acquired after the exposure is completed.
  • the captured image P13 generated in this way and the captured image P3 generated by the conventional method are substantially the same image.
  • by adjusting the gain of the second pixel signal, the accuracy of the captured image P13 can be improved.
  • Examples of the camera 1 in which the imaging apparatus 10 is built in include a digital still camera 1A shown in FIG. 7A and a digital video camera 1B shown in FIG. 7B.
  • when the imaging device 10 according to the present embodiment is built into the camera shown in FIG. 7A or 7B, the time from when the user issues an imaging instruction via the operation unit 600 as described above (when the shutter is released) until the captured image is displayed on the display unit 500 can be shortened. Further, when continuously capturing images, the time from when the shutter is released until the next shutter can be released can be shortened.
  • as described above, the imaging device 10 includes the solid-state imaging device 100 having a plurality of pixels arranged in a matrix and capable of nondestructive readout, and an image processing unit 320 that generates a captured image P13 by correcting the first pixel signal acquired from the solid-state imaging device 100 after completion of exposure.
  • the image processing unit 320 performs correction based on the second pixel signal acquired from the solid-state imaging device 100 by nondestructive reading during exposure.
  • the imaging apparatus 10 acquires the second pixel signal during exposure by nondestructive readout. Therefore, processing for correcting the first pixel signal using the second pixel signal (for example, generation of a correction image for correcting the first pixel signal, or analysis of the correction image) can be performed during exposure.
  • the image processing unit 320 performs image processing such as generating a correction image used for the correction from the second pixel signal, or generating the correction image and performing image analysis using it. The control unit 310 then controls the timing at which the solid-state imaging device 100 performs nondestructive readout so that the image processing in the image processing unit 320 ends before the exposure ends.
  • for example, a blurred image (an example of a correction image) P12 is generated using the second pixel signal acquired by nondestructive readout before the exposure is completed. Thereby, the signal of each pixel 210 can be corrected sequentially.
  • further, when the correction is image blur correction or white balance correction, a correction image is generated using the second pixel signal acquired by nondestructive readout, and image analysis of the correction image can be performed before the exposure is completed.
  • thereby, with respect to the first pixel signal acquired after the exposure is completed, the signal of each pixel 210 can be corrected sequentially as it is acquired. Therefore, the imaging time Tb can be further shortened.
  • further, a memory 330 that stores the corrected first pixel signal is provided. The image processing unit 320 performs the correction corresponding to each pixel 210 on the first pixel signal sequentially acquired for that pixel, and stores the corrected first pixel signal in the memory 330.
  • thereby, the correction can be performed without temporarily storing the uncorrected first pixel signal in the memory 330, so the time for storing it in the memory 330 and reading it back for correction can be saved.
  • the capacity of the memory 330 can be reduced.
  • the correction image is a blurred image P12 whose resolution is lower than that of the captured image P13. The image processing unit 320 generates the blurred image P12 from the second pixel signal to which gain correction for brightness has been applied.
  • since using the blurred image P12 reduces the processing amount in the signal processing unit 300, the processing time in the signal processing unit 300 can be shortened. Furthermore, the blurred image P12 generated by applying gain correction to the second pixel signal has substantially the same brightness as the blurred image P2 generated in the conventional example. As a result, the first pixel signal can be corrected with higher accuracy.
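A low-resolution blurred image can be obtained, for example, by block averaging; the block size and frame values below are illustrative assumptions, since the patent does not specify how the resolution is reduced.

```python
def blurred_image(frame, block=2):
    """Downsample a 2-D frame by averaging `block` x `block` tiles,
    producing a low-resolution blurred image; later processing of the
    smaller image touches block**2 times fewer pixels."""
    h, w = len(frame), len(frame[0])
    return [
        [
            sum(frame[y + dy][x + dx]
                for dy in range(block) for dx in range(block)) // (block * block)
            for x in range(0, w, block)
        ]
        for y in range(0, h, block)
    ]

small = blurred_image([[10, 30, 50, 70],
                       [20, 40, 60, 80]], block=2)  # 2x4 frame -> 1x2
```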
  • the correction is gradation correction, image blur correction, or white balance correction.
  • each of the plurality of pixels 210 has an organic photoelectric conversion film.
  • the shutter function can be realized by adjusting the voltage applied to the organic photoelectric conversion film, a global shutter can be realized without adding an element such as a memory. Therefore, even when the subject is moving, image data with less distortion can be acquired.
  • the camera 1 includes the imaging device 10 described above.
  • the imaging method includes acquiring a first pixel signal from the solid-state imaging device 100 having a plurality of pixels 210 capable of nondestructive readout after the exposure is completed, and generating a captured image P13 by correcting the first pixel signal based on a second pixel signal acquired from the solid-state imaging device 100 by nondestructive readout during exposure.
  • since the second pixel signal can be acquired during exposure by nondestructive readout, a blurred image can be generated using the second pixel signal during the exposure.
  • the time from the end of exposure to the end of image processing can be shortened. In other words, the time from the start of exposure until the end of image processing (imaging time) can be shortened.
  • the image processing unit 320 may use a pixel-mixed signal acquired from the solid-state imaging device 100 as a blurred image.
  • the image processing unit 320 performs the gradation correction using the blurred image P12.
  • the image processing unit 320 may acquire a histogram for the brightness of the image from the second pixel signal, and use the result as a correction signal to correct the gradation of the first pixel signal.
  • note that correcting the gradation of the first pixel signal according to the brightness histogram generated from the second pixel signal acquired from the solid-state imaging device 100 is one example of the image processing performed by the image processing unit 320.
  • the present invention is not limited to this.
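A minimal sketch of such histogram-based gradation correction: build a lookup table from the cumulative brightness histogram of the second pixel signal during exposure, then apply it to the first pixel signal after readout. Histogram equalization is used here only as one concrete tone-mapping rule; the patent does not prescribe a particular one.

```python
def histogram(pixels, levels=256):
    """Brightness histogram of the second pixel signal."""
    h = [0] * levels
    for p in pixels:
        h[p] += 1
    return h

def gradation_lut(hist, levels=256):
    """Build a tone-correction lookup table by histogram equalization:
    the cumulative distribution of the mid-exposure histogram maps
    each input level to an output level."""
    total = sum(hist)
    lut, cum = [], 0
    for count in hist:
        cum += count
        lut.append(round((levels - 1) * cum / total))
    return lut

# The LUT built during exposure is applied to the first pixel signal:
second = [0, 0, 128, 255]
lut = gradation_lut(histogram(second))
corrected = [lut[p] for p in [0, 128, 255]]
```

Because the LUT is finished before the exposure ends, applying it to the first pixel signal costs only one table lookup per pixel after readout.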
  • for example, the first pixel signal for one frame may be temporarily stored in the memory 330, and the image processing unit 320 may read the first pixel signal from the memory 330 for each pixel 210 and generate the captured image by correcting the read first pixel signal using the blurred image.
  • nondestructive readout may be performed a plurality of times during exposure. For example, when image blur correction is performed, two nondestructive readouts may be performed during exposure, a motion vector may be acquired from the image data based on each pixel signal, and the first pixel signal may be corrected accordingly. Further, in order to reduce the influence of subject blurring and flash light during the exposure period, the image processing unit 320 may, for example, select from a plurality of images acquired by nondestructive readout an image less affected by subject blurring or flash light as the correction image.
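Selecting the frame least affected by flash light from several nondestructive readouts could, for instance, compare each frame's mean brightness with the median over the set; this selection criterion is an assumption for illustration, not the method claimed by the patent.

```python
def least_flash_affected(images):
    """From images acquired by repeated nondestructive readout, pick
    the one whose mean brightness deviates least from the median of
    the means -- a sudden flash inflates the mean of affected frames."""
    means = [sum(img) / len(img) for img in images]
    median = sorted(means)[len(means) // 2]
    best = min(range(len(images)), key=lambda i: abs(means[i] - median))
    return images[best]

# The third (flattened) frame caught a flash; the selection avoids it:
frames = [[10, 12, 11], [11, 13, 12], [200, 210, 205]]
pick = least_flash_affected(frames)
```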
  • the camera 1 including the imaging device 10 according to the present disclosure has been described, but the application is not limited thereto.
  • Various electronic devices incorporating the imaging device 10 according to the present disclosure are also included in the present disclosure.
  • each component (functional block) in the imaging device 10 may be individually formed as one chip by a semiconductor device such as an IC (Integrated Circuit) or an LSI (Large Scale Integration), or a part or all of the components may be integrated into one chip.
  • the method of circuit integration is not limited to LSI, and implementation using dedicated circuitry or a general-purpose processor is also possible.
  • an FPGA (Field Programmable Gate Array) that can be programmed after manufacturing the LSI, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may be used.
  • if integrated circuit technology that replaces LSI emerges as a result of progress in semiconductor technology or another derived technology, the functional blocks may naturally be integrated using that technology. Application of biotechnology is one possibility.
  • all or part of the various processes described above may be realized by hardware such as an electronic circuit or may be realized by using software.
  • the processing by software is realized by a processor included in the imaging apparatus 10 executing a program stored in a memory.
  • the program may be recorded on a recording medium and circulated or distributed. For example, by installing the distributed program in a device having another processor and causing the processor to execute it, that device can be made to perform each of the above processes.
  • the camera 1 has been described with respect to the example including the lens 400 that allows light from the outside to enter the solid-state imaging device 100.
  • the lens 400 may be a lens that can be attached to and detached from the camera 1.
  • the camera 1 may not include the lens 400.
  • the lens 400 collects light from the outside and makes it incident on the solid-state imaging device 100.
  • the present disclosure can be widely used for imaging devices that capture images.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Power Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • Computer Hardware Design (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)
  • Studio Devices (AREA)
  • Solid State Image Pick-Up Elements (AREA)

Abstract

An imaging device (10) includes a solid-state imaging element (100) having a plurality of pixels (210) that are arranged in a matrix and can be read out nondestructively, an image processing unit (320) for correcting a first pixel signal acquired from the solid-state imaging element (100) after the end of exposure to generate a captured image (P13), and a memory (330) for storing the first pixel signal corrected by the image processing unit (320). The image processing unit (320) performs the correction on the basis of a second pixel signal acquired from the solid-state imaging element (100) by nondestructive readout during exposure.
PCT/JP2017/046594 2016-12-27 2017-12-26 Dispositif d'imagerie, caméra et procédé d'imagerie WO2018124050A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-254492 2016-12-27
JP2016254492A JP2020031257A (ja) 2016-12-27 2016-12-27 Imaging device, camera, and imaging method

Publications (1)

Publication Number Publication Date
WO2018124050A1 true WO2018124050A1 (fr) 2018-07-05

Family

ID=62709362

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/046594 WO2018124050A1 (fr) 2016-12-27 2017-12-26 Dispositif d'imagerie, caméra et procédé d'imagerie

Country Status (2)

Country Link
JP (1) JP2020031257A (fr)
WO (1) WO2018124050A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004344249A (ja) * 2003-05-20 2004-12-09 Canon Inc Radiation imaging apparatus, radiation imaging method, radiation imaging program, and recording medium
JP2007281555A (ja) * 2006-04-03 2007-10-25 Seiko Epson Corp Imaging apparatus
JP2012049600A (ja) * 2010-08-24 2012-03-08 Seiko Epson Corp Image processing apparatus, image processing method, and imaging apparatus
JP2016019090A (ja) * 2014-07-07 2016-02-01 パナソニックIpマネジメント株式会社 Solid-state imaging device


Also Published As

Publication number Publication date
JP2020031257A (ja) 2020-02-27

Similar Documents

Publication Publication Date Title
TWI373964B (fr)
US9191560B2 (en) Image capturing apparatus that performs photoelectric conversion on incident light that has passed through an imaging lens and outputs an electric signal
CN109997352B (zh) 摄像装置、相机以及摄像方法
JP6414718B2 (ja) 撮像装置
JP5222068B2 (ja) 撮像装置
US9986163B2 (en) Digital photographing apparatus and digital photographing method
WO2018124051A1 (fr) Dispositif de sélection d'image, caméra et procédé de sélection d'image
WO2018124049A1 (fr) Dispositif de capture d'image, caméra et procédé de capture d'image
WO2018124053A1 (fr) Dispositif d'imagerie, caméra et procédé d'imagerie
WO2018124050A1 (fr) Dispositif d'imagerie, caméra et procédé d'imagerie
US11546490B2 (en) Image generation method, imaging apparatus, and recording medium
JP2006239117A (ja) 放射線撮像装置
US9794502B2 (en) Image capturing apparatus
JP6706850B2 (ja) 撮像装置、カメラ、及び撮像方法
JP6706783B2 (ja) 撮像素子、撮像装置、カメラ、及び撮像方法
JP2014027520A (ja) 撮像装置及び画像処理方法
JP6664066B2 (ja) 撮像装置及びその制御方法
JP6470589B2 (ja) 撮像装置およびその制御方法、プログラム、並びに記憶媒体
WO2018124057A1 (fr) Dispositif d'imagerie et son procédé de commande

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 17889145; Country of ref document: EP; Kind code of ref document: A1

NENP Non-entry into the national phase
Ref country code: DE

122 Ep: pct application non-entry in european phase
Ref document number: 17889145; Country of ref document: EP; Kind code of ref document: A1

NENP Non-entry into the national phase
Ref country code: JP