WO2021075321A1 - Image capture apparatus, electronic device and image capture method - Google Patents

Image capture apparatus, electronic device and image capture method

Info

Publication number
WO2021075321A1
WO2021075321A1 (PCT/JP2020/037944)
Authority
WO
WIPO (PCT)
Prior art keywords
signal processing
processing unit
unit
brightness information
performs
Prior art date
Application number
PCT/JP2020/037944
Other languages
French (fr)
Japanese (ja)
Inventor
Hiroyuki Ozawa (小沢 裕幸)
Hitoshi Kai (甲斐 斉)
Satoshi Yamada (山田 聡)
Original Assignee
Sony Corporation (ソニー株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Publication of WO2021075321A1 publication Critical patent/WO2021075321A1/en

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/76 Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70 SSIS architectures; Circuits associated therewith

Definitions

  • The present disclosure relates to an imaging device, an electronic device, and an imaging method.
  • Conventionally, the signal processing circuit in the image pickup device performs various signal processing according to instructions from the application processor mounted on the electronic device.
  • It is common for the signal processing circuit in the imaging device to perform signal processing in accordance with instructions from the application processor. For example, brightness adjustment and white balance adjustment of image data can also be performed by the signal processing circuit in the image pickup device, but since these adjustments can be performed with higher accuracy inside the application processor, the signal processing circuit in the image pickup device often performs its own signal processing on the premise that brightness adjustment and white balance adjustment will be performed inside the application processor, and then hands the data over to the application processor.
  • The application processor may shift to a sleep mode to reduce power consumption; however, when the application processor is in sleep mode, brightness adjustment and white balance adjustment cannot be performed properly on the image data captured by the image sensor.
  • When recognition processing or the like is performed in the imaging device using data processed by the signal processing circuit in the image pickup device, if that signal processing circuit performs signal processing in response to instructions from the application processor, the signal processing result is not necessarily optimal for the recognition processing, and the reliability of the recognition processing or the like may be lowered.
  • The present disclosure therefore provides an imaging device, an electronic device, and an imaging method capable of performing optimum signal processing without depending on an external instruction, such as one from an application processor.
  • One aspect of the present disclosure provides an imaging device including: a pixel array unit having a plurality of pixels that perform photoelectric conversion; a converter that converts an analog pixel signal output from the pixel array unit into digital pixel data; a signal processing unit that performs signal processing on the digital pixel data; and a brightness information detector that detects brightness information of light incident on the pixel array unit based on the digital pixel data.
  • In this imaging device, the signal processing unit performs at least a part of the signal processing based on the brightness information detected by the brightness information detector.
  • An information processing unit that performs at least one of a predetermined recognition process and a detection process based on the data signal-processed by the signal processing unit may be provided.
  • The signal processing unit may perform the signal processing according to an instruction from the outside when one is given, and according to the brightness information detected by the brightness information detector when there is no instruction from the outside.
  • The information processing unit may perform at least one of the recognition process and the detection process based on the data obtained by the signal processing unit performing the signal processing according to the brightness information detected by the brightness information detector.
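The selection rule described above (follow the external instruction when one exists, otherwise fall back to the brightness information detector) can be sketched as follows. This is an illustrative sketch only: the names `select_ae_gain`, `external_gain`, `opd_brightness`, and `TARGET_BRIGHTNESS` are assumptions, not terms from the publication.

```python
# Hedged sketch of the gain-selection rule described above; all names
# and the target value are illustrative assumptions.

TARGET_BRIGHTNESS = 128.0  # assumed mid-scale target for 8-bit pixel data

def select_ae_gain(external_gain, opd_brightness):
    """Follow the external instruction when one exists; otherwise
    derive a gain from the brightness detected by the OPD."""
    if external_gain is not None:  # instruction from the application processor
        return external_gain
    # No external instruction (e.g. the processor is in sleep mode):
    # steer the detected frame brightness toward the target level.
    return TARGET_BRIGHTNESS / max(opd_brightness, 1.0)
```

With an instruction present the instruction wins; without one, a dark frame (low OPD reading) yields a gain above 1.0.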
  • The signal processing unit may have a first signal processing unit that performs first signal processing on the digital pixel data, and a second signal processing unit that performs second signal processing on the digital pixel data or on data that has undergone at least a part of the first signal processing.
  • the second signal processing unit performs at least a part of the second signal processing based on the brightness information detected by the brightness information detector.
  • An information processing unit that performs at least one of a predetermined recognition process and a detection process based on the output data of the second signal processing unit may be further provided.
  • the common signal processing includes brightness adjustment processing of an image captured by the pixel array unit.
  • the second signal processing unit may perform the brightness adjustment process based on the brightness information detected by the brightness information detector.
  • the recognition process includes a process of giving input data to a calculation model generated by machine learning and performing a calculation.
  • the input data may be output data of the signal processing unit.
  • A first substrate having the pixel array unit, and a second substrate laminated on the first substrate and having the converter, the signal processing unit, and the brightness information detector, may be provided.
  • the first substrate and the second substrate may be bonded by any of a CoC (Chip on Chip) method, a CoW (Chip on Wafer) method, or a WoW (Wafer on Wafer) method.
  • The gain adjusting unit may adjust the gain according to an instruction from the outside and, if there is no instruction from the outside, adjust the gain based on the brightness information detected by the brightness information detector.
  • Another aspect of the present disclosure provides an electronic device including an imaging device that outputs captured image data and a processor that performs predetermined signal processing on the image data.
  • The image pickup device includes: a pixel array unit having a plurality of pixels that perform photoelectric conversion; a converter that converts an analog pixel signal output from the pixel array unit into digital pixel data; a signal processing unit that performs signal processing on the digital pixel data; and a brightness information detector that detects brightness information of light incident on the pixel array unit based on the digital pixel data.
  • the signal processing unit performs at least a part of the signal processing based on the brightness information detected by the brightness information detector.
  • An electronic device is provided in which the image data signal-processed by the signal processing unit is supplied to the processor.
  • The processor may have a first operation mode in which it sends instructions to the first signal processing unit and performs processing based on the output data of the first signal processing unit, and a second operation mode, with lower power consumption than the first operation mode, in which it neither sends instructions to the first signal processing unit nor receives the output data of the first signal processing unit.
  • the second signal processing unit or the information processing unit performs motion detection processing based on the brightness information detected by the brightness information detector regardless of the operation mode of the processor.
  • When the processor is in the second operation mode, the second signal processing unit or the information processing unit performs the motion detection process based on the brightness information detected by the brightness information detector, and when motion is detected, the processor may be returned from the second operation mode to the first operation mode.
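The wake-up behaviour above can be sketched as a small state update: while the processor is in the low-power second operation mode, motion detection runs on the OPD brightness readings, and a detection returns the processor to the first operation mode. The mode labels, the brightness-difference criterion, and the threshold value are illustrative assumptions; the publication does not specify how motion is detected.

```python
# Illustrative sketch of the wake-on-motion behaviour; mode names,
# the brightness-difference test, and the threshold are assumptions.

MOTION_THRESHOLD = 16.0  # assumed change in frame brightness treated as motion

def update_mode(mode, prev_brightness, brightness):
    """Return the (possibly changed) processor operation mode."""
    if mode == "second" and abs(brightness - prev_brightness) > MOTION_THRESHOLD:
        return "first"  # motion detected: wake the processor
    return mode
```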
  • Another aspect of the present disclosure provides an imaging method including: a step of performing photoelectric conversion in the pixel array unit and outputting an analog pixel signal;
  • a step of converting the analog pixel signal into digital pixel data, and a step of detecting brightness information of light incident on the pixel array unit based on the digital pixel data; and
  • a step of performing signal processing on the digital pixel data, at least a part of the signal processing being performed based on the brightness information.
  • FIG. 1 is a block diagram showing a schematic configuration of an electronic device 2 provided with an image pickup apparatus 1 according to the first embodiment.
  • the electronic device 2 of FIG. 1 includes an image pickup device 1 and an application processor 3.
  • The electronic device 2 in FIG. 1 is a smartphone, a mobile phone, a tablet, a PC, a digital camera, or another device having an imaging function; the specific type of device does not matter.
  • the imaging unit 4 has an optical system 12 and a pixel array unit 13.
  • the optical system 12 includes, for example, a zoom lens, a single focus lens, an aperture, and the like.
  • the optical system 12 guides the incident light to the pixel array unit 13.
  • The pixel array unit 13 has a plurality of pixels arranged two-dimensionally. Each pixel is composed of a plurality of unit pixels for a plurality of colors such as RGB.
  • Each unit pixel has a light receiving element such as a photodiode.
  • the light receiving element photoelectrically converts the incident light and outputs an analog pixel signal.
  • the light incident on the image pickup unit 4 passes through the optical system 12 and is imaged on a light receiving surface in which a plurality of light receiving elements are arranged. Each light receiving element accumulates electric charges according to the intensity of the incident light and outputs an analog pixel signal according to the amount of accumulated electric charges.
  • the control unit 5 controls each unit in the image pickup apparatus 1 according to an instruction from the application processor 3 or the like.
  • the control unit 5 may be integrated with the DSP 9 described later.
  • the DSP 9 has a function of an information processing unit that performs at least one of a predetermined recognition process and a detection process based on the data signal-processed by the signal processing unit 7.
  • the DSP 9 performs at least one of the recognition process and the detection process based on the data obtained by the signal processing unit 7 performing the signal processing according to the brightness information detected by the OPD 11.
  • the DSP 9 performs arithmetic processing using a machine-learned calculation model by, for example, executing a program stored in the memory 8.
  • Various information about the learned calculation model is stored in the memory 8 in advance; the DSP 9 reads the necessary information about the calculation model from the memory 8, inputs the output data of the signal processing unit 7 into the calculation model, and performs the arithmetic processing.
  • the specific form of the machine-learned calculation model does not matter, but for example, it is a calculation model by a deep neural network (hereinafter, DNN).
  • This calculation model can be designed based on parameters generated by learning with the output data of the ADC 6 or the output data of the signal processing unit 7 as the input and with training data in which labels are associated with that input.
  • the DNN may be composed of a multi-layer neural network.
  • the DSP 9 can perform a predetermined recognition process, for example, by an arithmetic process using the DNN.
  • the recognition process is a process of automatically recognizing whether or not the image data, which is the output data of the signal processing unit 7, includes characteristic image information. More specifically, the recognition process is a process in which input data is given to a calculation model generated by machine learning and calculated, and the input data is output data of the signal processing unit 7.
  • the DSP 9 performs a product-sum calculation of the dictionary coefficient stored in the memory 8 and the image data in the process of executing the calculation process based on the learned calculation model stored in the memory 8.
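The product-sum operation mentioned above is the basic multiply-accumulate step of a DNN layer: dictionary coefficients read from memory are multiplied element-wise with the image data and summed. A minimal sketch, with illustrative values (the actual coefficient format is not specified in the publication):

```python
# Minimal sketch of the product-sum (multiply-accumulate) step performed
# by the DSP: dictionary coefficients from memory times image data, summed.

def product_sum(coefficients, pixels):
    """Multiply-accumulate of dictionary coefficients and image data."""
    return sum(c * p for c, p in zip(coefficients, pixels))
```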
  • the calculation result by the DSP 9 is stored in the memory 8 and input to the selector 10.
  • The result of the arithmetic processing using the calculation model by the DSP 9 may be image data or various information (metadata) obtained from the image data.
  • the DSP 9 or the control unit 5 described above may have a function of a memory controller that controls writing and reading to the memory 8, or a memory controller may be provided separately from the DSP 9 and the control unit 5.
  • the DSP 9 may perform detection processing such as motion detection processing and face detection processing. The detection process may be performed by the signal processing unit 7 instead of the DSP 9. Alternatively, the signal processing unit 7 and the DSP 9 may cooperate to perform the detection process.
  • the memory 8 stores digital pixel data output from the ADC 6, a program executed by the DSP 9, various information related to the learned calculation model used by the DSP 9 for arithmetic processing, and the like. Further, the memory 8 may store the data of the calculation processing result of the DSP 9.
  • The memory 8 is a readable and writable RAM (Random Access Memory). By exchanging the information about the calculation model in the memory 8, the DSP 9 can execute various machine-learning calculation models and can perform recognition processing and detection processing with high versatility and a wide range of applications.
  • the memory 8 may be a ROM (Read Only Memory).
  • The application processor 3 is a semiconductor device separate from the image pickup device 1, and is mounted on the same base substrate as the image pickup device 1 or on a different one.
  • The application processor 3 has a CPU (Central Processing Unit) inside and executes programs such as an operating system and various application software.
  • the application processor 3 includes a signal processing unit 14 as described later, and performs various signal processing.
  • the signal processing unit 14 in the application processor 3 can perform advanced signal processing at a higher speed than the signal processing unit 7 in the image pickup apparatus 1.
  • the application processor 3 includes an OPD 15.
  • the OPD 15 generates a brightness information detection signal based on one frame of digital pixel data captured by the pixel array unit 13 and output from the ADC 6.
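The OPD described above produces a brightness information detection signal from one frame of digital pixel data. The publication does not specify the exact statistic it computes; a minimal sketch, assuming the signal is simply the mean pixel value of the frame:

```python
# A minimal sketch of what a brightness information detector (OPD) might
# compute from one frame of digital pixel data. The mean pixel value is
# an assumption; the publication does not specify the statistic.

def detect_brightness(frame):
    """frame: 2-D list of digital pixel values for one captured frame."""
    total = sum(sum(row) for row in frame)
    count = sum(len(row) for row in frame)
    return total / count
```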
  • Since the application processor 3 has processing performance capable of performing complicated processing in a shorter time than the image pickup device 1, there is a high possibility that it can generate a brightness information detection signal with higher reliability than the OPD 11 in the image pickup device 1.
  • FIG. 2 is a block diagram showing an internal configuration of a signal processing unit 7 in the image pickup apparatus 1 and a signal processing unit 14 in the application processor 3 according to the first embodiment.
  • the specific content of the signal processing performed by the signal processing unit 7 in the image pickup apparatus 1 is arbitrary.
  • the signal processing unit 7 in the image pickup apparatus 1 of FIG. 2 includes a processing A unit 7a, a gain adjusting unit 7b for automatic exposure (AE: Automatic Exposure), a processing B unit 7c, and a processing C unit 7d.
  • the AE gain adjusting unit 7b performs a process of raising the pixel value of the digital pixel data to make it brighter.
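The pixel-value-raising step performed by the AE gain adjusting unit can be sketched as a simple digital gain with clipping. The 8-bit output range is an assumption for illustration; the publication does not state the bit depth.

```python
# Sketch of the digital AE gain step: each pixel value is multiplied by
# the gain to brighten the image, then clipped to the valid range.
# The 8-bit maximum (255) is an assumption.

def apply_ae_gain(pixels, gain, max_value=255):
    """Multiply each digital pixel value by the gain and clip."""
    return [min(int(p * gain), max_value) for p in pixels]
```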
  • the signal processing unit 7 in the image pickup apparatus 1 can perform various signal processing based on the brightness information detection signal output from the OPD 11.
  • The signal processing unit 7 basically performs signal processing according to the instructions of the application processor 3. However, when the application processor 3 is stopped, it neither issues instructions nor needs image data from the imaging device 1; therefore, the signal processing unit 7 in the image pickup apparatus 1 performs signal processing based on the brightness information detection signal output from the OPD 11 in order to generate optimum data for the recognition processing and the detection processing.
  • For example, the gain of the AE gain adjusting unit 7b is adjusted so as to be further increased, and when the white balance is biased, it is adjusted by the AWB gain adjusting unit 7e.
  • The signal processing unit 14 in the application processor 3 has a processing X unit 14a, an AE gain adjusting unit 14b, an auto white balance (AWB: Auto White Balance) gain adjusting unit 14c, a processing Y unit 14d, and a processing Z unit 14e.
  • the AE gain adjusting unit 14b performs the same processing as the AE gain adjusting unit 7b in the image pickup apparatus 1.
  • the AWB gain adjusting unit 14c properly adjusts the white balance of the image data.
  • the specific contents of the signal processing performed by the processing X unit 14a, the processing Y unit 14d, and the processing Z unit 14e are arbitrary.
  • The processing X unit 14a, the processing Y unit 14d, and the processing Z unit 14e can perform the above-mentioned lens shading processing, clamping processing, horizontal crop processing, defect correction processing, and the like.
  • The signal processing unit 14 in the application processor 3 performs each signal processing based on the brightness information detection signal from the OPD 15. Since the signal processing unit 14 in the application processor 3 can perform more complicated and advanced signal processing than the signal processing unit 7 in the image pickup apparatus 1, the signal processing performed by the signal processing unit 14 in the application processor 3 has higher reliability than the signal processing performed in the image pickup apparatus 1.
  • the application processor 3 has a normal operation mode (first operation mode) and a sleep mode (second operation mode). In the normal operation mode, the application processor 3 instructs the signal processing unit 7 in the image pickup apparatus 1 about the content of the signal processing to be performed by the signal processing unit 7.
  • the signal processing unit 7 in the image pickup apparatus 1 performs signal processing according to an instruction from the application processor 3.
  • In the normal operation mode, the application processor 3 can instruct the image pickup device 1 to send digital pixel data in which some of the signal processing that the signal processing unit 7 can perform is omitted.
  • For example, the application processor 3 instructs the signal processing unit 7 in the image pickup apparatus 1 to send digital pixel data in which the processing of the AE gain adjusting unit 7b and the AWB gain adjusting unit is omitted.
  • In that case, the signal processing unit 7 in the image pickup apparatus 1 omits the processing of the AE gain adjusting unit 7b and the AWB gain adjusting unit, performs at least a part of the remaining signal processing, and sends the resulting digital pixel data to the application processor 3.
  • When the application processor 3 shifts to the sleep mode, it does not give instructions to the signal processing unit 7 in the image pickup apparatus 1 and does not receive data from the signal processing unit 7.
  • In that case, the signal processing unit 7 in the image pickup apparatus 1 adjusts the gains of the AE gain adjusting unit 7b and the AWB gain adjusting unit based on the brightness information detection signal from the OPD 11. For example, the signal processing unit 7 adjusts the AE gain so that the DSP 9 can input the optimum data into the DNN when performing recognition processing using the DNN in the image pickup apparatus 1.
  • FIG. 3 shows an example in which the signal processing unit 7 in the image pickup apparatus 1 further includes an AWB gain adjusting unit 7e in addition to the internal configuration of FIG.
  • the AWB gain adjusting unit 7e is arranged between the AE gain adjusting unit 7b and the processing B unit 7c, but it may be arranged at another location.
  • the processing of the AWB gain adjusting unit 7e may be performed before the processing of the AE gain adjusting unit 7b.
  • the AWB gain adjusting unit 7e appropriately adjusts the white balance of the image data in the same manner as the AWB gain adjusting unit 14c.
  • The signal processing unit 7 of FIG. 3 adjusts the gains of the AE gain adjusting unit 7b and the AWB gain adjusting unit 7e based on the brightness information detection signal from the OPD 11. At this time, the signal processing unit 7 adjusts the AE gain and the AWB gain so that the DSP 9 can give the DNN the optimum input data for performing the recognition process and the detection process. Since the gains of the AE gain adjusting unit 7b and the AWB gain adjusting unit 7e can be adjusted based on the brightness information detection signal from the OPD 11, not only the brightness but also the white balance of the image data captured by the pixel array unit 13 can be optimized. Here, "optimized" means that the DSP 9 is provided with the input data most suitable for performing the recognition process and the detection process.
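The publication does not specify how the AWB gains themselves are computed. As an illustration only, a common choice is the "gray world" assumption, under which the per-channel gains equalize the average R, G, and B levels; the function below is a hypothetical sketch, not the method of the publication.

```python
# Illustrative gray-world white-balance sketch; this algorithm is an
# assumption, not specified by the publication.

def gray_world_gains(r_avg, g_avg, b_avg):
    """Per-channel gains that equalise the channel averages to green."""
    return (g_avg / r_avg, 1.0, g_avg / b_avg)
```

Applying these gains to each channel removes a uniform color cast, which is the effect the AWB gain adjusting unit is described as producing.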
  • Although FIGS. 1 to 3 show examples in which the signal processing unit 7 performs at least a part of the signal processing based on the brightness information detection signal from the OPD 11, the signal processing unit 7 or the control unit 5 may instead instruct, based on the brightness information detection signal from the OPD 11, a gain adjustment of the analog pixel signal before the ADC 6 performs A/D conversion.
  • In this case, an instruction signal is sent to the imaging unit 4 to adjust the gain of an amplifier (not shown), so that the gain of the analog pixel signal is adjusted based on the brightness information detection signal from the OPD 11.
  • This gain adjustment may be performed by the imaging device 1 itself while the application processor 3 is in the sleep mode, and in accordance with an instruction from the application processor 3 in the normal operation mode.
  • FIG. 4 is a diagram showing an example of the chip structure of the image pickup apparatus 1 of FIG.
  • the image pickup apparatus 1 of FIG. 4 is a laminated body in which a first substrate 31 and a second substrate 32 are laminated.
  • the first substrate 31 and the second substrate 32 are sometimes called dies.
  • the first substrate 31 and the second substrate 32 are rectangular, but the specific shapes and sizes of the first substrate 31 and the second substrate 32 are arbitrary. Further, the first substrate 31 and the second substrate 32 may have the same size or may be different sizes from each other.
  • the pixel array unit 13 shown in FIG. 1 is arranged on the first substrate 31. Further, at least a part of the optical system 12 of the imaging unit 4 may be mounted on the first substrate 31 on-chip.
  • control unit 5 ADC 6, signal processing unit 7, memory 8, DSP 9, selector 10, and OPD 11 shown in FIG. 1 are arranged on the second substrate 32.
  • an input / output interface unit (not shown), a power supply circuit, or the like may be arranged on the second substrate 32.
  • The first substrate 31 and the second substrate 32 may each be cut out from a wafer, separated into individual pieces, and then laminated on top of each other. Alternatively, the so-called CoW (Chip on Wafer) method may be adopted, in which one of the first substrate 31 and the second substrate 32 (for example, the first substrate 31) is cut out from the wafer and individualized, and the individualized first substrate 31 is then bonded to the second substrate 32 before its individualization.
  • A so-called WoW (Wafer on Wafer) method, in which the first substrate 31 and the second substrate 32 are bonded together in the wafer state, may also be adopted.
  • plasma bonding or the like can be used as the bonding method between the first substrate 31 and the second substrate 32.
  • various other joining methods may be used.
  • FIGS. 5A and 5B are diagrams showing an example of the layout of the first substrate 31 and the second substrate 32.
  • FIG. 5A shows a layout example of the first substrate 31 on which the pixel array unit 13 is arranged.
  • the pixel array unit 13 is arranged on one side L101 side of the four sides L101 to L104 of the first substrate 31.
  • the pixel array unit 13 is arranged so that its central portion O101 is closer to the side L101 than the central portion O100 of the first substrate 31.
  • the side L101 may be, for example, the shorter side of the first substrate 31.
  • The arrangement is not limited to this, and the pixel array unit 13 may be arranged along the longer side.
  • In order to connect each unit pixel 101a in the pixel array unit 13 to the circuits placed on the second substrate 32, a TSV array 102 is provided in which a plurality of through wires (Through Silicon Via: hereinafter, TSV) penetrating the first substrate 31 are arranged.
  • the pad array 103 may include pads (also referred to as signal pins) for interfaces such as MIPI (Mobile Industry Processor Interface) and SPI (Serial Peripheral Interface).
  • the pad array 103 may include pads (also referred to as signal pins) for input / output of clocks and data.
  • Each pad is electrically connected to, for example, an external power supply circuit or interface circuit via a wire. It is preferable that the pad array 103 and the TSV array 102 are sufficiently separated from each other so that the influence of the reflection of the signal from the wire connected to each pad in the pad array 103 can be ignored.
  • FIG. 5B shows a layout example of the second board 32 in which the control unit 5, the ADC 6, the signal processing unit 7, the memory 8, the DSP 9, and the selector 10 are arranged.
  • the ADC 6, the control unit 5, the signal processing unit 7, the DSP 9, and the memory 8 are arranged on the second substrate 32.
  • The ADC 6 is divided into two regions, an ADC section 6a and a DAC (Digital to Analog Converter) section 6b.
  • the DAC 6b is a circuit that supplies a reference voltage for AD conversion to the ADC unit 6a, and is included in a part of the ADC 6 in a broad sense.
  • the selector 10 and the OPD 11 are also arranged on the second substrate 32.
  • The vicinity of the wiring 122 connected to the TSV array 102 is treated as the upstream side, and the ADC unit 6a, the signal processing unit 7, and the DSP 9 are arranged in this order from upstream along the flow of the signal read from the pixel array unit 13. That is, the ADC unit 6a, to which the pixel signal read from the pixel array unit 13 is first input, is arranged near the wiring 122 on the most upstream side, the signal processing unit 7 is arranged next, and the DSP 9 is arranged in the region farthest from the wiring 122.
  • With this layout, the wiring connecting the respective parts can be shortened. As a result, it is possible to reduce signal delay, reduce signal propagation loss, improve the S/N ratio, and reduce power consumption.
  • control unit 5 is arranged in the vicinity of the wiring 122 on the upstream side, for example.
  • the control unit 5 is arranged between the ADC 6 and the signal processing unit 7.
  • The signal pins and power supply pins for the analog circuits are arranged together in the vicinity of the analog circuits (for example, the lower side in FIG. 5B), the remaining signal pins and power supply pins for the digital circuits are arranged together in the vicinity of the digital circuits (for example, the upper side in FIG. 5B), and the power supply pins for the analog circuits and those for the digital circuits can thereby be arranged sufficiently far apart.
  • The DSP 9 is arranged on the side opposite to the ADC section 6a, that is, on the most downstream side.
  • The DSP 9 can thereby be arranged in a region that does not overlap with the pixel array unit 13 in the stacking direction of the first substrate 31 and the second substrate 32 (hereinafter simply referred to as the vertical direction).
  • the memory 8 is arranged in the vicinity of the DSP 9 and the signal processing unit 7.
  • Various information about the learned calculation model is stored in the memory 8, and the DSP 9 reads the information about the calculation model from the memory 8 and performs arithmetic processing using the calculation model, and the result of the arithmetic processing is stored in the memory 8.
  • the DSP 9 can access the memory 8 at high speed.
  • FIG. 6 is a flowchart showing a processing procedure performed by the image pickup apparatus 1 according to the first embodiment.
  • the pixel array unit 13 performs photoelectric conversion and outputs an analog pixel signal (step S1).
  • the analog pixel signal is converted into digital pixel data (step S2).
  • the brightness information incident on the pixel array unit 13 is detected based on the digital pixel data (step S3).
  • signal processing is performed on the digital pixel data, and at least a part of the signal processing is performed based on the above-mentioned brightness information (step S4).
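Steps S1 to S4 of FIG. 6 can be sketched end to end as follows. The photoelectric conversion of S1 is stubbed as a list of analog values, the quantization parameters in S2 and the mean-brightness statistic in S3 are assumptions, and all function names are illustrative.

```python
# End-to-end sketch of steps S1-S4 of FIG. 6. The 8-bit quantisation,
# the mean-brightness statistic and the target level are assumptions.

def ad_convert(analog_signals, full_scale=1.0, bits=8):
    """S2: quantise analog pixel signals (0..full_scale) to digital data."""
    levels = (1 << bits) - 1
    return [round(s / full_scale * levels) for s in analog_signals]

def detect_brightness(pixels):
    """S3: brightness information detected from the digital pixel data."""
    return sum(pixels) / len(pixels)

def signal_process(pixels, brightness, target=128.0):
    """S4: at least part of the signal processing uses the brightness."""
    gain = target / max(brightness, 1.0)
    return [min(int(p * gain), 255) for p in pixels]

analog = [0.1, 0.2, 0.3]                         # S1: analog pixel signals (stub)
digital = ad_convert(analog)                     # S2
brightness = detect_brightness(digital)          # S3
processed = signal_process(digital, brightness)  # S4
```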
  • As described above, in the first embodiment, the OPD 11 provided in the image pickup apparatus 1 generates the brightness information detection signal, and the signal processing unit 7 in the image pickup apparatus 1 performs signal processing based on that signal. Therefore, even when the application processor 3 gives no instruction to the signal processing unit 7, the signal processing unit 7 can perform the signal processing that is optimal for the image pickup apparatus 1. More specifically, when information processing such as recognition processing or detection processing is performed inside the image pickup apparatus 1 based on the output data of the signal processing unit 7, the signal processing unit 7 can perform signal processing so as to obtain the optimum input data for that information processing. As a result, an intelligent image pickup apparatus 1 is obtained that not only performs imaging but can also perform highly reliable information processing such as recognition processing and detection processing inside the image pickup apparatus 1.
  • FIG. 7 is a block diagram showing a schematic configuration of the electronic device 2 provided with the image pickup apparatus 1 according to the second embodiment.
  • the image pickup device 1 of FIG. 7 differs from the image pickup device 1 of FIG. 1 in that it includes a first signal processing unit 71 and a second signal processing unit 72 instead of the signal processing unit 7.
  • the digital pixel data from the ADC 6 may be input to the first signal processing unit 71, and the data resulting from completion of the signal processing by the first signal processing unit 71 may be input to the second signal processing unit 72.
  • the second signal processing unit 72 additionally performs its own signal processing after the signal processing of the first signal processing unit 71 is completed.
  • the common signal processing may include white balance adjustment processing of the image captured by the pixel array unit 13.
  • the second signal processing unit 72 may perform white balance adjustment processing based on the brightness information detected by the OPD 11.
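  • As one concrete realization of such a white balance adjustment, a gray-world scheme derives per-channel gains from channel statistics. The gray-world heuristic and the function names here are illustrative assumptions, not the method specified by the disclosure:

```python
def white_balance_gains(mean_r, mean_g, mean_b):
    """Gray-world gains: scale R and B so their means match the G mean."""
    return (mean_g / mean_r, 1.0, mean_g / mean_b)

def apply_white_balance(pixel, gains, max_val=255.0):
    """Apply the per-channel gains to one (R, G, B) pixel, clipping at max_val."""
    return tuple(min(c * g, max_val) for c, g in zip(pixel, gains))
```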
  • FIG. 8 is a block diagram showing the internal configuration of the first signal processing unit 71 and the second signal processing unit 72 in the image pickup apparatus 1 according to the second embodiment, and the internal configuration of the signal processing unit 14 in the application processor 3.
  • the internal configuration of the first signal processing unit 71 is the same as that of the signal processing unit 7 in the image pickup apparatus 1 of FIG. 4; for example, it has a processing A unit 7a, an AE gain adjusting unit 7b, a processing B unit 7c, and a processing C unit 7d.
  • while the OPD 11 was connected to the signal processing unit 7 in the image pickup apparatus 1 of FIG. 1, here the OPD 11 is connected not to the first signal processing unit 71 but to the second signal processing unit 72.
  • the first signal processing unit 71 performs various signal processing according to the instructions of the application processor 3, and sends data indicating the final signal processing result to the application processor 3. In this way, the first signal processing unit 71 performs various signal processing in order to generate data suitable for the advanced signal processing performed in the application processor 3.
  • the second signal processing unit 72 performs signal processing at the discretion of the image pickup apparatus 1 itself.
  • the image pickup apparatus 1 according to the present embodiment causes the DSP 9 to perform information processing such as recognition processing and detection processing using the image data captured by the pixel array unit 13. Therefore, the second signal processing unit 72 performs signal processing for generating image data optimal for such information processing.
  • the second signal processing unit 72 includes a processing D unit 7f, an AE gain adjusting unit 7g, an AWB gain adjusting unit 7h, a processing E unit 7i, and a processing F unit 7j.
  • the specific contents of signal processing of the processing D unit 7f, the processing E unit 7i, and the processing F unit 7j are arbitrary.
  • the processing D unit 7f, the processing E unit 7i, and the processing F unit 7j perform, for example, lens shading processing (Lens Shading), demosaic processing, linear matrix / gamma / hue gain processing (Linear Matrix, Gamma, Hue Gain), and dewarp processing.
  • the lens shading process is a process that raises the pixel values of the peripheral part of the image.
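  • A minimal sketch of such a peripheral-raising correction, assuming a gain that grows with squared distance from the image center; the quadratic model, the strength parameter, and the 10-bit clip are illustrative assumptions:

```python
def shading_gain(x, y, width, height, strength=0.5):
    """Gain of 1.0 at the image center, rising to 1.0 + strength at the corners."""
    cx, cy = (width - 1) / 2, (height - 1) / 2
    max_r2 = cx * cx + cy * cy
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    return 1.0 + strength * (r2 / max_r2)

def lens_shading_correct(img, max_val=1023):
    """Raise peripheral pixel values; img is a list of rows of pixel values."""
    h, w = len(img), len(img[0])
    return [[min(round(img[y][x] * shading_gain(x, y, w, h)), max_val)
             for x in range(w)] for y in range(h)]
```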
  • the demosaic process is a process of generating pixel data of three colors (RGB) from a four-color arrangement.
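  • As a toy illustration of generating RGB data from such a mosaic, the sketch below assumes an RGGB-style 2 × 2 cell and a half-resolution output; the disclosure does not specify the color filter layout or the interpolation method:

```python
def demosaic_half_res(raw):
    """Collapse each 2x2 cell (R at top-left, G at top-right and bottom-left,
    B at bottom-right) into one RGB pixel."""
    out = []
    for y in range(0, len(raw), 2):
        row = []
        for x in range(0, len(raw[0]), 2):
            r = raw[y][x]
            g = (raw[y][x + 1] + raw[y + 1][x]) / 2  # average the two G sites
            b = raw[y + 1][x + 1]
            row.append((r, g, b))
        out.append(row)
    return out
```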
  • the linear matrix / gamma / hue gain process performs processing such as image linearization, gamma correction, and color adjustment.
  • the dewarp process corrects an image distorted by a fisheye lens or a wide-angle lens into as flat an image as possible.
  • the dewarp process is a process of correcting using an adjustment table provided for each lens.
  • the gain / YC matrix / normalization process is a gain adjustment, color adjustment, and normalization process for inputting to the DNN.
  • the digital pixel data from the ADC 6 has pixel values of 8 to 12 bits, whereas the DNN expects input values of 0 to 1, so bit shift and normalization processing are required.
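  • The required bit shift and normalization can be sketched as follows; the shift-then-scale order and the parameter names are assumptions for illustration:

```python
def normalize_for_dnn(digital_pixels, input_bits=12, common_bits=8):
    """Bit-shift pixel data of input_bits width down to common_bits,
    then scale into the 0-1 range expected by the DNN."""
    shift = input_bits - common_bits
    max_val = (1 << common_bits) - 1
    return [(p >> shift) / max_val for p in digital_pixels]
```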
  • when the second signal processing unit 72 performs signal processing in parallel with the first signal processing unit 71, the second signal processing unit 72 needs to perform signal processing equivalent to the signal processing performed by the first signal processing unit 71.
  • the signal processing already performed by the first signal processing unit 71 does not need to be performed again in the second signal processing unit 72.
  • however, the second signal processing unit 72 may perform such signal processing again if necessary.
  • the second signal processing unit 72 basically performs signal processing different from that of the first signal processing unit 71, but in some cases it may redo the signal processing performed by the first signal processing unit 71 with changed set values.
  • the OPD 11 is connected to the second signal processing unit 72.
  • the second signal processing unit 72 performs various signal processing based on the brightness information detection signal output from the OPD 11. For example, when the image data captured by the pixel array unit 13 is dark as a whole, the second signal processing unit 72 performs signal processing such as brightness adjustment so that optimal data is input to the DNN that performs recognition processing and detection processing.
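  • One possible shape for that brightness adjustment is a clamped gain derived from the OPD level. The 18%-gray target and the gain ceiling below are illustrative choices, not values from the disclosure:

```python
def ae_gain_from_opd(opd_level, target_level=0.18, max_gain=8.0):
    """Set value for the AE gain adjusting unit: brighten a dark frame so its
    mean level approaches the target, but never beyond max_gain."""
    if opd_level <= 0.0:
        return max_gain
    return min(target_level / opd_level, max_gain)
```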
  • the application processor 3 may shift to the sleep mode in order to reduce power consumption.
  • when the application processor 3 shifts to the sleep mode, it cannot give instructions to the first signal processing unit 71, and the first signal processing unit 71 may therefore be unable to perform effective signal processing.
  • since the second signal processing unit 72 performs signal processing regardless of instructions from the application processor 3, effective signal processing can be continued even when the application processor 3 shifts to the sleep mode.
  • FIG. 10 is a block diagram of the image pickup apparatus 1 in the case where the control unit 5 or an arithmetic unit such as a CPU controls the timing at which the OPD 11 generates the brightness information detection signal, reads the brightness information detection signal generated by the OPD 11, calculates, based on the read signal, the set values to be set in each processing unit in the second signal processing unit 72, and transmits the calculated set values to each processing unit in the second signal processing unit 72.
  • when the control unit 5 in the image pickup apparatus 1 or an arithmetic unit such as a CPU performs the control of the OPD 11 and the calculation processing based on the brightness information detection signal output from the OPD 11, the processing load of the second signal processing unit 72 can be reduced, and the time required for each processing unit in the second signal processing unit 72 to perform signal processing can be shortened.
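  • As a sketch of that division of labor, one side computes set values from a single OPD reading and the other side merely stores them; the unit names and the set-value formulas are assumed purely for illustration:

```python
def compute_set_values(opd_signal, target=0.18):
    """Control-unit side: turn one OPD brightness reading into set values
    for the processing units of the second signal processing unit."""
    level = max(opd_signal, 1e-6)
    return {
        "ae_gain": min(target / level, 8.0),       # brightness set value
        "gamma": 1.0 if level >= target else 0.8,  # lift shadows when dark
    }

def apply_set_values(units, set_values):
    """Second-signal-processing-unit side: each unit just stores its set value,
    so no calculation time is spent inside the signal processing path."""
    for name, unit in units.items():
        unit["set_value"] = set_values[name]
    return units
```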
  • the technology according to the present disclosure can be applied to various products.
  • the technology according to the present disclosure may be realized as a device mounted on any kind of moving body such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
  • FIG. 11 is a block diagram showing a schematic configuration example of a vehicle control system 12000, which is an example of a mobile control system to which the technique according to the present disclosure can be applied.
  • the vehicle control system 12000 includes a plurality of electronic control units connected via the communication network 12001.
  • the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an outside information detection unit 12030, an in-vehicle information detection unit 12040, and an integrated control unit 12050.
  • a microcomputer 12051, an audio image output unit 12052, and an in-vehicle network I / F (Interface) 12053 are provided as the functional configuration of the integrated control unit 12050.
  • the drive system control unit 12010 controls the operation of the device related to the drive system of the vehicle according to various programs.
  • the drive system control unit 12010 functions as a control device for a driving force generator for generating the driving force of the vehicle, such as an internal combustion engine or a driving motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like.
  • the body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs.
  • the body system control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as headlamps, back lamps, brake lamps, blinkers or fog lamps.
  • radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body system control unit 12020.
  • the body system control unit 12020 receives the input of these radio waves or signals to control the vehicle door lock device, power window device, lamp, and the like.
  • the vehicle outside information detection unit 12030 detects information outside the vehicle equipped with the vehicle control system 12000.
  • the image pickup unit 12031 is connected to the vehicle exterior information detection unit 12030.
  • the vehicle outside information detection unit 12030 causes the image pickup unit 12031 to capture an image of the outside of the vehicle and receives the captured image.
  • the vehicle exterior information detection unit 12030 may perform, based on the received image, object detection processing or distance detection processing for persons, vehicles, obstacles, signs, characters on the road surface, and the like.
  • the imaging unit 12031 is an optical sensor that receives light and outputs an electric signal according to the amount of the light received.
  • the image pickup unit 12031 can output an electric signal as an image or can output it as distance measurement information. Further, the light received by the imaging unit 12031 may be visible light or invisible light such as infrared light.
  • the in-vehicle information detection unit 12040 detects the in-vehicle information.
  • a driver state detection unit 12041 that detects the driver's state is connected to the in-vehicle information detection unit 12040.
  • the driver state detection unit 12041 includes, for example, a camera that images the driver, and based on the detection information input from the driver state detection unit 12041, the in-vehicle information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is dozing.
  • the microcomputer 12051 calculates the control target value of the driving force generator, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, and can output a control command to the drive system control unit 12010.
  • the microcomputer 12051 can perform cooperative control for the purpose of realizing ADAS (Advanced Driver Assistance System) functions including vehicle collision avoidance or impact mitigation, follow-up driving based on inter-vehicle distance, vehicle speed maintenance driving, vehicle collision warning, vehicle lane deviation warning, and the like.
  • by controlling the driving force generator, the steering mechanism, the braking device, and the like based on the information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, the microcomputer 12051 can perform cooperative control for the purpose of automatic driving in which the vehicle travels autonomously without depending on the driver's operation.
  • FIG. 12 is a diagram showing an example of the installation position of the imaging unit 12031.
  • the imaging unit 12031 includes imaging units 12101, 12102, 12103, 12104, and 12105.
  • the imaging units 12101, 12102, 12103, 12104, 12105 are provided at positions such as, for example, the front nose, side mirrors, rear bumpers, back doors, and the upper part of the windshield in the vehicle interior of the vehicle 12100.
  • the image pickup unit 12101 provided on the front nose and the image pickup section 12105 provided on the upper part of the windshield in the vehicle interior mainly acquire an image in front of the vehicle 12100.
  • the imaging units 12102 and 12103 provided in the side mirrors mainly acquire images of the side of the vehicle 12100.
  • the imaging unit 12104 provided on the rear bumper or the back door mainly acquires an image of the rear of the vehicle 12100.
  • the imaging unit 12105 provided on the upper part of the windshield in the vehicle interior is mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
  • FIG. 12 shows an example of the imaging range of the imaging units 12101 to 12104 with a dashed line.
  • the imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose.
  • the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively.
  • the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door.
  • for example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 as viewed from above can be obtained.
  • At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
  • at least one of the image pickup units 12101 to 12104 may be a stereo camera composed of a plurality of image pickup elements, or may be an image pickup element having pixels for phase difference detection.
  • based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 obtains the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (relative velocity with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, the nearest three-dimensional object that is on the traveling path of the vehicle 12100 and is traveling at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100.
  • the microcomputer 12051 can set in advance the inter-vehicle distance to be secured in front of the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, it is possible to perform cooperative control for the purpose of automatic driving or the like in which the vehicle travels autonomously without depending on the driver's operation.
  • based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data related to three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, and other three-dimensional objects such as utility poles, extract them, and use them for automatic avoidance of obstacles.
  • the microcomputer 12051 identifies obstacles around the vehicle 12100 as obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult to see. The microcomputer 12051 then determines the collision risk indicating the degree of danger of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can provide driving support for collision avoidance by outputting an alarm to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
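  • The disclosure does not give a formula for the collision risk; a common proxy is time-to-collision compared against a threshold, sketched here purely for illustration (the function name, threshold, and 0-1 risk scale are assumptions):

```python
def collision_risk(distance_m, closing_speed_mps, threshold_s=2.0):
    """Return (risk in 0-1, collision_possible). closing_speed is positive
    when the gap to the obstacle is shrinking."""
    if closing_speed_mps <= 0.0:               # not on a closing course
        return 0.0, False
    ttc = distance_m / closing_speed_mps       # time to collision in seconds
    return min(threshold_s / ttc, 1.0), ttc <= threshold_s
```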
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
  • the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian is present in the captured image of the imaging units 12101 to 12104.
  • pedestrian recognition is performed by, for example, a procedure for extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure for performing pattern matching processing on a series of feature points indicating the outline of an object to determine whether or not it is a pedestrian.
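  • A deliberately simplified sketch of those two procedures; the gradient-based feature extraction and the tolerance-based matching are illustrative stand-ins for algorithms the disclosure leaves unspecified:

```python
def extract_feature_points(img, threshold):
    """Feature points: pixels whose value changes by at least `threshold`
    relative to the left neighbor (a crude edge detector)."""
    points = []
    for y, row in enumerate(img):
        for x in range(1, len(row)):
            if abs(row[x] - row[x - 1]) >= threshold:
                points.append((x, y))
    return points

def matches_template(points, template, tolerance=1):
    """Pattern match: every template contour point must have a feature point
    within `tolerance` (Chebyshev distance)."""
    def near(a, b):
        return max(abs(a[0] - b[0]), abs(a[1] - b[1])) <= tolerance
    return all(any(near(t, p) for p in points) for t in template)
```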
  • when the microcomputer 12051 determines that a pedestrian is present in the captured images of the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so as to superimpose and display a square contour line for emphasizing the recognized pedestrian.
  • the audio image output unit 12052 may also control the display unit 12062 so as to display an icon or the like indicating a pedestrian at a desired position.
  • the technology according to the present disclosure (the present technology) can be applied to various products.
  • the techniques according to the present disclosure may be applied to endoscopic surgery systems.
  • FIG. 13 illustrates how the surgeon (doctor) 11131 is performing surgery on patient 11132 on patient bed 11133 using the endoscopic surgery system 11000.
  • the endoscopic surgery system 11000 includes an endoscope 11100, other surgical tools 11110 such as a pneumoperitoneum tube 11111 and an energy treatment tool 11112, a support arm device 11120 that supports the endoscope 11100, and a cart 11200 equipped with various devices for endoscopic surgery.
  • An optical system and an image sensor are provided inside the camera head 11102, and the reflected light (observation light) from the observation target is focused on the image sensor by the optical system.
  • the observation light is photoelectrically converted by the image pickup device, and an electric signal corresponding to the observation light, that is, an image signal corresponding to the observation image is generated.
  • the image signal is transmitted as RAW data to the camera control unit (CCU: Camera Control Unit) 11201.
  • the light source device 11203 may be configured to be able to supply light in a predetermined wavelength band corresponding to special light observation.
  • in special light observation, for example, so-called narrow band imaging is performed, in which light in a band narrower than the irradiation light in normal observation (that is, white light) is emitted by utilizing the wavelength dependence of light absorption in body tissue, and predetermined tissue such as blood vessels in the surface layer of the mucous membrane is imaged with high contrast.
  • fluorescence observation may be performed in which an image is obtained by fluorescence generated by irradiating with excitation light.
  • the camera head 11102 includes a lens unit 11401, an imaging unit 11402, a driving unit 11403, a communication unit 11404, and a camera head control unit 11405.
  • the CCU 11201 has a communication unit 11411, an image processing unit 11412, and a control unit 11413.
  • the camera head 11102 and the CCU 11201 are communicably connected to each other by a transmission cable 11400.
  • the drive unit 11403 is composed of an actuator, and the zoom lens and focus lens of the lens unit 11401 are moved by a predetermined distance along the optical axis under the control of the camera head control unit 11405. As a result, the magnification and focus of the image captured by the imaging unit 11402 can be adjusted as appropriate.
  • the communication unit 11404 is composed of a communication device for transmitting and receiving various information to and from the CCU11201.
  • the communication unit 11404 transmits the image signal obtained from the image pickup unit 11402 as RAW data to the CCU 11201 via the transmission cable 11400.
  • the image processing unit 11412 performs various image processing on the image signal which is the RAW data transmitted from the camera head 11102.
  • the control unit 11413 performs various controls related to the imaging of the surgical site and the like by the endoscope 11100 and the display of the captured image obtained by the imaging of the surgical site and the like. For example, the control unit 11413 generates a control signal for controlling the drive of the camera head 11102.
  • the above is an example of an endoscopic surgery system to which the technology according to the present disclosure can be applied.
  • the technique according to the present disclosure can be applied to, for example, the image pickup unit 11402 of the camera head 11102 among the configurations described above.
  • by applying the technique according to the present disclosure to the camera head 11102, the camera head 11102 and the like can be miniaturized, so that the endoscopic surgery system 11000 can be made compact.
  • by applying the technique according to the present disclosure to the camera head 11102 or the like, a clear image with reduced noise can be acquired, so that a photographed image that is easier to view can be provided to the operator. As a result, the fatigue of the operator can be reduced.
  • the technology according to the present disclosure can be applied to various products. For example, the technique according to the present disclosure may be applied to a pathological diagnosis system in which a doctor or the like observes cells or tissues collected from a patient to diagnose a lesion, or to a support system therefor (hereinafter referred to as a diagnosis support system).
  • This diagnostic support system may be a WSI (Whole Slide Imaging) system that diagnoses or supports lesions based on images acquired by using digital pathology technology.
  • FIG. 15 is a diagram showing an example of a schematic configuration of a diagnostic support system 5500 to which the technique according to the present disclosure is applied.
  • the diagnostic support system 5500 includes one or more pathological systems 5510, and may further include a medical information system 5530 and a derivation device 5540.
  • the display control device 5513 sends a viewing request for the pathological image received from the user to the server 5512. Then, the display control device 5513 displays the pathological image received from the server 5512 on the display device 5514, which uses liquid crystal, EL (Electro-Luminescence), CRT (Cathode Ray Tube), or the like.
  • the display device 5514 may be compatible with 4K or 8K, and is not limited to one, and may be a plurality of display devices.
  • various stains may be applied to the staining of thin sections, such as general stains showing the morphology of the tissue, for example HE (Hematoxylin-Eosin) staining, and immunostaining showing the immune status of the tissue, for example IHC (Immunohistochemistry) staining. At that time, one thin section may be stained with a plurality of different reagents, or two or more thin sections continuously cut out from the same block piece (also referred to as adjacent thin sections) may be stained with reagents different from each other.
  • the server 5512 can generate a tile image of a smaller size by further dividing the tile image. The generation of such a tile image may be repeated until a tile image having a size set as the minimum unit is generated.
  • the server 5512 executes the tile composition process of generating one tile image by synthesizing a predetermined number of adjacent tile images for all the tile images. This tile composition process can be repeated until one tile image is finally generated.
  • a tile image group having a pyramid structure in which each layer is composed of one or more tile images is generated.
  • the tile image of one layer and the tile image of a layer different from this layer have the same number of pixels, but their resolutions are different. For example, when a total of four 2 × 2 tile images are combined to generate one tile image in the upper layer, the resolution of the upper-layer tile image is 1/2 that of the lower-layer tile images used for the composition.
  • by constructing a tile image group having such a pyramid structure, the level of detail of the observation object displayed on the display device can be switched depending on the hierarchy to which the tile image to be displayed belongs. For example, when the tile images of the lowest layer are used, a narrow area of the observation object is displayed in detail, and as tile images of higher layers are used, a wider area of the observation object is displayed more coarsely.
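  • The 2 × 2 composition and its halving of per-axis resolution can be sketched as follows; the averaging filter and the square power-of-two tile are simplifying assumptions for illustration:

```python
def compose_upper_tile(tile):
    """Average each 2x2 block: the upper-layer tile covers the same area with
    half the resolution on each axis."""
    return [[(tile[y][x] + tile[y][x + 1] + tile[y + 1][x] + tile[y + 1][x + 1]) / 4
             for x in range(0, len(tile[0]), 2)]
            for y in range(0, len(tile), 2)]

def build_pyramid(tile, min_size=1):
    """Repeat the composition until the minimum-unit tile size is reached."""
    levels = [tile]
    while len(levels[-1]) > min_size:
        levels.append(compose_upper_tile(levels[-1]))
    return levels
```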
  • the server 5512 may store the tile image group having a pyramid structure in a storage device other than the server 5512, for example, a cloud server. Further, a part or all of the tile image generation process as described above may be executed by a cloud server or the like.
  • the medical information system 5530 is a so-called electronic medical record system, and stores information related to diagnosis such as patient identification information, patient disease information, test information and image information used for diagnosis, diagnosis result, and prescription drug.
  • a pathological image obtained by imaging an observation object of a patient can be displayed on the display device 5514 by the display control device 5513 after being temporarily stored via the server 5512.
  • a pathologist using the pathological system 5510 makes a pathological diagnosis based on a pathological image displayed on the display device 5514.
  • the results of the pathological diagnosis made by the pathologist are stored in the medical information system 5530.
  • the derivation device 5540 can perform analysis on the pathological image. A learning model created by machine learning can be used for this analysis.
  • the derivation device 5540 may derive a classification result of a specific region, an organization identification result, or the like as the analysis result. Further, the derivation device 5540 may derive identification results such as cell information, number, position, and luminance information, and scoring information for them. These information derived by the derivation device 5540 may be displayed on the display device 5514 of the pathology system 5510 as diagnostic support information.
  • the technique according to the present disclosure can be suitably applied to, for example, the microscope 5511 among the configurations described above.
  • the technique according to the present disclosure can be applied to the low-resolution imaging unit and / or the high-resolution imaging unit in the microscope 5511.
  • by applying the technique according to the present disclosure to the low-resolution imaging unit and / or the high-resolution imaging unit, these imaging units can be miniaturized, and in turn the microscope 5511 can be miniaturized.
  • as a result, the microscope 5511 can be easily transported, so that system introduction, system reconfiguration, and the like can be facilitated.
  • a moving image may be generated from a still image of an observation object acquired by using a microscope.
  • a moving image may be generated from a still image continuously captured for a predetermined period, or an image sequence may be generated from a still image captured at a predetermined interval.
  • movements of observation objects such as the pulsation, elongation, and migration of cancer cells, nerve cells, myocardial tissue, and sperm, as well as the division processes of cultured cells and fertilized eggs, can be observed, and the dynamic characteristics of the observation objects can be analyzed using machine learning.
  • the present technology can have the following configurations.
  • (1) An imaging device comprising: a pixel array unit having a plurality of pixels that perform photoelectric conversion; a converter that converts an analog pixel signal output from the pixel array unit into digital pixel data; a signal processing unit that performs signal processing on the digital pixel data; and a brightness information detector that detects brightness information incident on the pixel array unit based on the digital pixel data, wherein the signal processing unit performs at least a part of the signal processing based on the brightness information detected by the brightness information detector.
  • (2) The imaging device according to (1) above, further comprising an information processing unit that performs at least one of a predetermined recognition process and a detection process based on the data signal-processed by the signal processing unit.
  • (3) The imaging device according to (2) above, wherein the signal processing unit performs the signal processing according to an instruction from the outside and, if there is no instruction from the outside, performs the signal processing according to the brightness information detected by the brightness information detector, and the information processing unit performs at least one of the recognition process and the detection process based on the data obtained by the signal processing unit performing the signal processing according to the brightness information detected by the brightness information detector.
  • (4) The imaging device according to (1), wherein the signal processing unit has a first signal processing unit that performs first signal processing on the digital pixel data and a second signal processing unit that performs second signal processing on the digital pixel data or on data subjected to at least a part of the first signal processing, the second signal processing unit performs at least a part of the second signal processing based on the brightness information detected by the brightness information detector, and the imaging device further comprises an information processing unit that performs at least one of a predetermined recognition process and a detection process based on the output data of the second signal processing unit.
  • (5) The imaging device according to (4), wherein the first signal processing and the second signal processing include common signal processing, and the second signal processing unit performs at least a part of the common signal processing based on the brightness information detected by the brightness information detector.
  • (6) The imaging device according to (5), wherein the common signal processing includes brightness adjustment processing of an image captured by the pixel array unit, and the second signal processing unit performs the brightness adjustment processing based on the brightness information detected by the brightness information detector.
  • the common signal processing includes a white balance adjustment process of an image captured by the pixel array unit.
  • the imaging device according to (5) or (6) above, wherein the second signal processing unit performs the white balance adjustment process based on the brightness information detected by the brightness information detector.
  • the recognition process includes a process of giving input data to a calculation model generated by machine learning and performing a calculation.
  • the imaging device according to any one of (2) to (7) above, wherein the input data is output data of the signal processing unit.
  • the second signal processing unit or the information processing unit performs motion detection processing based on the brightness information detected by the brightness information detector, any one of (4) to (8).
  • the imaging apparatus according to.
  • the signal processing unit includes an arithmetic unit that calculates a set value for performing the signal processing based on the brightness information detected by the brightness information detector.
  • the imaging device according to any one item. (11) A first substrate having the pixel array portion and The imaging image according to any one of (1) to (10), comprising the converter, the signal processing unit, and the second substrate having the brightness information detector, which are laminated on the first substrate. apparatus. (12) The first substrate and the second substrate are bonded to each other by any of a CoC (Chip on Chip) method, a CoW (Chip on Wafer) method, or a WoW (Wafer on Wafer) method (11). The imaging apparatus according to.
  • Imaging device (14) The gain adjusting unit adjusts the gain in response to an external instruction, and if there is no external instruction, the gain adjustment is based on the brightness information detected by the brightness information detector. (13).
  • An imaging device that outputs captured image data and A processor that performs predetermined signal processing on the image data is provided.
  • the image pickup device A pixel array unit having a plurality of pixels that perform photoelectric conversion, A converter that converts an analog pixel signal output from the pixel array unit into digital pixel data, A signal processing unit that performs signal processing on the digital pixel data, and A brightness information detector that detects brightness information incident on the pixel array unit based on the digital pixel data is provided.
  • the signal processing unit performs at least a part of the signal processing based on the brightness information detected by the brightness information detector.
  • An electronic device in which the image data processed by the signal processing unit is supplied to the processor.
  • the signal processing unit A first signal processing unit that performs a first signal processing on the digital pixel data, It has a second signal processing unit that performs a second signal processing on the digital pixel data or data that has performed at least a part of the first signal processing.
  • the second signal processing unit performs at least a part of the second signal processing based on the brightness information detected by the brightness information detector.
  • the electronic device further comprising an information processing unit that performs at least one of a predetermined recognition process and a detection process based on the output data of the second signal processing unit.
  • the processor sends an instruction to the first signal processing unit to perform processing based on the output data of the first signal processing unit, and has lower power consumption than the first operation mode.
  • the second signal processing unit or the information processing unit performs motion detection processing based on the brightness information detected by the brightness information detector regardless of the operation mode of the processor.
  • the second signal processing unit or the information processing unit performs the motion detection process based on the brightness information detected by the brightness information detector, and the motion detection process is performed.
  • the step of converting the analog pixel signal into digital pixel data A step of detecting brightness information incident on the pixel array unit based on the digital pixel data, and An imaging method comprising a step of performing signal processing on the digital pixel data and performing at least a part of the signal processing based on the brightness information.
Reference signs: 1 imaging device, 2 electronic device, 3 application processor, 4 imaging unit, 5 control unit, 6 converter, 7 signal processing unit, 7a processing A unit, 7b AE gain adjustment unit, 7c processing B unit, 7d processing C unit, 7e AWB gain adjustment unit, 7f processing D unit, 7g AE gain adjustment unit, 7h AWB gain adjustment unit, 7i processing E unit, 7j processing F unit, 8 memory, 9 DSP, 10 selector, 11 brightness information detector, 12 optical system, 13 pixel array unit, 14 signal processing unit, 14a processing X unit, 14b AE gain adjustment unit, 14c AWB gain adjustment unit, 14d processing Y unit, 14e processing Z unit, 15 OPD, 20 network, 21 cloud server, 31 first substrate, 32 second substrate, 71 first signal processing unit, 72 second signal processing unit


Abstract

[Problem] To achieve optimum signal processing independently of external instructions. [Solution] An image capture apparatus according to the present invention comprises: a pixel array unit having a plurality of pixels that perform photoelectric conversion; a converter that converts analog pixel signals output from the pixel array unit into digital pixel data; a signal processing unit that performs signal processing on the digital pixel data; and a brightness information detector that detects, based on the digital pixel data, brightness information incident on the pixel array unit, wherein the signal processing unit performs at least a part of the signal processing based on the brightness information detected by the brightness information detector.

Description

Image capture apparatus, electronic device, and image capture method
The present disclosure relates to an imaging device, an electronic device, and an imaging method.
In recent years, there has been a demand to perform various kinds of signal processing at high speed on image data captured by an image sensor. In addition, advances in semiconductor process technology have led to proposals for semiconductor devices in which a plurality of chips, such as an image sensor chip, a memory chip, and a signal processing chip, are connected to one another with bumps and packaged, as well as semiconductor devices in which a die on which an image sensor is arranged and a die on which a memory, a signal processing circuit, and the like are arranged are stacked and packaged.
International Publication WO2018/051809A1
When a semiconductor device incorporating an image sensor and a signal processing circuit (hereinafter referred to as an imaging device) is mounted in an electronic device such as a smartphone, the signal processing circuit in the imaging device often performs various kinds of signal processing in accordance with instructions from an application processor mounted in the electronic device.
Since the application processor often has hardware performance capable of carrying out more advanced signal processing at higher speed than the signal processing circuit in the imaging device, the signal processing circuit in the imaging device generally performs signal processing in accordance with instructions from the application processor. For example, brightness adjustment and white balance adjustment of image data can also be performed by the signal processing circuit in the imaging device, but more accurate adjustment can be achieved inside the application processor. The signal processing circuit in the imaging device therefore often performs signal processing on the premise that brightness adjustment and white balance adjustment will be performed inside the application processor, and hands the data over to the application processor.
However, when the signal processing circuit in the imaging device performs signal processing that depends on the application processor, the following problems arise.
1. The application processor may shift to a sleep mode to reduce power consumption; when it does, brightness adjustment and white balance adjustment can no longer be performed properly on the image data captured by the image sensor.
2. When recognition processing or the like is performed within the imaging device using data processed by its internal signal processing circuit, signal processing carried out in accordance with instructions from the application processor is not necessarily optimal for the recognition processing, and the reliability of the recognition processing or the like may be degraded.
The present disclosure therefore provides an imaging device, an electronic device, and an imaging method capable of performing optimum signal processing without depending on instructions from the outside, such as an application processor.
In order to solve the above problems, according to the present disclosure, there is provided an imaging device comprising: a pixel array unit having a plurality of pixels that perform photoelectric conversion; a converter that converts an analog pixel signal output from the pixel array unit into digital pixel data; a signal processing unit that performs signal processing on the digital pixel data; and a brightness information detector that detects, based on the digital pixel data, brightness information incident on the pixel array unit, wherein the signal processing unit performs at least a part of the signal processing based on the brightness information detected by the brightness information detector.
An information processing unit that performs at least one of a predetermined recognition process and a detection process based on the data signal-processed by the signal processing unit may further be provided.
The signal processing unit may perform the signal processing according to an external instruction and, in the absence of an external instruction, perform the signal processing according to the brightness information detected by the brightness information detector, and the information processing unit may perform at least one of the recognition process and the detection process based on data on which the signal processing unit has performed the signal processing according to the brightness information detected by the brightness information detector.
The signal processing unit may have a first signal processing unit that performs first signal processing on the digital pixel data, and a second signal processing unit that performs second signal processing on the digital pixel data or on data on which at least a part of the first signal processing has been performed; the second signal processing unit may perform at least a part of the second signal processing based on the brightness information detected by the brightness information detector; and an information processing unit that performs at least one of a predetermined recognition process and a detection process based on output data of the second signal processing unit may further be provided.
The first signal processing and the second signal processing may include common signal processing, and the second signal processing unit may perform at least a part of the common signal processing based on the brightness information detected by the brightness information detector.
The common signal processing may include a brightness adjustment process for an image captured by the pixel array unit, and the second signal processing unit may perform the brightness adjustment process based on the brightness information detected by the brightness information detector.
The common signal processing may include a white balance adjustment process for an image captured by the pixel array unit, and the second signal processing unit may perform the white balance adjustment process based on the brightness information detected by the brightness information detector.
The recognition process may include a process of giving input data to a calculation model generated by machine learning and performing a calculation, and the input data may be output data of the signal processing unit.
The second signal processing unit or the information processing unit may perform motion detection processing based on the brightness information detected by the brightness information detector.
An arithmetic unit that calculates, based on the brightness information detected by the brightness information detector, a set value with which the signal processing unit performs the signal processing may be provided.
A first substrate having the pixel array unit, and a second substrate laminated on the first substrate and having the converter, the signal processing unit, and the brightness information detector may be provided.
The first substrate and the second substrate may be bonded together by any of a CoC (Chip on Chip) method, a CoW (Chip on Wafer) method, or a WoW (Wafer on Wafer) method.
A gain adjustment unit that adjusts the gain of the analog pixel signal based on the brightness information detected by the brightness information detector may be provided.
The gain adjustment unit may perform the gain adjustment according to an external instruction and, in the absence of an external instruction, perform the gain adjustment based on the brightness information detected by the brightness information detector.
In another aspect of the present disclosure, there is provided an electronic device comprising: an imaging device that outputs captured image data; and a processor that performs predetermined signal processing on the image data, wherein the imaging device comprises: a pixel array unit having a plurality of pixels that perform photoelectric conversion; a converter that converts an analog pixel signal output from the pixel array unit into digital pixel data; a signal processing unit that performs signal processing on the digital pixel data; and a brightness information detector that detects, based on the digital pixel data, brightness information incident on the pixel array unit; the signal processing unit performs at least a part of the signal processing based on the brightness information detected by the brightness information detector; and the image data signal-processed by the signal processing unit is supplied to the processor.
The signal processing unit may have a first signal processing unit that performs first signal processing on the digital pixel data, and a second signal processing unit that performs second signal processing on the digital pixel data or on data on which at least a part of the first signal processing has been performed; the second signal processing unit may perform at least a part of the second signal processing based on the brightness information detected by the brightness information detector; and an information processing unit that performs at least one of a predetermined recognition process and a detection process based on output data of the second signal processing unit may further be provided.
The processor may have a first operation mode in which the processor sends instructions to the first signal processing unit and performs processing based on output data of the first signal processing unit, and a second operation mode with lower power consumption than the first operation mode in which the processor neither sends instructions to the first signal processing unit nor receives the output data of the first signal processing unit. The second signal processing unit or the information processing unit may perform motion detection processing based on the brightness information detected by the brightness information detector regardless of the operation mode of the processor. When the processor is in the second operation mode, the second signal processing unit or the information processing unit may perform the motion detection processing based on the brightness information detected by the brightness information detector and, when motion is detected by the motion detection processing, return the processor from the second operation mode to the first operation mode.
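The mode transitions described above can be sketched as a small state machine. This is an illustrative sketch only; the names `Mode` and `next_mode` are assumptions for this example, not from the publication.

```python
from enum import Enum

class Mode(Enum):
    ACTIVE = 1  # first operation mode: processor drives the first signal processing unit
    SLEEP = 2   # second operation mode: lower power, no instructions exchanged

def next_mode(mode, motion_detected):
    """Return the processor mode after one motion-detection cycle.

    Motion detection runs on the OPD brightness information regardless of
    the current mode; a detection while sleeping wakes the processor.
    """
    if mode is Mode.SLEEP and motion_detected:
        return Mode.ACTIVE  # return from the second mode to the first
    return mode

assert next_mode(Mode.SLEEP, True) is Mode.ACTIVE
assert next_mode(Mode.SLEEP, False) is Mode.SLEEP
assert next_mode(Mode.ACTIVE, False) is Mode.ACTIVE
```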
In another aspect of the present disclosure, there is provided an imaging method comprising: a step of performing photoelectric conversion in a pixel array unit and outputting an analog pixel signal; a step of converting the analog pixel signal into digital pixel data; a step of detecting, based on the digital pixel data, brightness information incident on the pixel array unit; and a step of performing signal processing on the digital pixel data, at least a part of the signal processing being performed based on the brightness information.
Block diagram showing the schematic configuration of an electronic device provided with the imaging device according to the first embodiment.
Block diagram showing the internal configuration of the signal processing unit in the imaging device and the signal processing unit in the application processor 3.
Diagram showing an example in which the signal processing unit in the imaging device further has an AWB gain adjustment unit in addition to the internal configuration of FIG. 2.
Diagram showing an example of the chip structure of the imaging device of FIG. 1.
Diagram showing a layout example of the first substrate 31 on which the pixel array unit 13 is arranged.
Diagram showing a layout example of the second substrate on which the control unit, the ADC, the signal processing unit, the memory, the DSP, and the selector are arranged.
Flowchart showing the processing procedure performed by the imaging device according to the first embodiment.
Block diagram showing the schematic configuration of an electronic device provided with the imaging device according to the second embodiment.
Block diagram showing the internal configuration of the first and second signal processing units in the imaging device and the internal configuration of the signal processing unit in the application processor.
Diagram showing an example in which the second signal processing unit in the imaging device further has an AWB gain adjustment unit in addition to the internal configuration of FIG. 7.
Block diagram of the imaging device 1 in the case where the control unit controls the timing at which the OPD generates the brightness information detection signal, controls reading of the brightness information detection signal generated by the OPD, calculates, based on the read brightness information detection signal, the set values to be set in the individual processing units in the second signal processing unit, and transmits the calculated set values to the individual processing units in the second signal processing unit.
Block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile body control system to which the technology according to the present disclosure can be applied.
Diagram showing an example of installation positions of imaging units.
Diagram showing an example of the schematic configuration of an endoscopic surgery system to which the technology according to the present disclosure (the present technology) can be applied.
Block diagram showing an example of the functional configuration of the camera head and the CCU shown in FIG. 12.
Diagram showing an example of the schematic configuration of a diagnosis support system to which the technology according to the present disclosure is applied.
Hereinafter, embodiments of the imaging device and the electronic device will be described with reference to the drawings. The description below focuses on the main components of the imaging device and the electronic device, but the imaging device and the electronic device may have components and functions that are not shown or described. The following description does not exclude such components and functions.
(First Embodiment)
FIG. 1 is a block diagram showing a schematic configuration of an electronic device 2 provided with an imaging device 1 according to the first embodiment. The electronic device 2 of FIG. 1 includes the imaging device 1 and an application processor 3. The electronic device 2 of FIG. 1 is a device with an imaging function, such as a smartphone, a mobile phone, a tablet, a PC, or a digital camera; the specific type of device does not matter.
The imaging device 1 can be realized as a single semiconductor device, which may also be called an image sensor or a solid-state imaging device. The imaging device 1 includes an imaging unit 4, a control unit 5, a converter (hereinafter, ADC: Analog to Digital Converter) 6, a signal processing unit 7, a memory 8, a DSP (Digital Signal Processor) 9, a selector 10, and a brightness information detector (hereinafter, OPD: Optical Detector) 11.
The imaging unit 4 has an optical system 12 and a pixel array unit 13. The optical system 12 includes, for example, a zoom lens, a single-focus lens, and an aperture, and guides incident light to the pixel array unit 13. The pixel array unit 13 has a plurality of pixels arranged two-dimensionally. Each pixel is composed of a plurality of unit pixels for a plurality of colors such as RGB, and each unit pixel has a light-receiving element such as a photodiode. The light-receiving element photoelectrically converts incident light and outputs an analog pixel signal. Light incident on the imaging unit 4 passes through the optical system 12 and forms an image on a light-receiving surface on which the plurality of light-receiving elements are arranged. Each light-receiving element accumulates charge according to the intensity of the incident light and outputs an analog pixel signal according to the amount of accumulated charge.
The control unit 5 controls each unit in the imaging device 1 in accordance with instructions from the application processor 3 or the like. The control unit 5 may be integrated with the DSP 9 described later.
The ADC 6 converts the analog pixel signal output from the imaging unit 4 into digital pixel data. Since the ADC 6 performs the A/D conversion, the signal processing unit 7, the DSP 9, the memory 8, and the selector 10 on the downstream side of the ADC 6 handle digital pixel data. A voltage generation circuit that generates, from a power supply voltage or the like supplied to the imaging device 1, a drive voltage for driving the imaging unit 4 may be provided inside the ADC 6 or separately from the ADC 6.
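As a minimal illustration of the A/D conversion step, an ideal converter maps an analog pixel voltage to a digital code. The reference voltage and 10-bit resolution below are assumed values for the example, not taken from the publication.

```python
def adc_convert(voltage, v_ref=1.0, bits=10):
    """Quantize an analog pixel voltage into a digital code (ideal ADC)."""
    max_code = (1 << bits) - 1
    code = int(round(voltage / v_ref * max_code))
    return min(max(code, 0), max_code)  # clip to the converter's range

assert adc_convert(0.0) == 0
assert adc_convert(1.0) == 1023
assert adc_convert(1.5) == 1023  # over-range input saturates
```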
The signal processing unit 7 performs various kinds of signal processing on the digital pixel data. The signal processing unit 7 may process the digital pixel data output from the ADC 6 directly, or the digital pixel data output from the ADC 6 and stored in the memory 8. The signal processing unit 7 performs signal processing according to an external instruction and, in the absence of an external instruction, performs signal processing according to the brightness information detected by the OPD 11. The specific contents of the signal processing performed by the signal processing unit 7 will be described later. The data processed by the signal processing unit 7 is temporarily stored in the memory 8 and then sent to the DSP 9; alternatively, the data may be sent to the DSP 9 without being stored in the memory 8.
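The fallback behavior above can be sketched as follows. This is a hypothetical sketch: the function names, the target brightness level, and the simple gain model are assumptions for illustration, not the publication's implementation.

```python
def select_ae_gain(external_gain, opd_brightness, target_level=128.0):
    """Return the AE gain to apply to a frame.

    If an external instruction supplied a gain, use it; otherwise fall
    back to a gain derived from the OPD's average-brightness measurement.
    """
    if external_gain is not None:
        return external_gain
    # No external instruction: drive the scene toward the target level,
    # guarding against division by zero for a completely dark scene.
    return target_level / max(opd_brightness, 1.0)

def apply_gain(pixel_data, gain, max_code=255):
    """Scale digital pixel data by the gain, clipped to the ADC range."""
    return [min(int(p * gain), max_code) for p in pixel_data]

frame = [10, 20, 40, 80]
# With an external instruction, the instruction wins.
assert apply_gain(frame, select_ae_gain(2.0, opd_brightness=64.0)) == [20, 40, 80, 160]
# Without one, the OPD measurement drives the adjustment (gain = 128/64 = 2.0).
assert apply_gain(frame, select_ae_gain(None, opd_brightness=64.0)) == [20, 40, 80, 160]
```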
 DSP9は、信号処理部7で信号処理されたデータに基づいて、所定の認識処理及び検出処理の少なくとも一方を行う情報処理部の機能を有する。DSP9は、信号処理部7がOPD11で検出された明るさ情報に応じた信号処理を行ったデータに基づいて、認識処理及び検出処理の少なくとも一方を行う。DSP9は、例えばメモリ8に記憶されたプログラムを実行することで、機械学習された計算モデルを用いた演算処理を行う。メモリ8には、学習済の計算モデルに関する種々の情報が予め記憶されており、DSP9はメモリ8から計算モデルに関する必要な情報を読み出して、信号処理部7の出力データを計算モデルに入力して、演算処理を行う。機械学習された計算モデルの具体的な形式は問わないが、例えば、ディープニューラルネットワーク(以下、DNN)による計算モデルである。この計算モデルは、ADC6の出力データ又は信号処理部7の出力データを入力とし、この入力に対するラベルが紐付いている学習データを学習済の計算モデルに入力して生成されたパラメータに基づいて設計することができる。また、DNNは、複数階層のニューラルネットワークで構成されてもよい。DSP9は、DNNを用いた演算処理により、例えば所定の認識処理を行うことができる。ここで、認識処理とは、信号処理部7の出力データである画像データに、特徴のある画像情報が含まれるか否かを自動で認識する処理である。より具体的には、認識処理は、機械学習により生成された計算モデルに入力データを与えて演算される処理であり、入力データは、信号処理部7の出力データである。 The DSP 9 has a function of an information processing unit that performs at least one of a predetermined recognition process and a detection process based on the data signal-processed by the signal processing unit 7. The DSP 9 performs at least one of the recognition process and the detection process based on the data obtained by the signal processing unit 7 performing the signal processing according to the brightness information detected by the OPD 11. The DSP 9 performs arithmetic processing using a machine-learned calculation model by, for example, executing a program stored in the memory 8. Various information about the learned calculation model is stored in the memory 8 in advance, and the DSP 9 reads necessary information about the calculation model from the memory 8 and inputs the output data of the signal processing unit 7 into the calculation model. , Perform arithmetic processing. The specific form of the machine-learned calculation model does not matter, but for example, it is a calculation model by a deep neural network (hereinafter, DNN). 
This calculation model takes the output data of the ADC 6 or the output data of the signal processing unit 7 as its input, and can be designed based on parameters generated by feeding training data, in which labels are associated with this input, into the calculation model. Further, the DNN may be composed of a neural network with multiple layers. The DSP 9 can perform, for example, a predetermined recognition process through arithmetic processing using the DNN. Here, the recognition process is a process of automatically recognizing whether or not characteristic image information is included in the image data that is the output data of the signal processing unit 7. More specifically, the recognition process is a process of performing a calculation by giving input data to a calculation model generated by machine learning, the input data being the output data of the signal processing unit 7.
In the process of executing arithmetic processing based on the trained calculation model stored in the memory 8, the DSP 9 performs a product-sum operation between the dictionary coefficients stored in the memory 8 and the image data. The calculation result of the DSP 9 is stored in the memory 8 and also input to the selector 10. The result of the arithmetic processing using the calculation model by the DSP 9 may be image data or various kinds of information (metadata) obtained from the image data. The DSP 9, or the control unit 5 described above, may have the function of a memory controller that controls writing to and reading from the memory 8, or a memory controller may be provided separately from the DSP 9 and the control unit 5. Further, the DSP 9 may perform detection processing such as motion detection processing and face detection processing. The detection processing may be performed by the signal processing unit 7 instead of the DSP 9. Alternatively, the signal processing unit 7 and the DSP 9 may cooperate to perform the detection processing.
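The product-sum (multiply-accumulate) operation between dictionary coefficients and image data described above can be illustrated by the following minimal sketch. This is only an illustrative model, not the disclosed implementation: the coefficient values, the flat 1-D data layout, and the function name are assumptions introduced here.

```python
# Illustrative sketch of a product-sum (multiply-accumulate) operation between
# dictionary coefficients and pixel data, as performed by the DSP 9.
# Values and layout are hypothetical examples, not taken from the disclosure.

def product_sum(coefficients, pixels):
    """Accumulate coefficient[i] * pixel[i] over all elements."""
    if len(coefficients) != len(pixels):
        raise ValueError("coefficient and pixel counts must match")
    acc = 0.0
    for c, p in zip(coefficients, pixels):
        acc += c * p
    return acc

# Example: a tiny 4-element "dictionary" applied to 4 pixel values.
coeffs = [0.25, -0.5, 0.125, 1.0]
pixels = [128, 64, 32, 16]
result = product_sum(coeffs, pixels)  # 0.25*128 - 0.5*64 + 0.125*32 + 1.0*16 = 20.0
```

In an actual DNN layer this operation is repeated per output unit over many coefficients; the sketch shows only the elementary accumulation step.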
The memory 8 stores the digital pixel data output from the ADC 6, the programs executed by the DSP 9, various pieces of information about the trained calculation model that the DSP 9 uses for arithmetic processing, and the like. The memory 8 may also store the data resulting from the arithmetic processing of the DSP 9. The memory 8 is a readable and writable RAM (Random Access Memory). By exchanging the information about the calculation model in the memory 8, the DSP 9 can execute various machine-learning calculation models and can perform highly versatile, widely applicable recognition and detection processing. When the DSP 9 performs arithmetic processing based on a calculation model for a specific purpose, the memory 8 may be a ROM (Read Only Memory).
The selector 10 selects and outputs either the output data of the signal processing unit 7 or the output data of the DSP 9, based on a selection control signal from the control unit 5. The output data of the selector 10 is input to, for example, the application processor 3.
The OPD 11 detects brightness information on the light incident on the pixel array unit 13 based on the digital pixel data output from the ADC 6. More specifically, based on the digital pixel data, the OPD 11 detects the average brightness of the light incident on the pixel array unit 13, or a brightness based on the integrated value of the incident light. The brightness detected by the OPD 11 is brightness information about the surroundings of the image pickup device 1. In this specification, the signal output by the OPD 11 is referred to as a brightness information detection signal. The OPD 11 may generate the brightness information detection signal by calculating the average value of the digital pixel data corresponding to the analog pixel signals photoelectrically converted by all the light receiving elements in the pixel array unit 13, or by calculating the average value of the digital pixel data corresponding to the analog pixel signals photoelectrically converted by only some of the light receiving elements in the pixel array unit 13. Further, a dedicated light receiving element for the OPD 11 may be provided in the pixel array unit 13.
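The average-based brightness detection described here can be sketched as follows. The function name, the plain-mean formula, and the index-based subset selection are illustrative assumptions; the disclosure does not fix the exact computation.

```python
# Illustrative sketch of an OPD-style brightness information detection signal:
# the mean of the digital pixel data, over all pixels or over a chosen subset
# of light-receiving elements. Names and formula are assumptions.

def brightness_detection_signal(pixel_data, indices=None):
    """Return the average pixel value, over all pixels or a chosen subset."""
    if indices is not None:
        pixel_data = [pixel_data[i] for i in indices]
    return sum(pixel_data) / len(pixel_data)

frame = [10, 20, 30, 40]                              # one frame of digital pixel data
full = brightness_detection_signal(frame)             # average over all elements -> 25.0
partial = brightness_detection_signal(frame, [0, 2])  # average over a subset -> 20.0
```

The integrated-value variant mentioned in the text would simply accumulate `sum(pixel_data)` instead of dividing by the count.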
The application processor 3 is a semiconductor device separate from the image pickup device 1, and is mounted on the same base substrate as the image pickup device 1 or on a different one. The application processor 3 internally has a CPU (Central Processing Unit) and executes programs such as an operating system and various application software. The application processor 3 includes a signal processing unit 14, as described later, and performs various kinds of signal processing. The signal processing unit 14 in the application processor 3 can perform more advanced signal processing at higher speed than the signal processing unit 7 in the image pickup device 1. The application processor 3 also includes an OPD 15. The OPD 15 generates a brightness information detection signal based on one frame of digital pixel data captured by the pixel array unit 13 and output from the ADC 6. Since the application processor 3 has the processing performance to carry out more complex processing in a shorter time than the image pickup device 1, it is highly likely that it can generate a brightness information detection signal more reliable than that of the OPD 11 in the image pickup device 1.
In addition, the application processor 3 may be equipped with functions for performing image processing, signal processing, and the like, such as a GPU (Graphics Processing Unit) or a baseband processor. The application processor 3 executes various kinds of processing on the image data and calculation results from the image pickup device 1 as necessary, controls the display of images on the display unit of the electronic device 2, and transmits processing result data to an external cloud server 21 via a predetermined network 20.
Note that various communication networks are applicable as the predetermined network 20, for example, the Internet, a wired LAN (Local Area Network), a wireless LAN, a mobile communication network, and close-proximity wireless communication such as Bluetooth (registered trademark). Further, the destination of the image data and calculation results is not limited to the cloud server 21, and may be any of various information processing devices having a communication function, such as a stand-alone server, a file server, or a communication terminal such as a mobile phone.
FIG. 1 shows an example in which the application processor 3 sends instructions to the signal processing unit 7 in the image pickup device 1, but an ISP (Image Signal Processor) operating under the control of the application processor 3 may also be provided. In that case, the ISP may send instructions to the signal processing unit 7. In the following, an example in which the application processor 3 sends instructions to the signal processing unit 7 will be described, but this should be interpreted as also covering the case where a processor other than the application processor 3, such as an ISP, actually sends the instructions to the signal processing unit 7.
(Internal configuration of signal processing unit 7)
FIG. 2 is a block diagram showing the internal configuration of the signal processing unit 7 in the image pickup device 1 and the signal processing unit 14 in the application processor 3 according to the first embodiment.
The specific contents of the signal processing performed by the signal processing unit 7 in the image pickup device 1 are arbitrary. The signal processing unit 7 in the image pickup device 1 of FIG. 2 includes a processing A unit 7a, a gain adjusting unit 7b for automatic exposure (AE: Automatic Exposure), a processing B unit 7c, and a processing C unit 7d. The AE gain adjusting unit 7b performs a process of raising the pixel values of the digital pixel data to brighten the image.
The specific processing contents of the processing A unit 7a, the processing B unit 7c, and the processing C unit 7d are arbitrary; examples include lens shading processing, clamp processing, horizontal crop processing, and defect correction processing. The lens shading process raises the pixel values in the peripheral portion of the image, since the peripheral portion becomes dark when the subject light passes through the lens. The clamp process defines the black level. The horizontal crop process adjusts the image size by cutting off part of the width of the image; the part cut off is a part that does not constitute a valid image. The defect correction process interpolates pixel defects using surrounding pixel data. Defect correction processing includes static defect correction processing and dynamic defect correction processing. Static defect correction processing may be performed first, and then dynamic defect correction processing may be performed as needed. The static defect correction process fixedly interpolates known defective pixels using surrounding pixel data. The dynamic defect correction process, in which the defective pixel positions are not fixed and which consumes more power, may be performed only as needed.
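The static defect correction just described can be sketched as follows. The choice of horizontal neighbors and the simple averaging rule are illustrative assumptions; the disclosure does not specify the interpolation method.

```python
# Illustrative sketch of static defect correction: pixels at known defective
# positions are replaced by the average of their immediate horizontal
# neighbors. The neighbor choice and averaging rule are assumptions.

def correct_static_defects(row, defect_positions):
    """Interpolate each known defective pixel from its left/right neighbors."""
    out = list(row)
    for i in defect_positions:
        neighbors = []
        if i > 0:
            neighbors.append(row[i - 1])
        if i < len(row) - 1:
            neighbors.append(row[i + 1])
        out[i] = sum(neighbors) / len(neighbors)
    return out

row = [100, 102, 0, 98, 101]              # pixel at index 2 is a known defect
fixed = correct_static_defects(row, [2])  # defect replaced by (102 + 98) / 2
```

Dynamic defect correction would additionally have to detect the defective positions per frame before interpolating, which is why the text notes its higher power consumption.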
As described above, there are various candidate signal processes for the processing A unit 7a, the processing B unit 7c, and the processing C unit 7d, and which signal processing is performed in which order is arbitrary. The number of signal processes performed by the signal processing unit 7 is also arbitrary, and more or fewer than the four shown in FIG. 2 may be performed.
The signal processing unit 7 in the image pickup device 1 can perform various kinds of signal processing based on the brightness information detection signal output from the OPD 11. The signal processing unit 7 basically performs signal processing in accordance with instructions from the application processor 3; however, when the application processor 3 has stopped operating, the application processor 3 does not need image data from the image pickup device 1, so the signal processing unit 7 in the image pickup device 1 performs signal processing based on the brightness information detection signal output from the OPD 11 so as to generate optimum data for the recognition processing and the detection processing. For example, when the surroundings of the image pickup device 1 are dark, the gain of the AE gain adjusting unit 7b is raised further. When the image pickup device 1 is illuminated by a specific light source color, the white balance is biased, so the white balance is adjusted by the AWB gain adjusting unit 7e.
As described above, the DSP 9 performs the recognition processing and the detection processing based on the image data captured by the pixel array unit 13. To perform highly reliable recognition and detection processing, the image data input to the DSP 9 needs to be data suitable for the recognition processing and the detection processing. The image data input to the DSP 9 is the data obtained after the signal processing unit 7 performs signal processing on the digital pixel data. The signal processing unit 7 performs various kinds of signal processing on the digital pixel data based on the brightness information detection signal output from the OPD 11, generates image data suitable for the DSP 9 to perform the recognition processing and the detection processing, and sends it to the DSP 9. By performing the recognition processing and the detection processing based on the image data output from the signal processing unit 7, the DSP 9 can improve recognition accuracy and detection accuracy.
The signal processing unit 14 in the application processor 3 includes a processing X unit 14a, an AE gain adjusting unit 14b, an auto white balance (AWB: Auto White Balance) gain adjusting unit 14c, a processing Y unit 14d, and a processing Z unit 14e. The AE gain adjusting unit 14b performs the same processing as the AE gain adjusting unit 7b in the image pickup device 1. The AWB gain adjusting unit 14c appropriately adjusts the white balance of the image data. The specific contents of the signal processing performed by the processing X unit 14a, the processing Y unit 14d, and the processing Z unit 14e are arbitrary. For example, the processing X unit 14a, the processing Y unit 14d, and the processing Z unit 14e can perform the lens shading processing, clamp processing, horizontal crop processing, defect correction processing, and the like described above.
The signal processing unit 14 in the application processor 3 performs each signal process based on the brightness information detection signal from the OPD 15. Since the signal processing unit 14 in the application processor 3 can perform more complicated and advanced signal processing than the signal processing unit 7 in the image pickup device 1, the signal processing performed by the signal processing unit 14 in the application processor 3 is more reliable than the signal processing performed by the signal processing unit 7 in the image pickup device 1.
The application processor 3 has a normal operation mode (first operation mode) and a sleep mode (second operation mode). In the normal operation mode, the application processor 3 instructs the signal processing unit 7 in the image pickup device 1 as to the contents of the signal processing to be performed by the signal processing unit 7. The signal processing unit 7 in the image pickup device 1 performs signal processing in accordance with the instructions from the application processor 3. As described above, since the signal processing unit 14 in the application processor 3 can perform more advanced and complicated signal processing in a shorter time than the signal processing unit 7 in the image pickup device 1, the application processor 3 can instruct the image pickup device 1 to send digital pixel data for which some of the signal processing that the signal processing unit 7 is capable of performing has been omitted. For example, the application processor 3 instructs the signal processing unit 7 in the image pickup device 1 to send digital pixel data for which the processing of the AE gain adjusting unit 7b and the AWB gain adjusting unit has been omitted. In accordance with this instruction, the signal processing unit 7 in the image pickup device 1 omits the processing of the AE gain adjusting unit 7b and the AWB gain adjusting unit, performs at least a part of the remaining signal processing, and sends the resulting digital pixel data to the application processor 3.
On the other hand, when the application processor 3 shifts to the sleep mode, it no longer gives instructions to the signal processing unit 7 in the image pickup device 1 and no longer receives data from the signal processing unit 7. In this case, the signal processing unit 7 in the image pickup device 1 performs the gain adjustments of the AE gain adjusting unit 7b and the AWB gain adjusting unit based on the brightness information detection signal from the OPD 11. For example, the signal processing unit 7 performs the AE gain adjustment so that optimum data for the DSP 9 to perform the recognition processing using the DNN in the image pickup device 1 can be input to the DNN.
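The OPD-driven AE gain adjustment in sleep mode can be sketched as follows. The target level, the upper gain limit, and the inverse-proportional rule are all illustrative assumptions introduced here; the disclosure only states that darker surroundings lead to a larger gain.

```python
# Illustrative sketch of AE gain adjustment driven by the OPD brightness
# information detection signal: darker scenes receive a larger digital gain
# so the data fed to the DNN reaches a target level. TARGET_LEVEL, MAX_GAIN,
# and the inverse-proportional rule are hypothetical assumptions.

TARGET_LEVEL = 128.0   # hypothetical target average brightness
MAX_GAIN = 8.0         # hypothetical upper limit on digital gain

def ae_gain_from_brightness(detected_brightness):
    """Derive an AE gain from the detected brightness, clipped to MAX_GAIN."""
    if detected_brightness <= 0:
        return MAX_GAIN
    return min(TARGET_LEVEL / detected_brightness, MAX_GAIN)

def apply_ae_gain(pixels, gain):
    """Scale digital pixel data by the AE gain."""
    return [p * gain for p in pixels]

gain = ae_gain_from_brightness(32.0)   # dark scene -> gain of 4.0
bright = apply_ae_gain([10, 20], gain)
```

In the normal operation mode the gain would instead come from the application processor's instruction, replacing the call to `ae_gain_from_brightness`.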
(Modification of the internal configuration of signal processing unit 7)
FIG. 3 shows an example in which the signal processing unit 7 in the image pickup device 1 further includes an AWB gain adjusting unit 7e in addition to the internal configuration of FIG. 2. In FIG. 3, the AWB gain adjusting unit 7e is arranged between the AE gain adjusting unit 7b and the processing B unit 7c, but it may be arranged elsewhere. The processing of the AWB gain adjusting unit 7e may be performed before the processing of the AE gain adjusting unit 7b. Like the AWB gain adjusting unit 14c, the AWB gain adjusting unit 7e appropriately adjusts the white balance of the image data.
When the application processor 3 shifts to the sleep mode, the signal processing unit 7 of FIG. 3 adjusts the gains of the AE gain adjusting unit 7b and the AWB gain adjusting unit 7e based on the brightness information detection signal from the OPD 11. At this time, the signal processing unit 7 performs the AE gain adjustment and the AWB gain adjustment so that optimum data for the DSP 9 to perform the recognition processing and the detection processing using the DNN can be input to the DNN. Since the signal processing unit 7 of FIG. 3 can adjust the gains of the AE gain adjusting unit 7b and the AWB gain adjusting unit 7e based on the brightness information detection signal from the OPD 11, not only can the brightness of the image data captured by the pixel array unit 13 be optimized, but the white balance can also be optimized. Here, "can be optimized" means giving the DSP 9 the input data most suitable for performing the recognition processing and the detection processing.
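A per-channel AWB gain adjustment of the kind performed by the AWB gain adjusting unit 7e can be sketched with a gray-world heuristic. The gray-world rule itself is an illustrative assumption, not the disclosed algorithm; it merely demonstrates how a biased light source color is compensated by channel gains.

```python
# Illustrative sketch of AWB gain adjustment using a gray-world heuristic:
# per-channel gains equalize the average R, G, B levels, with the G gain
# fixed at 1.0. The gray-world rule is an assumption for illustration only.

def awb_gains(r_avg, g_avg, b_avg):
    """Gains that bring the R and B averages to the G average."""
    return (g_avg / r_avg, 1.0, g_avg / b_avg)

def apply_awb(pixel_rgb, gains):
    """Apply per-channel gains to one (R, G, B) pixel."""
    return tuple(c * g for c, g in zip(pixel_rgb, gains))

# Scene lit by a reddish source: R average high, B average low.
gains = awb_gains(r_avg=160.0, g_avg=80.0, b_avg=40.0)   # (0.5, 1.0, 2.0)
balanced = apply_awb((160.0, 80.0, 40.0), gains)          # (80.0, 80.0, 80.0)
```

After this adjustment the three channel averages coincide, which is the "appropriate white balance" the text refers to.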
Although FIGS. 1 to 3 show examples in which the signal processing unit 7 performs at least part of the signal processing based on the brightness information detection signal from the OPD 11, the signal processing unit 7 or the control unit 5 may instruct a gain adjustment of the analog pixel signals before the ADC 6 performs the A/D conversion, based on the brightness information detection signal from the OPD 11. In this case, the instruction signal is sent to the imaging unit 4, and the gain of an amplifier (not shown) is adjusted. In this way, the gain of the analog pixel signals may be adjusted based on the brightness information detection signal from the OPD 11. This gain adjustment may be performed while the application processor 3 is in the sleep mode and, in the normal operation mode, may be performed in accordance with an instruction from the application processor 3.
(Chip structure of image pickup device 1)
Next, the chip structure of the image pickup device 1 of FIG. 1 will be described. FIG. 4 is a diagram showing an example of the chip structure of the image pickup device 1 of FIG. 1. The image pickup device 1 of FIG. 4 is a laminated body in which a first substrate 31 and a second substrate 32 are stacked. The first substrate 31 and the second substrate 32 are sometimes called dies. The example of FIG. 4 shows the first substrate 31 and the second substrate 32 as rectangular, but their specific shapes and sizes are arbitrary. The first substrate 31 and the second substrate 32 may be the same size or different sizes.
The pixel array unit 13 shown in FIG. 1 is arranged on the first substrate 31. At least part of the optical system 12 of the imaging unit 4 may also be mounted on-chip on the first substrate 31.
The control unit 5, the ADC 6, the signal processing unit 7, the memory 8, the DSP 9, the selector 10, and the OPD 11 shown in FIG. 1 are arranged on the second substrate 32. In addition, an input/output interface unit, a power supply circuit, and the like (not shown) may be arranged on the second substrate 32.
As a specific form of bonding, a so-called CoC (Chip on Chip) method may be adopted, in which the first substrate 31 and the second substrate 32 are each cut out from a wafer and singulated, and then stacked and bonded together. Alternatively, a so-called CoW (Chip on Wafer) method may be adopted, in which one of the first substrate 31 and the second substrate 32 (for example, the first substrate 31) is cut out from a wafer and singulated, and the singulated first substrate 31 is then bonded to the second substrate 32 before its singulation. Alternatively, a so-called WoW (Wafer on Wafer) method may be adopted, in which the first substrate 31 and the second substrate 32 are bonded together in the wafer state.
For joining the first substrate 31 and the second substrate 32, for example, plasma bonding can be used. However, various other joining methods may be used.
FIGS. 5A and 5B are diagrams showing an example of the layout of the first substrate 31 and the second substrate 32. FIG. 5A shows a layout example of the first substrate 31 on which the pixel array unit 13 is arranged. In the example of FIG. 5A, the pixel array unit 13 is arranged offset toward one side L101 of the four sides L101 to L104 of the first substrate 31. In other words, the pixel array unit 13 is arranged such that its center O101 is closer to the side L101 than the center O100 of the first substrate 31 is. When the surface of the first substrate 31 on which the pixel array unit 13 is provided is rectangular, the side L101 may be, for example, the shorter side of the first substrate 31. However, the layout is not limited to this, and the pixel array unit 13 may be arranged offset toward the longer side.
In a region close to the side L101 among the four sides of the pixel array unit 13, in other words, in the region between the side L101 and the pixel array unit 13, a TSV array 102 is provided in which a plurality of through wirings (Through Silicon Vias: hereinafter, TSVs) penetrating the first substrate 31 are arranged, as wiring for electrically connecting each unit pixel 101a in the pixel array unit 13 to the ADC 6 arranged on the second substrate 32. By placing the TSV array 102 close to the side L101, to which the pixel array unit 13 is close, in this way, it becomes easier to secure space on the second substrate 32 for arranging the ADC 6 and the like.
Note that the TSV array 102 may also be provided in a region close to one of the two sides L103 and L104 intersecting the side L101, namely the side L104 (although it may instead be the side L103), in other words, in the region between the side L104 (or the side L103) and the pixel array unit 13.
Of the four sides L101 to L104 of the first substrate 31, each of the sides L102 to L103, toward which the pixel array unit 13 is not offset, is provided with a pad array 103 having a plurality of pads arranged in a straight line. The pad array 103 may include, for example, pads (also called power supply pins) to which power supply voltages for analog circuits such as the pixel array unit 13 and the ADC 6 are applied. The pad array 103 may also include pads (also called power supply pins) to which power supply voltages for digital circuits such as the signal processing unit 7, the DSP 9, the memory 8, the selector 10, and the control unit 5 are applied. Alternatively, the pad array 103 may include pads (also called signal pins) for interfaces such as MIPI (Mobile Industry Processor Interface) and SPI (Serial Peripheral Interface), or pads (also called signal pins) for the input and output of clocks and data. Each pad is electrically connected to, for example, an external power supply circuit or interface circuit via a wire. It is preferable that each pad array 103 and the TSV array 102 be sufficiently separated that the influence of the reflection of signals from the wires connected to the pads in the pad array 103 can be ignored.
FIG. 5B shows a layout example of the second substrate 32 on which the control unit 5, the ADC 6, the signal processing unit 7, the memory 8, the DSP 9, and the selector 10 are arranged. The ADC 6, the control unit 5, the signal processing unit 7, the DSP 9, and the memory 8 are arranged on the second substrate 32. In the layout example of FIG. 5B, the ADC 6 is divided into two regions, an ADC section 6a and a DAC (Digital to Analog Converter) section 6b. The DAC 6b is a circuit that supplies a reference voltage for AD conversion to the ADC section 6a, and is included as part of the ADC 6 in the broad sense. Although not shown in FIG. 5B, the selector 10 and the OPD 11 are also arranged on the second substrate 32.
 The second substrate 32 is also provided with wiring 122 that is electrically connected, by contact, to each TSV in the TSV array 102 penetrating the first substrate 31 (hereinafter, simply referred to as the TSV array 102). Furthermore, the second substrate 32 is provided with a pad array 123 in which a plurality of pads, electrically connected to the respective pads in the pad array 103 of the first substrate 31, are arranged in a straight line.
 For the connection between the TSV array 102 and the wiring 122, for example, a so-called twin TSV method may be adopted, in which two TSVs, one provided in the first substrate 31 and one provided from the first substrate 31 through to the second substrate 32, are connected on the outer surface of the chip. Alternatively, a so-called shared TSV method may be adopted, in which the connection is made by a common TSV provided from the first substrate 31 through to the second substrate 32. However, the connection is not limited to these; various other connection forms can be adopted, such as a so-called Cu-Cu bonding method in which copper (Cu) exposed on the bonding surface of the first substrate 31 is bonded to copper exposed on the bonding surface of the second substrate 32.
 The connection between each pad in the pad array 103 of the first substrate 31 and the corresponding pad in the pad array 123 of the second substrate 32 is made, for example, by wire bonding. However, the connection is not limited to this and may take other forms such as through holes or castellations.
 In the layout example of the second substrate 32, the vicinity of the wiring 122 connected to the TSV array 102 is taken as the upstream side, and the ADC unit 6a, the signal processing unit 7, and the DSP 9 are arranged in that order from upstream along the flow of the signal read out from the pixel array unit 13. That is, the ADC unit 6a, to which the pixel signals read out from the pixel array unit 13 are first input, is arranged near the wiring 122 on the most upstream side; the signal processing unit 7 is arranged next; and the DSP 9 is arranged in the region farthest from the wiring 122. Arranging the components from the ADC 6 to the DSP 9 from the upstream side along the signal flow in this way shortens the wiring connecting the components. This reduces signal delay and signal propagation loss, improves the S/N ratio, and reduces power consumption.
 The control unit 5 is also arranged, for example, near the wiring 122 on the upstream side; in FIG. 5B, the control unit 5 is arranged between the ADC 6 and the signal processing unit 7. Adopting such a layout reduces the signal delay when the control unit 5 controls the pixel array unit 13, reduces signal propagation loss, improves the S/N ratio, and reduces power consumption. It is also possible to group the signal pins and power supply pins for the analog circuits near the analog circuits (for example, the lower side in FIG. 5B), group the remaining signal pins and power supply pins for the digital circuits near the digital circuits (for example, the upper side in FIG. 5B), and place the power supply pins for the analog circuits sufficiently far from the power supply pins for the digital circuits.
 In the layout shown in FIG. 5B, the DSP 9, which is on the most downstream side, is arranged on the side opposite the ADC unit 6a. With such a layout, in other words, the DSP 9 can be arranged in a region that does not overlap the pixel array unit 13 in the stacking direction of the first substrate 31 and the second substrate 32 (hereinafter, simply referred to as the vertical direction).
 By thus configuring the pixel array unit 13 and the DSP 9 so as not to overlap in the vertical direction, noise generated when the DSP 9 executes signal processing is less likely to enter the pixel array unit 13. As a result, even when the DSP 9 operates as a processing unit that executes computations based on a trained model, the intrusion of noise caused by the signal processing of the DSP 9 into the pixel array unit 13 can be reduced, so an image with reduced quality degradation can be acquired.
 The memory 8 is arranged near the DSP 9 and the signal processing unit 7. The memory 8 stores various kinds of information about the trained computation model; the DSP 9 reads the information about the computation model from the memory 8, performs arithmetic processing using the model, and stores the results of the arithmetic processing in the memory 8. Arranging the memory 8 near the DSP 9 therefore shortens the signal propagation time when accessing the memory 8, allowing the DSP 9 to access the memory 8 at high speed.
 The pad array 123 is arranged, for example, at positions on the second substrate 32 that correspond, in the vertical direction, to the pad array 103 of the first substrate 31. Among the pads included in the pad array 123, those located near the ADC unit 6a are used to propagate the power supply voltage and analog signals for the analog circuits (mainly the ADC unit 6a). On the other hand, the pads located near the control unit 5, the signal processing unit 7, the DSP 9, and the memory 8 are used to propagate the power supply voltage and digital signals for the digital circuits (mainly the control unit 5, the signal processing unit 7, the DSP 9, and the memory 8). Such a pad layout shortens the wiring distance between each pad and the component it serves, which reduces signal delay, reduces propagation loss of signals and the power supply voltage, improves the S/N ratio, and reduces power consumption.
 FIG. 6 is a flowchart showing the processing procedure performed by the image pickup apparatus 1 according to the first embodiment. First, the pixel array unit 13 performs photoelectric conversion and outputs analog pixel signals (step S1). Next, the analog pixel signals are converted into digital pixel data (step S2). Next, brightness information of the light incident on the pixel array unit 13 is detected based on the digital pixel data (step S3). Next, signal processing is performed on the digital pixel data, with at least part of the signal processing performed based on the above-described brightness information (step S4).
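The four steps of FIG. 6 can be sketched as a minimal data flow. The pixel values, the 10-bit ADC depth, and the mean-brightness gain rule below are illustrative assumptions, not values specified in the description:

```python
# Hypothetical sketch of the flow of FIG. 6 (steps S1-S4).

def photoelectric_conversion(scene):
    """S1: the pixel array outputs analog pixel signals (floats, clamped)."""
    return [min(v, 1.0) for v in scene]

def ad_conversion(analog, bits=10):
    """S2: convert analog pixel signals into digital pixel data."""
    full_scale = (1 << bits) - 1
    return [round(v * full_scale) for v in analog]

def detect_brightness(digital, bits=10):
    """S3: derive brightness information from the digital pixel data
    (a simple mean level stands in for the detection signal)."""
    full_scale = (1 << bits) - 1
    return sum(digital) / (len(digital) * full_scale)

def signal_processing(digital, brightness, target=0.5, bits=10):
    """S4: signal processing, at least partly driven by the brightness
    information (here, a global gain pulling the mean toward a target)."""
    gain = target / max(brightness, 1e-6)
    full_scale = (1 << bits) - 1
    return [min(round(d * gain), full_scale) for d in digital]

analog = photoelectric_conversion([0.1, 0.2, 0.15, 0.25])
digital = ad_conversion(analog)
brightness = detect_brightness(digital)
processed = signal_processing(digital, brightness)
```

The point of the sketch is only the ordering: brightness detection (S3) happens after AD conversion (S2) and feeds the subsequent signal processing (S4).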
 As described above, in the first embodiment, the OPD 11 is provided in the image pickup apparatus 1 to generate a brightness information detection signal, and the signal processing unit 7 in the image pickup apparatus 1 can perform signal processing based on that signal. Therefore, when the application processor 3 gives no instruction to the signal processing unit 7, the signal processing unit 7 can perform the signal processing that is optimal for the image pickup apparatus 1. More specifically, when information processing such as recognition processing or detection processing is performed within the image pickup apparatus 1 based on the output data of the signal processing unit 7, the signal processing unit 7 can perform signal processing so as to produce the input data best suited for that information processing. As a result, highly reliable information processing such as recognition and detection can be performed inside the image pickup apparatus 1, yielding an intelligent image pickup apparatus 1 that performs not only imaging but also information processing such as recognition and detection.
 (Second Embodiment)
 FIG. 7 is a block diagram showing a schematic configuration of the electronic device 2 provided with the image pickup apparatus 1 according to the second embodiment. The image pickup apparatus 1 of FIG. 7 differs from that of FIG. 1 in that it includes a first signal processing unit 71 and a second signal processing unit 72 instead of the signal processing unit 7 in the image pickup apparatus 1 of FIG. 1.
 The first signal processing unit 71 performs first signal processing on the digital pixel data. The first signal processing unit 71 performs the first signal processing according to instructions from the application processor 3, and in this respect is the same as the signal processing unit 7 in the image pickup apparatus 1 of FIG. 1. As described later, the specific content of the first signal processing performed by the first signal processing unit 71 may be the same as the signal processing performed by the signal processing unit 7 in the image pickup apparatus 1 of FIG. 1, or may differ from it at least in part.
 The second signal processing unit 72 performs second signal processing on the digital pixel data, or on data on which at least part of the first signal processing has been performed. The second signal processing unit 72 performs at least part of the second signal processing based on the brightness information detected by the OPD 11; in other words, at least part of the second signal processing is performed with reference to that brightness information. The DSP 9 performs at least one of predetermined recognition processing and detection processing based on the output data of the second signal processing unit 72. At least in the sleep mode, the second signal processing unit 72 performs signal processing without depending on instructions from the application processor 3; that is, the second signal processing unit 72 can perform signal processing at the discretion of the image pickup apparatus 1 itself. During normal operation of the application processor 3, the second signal processing unit 72 may perform the second signal processing based on instructions from the application processor 3, or at the discretion of the image pickup apparatus 1. Although FIG. 7 shows an example in which the second signal processing unit 72 is arranged downstream of the first signal processing unit 71, the second signal processing unit 72 can also perform signal processing in parallel with the first signal processing unit 71. More specifically, the digital pixel data from the ADC 6 may be input to both the first signal processing unit 71 and the second signal processing unit 72, and the two units may perform signal processing in parallel.
 Alternatively, the digital pixel data from the ADC 6 may be input to the first signal processing unit 71, and data at the stage where the first signal processing unit 71 has performed part of its signal processing may be output and input to the second signal processing unit 72. In this case, the second signal processing unit 72 uses the partial signal processing result of the first signal processing unit 71 and then performs the subsequent signal processing on its own, independently of the first signal processing unit 71.
 Alternatively, the digital pixel data from the ADC 6 may be input to the first signal processing unit 71, and the data resulting from the first signal processing unit 71 carrying its signal processing through to the end may be input to the second signal processing unit 72. In this case, the second signal processing unit 72 performs its own additional signal processing after the signal processing of the first signal processing unit 71 is completed.
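The three data-flow configurations described above (parallel processing, continuation from an intermediate result, and additional processing after completion) can be sketched as follows. The stage tags and function behaviors are illustrative stand-ins, not the actual processing of units 71 and 72:

```python
# Hypothetical sketch of the three ways data can flow through the first and
# second signal processing units; each "unit" just appends a stage tag.

def first_signal_processing(data, stop_halfway=False):
    data = data + ["71:stageA"]
    if stop_halfway:
        return data                 # intermediate result handed to unit 72
    return data + ["71:stageB"]     # processing carried through to the end

def second_signal_processing(data):
    return data + ["72:own"]        # unit 72's independent processing

pixel_data = ["adc"]

# (a) parallel: the ADC output feeds both units independently
out_a = (first_signal_processing(pixel_data),
         second_signal_processing(pixel_data))

# (b) continuation: unit 72 takes over from unit 71's intermediate result
out_b = second_signal_processing(
    first_signal_processing(pixel_data, stop_halfway=True))

# (c) sequential: unit 72 adds its own processing after unit 71 finishes
out_c = second_signal_processing(first_signal_processing(pixel_data))
```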
 The first signal processing and the second signal processing may include common signal processing, and the second signal processing unit 72 may perform at least part of that common signal processing based on the brightness information detected by the OPD 11.
 The common signal processing may include brightness adjustment processing of the image captured by the pixel array unit 13, and the second signal processing unit 72 may perform the brightness adjustment processing based on the brightness information detected by the OPD 11.
 The common signal processing may also include white balance adjustment processing of the image captured by the pixel array unit 13, and the second signal processing unit 72 may perform the white balance adjustment processing based on the brightness information detected by the OPD 11.
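As one illustration of how brightness information such as the OPD 11 output could drive these common brightness and white balance adjustments, the following sketch uses a target-luminance AE gain and a gray-world AWB rule. Both rules and all numeric values are assumptions, not taken from the description:

```python
# Hypothetical AE (brightness) and AWB (white balance) gain adjustment
# driven by OPD-style brightness measurements.

def ae_gain(measured_luma, target_luma=0.5):
    """Brightness adjustment: gain that pulls the measured mean
    luminance toward the target."""
    return target_luma / max(measured_luma, 1e-6)

def awb_gains(mean_r, mean_g, mean_b):
    """White balance adjustment: scale R and B to match the G channel
    (a simple gray-world rule)."""
    return mean_g / max(mean_r, 1e-6), 1.0, mean_g / max(mean_b, 1e-6)

def apply_gains(pixel, gains):
    r, g, b = pixel
    gr, gg, gb = gains
    return (r * gr, g * gg, b * gb)

# Example: a dim, reddish measurement
g_ae = ae_gain(0.25)                       # overall brightness gain of 2.0
g_r, g_g, g_b = awb_gains(0.4, 0.3, 0.2)   # gray-world channel gains
balanced = apply_gains((0.4, 0.3, 0.2),
                       (g_r * g_ae, g_g * g_ae, g_b * g_ae))
```

After both corrections the example pixel is neutral gray at the target brightness.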
 FIG. 8 is a block diagram showing the internal configuration of the first signal processing unit 71 and the second signal processing unit 72 in the image pickup apparatus 1 according to the second embodiment, and the internal configuration of the signal processing unit 14 in the application processor 3.
 The internal configuration of the first signal processing unit 71 is the same as that of the signal processing unit 7 in the image pickup apparatus 1 of FIG. 4, and includes, for example, a processing A unit 7a, an AE gain adjustment unit 7b, a processing B unit 7c, and a processing C unit 7d. Whereas the OPD 11 was connected to the signal processing unit 7 in the image pickup apparatus 1 of FIG. 1, here the OPD 11 is connected not to the first signal processing unit 71 but to the second signal processing unit 72. The first signal processing unit 71 performs various kinds of signal processing according to instructions from the application processor 3 and sends data indicating the final signal processing results to the application processor 3. In this way, the first signal processing unit 71 performs various kinds of signal processing in order to generate data suited to the advanced signal processing performed by the application processor 3.
 The second signal processing unit 72 performs signal processing at the discretion of the image pickup apparatus 1 itself. In the image pickup apparatus 1 according to the present embodiment, the DSP 9 performs information processing such as recognition processing and detection processing using the image data captured by the pixel array unit 13. The second signal processing unit 72 therefore performs signal processing to generate image data that is optimal for such information processing.
 The second signal processing unit 72 includes a processing D unit 7f, an AE gain adjustment unit 7g, an AWB gain adjustment unit 7h, a processing E unit 7i, and a processing F unit 7j. The specific signal processing performed by the processing D unit 7f, the processing E unit 7i, and the processing F unit 7j is arbitrary. For example, each of them may execute any of the following: lens shading processing, demosaic processing, linear matrix/gamma/hue gain processing, dewarp processing, gain/YC matrix/normalization processing, face recognition/motion detection processing, and rotation scaling processing.
 Lens shading processing raises the pixel values in the peripheral portion of the image. Demosaic processing generates pixel data of three colors (RGB) from the four-color arrangement. Linear matrix/gamma/hue gain processing covers image linearization, gamma correction, color adjustment, and the like. Dewarp processing corrects an image distorted by a fisheye or wide-angle lens into as flat an image as possible, using an adjustment table provided for each lens. Gain/YC matrix/normalization processing covers the gain adjustment, color adjustment, and normalization needed for input to the DNN: whereas the digital pixel data from the ADC 6 has pixel values of 8 to 12 bits, the DNN input data has pixel values of 0 to 1, so bit shifting and normalization are required.
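The bit shifting and normalization mentioned above can be illustrated as follows; the choice of shifting 12-bit data down to 8 bits before mapping into [0, 1] is an illustrative assumption:

```python
# Minimal sketch: 8-12 bit digital pixel data is mapped into the 0-1
# range expected by the DNN input.

def bit_shift(pixels, from_bits, to_bits):
    """Reduce bit depth by a right shift (e.g. 12-bit -> 8-bit)."""
    shift = from_bits - to_bits
    return [p >> shift for p in pixels]

def normalize_for_dnn(pixels, bits):
    """Map integer pixel values of the given bit depth to floats in [0, 1]."""
    full_scale = (1 << bits) - 1
    return [p / full_scale for p in pixels]

raw12 = [0, 2048, 4095]                    # 12-bit ADC output
shifted = bit_shift(raw12, 12, 8)          # reduced to 8-bit values
dnn_input = normalize_for_dnn(shifted, 8)  # floats in [0.0, 1.0]
```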
 What signal processing the second signal processing unit 72 performs depends on the data input to it. For example, when the digital pixel data from the ADC 6 is input to both the first signal processing unit 71 and the second signal processing unit 72, the second signal processing unit 72 performs signal processing in parallel with the first signal processing unit 71 and therefore needs to perform signal processing equivalent to that of the first signal processing unit 71. On the other hand, when data at a stage where the first signal processing unit 71 has performed part of its signal processing is input to the second signal processing unit 72, the signal processing already performed by the first signal processing unit 71 need not be repeated by the second signal processing unit 72. However, since the first signal processing unit 71 performs its signal processing according to instructions from the application processor 3, the second signal processing unit 72 may perform again even signal processing that the first signal processing unit 71 has already performed. Similarly, when data for which the signal processing of the first signal processing unit 71 has been completed is input to the second signal processing unit 72, the second signal processing unit 72 basically performs signal processing different from that of the first signal processing unit 71, but in some cases it may redo the signal processing performed by the first signal processing unit 71 with different setting values.
 The OPD 11 is connected to the second signal processing unit 72, which performs various kinds of signal processing based on the brightness information detection signal output from the OPD 11. For example, when the image data captured by the pixel array unit 13 is dark overall, signal processing such as brightness adjustment is performed so that optimal data is input to the DNN that performs the recognition processing and detection processing.
 The application processor 3 may shift to a sleep mode to reduce power consumption. When the application processor 3 shifts to the sleep mode, it can no longer issue instructions to the first signal processing unit 71, so the first signal processing unit 71 may no longer be able to perform effective signal processing. The second signal processing unit 72, however, performs signal processing regardless of instructions from the application processor 3, and can therefore continue to perform effective signal processing even when the application processor 3 has shifted to the sleep mode.
 Further, when the second signal processing unit 72 or the DSP 9 has a motion detection unit, detection of motion by the motion detection unit can prompt the application processor 3, via the control unit 5, to return to the normal operation mode. That is, the signal processing function in the second signal processing unit 72 can switch the operation mode of the application processor 3.
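The wake-up path described above can be sketched as follows. The class names, the frame-difference detector, and the threshold are illustrative assumptions, not elements of the actual apparatus:

```python
# Hypothetical sketch: a motion event detected in the second signal
# processing unit (or DSP 9) is relayed through the control unit to
# wake the sleeping application processor.

class ApplicationProcessor:
    def __init__(self):
        self.mode = "sleep"

    def wake(self):
        self.mode = "normal"

class ControlUnit:
    def __init__(self, app_processor):
        self.app_processor = app_processor

    def notify_motion(self):
        # Control unit 5 prompts application processor 3 to return
        # to the normal operation mode.
        if self.app_processor.mode == "sleep":
            self.app_processor.wake()

def motion_detected(prev_frame, frame, threshold=10):
    """A crude frame-difference check standing in for the motion
    detection unit."""
    diff = sum(abs(a - b) for a, b in zip(prev_frame, frame))
    return diff > threshold

ap = ApplicationProcessor()
ctrl = ControlUnit(ap)
if motion_detected([10, 10, 10], [40, 10, 10]):
    ctrl.notify_motion()
```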
 FIG. 9 shows an example in which the second signal processing unit 72 in the image pickup apparatus 1 further includes an AWB gain adjustment unit 7e in addition to the internal configuration of FIG. 8. The second signal processing unit 72 performs signal processing such as brightness adjustment and white balance adjustment based on the brightness information detection signal output from the OPD 11.
 In FIGS. 8 and 9 described above, the second signal processing unit 72 calculates the setting values to be set in the individual processing units based on the brightness information detection signal output from the OPD 11, and the individual processing units execute their signal processing based on the calculated setting values. However, instead of the second signal processing unit 72 performing the calculation of the setting values based on the brightness information detection signal and the control of the OPD 11, these may be performed by the control unit 5 of FIG. 7 or by a separately provided CPU (Central Processing Unit), not shown. In the following, the control unit 5 or the CPU may also be referred to as an arithmetic unit.
 FIG. 10 is a block diagram of the image pickup apparatus 1 in the case where an arithmetic unit such as the control unit 5 or a CPU controls the timing at which the OPD 11 generates the brightness information detection signal, reads out the brightness information detection signal generated by the OPD 11, calculates the setting values to be set in the individual processing units in the second signal processing unit 72 based on the read brightness information detection signal, and transmits the calculated setting values to those processing units. In the case of FIG. 10, since the arithmetic unit such as the control unit 5 or a CPU in the image pickup apparatus 1 performs the control of the OPD 11 and the calculation processing based on the brightness information detection signal output from the OPD 11, the processing load on the second signal processing unit 72 can be reduced, and the time each processing unit in the second signal processing unit 72 takes to perform its signal processing can be shortened.
 As described above, in the second embodiment, the second signal processing unit 72, which performs signal processing independently within the image pickup apparatus 1, is provided separately from the first signal processing unit 71, which performs signal processing according to instructions from the application processor 3. The second signal processing unit 72 can therefore perform the signal processing optimal for information processing such as recognition and detection without being affected by the instructions of the application processor 3, increasing the reliability of the recognition and detection processing. Moreover, even if the application processor 3 shifts to the sleep mode, the second signal processing unit 72 can continue its signal processing; highly reliable information processing such as recognition and detection can thus be performed continuously within the image pickup apparatus 1 regardless of the operation mode of the application processor 3.
 Furthermore, when the second signal processing unit 72 performs motion detection processing, the control unit 5 can notify the application processor 3 that motion has been detected, returning the application processor 3 from the sleep mode to the normal operation mode.
 (Application to Other Sensors)
 In the first and second embodiments described above, the technique according to the present disclosure is applied to the image pickup apparatus (image sensor) 1 that acquires a two-dimensional image, but the application of the technique according to the present disclosure is not limited to image pickup apparatuses. For example, the technique according to the present disclosure can be applied to various light receiving sensors such as ToF (Time of Flight) sensors, infrared (IR) sensors, and DVS (Dynamic Vision Sensor) devices. That is, by giving the light receiving sensor a stacked chip structure, it is possible to reduce the noise included in the sensor results, reduce the size of the sensor chip, and so on.
 (Example of Application to a Mobile Body)
 The technology according to the present disclosure (the present technology) can be applied to various products. For example, the technology according to the present disclosure may be realized as a device mounted on any kind of mobile body, such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
 FIG. 11 is a block diagram showing a schematic configuration example of a vehicle control system 12000, which is an example of a mobile body control system to which the technique according to the present disclosure can be applied.
 The vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001. In the example shown in FIG. 11, the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, a vehicle exterior information detection unit 12030, a vehicle interior information detection unit 12040, and an integrated control unit 12050. Further, in the example shown in FIG. 11, a microcomputer 12051, an audio/image output unit 12052, and an in-vehicle network I/F (Interface) 12053 are provided as the functional configuration of the integrated control unit 12050.
 The drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle according to various programs. For example, the drive system control unit 12010 functions as a control device for a driving force generator, such as an internal combustion engine or a driving motor, for generating the driving force of the vehicle, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, and a braking device for generating the braking force of the vehicle.
 The body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs. For example, the body system control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, back lamps, brake lamps, turn signals, or fog lamps. In this case, radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body system control unit 12020. The body system control unit 12020 accepts the input of these radio waves or signals and controls the door lock device, power window device, lamps, and the like of the vehicle.
 The vehicle exterior information detection unit 12030 detects information outside the vehicle equipped with the vehicle control system 12000. For example, an imaging unit 12031 is connected to the vehicle exterior information detection unit 12030. The vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image of the outside of the vehicle and receives the captured image. Based on the received image, the vehicle exterior information detection unit 12030 may perform object detection processing or distance detection processing for persons, vehicles, obstacles, signs, characters on the road surface, and the like.
 The imaging unit 12031 is an optical sensor that receives light and outputs an electric signal corresponding to the amount of received light. The imaging unit 12031 can output the electric signal as an image or as distance measurement information. The light received by the imaging unit 12031 may be visible light or invisible light such as infrared light.
 The vehicle interior information detection unit 12040 detects information inside the vehicle. For example, a driver state detection unit 12041 that detects the state of the driver is connected to the vehicle interior information detection unit 12040. The driver state detection unit 12041 includes, for example, a camera that images the driver, and based on the detection information input from the driver state detection unit 12041, the vehicle interior information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is dozing off.
 The microcomputer 12051 can calculate control target values for the driving force generator, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, and can output control commands to the drive system control unit 12010. For example, the microcomputer 12051 can perform cooperative control aimed at realizing the functions of an ADAS (Advanced Driver Assistance System), including vehicle collision avoidance or impact mitigation, following travel based on the inter-vehicle distance, constant-speed travel, vehicle collision warning, vehicle lane departure warning, and the like.
 Further, the microcomputer 12051 can perform cooperative control aimed at automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generator, the steering mechanism, the braking device, and the like based on the information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040.
 Further, the microcomputer 12051 can output control commands to the body system control unit 12020 based on the information outside the vehicle acquired by the vehicle exterior information detection unit 12030. For example, the microcomputer 12051 can perform cooperative control for anti-glare purposes, such as controlling the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030 and switching the high beam to the low beam.
 The audio/image output unit 12052 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying the occupants of the vehicle or the outside of the vehicle of information. In the example of FIG. 11, an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices. The display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
 FIG. 12 is a diagram showing an example of the installation positions of the imaging unit 12031. In FIG. 12, imaging units 12101, 12102, 12103, 12104, and 12105 are provided as the imaging unit 12031.
 The imaging units 12101, 12102, 12103, 12104, and 12105 are provided at positions such as the front nose, the side mirrors, the rear bumper, the back door, and the upper part of the windshield in the vehicle interior of the vehicle 12100. The imaging unit 12101 provided on the front nose and the imaging unit 12105 provided on the upper part of the windshield in the vehicle interior mainly acquire images in front of the vehicle 12100. The imaging units 12102 and 12103 provided on the side mirrors mainly acquire images of the sides of the vehicle 12100. The imaging unit 12104 provided on the rear bumper or the back door mainly acquires images behind the vehicle 12100. The imaging unit 12105 provided on the upper part of the windshield in the vehicle interior is mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
 FIG. 12 shows an example of the imaging ranges of the imaging units 12101 to 12104 with dash-dot lines. The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 as viewed from above is obtained.
 At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information. For example, at least one of the imaging units 12101 to 12104 may be a stereo camera composed of a plurality of imaging elements, or may be an imaging element having pixels for phase difference detection.
 For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 obtains the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (relative speed with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, the nearest three-dimensional object that is on the traveling path of the vehicle 12100 and that travels in substantially the same direction as the vehicle 12100 at a predetermined speed (for example, 0 km/h or more). Further, the microcomputer 12051 can set in advance the inter-vehicle distance to be secured from the preceding vehicle, and can perform automatic brake control (including following-stop control), automatic acceleration control (including following-start control), and the like. In this way, cooperative control aimed at automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, can be performed.
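The preceding-vehicle extraction described above (nearest object on the host vehicle's path, moving in roughly the same direction at or above a minimum speed) can be sketched as a simple filter-and-select step. The data layout, field names, and angle threshold below are illustrative assumptions, not values from the specification.

```python
def extract_preceding_vehicle(objects, min_speed_kmh=0.0):
    """Pick the nearest object on the host vehicle's path that moves in
    roughly the same direction at or above a given speed.

    Each object is a dict with hypothetical fields:
      distance_m     - distance from the host vehicle
      on_path        - True if the object lies on the traveling path
      relative_angle - heading difference to the host vehicle (degrees)
      speed_kmh      - speed estimated from the temporal change of distance
    """
    candidates = [
        o for o in objects
        if o["on_path"]
        and abs(o["relative_angle"]) < 10.0   # substantially the same direction
        and o["speed_kmh"] >= min_speed_kmh   # e.g. 0 km/h or more
    ]
    # The nearest qualifying object is treated as the preceding vehicle.
    return min(candidates, key=lambda o: o["distance_m"], default=None)


objs = [
    {"distance_m": 40.0, "on_path": True, "relative_angle": 2.0, "speed_kmh": 55.0},
    {"distance_m": 25.0, "on_path": True, "relative_angle": 1.0, "speed_kmh": 50.0},
    {"distance_m": 15.0, "on_path": False, "relative_angle": 0.0, "speed_kmh": 60.0},
]
lead = extract_preceding_vehicle(objs)   # the 25 m object: nearest on-path match
```

The 15 m object is closer but off the traveling path, so it is excluded before the nearest-distance selection.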
 For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data regarding three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, extract them, and use them for automatic obstacle avoidance. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that the driver of the vehicle 12100 can see and obstacles that are difficult to see. Then, the microcomputer 12051 determines a collision risk indicating the degree of danger of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can provide driving assistance for collision avoidance by outputting a warning to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
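The specification does not disclose a concrete collision-risk formula, but the described decision (compute a risk per obstacle, act when it exceeds a set value) can be sketched with a simple time-to-collision heuristic. The formula and thresholds are illustrative assumptions only.

```python
def collision_risk(distance_m, closing_speed_mps):
    """Return a simple time-to-collision (TTC) based risk score in [0, 1].
    Higher means more dangerous. The control law is an illustrative
    assumption, not the one disclosed in the specification."""
    if closing_speed_mps <= 0:                 # gap is opening: no collision course
        return 0.0
    ttc = distance_m / closing_speed_mps       # seconds until contact
    return min(1.0, 2.0 / max(ttc, 1e-6))      # risk grows as TTC shrinks


def driver_assistance(distance_m, closing_speed_mps, threshold=0.5):
    """Warn the driver (and force deceleration) when the collision risk
    is equal to or higher than the set value."""
    risk = collision_risk(distance_m, closing_speed_mps)
    return "warn_and_brake" if risk >= threshold else "no_action"


driver_assistance(50.0, 1.0)    # TTC = 50 s: no action needed
driver_assistance(10.0, 10.0)   # TTC = 1 s: warn and brake
```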
 At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays. For example, the microcomputer 12051 can recognize a pedestrian by determining whether a pedestrian is present in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points indicating the contour of an object to determine whether it is a pedestrian. When the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio/image output unit 12052 controls the display unit 12062 so that a rectangular contour line for emphasis is superimposed on the recognized pedestrian. Further, the audio/image output unit 12052 may control the display unit 12062 so that an icon or the like indicating the pedestrian is displayed at a desired position.
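The two-step procedure above (extract feature points, then pattern-match them against an object contour) can be sketched with toy data. A real system would use far more robust feature extraction and matching; this sketch, with assumed function names and a binary edge image, only illustrates the structure of the procedure.

```python
def extract_contour_points(image):
    """Step 1: collect the coordinates of edge (non-zero) pixels.
    `image` is a 2D list of 0/1 values standing in for an IR frame
    that has already been edge-detected."""
    return {(y, x)
            for y, row in enumerate(image)
            for x, v in enumerate(row) if v}


def matches_pedestrian(points, template, tolerance=0.8):
    """Step 2: naive pattern matching — the fraction of template contour
    points found among the extracted points must reach `tolerance`."""
    if not template:
        return False
    hits = sum(1 for p in template if p in points)
    return hits / len(template) >= tolerance


frame = [[0, 1, 0],
         [1, 1, 1],
         [0, 1, 0]]
# Hypothetical pedestrian-contour template as a set of (y, x) points.
template = {(0, 1), (1, 0), (1, 1), (1, 2), (2, 1)}
matches_pedestrian(extract_contour_points(frame), template)   # match found
```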
 An example of a vehicle control system to which the technology according to the present disclosure can be applied has been described above. The technology according to the present disclosure can be applied to the imaging unit 12031 and the like among the configurations described above. By applying the technology according to the present disclosure to the imaging unit 12031 and the like, the imaging unit 12031 and the like can be miniaturized, which makes it easier to design the interior and exterior of the vehicle 12100. Further, by applying the technology according to the present disclosure to the imaging unit 12031 and the like, it becomes possible to acquire a clear image with reduced noise, so that a captured image that is easier to see can be provided to the driver. This makes it possible to reduce driver fatigue.
 (Example of application to an endoscopic surgery system)
 The technology according to the present disclosure (the present technology) can be applied to various products. For example, the technology according to the present disclosure may be applied to an endoscopic surgery system.
 FIG. 13 is a diagram showing an example of a schematic configuration of an endoscopic surgery system to which the technology according to the present disclosure (the present technology) can be applied.
 FIG. 13 illustrates an operator (doctor) 11131 performing surgery on a patient 11132 on a patient bed 11133 using the endoscopic surgery system 11000. As illustrated, the endoscopic surgery system 11000 is composed of an endoscope 11100, other surgical tools 11110 such as a pneumoperitoneum tube 11111 and an energy treatment tool 11112, a support arm device 11120 that supports the endoscope 11100, and a cart 11200 on which various devices for endoscopic surgery are mounted.
 The endoscope 11100 is composed of a lens barrel 11101, a region of which of a predetermined length from the tip is inserted into the body cavity of the patient 11132, and a camera head 11102 connected to the base end of the lens barrel 11101. In the illustrated example, the endoscope 11100 is configured as a so-called rigid endoscope having a rigid lens barrel 11101, but the endoscope 11100 may instead be configured as a so-called flexible endoscope having a flexible lens barrel.
 An opening into which an objective lens is fitted is provided at the tip of the lens barrel 11101. A light source device 11203 is connected to the endoscope 11100, and the light generated by the light source device 11203 is guided to the tip of the lens barrel by a light guide extending inside the lens barrel 11101, and is irradiated through the objective lens toward the observation target in the body cavity of the patient 11132. The endoscope 11100 may be a forward-viewing endoscope, an oblique-viewing endoscope, or a side-viewing endoscope.
 An optical system and an imaging element are provided inside the camera head 11102, and reflected light (observation light) from the observation target is focused on the imaging element by the optical system. The observation light is photoelectrically converted by the imaging element, and an electric signal corresponding to the observation light, that is, an image signal corresponding to the observation image, is generated. The image signal is transmitted as RAW data to a camera control unit (CCU: Camera Control Unit) 11201.
 The CCU 11201 is composed of a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and the like, and comprehensively controls the operations of the endoscope 11100 and a display device 11202. Further, the CCU 11201 receives an image signal from the camera head 11102 and performs various kinds of image processing on the image signal for displaying an image based on that image signal, such as development processing (demosaic processing).
 The display device 11202 displays an image based on the image signal that has been image-processed by the CCU 11201, under the control of the CCU 11201.
 The light source device 11203 is composed of a light source such as an LED (Light Emitting Diode), for example, and supplies the endoscope 11100 with irradiation light for photographing the surgical site or the like.
 The input device 11204 is an input interface for the endoscopic surgery system 11000. Via the input device 11204, the user can input various kinds of information and instructions to the endoscopic surgery system 11000. For example, the user inputs an instruction to change the imaging conditions (type of irradiation light, magnification, focal length, and the like) of the endoscope 11100.
 The treatment tool control device 11205 controls the driving of the energy treatment tool 11112 for tissue cauterization, incision, blood vessel sealing, and the like. The pneumoperitoneum device 11206 sends gas into the body cavity of the patient 11132 through the pneumoperitoneum tube 11111 in order to inflate the body cavity for the purposes of securing the field of view of the endoscope 11100 and securing the operator's working space. The recorder 11207 is a device capable of recording various kinds of information related to the surgery. The printer 11208 is a device capable of printing various kinds of information related to the surgery in various formats such as text, images, and graphs.
 The light source device 11203, which supplies the endoscope 11100 with irradiation light for photographing the surgical site, can be composed of, for example, an LED, a laser light source, or a white light source composed of a combination thereof. When a white light source is configured by a combination of RGB laser light sources, the output intensity and output timing of each color (each wavelength) can be controlled with high accuracy, so the white balance of the captured image can be adjusted in the light source device 11203. In this case, it is also possible to capture images corresponding to each of R, G, and B in a time-division manner by irradiating the observation target with laser light from each of the RGB laser light sources in a time-division manner and controlling the driving of the imaging element of the camera head 11102 in synchronization with the irradiation timing. According to this method, a color image can be obtained without providing a color filter on the imaging element.
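The time-division color capture described above amounts to merging three monochrome frames, each taken under only R, G, or B illumination, into one RGB image; the color information comes from the illumination timing rather than from a color filter. A minimal sketch, with frames represented as 2D intensity lists (an assumption for illustration):

```python
def merge_time_division_frames(r_frame, g_frame, b_frame):
    """Combine three monochrome frames, each captured while the scene
    was lit only by the R, G, or B laser, into one RGB image.
    No color filter is needed on the imaging element because each
    frame is known to contain exactly one color channel."""
    h, w = len(r_frame), len(r_frame[0])
    return [[(r_frame[y][x], g_frame[y][x], b_frame[y][x])
             for x in range(w)]
            for y in range(h)]


# Three sequential captures of the same 1x2 scene under R, G, B light.
r = [[200, 10]]
g = [[30, 120]]
b = [[5, 250]]
rgb = merge_time_division_frames(r, g, b)   # one full-color frame
```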
 Further, the driving of the light source device 11203 may be controlled so as to change the intensity of the output light at predetermined time intervals. By controlling the driving of the imaging element of the camera head 11102 in synchronization with the timing of the change in light intensity to acquire images in a time-division manner, and then synthesizing those images, a high-dynamic-range image free of so-called blocked-up shadows and blown-out highlights can be generated.
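The high-dynamic-range synthesis described above can be sketched as a toy merge of two time-divided captures: where the bright capture clips, fall back to the (gain-matched) dark capture. The exposure ratio, clipping threshold, and merge rule are illustrative assumptions, not the method disclosed in the specification.

```python
def merge_exposures(low_exposure, high_exposure, threshold=200, ratio=4):
    """Toy HDR merge of two time-divided captures (2D intensity lists).
    Pixels that clip in the high-exposure frame fall back to the
    low-exposure frame scaled by the assumed exposure `ratio`, avoiding
    blown-out highlights while keeping shadow detail from the bright frame."""
    merged = []
    for lo_row, hi_row in zip(low_exposure, high_exposure):
        merged.append([
            lo * ratio if hi >= threshold else hi   # clipped -> use low frame
            for lo, hi in zip(lo_row, hi_row)
        ])
    return merged


low = [[10, 70]]     # short exposure: dark but never clipped
high = [[40, 255]]   # long exposure: bright areas clip at 255
hdr = merge_exposures(low, high)
```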
 Further, the light source device 11203 may be configured to be able to supply light in a predetermined wavelength band corresponding to special light observation. In special light observation, for example, so-called narrow band imaging is performed, in which a predetermined tissue such as a blood vessel in the mucosal surface layer is photographed with high contrast by utilizing the wavelength dependence of light absorption in body tissue and irradiating light in a narrower band than the irradiation light (that is, white light) used in normal observation. Alternatively, in special light observation, fluorescence observation may be performed, in which an image is obtained from fluorescence generated by irradiation with excitation light. In fluorescence observation, it is possible to irradiate body tissue with excitation light and observe the fluorescence from the body tissue (autofluorescence observation), or to locally inject a reagent such as indocyanine green (ICG) into body tissue and irradiate the body tissue with excitation light corresponding to the fluorescence wavelength of the reagent to obtain a fluorescence image. The light source device 11203 can be configured to be able to supply narrow-band light and/or excitation light corresponding to such special light observation.
 FIG. 14 is a block diagram showing an example of the functional configurations of the camera head 11102 and the CCU 11201 shown in FIG. 13.
 The camera head 11102 includes a lens unit 11401, an imaging unit 11402, a drive unit 11403, a communication unit 11404, and a camera head control unit 11405. The CCU 11201 includes a communication unit 11411, an image processing unit 11412, and a control unit 11413. The camera head 11102 and the CCU 11201 are communicably connected to each other by a transmission cable 11400.
 The lens unit 11401 is an optical system provided at the connection with the lens barrel 11101. Observation light taken in from the tip of the lens barrel 11101 is guided to the camera head 11102 and enters the lens unit 11401. The lens unit 11401 is configured by combining a plurality of lenses including a zoom lens and a focus lens.
 The imaging unit 11402 may be composed of one imaging element (a so-called single-plate type) or a plurality of imaging elements (a so-called multi-plate type). When the imaging unit 11402 is of the multi-plate type, for example, image signals corresponding to each of R, G, and B may be generated by the respective imaging elements and synthesized to obtain a color image. Alternatively, the imaging unit 11402 may be configured to have a pair of imaging elements for acquiring image signals for the right eye and the left eye corresponding to 3D (dimensional) display. The 3D display enables the operator 11131 to grasp the depth of biological tissue in the surgical site more accurately. When the imaging unit 11402 is of the multi-plate type, a plurality of lens units 11401 may be provided corresponding to the respective imaging elements.
 Further, the imaging unit 11402 does not necessarily have to be provided on the camera head 11102. For example, the imaging unit 11402 may be provided inside the lens barrel 11101, immediately behind the objective lens.
 The drive unit 11403 is composed of an actuator, and moves the zoom lens and the focus lens of the lens unit 11401 by a predetermined distance along the optical axis under the control of the camera head control unit 11405. As a result, the magnification and focus of the image captured by the imaging unit 11402 can be adjusted as appropriate.
 The communication unit 11404 is composed of a communication device for transmitting and receiving various kinds of information to and from the CCU 11201. The communication unit 11404 transmits the image signal obtained from the imaging unit 11402 as RAW data to the CCU 11201 via the transmission cable 11400.
 Further, the communication unit 11404 receives a control signal for controlling the driving of the camera head 11102 from the CCU 11201, and supplies it to the camera head control unit 11405. The control signal includes information on imaging conditions, such as information specifying the frame rate of the captured image, information specifying the exposure value at the time of imaging, and/or information specifying the magnification and focus of the captured image.
 The imaging conditions such as the frame rate, exposure value, magnification, and focus described above may be specified by the user as appropriate, or may be set automatically by the control unit 11413 of the CCU 11201 based on the acquired image signal. In the latter case, the endoscope 11100 is equipped with a so-called AE (Auto Exposure) function, AF (Auto Focus) function, and AWB (Auto White Balance) function.
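Of these automatic functions, AE is the easiest to sketch: one control iteration measures the mean brightness of the acquired image signal and nudges the exposure value toward a mid-gray target. The control law, target value, and damping gain below are illustrative assumptions, not the algorithm used by the CCU 11201.

```python
def auto_exposure_step(frame, current_exposure, target_mean=118,
                       gain=0.5, min_exp=1.0, max_exp=1000.0):
    """One iteration of a simple AE loop: adjust the exposure so the
    frame's mean brightness approaches a mid-gray target.
    `frame` is a 2D list of 8-bit pixel intensities."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    # Multiplicative correction, damped by `gain` to avoid oscillation.
    factor = 1.0 + gain * (target_mean / max(mean, 1e-6) - 1.0)
    return min(max_exp, max(min_exp, current_exposure * factor))


dark_frame = [[30, 40], [20, 30]]         # mean 30: exposure is increased
bright_frame = [[230, 240], [250, 236]]   # mean 239: exposure is decreased
auto_exposure_step(dark_frame, 10.0)
auto_exposure_step(bright_frame, 10.0)
```

Running the step repeatedly on fresh frames converges the mean brightness toward the target; the clamp keeps the exposure within the hardware's valid range.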
 The camera head control unit 11405 controls the driving of the camera head 11102 based on the control signal received from the CCU 11201 via the communication unit 11404.
 The communication unit 11411 is composed of a communication device for transmitting and receiving various kinds of information to and from the camera head 11102. The communication unit 11411 receives the image signal transmitted from the camera head 11102 via the transmission cable 11400.
 また、通信部11411は、カメラヘッド11102に対して、カメラヘッド11102の駆動を制御するための制御信号を送信する。画像信号や制御信号は、電気通信や光通信等によって送信することができる。 Further, the communication unit 11411 transmits a control signal for controlling the drive of the camera head 11102 to the camera head 11102. Image signals and control signals can be transmitted by telecommunications, optical communication, or the like.
 画像処理部11412は、カメラヘッド11102から送信されたRAWデータである画像信号に対して各種の画像処理を施す。 The image processing unit 11412 performs various image processing on the image signal which is the RAW data transmitted from the camera head 11102.
 The control unit 11413 performs various types of control relating to imaging of the surgical site and the like by the endoscope 11100 and to display of the captured image obtained by that imaging. For example, the control unit 11413 generates the control signal for controlling driving of the camera head 11102.
 The control unit 11413 also causes the display device 11202 to display the captured image showing the surgical site and the like, based on the image signal processed by the image processing unit 11412. In doing so, the control unit 11413 may recognize various objects in the captured image using various image recognition techniques. For example, by detecting the edge shape, color, and the like of objects included in the captured image, the control unit 11413 can recognize surgical tools such as forceps, a specific body part, bleeding, mist during use of the energy treatment tool 11112, and so on. When causing the display device 11202 to display the captured image, the control unit 11413 may use these recognition results to superimpose various kinds of surgery support information on the image of the surgical site. Superimposing the surgery support information and presenting it to the surgeon 11131 makes it possible to reduce the burden on the surgeon 11131 and to allow the surgeon 11131 to proceed with the surgery reliably.
 The transmission cable 11400 connecting the camera head 11102 and the CCU 11201 is an electrical signal cable supporting electrical signal communication, an optical fiber supporting optical communication, or a composite cable of these.
 In the illustrated example, communication is performed by wire using the transmission cable 11400, but communication between the camera head 11102 and the CCU 11201 may be performed wirelessly.
 An example of an endoscopic surgery system to which the technology according to the present disclosure can be applied has been described above. Among the configurations described above, the technology according to the present disclosure can be applied to, for example, the imaging unit 11402 of the camera head 11102. Applying the technology according to the present disclosure to the camera head 11102 makes it possible to miniaturize the camera head 11102 and the like, and therefore to make the endoscopic surgery system 11000 more compact. In addition, applying the technology according to the present disclosure to the camera head 11102 and the like makes it possible to acquire clear images with reduced noise, so that captured images that are easier to view can be provided to the surgeon, which in turn makes it possible to reduce the surgeon's fatigue.
 Although an endoscopic surgery system has been described here as an example, the technology according to the present disclosure may also be applied to, for example, a microscopic surgery system.
 (Example of application to a WSI (Whole Slide Imaging) system)
 The technology according to the present disclosure can be applied to various products. For example, the technology according to the present disclosure may be applied to a pathological diagnosis system in which a doctor or the like observes cells and tissues collected from a patient to diagnose a lesion, or to a system supporting such diagnosis (hereinafter referred to as a diagnosis support system). This diagnosis support system may be a WSI (Whole Slide Imaging) system that diagnoses lesions, or supports such diagnosis, based on images acquired using digital pathology technology.
 FIG. 15 is a diagram showing an example of a schematic configuration of a diagnosis support system 5500 to which the technology according to the present disclosure is applied. As shown in FIG. 15, the diagnosis support system 5500 includes one or more pathology systems 5510. It may further include a medical information system 5530 and a derivation device 5540.
 Each of the one or more pathology systems 5510 is a system used mainly by pathologists and is installed in, for example, a laboratory or a hospital. The pathology systems 5510 may be installed in different hospitals, and each is connected to the medical information system 5530 and the derivation device 5540 via various networks such as a WAN (Wide Area Network) (including the Internet), a LAN (Local Area Network), a public network, or a mobile communication network.
 Each pathology system 5510 includes a microscope 5511, a server 5512, a display control device 5513, and a display device 5514.
 The microscope 5511 has the functions of an optical microscope; it images an observation object placed on a glass slide and acquires a pathological image, which is a digital image. The observation object is, for example, tissue or cells collected from a patient, and may be a piece of an organ, saliva, blood, or the like.
 The server 5512 stores the pathological images acquired by the microscope 5511 in a storage unit (not shown). When the server 5512 receives a viewing request from the display control device 5513, it retrieves the requested pathological image from the storage unit (not shown) and sends the retrieved pathological image to the display control device 5513.
 The display control device 5513 sends the viewing request for a pathological image received from the user to the server 5512. The display control device 5513 then causes the display device 5514, which uses a liquid crystal display, EL (Electro-Luminescence), CRT (Cathode Ray Tube), or the like, to display the pathological image received from the server 5512. The display device 5514 may support 4K or 8K, and is not limited to a single device; a plurality of display devices may be used.
 When the observation object is a solid such as a piece of an organ, the observation object may be, for example, a stained thin section. A thin section may be prepared, for example, by thinly slicing a block piece cut out from a specimen such as an organ. When slicing, the block piece may be fixed with paraffin or the like.
 Various kinds of staining may be applied to the thin section, such as general staining showing tissue morphology, e.g., HE (Hematoxylin-Eosin) staining, and immunostaining showing the immune state of tissue, e.g., IHC (Immunohistochemistry) staining. One thin section may be stained with a plurality of different reagents, or two or more thin sections cut consecutively from the same block piece (also referred to as adjacent thin sections) may be stained with mutually different reagents.
 The microscope 5511 may include a low-resolution imaging unit for imaging at low resolution and a high-resolution imaging unit for imaging at high resolution. The low-resolution imaging unit and the high-resolution imaging unit may be different optical systems, or may be the same optical system. In the case of the same optical system, the resolution of the microscope 5511 may be changed according to the imaging target.
 The glass slide containing the observation object is placed on a stage located within the angle of view of the microscope 5511. The microscope 5511 first acquires an overall image within the angle of view using the low-resolution imaging unit, and identifies the region of the observation object from the acquired overall image. Next, the microscope 5511 divides the region where the observation object exists into a plurality of divided regions of a predetermined size, and sequentially images each divided region with the high-resolution imaging unit to acquire a high-resolution image of each divided region. To switch the target divided region, the stage may be moved, the imaging optical system may be moved, or both may be moved. Each divided region may overlap the adjacent divided regions in order to prevent imaging gaps caused, for example, by unintended slipping of the glass slide. The overall image may further include identification information for associating the overall image with the patient. This identification information may be, for example, a character string, a QR code (registered trademark), or the like.
 The high-resolution images acquired by the microscope 5511 are input to the server 5512. The server 5512 divides each high-resolution image into partial images of smaller size (hereinafter referred to as tile images). For example, the server 5512 divides one high-resolution image into a total of 100 tile images, 10 × 10 vertically and horizontally. If adjacent divided regions overlap, the server 5512 may apply stitching processing to mutually adjacent high-resolution images using a technique such as template matching. In that case, the server 5512 may generate the tile images by dividing the entire high-resolution image stitched together by the stitching processing. However, tile images may also be generated from the high-resolution images before the stitching processing.
 The server 5512 can also generate tile images of smaller size by further dividing the tile images. Such tile image generation may be repeated until tile images of the size set as the minimum unit are generated.
 Having generated the minimum-unit tile images in this way, the server 5512 executes, for all tile images, a tile composition process that generates one tile image by combining a predetermined number of adjacent tile images. This tile composition process can be repeated until finally a single tile image is generated. Through this processing, a tile image group with a pyramid structure is generated in which each layer consists of one or more tile images. In this pyramid structure, a tile image in one layer and a tile image in a different layer have the same number of pixels, but their resolutions differ. For example, when a total of four 2 × 2 tile images are combined to generate one tile image in the layer above, the resolution of the upper-layer tile image is 1/2 the resolution of the lower-layer tile images used for the composition.
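The pyramid construction described above can be summarized in code. The following is a minimal sketch under assumed conventions (square grayscale tiles, image dimensions a power-of-two multiple of the tile size, 2 × 2 averaging for composition); it is illustrative only, not the disclosed implementation:

```python
def downsample_2x(image):
    """Average each 2x2 pixel block, halving both dimensions."""
    h, w = len(image), len(image[0])
    return [[(image[y][x] + image[y][x + 1] +
              image[y + 1][x] + image[y + 1][x + 1]) // 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def build_pyramid(image, tile_size):
    """Return layers of tile grids, from full resolution down to one tile.

    Each layer halves the resolution of the layer below it, so every tile
    in every layer has the same pixel count (tile_size x tile_size), as in
    the pyramid structure described above.
    """
    layers = []
    while True:
        rows = len(image) // tile_size
        cols = len(image[0]) // tile_size
        tiles = [[[row[tx * tile_size:(tx + 1) * tile_size]
                   for row in image[ty * tile_size:(ty + 1) * tile_size]]
                  for tx in range(cols)]
                 for ty in range(rows)]
        layers.append(tiles)
        if rows == 1 and cols == 1:
            break
        image = downsample_2x(image)  # next layer: half the resolution
    return layers
```

A viewer can then serve `layers[k][ty][tx]` for the requested level of detail, which is the mechanism behind the level-of-detail switching described next.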
 By constructing a tile image group with such a pyramid structure, the level of detail of the observation object displayed on the display device can be switched depending on the layer to which the displayed tile image belongs. For example, when tile images of the lowest layer are used, a narrow region of the observation object is displayed in detail, and the higher the layer of the tile images used, the coarser and wider the displayed region of the observation object becomes.
 The generated pyramid-structured tile image group is stored in a storage unit (not shown) together with, for example, identification information that can uniquely identify each tile image (referred to as tile identification information). When the server 5512 receives, from another device (for example, the display control device 5513 or the derivation device 5540), a request to acquire a tile image that includes tile identification information, the server 5512 transmits the tile image corresponding to the tile identification information to that device.
 A tile image, which is a pathological image, may be generated for each imaging condition, such as focal length or staining condition. When tile images are generated for each imaging condition, a specific pathological image may be displayed side by side with another pathological image that corresponds to an imaging condition different from the specific imaging condition and that covers the same region as the specific pathological image. The specific imaging condition may be designated by the viewer. When the viewer designates a plurality of imaging conditions, pathological images of the same region corresponding to the respective imaging conditions may be displayed side by side.
 The server 5512 may also store the pyramid-structured tile image group in a storage device other than the server 5512, for example, a cloud server. Furthermore, part or all of the tile image generation processing described above may be executed by a cloud server or the like.
 The display control device 5513 extracts a desired tile image from the pyramid-structured tile image group in response to an input operation from the user, and outputs it to the display device 5514. Through this processing, the user can obtain the sensation of observing the observation object while changing the observation magnification. In other words, the display control device 5513 functions as a virtual microscope. The virtual observation magnification here corresponds in practice to the resolution.
 Any method may be used to capture the high-resolution images. The divided regions may be imaged while repeatedly stopping and moving the stage to acquire high-resolution images, or the divided regions may be imaged while moving the stage at a predetermined speed to acquire strip-shaped high-resolution images. The processing of generating tile images from high-resolution images is not an essential feature: by changing the resolution of the entire stitched high-resolution image in a stepwise manner, images whose resolution changes stepwise may be generated instead. Even in this case, it is possible to present to the user, in stages, everything from a low-resolution image of a wide area to a high-resolution image of a narrow area.
 The medical information system 5530 is a so-called electronic medical record system, and stores information related to diagnosis, such as information identifying patients, patient disease information, test information and image information used for diagnosis, diagnosis results, and prescribed medications. For example, a pathological image obtained by imaging an observation object of a certain patient can first be stored via the server 5512, and then displayed on the display device 5514 by the display control device 5513. A pathologist using the pathology system 5510 makes a pathological diagnosis based on the pathological image displayed on the display device 5514. The results of the pathological diagnosis made by the pathologist are stored in the medical information system 5530.
 The derivation device 5540 can perform analysis on pathological images. A learning model created by machine learning can be used for this analysis. The derivation device 5540 may derive, as the analysis results, classification results for specific regions, tissue identification results, and the like. Furthermore, the derivation device 5540 may derive identification results such as cell information, counts, positions, and brightness information, as well as scoring information for them. The information derived by the derivation device 5540 may be displayed on the display device 5514 of the pathology system 5510 as diagnosis support information.
 The derivation device 5540 may be a server system composed of one or more servers (including cloud servers) or the like. The derivation device 5540 may also be configured to be incorporated into, for example, the display control device 5513 or the server 5512 within the pathology system 5510. That is, the various analyses of pathological images may be executed within the pathology system 5510.
 Among the configurations described above, the technology according to the present disclosure can be suitably applied to, for example, the microscope 5511. Specifically, the technology according to the present disclosure can be applied to the low-resolution imaging unit and/or the high-resolution imaging unit of the microscope 5511. Applying the technology according to the present disclosure to the low-resolution imaging unit and/or the high-resolution imaging unit makes it possible to miniaturize these units and, in turn, the microscope 5511. This makes the microscope 5511 easier to transport, facilitating system installation, system reconfiguration, and the like. Furthermore, by applying the technology according to the present disclosure to the low-resolution imaging unit and/or the high-resolution imaging unit, part or all of the processing from acquisition of the pathological image to analysis of the pathological image can be executed on the fly within the microscope 5511, enabling faster and more accurate output of diagnosis support information.
 The configuration described above is not limited to diagnosis support systems, and can also be applied to biological microscopes in general, such as confocal microscopes, fluorescence microscopes, and video microscopes. The observation object may be a biological sample such as cultured cells, fertilized eggs, or sperm; a biomaterial such as a cell sheet or three-dimensional cell tissue; or a living organism such as a zebrafish or a mouse. The observation object is not limited to glass slides, and can also be observed while stored in a well plate, a petri dish, or the like.
 Furthermore, moving images may be generated from still images of the observation object acquired using the microscope. For example, a moving image may be generated from still images captured continuously over a predetermined period, or an image sequence may be generated from still images captured at predetermined intervals. By generating moving images from still images in this way, dynamic characteristics of the observation object can be analyzed using machine learning, such as movements like the pulsation, elongation, and migration of cancer cells, nerve cells, myocardial tissue, sperm, and the like, or the division process of cultured cells and fertilized eggs.
 The present technology can also have the following configurations.
 (1) An imaging device comprising: a pixel array unit having a plurality of pixels that perform photoelectric conversion; a converter that converts analog pixel signals output from the pixel array unit into digital pixel data; a signal processing unit that performs signal processing on the digital pixel data; and a brightness information detector that detects, based on the digital pixel data, brightness information of light incident on the pixel array unit, wherein the signal processing unit performs at least part of the signal processing based on the brightness information detected by the brightness information detector.
 (2) The imaging device according to (1), further comprising an information processing unit that performs at least one of predetermined recognition processing and detection processing based on the data signal-processed by the signal processing unit.
 (3) The imaging device according to (2), wherein the signal processing unit performs the signal processing in accordance with an external instruction and, when there is no external instruction, performs the signal processing in accordance with the brightness information detected by the brightness information detector, and the information processing unit performs at least one of the recognition processing and the detection processing based on data on which the signal processing unit has performed the signal processing in accordance with the brightness information detected by the brightness information detector.
 (4) The imaging device according to (1), wherein the signal processing unit has: a first signal processing unit that performs first signal processing on the digital pixel data; and a second signal processing unit that performs second signal processing on the digital pixel data or on data that has undergone at least part of the first signal processing, the second signal processing unit performs at least part of the second signal processing based on the brightness information detected by the brightness information detector, and the imaging device further comprises an information processing unit that performs at least one of predetermined recognition processing and detection processing based on output data of the second signal processing unit.
 (5) The imaging device according to (4), wherein the first signal processing and the second signal processing include common signal processing, and the second signal processing unit performs at least part of the common signal processing based on the brightness information detected by the brightness information detector.
 (6) The imaging device according to (5), wherein the common signal processing includes brightness adjustment processing of an image captured by the pixel array unit, and the second signal processing unit performs the brightness adjustment processing based on the brightness information detected by the brightness information detector.
 (7) The imaging device according to (5) or (6), wherein the common signal processing includes white balance adjustment processing of an image captured by the pixel array unit, and the second signal processing unit performs the white balance adjustment processing based on the brightness information detected by the brightness information detector.
 (8) The imaging device according to any one of (2) to (7), wherein the recognition processing includes processing computed by giving input data to a computational model generated by machine learning, and the input data is output data of the signal processing unit.
 (9) The imaging device according to any one of (4) to (8), wherein the second signal processing unit or the information processing unit performs motion detection processing based on the brightness information detected by the brightness information detector.
 (10) The imaging device according to any one of (1) to (9), further comprising a computing unit that computes, based on the brightness information detected by the brightness information detector, setting values for the signal processing unit to perform the signal processing.
 (11) The imaging device according to any one of (1) to (10), comprising: a first substrate having the pixel array unit; and a second substrate, stacked on the first substrate, having the converter, the signal processing unit, and the brightness information detector.
 (12) The imaging device according to (11), wherein the first substrate and the second substrate are bonded together by any of a CoC (Chip on Chip) method, a CoW (Chip on Wafer) method, or a WoW (Wafer on Wafer) method.
 (13) The imaging device according to any one of (1) to (12), further comprising a gain adjustment unit that performs gain adjustment of the analog pixel signals based on the brightness information detected by the brightness information detector.
 (14) The imaging device according to (13), wherein the gain adjustment unit performs the gain adjustment in accordance with an external instruction and, when there is no external instruction, performs the gain adjustment based on the brightness information detected by the brightness information detector.
 (15) An electronic device comprising: an imaging device that outputs captured image data; and a processor that performs predetermined signal processing on the image data, wherein the imaging device includes: a pixel array unit having a plurality of pixels that perform photoelectric conversion; a converter that converts analog pixel signals output from the pixel array unit into digital pixel data; a signal processing unit that performs signal processing on the digital pixel data; and a brightness information detector that detects, based on the digital pixel data, brightness information of light incident on the pixel array unit, the signal processing unit performs at least part of the signal processing based on the brightness information detected by the brightness information detector, and the image data signal-processed by the signal processing unit is supplied to the processor.
 (16) The electronic device according to (15), wherein the signal processing unit has: a first signal processing unit that performs first signal processing on the digital pixel data; and a second signal processing unit that performs second signal processing on the digital pixel data or on data that has undergone at least part of the first signal processing, the second signal processing unit performs at least part of the second signal processing based on the brightness information detected by the brightness information detector, and the electronic device further comprises an information processing unit that performs at least one of predetermined recognition processing and detection processing based on output data of the second signal processing unit.
 (17) The electronic device according to (16), wherein the processor has a first operation mode in which the processor sends instructions to the first signal processing unit and performs processing based on output data of the first signal processing unit, and a second operation mode, with lower power consumption than the first operation mode, in which the processor neither sends instructions to the first signal processing unit nor receives output data of the first signal processing unit, the second signal processing unit or the information processing unit performs motion detection processing based on the brightness information detected by the brightness information detector regardless of the operation mode of the processor, and, when the processor is in the second operation mode, the second signal processing unit or the information processing unit performs the motion detection processing based on the brightness information detected by the brightness information detector and, when motion is detected by the motion detection processing, returns the processor from the second operation mode to the first operation mode.
 (18) An imaging method comprising: a step of performing photoelectric conversion in a pixel array unit and outputting analog pixel signals; a step of converting the analog pixel signals into digital pixel data; a step of detecting, based on the digital pixel data, brightness information of light incident on the pixel array unit; and a step of performing signal processing on the digital pixel data, at least part of the signal processing being performed based on the brightness information.
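The flow of the imaging method in configuration (18) — photoelectric conversion, analog-to-digital conversion, brightness detection from the digital pixel data, and signal processing driven by that brightness — can be sketched as follows. This is an illustrative model only; the 10-bit quantization, the mean-based brightness detector, and the simple digital-gain brightness adjustment are assumptions, not the disclosed circuitry:

```python
def adc(analog_signals, full_scale=1.0, bits=10):
    """Convert analog pixel signals (0..full_scale) to digital pixel data."""
    levels = (1 << bits) - 1
    return [min(levels, max(0, round(v / full_scale * levels)))
            for v in analog_signals]

def detect_brightness(digital_pixels):
    """Brightness information: here, the mean digital code of the frame."""
    return sum(digital_pixels) / len(digital_pixels)

def signal_process(digital_pixels, brightness, target=512, bits=10):
    """Brightness adjustment whose gain is set by the detected brightness."""
    gain = target / max(brightness, 1.0)
    levels = (1 << bits) - 1
    return [min(levels, round(p * gain)) for p in digital_pixels]

def capture(analog_signals):
    digital = adc(analog_signals)               # convert to digital pixel data
    brightness = detect_brightness(digital)     # detect brightness information
    return signal_process(digital, brightness)  # processing uses brightness
```

Because the brightness is computed from the digital pixel data inside the pipeline, no external instruction is needed for the adjustment, mirroring the fallback behavior of configurations (3) and (14).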
The present technology can have the following configurations.
(1) A pixel array unit having a plurality of pixels that perform photoelectric conversion, and
A converter that converts an analog pixel signal output from the pixel array unit into digital pixel data,
A signal processing unit that performs signal processing on the digital pixel data, and
A brightness information detector that detects brightness information incident on the pixel array unit based on the digital pixel data is provided.
The signal processing unit is an imaging device that performs at least a part of the signal processing based on the brightness information detected by the brightness information detector.
(2) The image pickup apparatus according to (1) above, further comprising an information processing unit that performs at least one of a predetermined recognition process and a detection process based on the data signal processed by the signal processing unit.
(3) The signal processing unit performs the signal processing according to an instruction from the outside, and if there is no instruction from the outside, the signal processing according to the brightness information detected by the brightness information detector. And
The information processing unit performs at least one of the recognition process and the detection process based on the data obtained by the signal processing unit performing the signal processing according to the brightness information detected by the brightness information detector. , The imaging device according to (2) above.
(4) The signal processing unit
A first signal processing unit that performs a first signal processing on the digital pixel data,
It has a second signal processing unit that performs a second signal processing on the digital pixel data or data that has performed at least a part of the first signal processing.
The second signal processing unit performs at least a part of the second signal processing based on the brightness information detected by the brightness information detector.
The image pickup apparatus according to (1), further comprising an information processing unit that performs at least one of a predetermined recognition process and a detection process based on the output data of the second signal processing unit.
(5) The first signal processing and the second signal processing include common signal processing.
The imaging device according to (4), wherein the second signal processing unit performs at least a part of the common signal processing based on the brightness information detected by the brightness information detector.
(6) The common signal processing includes a brightness adjustment processing of an image captured by the pixel array unit.
The imaging device according to (5), wherein the second signal processing unit performs the brightness adjustment process based on the brightness information detected by the brightness information detector.
(7) The common signal processing includes a white balance adjustment process of an image captured by the pixel array unit.
The imaging device according to (5) or (6) above, wherein the second signal processing unit performs the white balance adjustment process based on the brightness information detected by the brightness information detector.
(8) The recognition process includes a process of giving input data to a calculation model generated by machine learning and performing a calculation.
The imaging device according to any one of (2) to (7) above, wherein the input data is output data of the signal processing unit.
(9) The second signal processing unit or the information processing unit performs motion detection processing based on the brightness information detected by the brightness information detector, any one of (4) to (8). The imaging apparatus according to.
(10) Of (1) to (9), the signal processing unit includes an arithmetic unit that calculates a set value for performing the signal processing based on the brightness information detected by the brightness information detector. The imaging device according to any one item.
(11) A first substrate having the pixel array portion and
The imaging image according to any one of (1) to (10), comprising the converter, the signal processing unit, and the second substrate having the brightness information detector, which are laminated on the first substrate. apparatus.
(12) The imaging device according to (11), wherein the first substrate and the second substrate are bonded together by any of a CoC (Chip on Chip) method, a CoW (Chip on Wafer) method, or a WoW (Wafer on Wafer) method.
(13) The imaging device according to any one of (1) to (12), further comprising a gain adjusting unit that adjusts the gain of the analog pixel signal based on the brightness information detected by the brightness information detector.
(14) The imaging device according to (13), wherein the gain adjusting unit performs the gain adjustment in response to an external instruction and, when there is no external instruction, performs the gain adjustment based on the brightness information detected by the brightness information detector.
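Aspect (14)'s fallback behavior, sketched with invented names: an external instruction wins when present; otherwise an auto-exposure gain is derived from the detected brightness:

```python
def select_gain(external_gain, detected_brightness, target=0.18):
    """Prefer an externally instructed gain; with no instruction (None),
    fall back to a gain computed from the detected brightness."""
    if external_gain is not None:
        return external_gain
    return target / max(detected_brightness, 1e-6)  # guard against zero

print(select_gain(2.5, 0.09))   # external instruction takes priority -> 2.5
print(select_gain(None, 0.09))  # fallback: 0.18 / 0.09 -> 2.0
```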
(15) An electronic device comprising:
an imaging device that outputs captured image data; and
a processor that performs predetermined signal processing on the image data, wherein
the imaging device includes:
a pixel array unit having a plurality of pixels that perform photoelectric conversion;
a converter that converts an analog pixel signal output from the pixel array unit into digital pixel data;
a signal processing unit that performs signal processing on the digital pixel data; and
a brightness information detector that detects, based on the digital pixel data, brightness information incident on the pixel array unit,
the signal processing unit performs at least a part of the signal processing based on the brightness information detected by the brightness information detector, and
the image data signal-processed by the signal processing unit is supplied to the processor.
(16) The electronic device according to (15), wherein the signal processing unit includes:
a first signal processing unit that performs first signal processing on the digital pixel data; and
a second signal processing unit that performs second signal processing on the digital pixel data or on data that has undergone at least a part of the first signal processing,
the second signal processing unit performs at least a part of the second signal processing based on the brightness information detected by the brightness information detector, and
the electronic device further comprises an information processing unit that performs at least one of a predetermined recognition process and a predetermined detection process based on output data of the second signal processing unit.
(17) The electronic device according to (16), wherein the processor has a first operation mode in which the processor sends instructions to the first signal processing unit and performs processing based on the output data of the first signal processing unit, and a second operation mode, with lower power consumption than the first operation mode, in which the processor neither sends instructions to the first signal processing unit nor receives the output data of the first signal processing unit,
the second signal processing unit or the information processing unit performs motion detection processing based on the brightness information detected by the brightness information detector regardless of the operation mode of the processor, and
when the processor is in the second operation mode, the second signal processing unit or the information processing unit performs the motion detection processing based on the brightness information detected by the brightness information detector and, when motion is detected by the motion detection processing, returns the processor from the second operation mode to the first operation mode.
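The mode transition of aspect (17) can be modeled as a tiny state machine (an illustration only; all names are invented): in the low-power second mode, a positive motion result returns the processor to the first mode.

```python
class ProcessorModes:
    """Toy model of the two operation modes described above."""
    FIRST = "first"    # active: instructs the first signal processing unit
    SECOND = "second"  # low-power: no instructions sent, no output received

    def __init__(self):
        self.mode = self.SECOND

    def on_motion_result(self, motion_detected):
        # Motion detection runs regardless of mode; only in the low-power
        # mode does a detection trigger a return to the active mode.
        if self.mode == self.SECOND and motion_detected:
            self.mode = self.FIRST
        return self.mode

p = ProcessorModes()
print(p.on_motion_result(False))  # still "second": no motion yet
print(p.on_motion_result(True))   # "first": woken by detected motion
```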
(18) An imaging method comprising:
a step of performing photoelectric conversion in a pixel array unit and outputting an analog pixel signal;
a step of converting the analog pixel signal into digital pixel data;
a step of detecting, based on the digital pixel data, brightness information incident on the pixel array unit; and
a step of performing signal processing on the digital pixel data, wherein at least a part of the signal processing is performed based on the brightness information.
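The four method steps of aspect (18) can be strung together in a short sketch; the 8-bit ADC, the frame-average brightness detector, and the gain-based signal processing are illustrative choices, not details from the disclosure:

```python
def capture(analog_signals, adc_bits=8, target=0.18):
    """Starting from analog pixel levels (0..1): convert to digital pixel
    data, detect brightness from that data, then perform brightness-driven
    signal processing on it."""
    full_scale = (1 << adc_bits) - 1
    # conversion step: analog level (0..1) -> digital pixel data
    digital = [round(v * full_scale) for v in analog_signals]
    # detection step: frame-average brightness from the digital data
    brightness = sum(digital) / (len(digital) * full_scale)
    # signal processing step, driven by the detected brightness
    gain = target / max(brightness, 1e-6)
    return [min(full_scale, round(p * gain)) for p in digital]

print(capture([0.09, 0.09]))  # underexposed frame pulled toward mid-gray: [46, 46]
```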
 Aspects of the present disclosure are not limited to the individual embodiments described above, but include various modifications that those skilled in the art could conceive, and the effects of the present disclosure are likewise not limited to the contents described above. That is, various additions, changes, and partial deletions are possible without departing from the conceptual idea and spirit of the present disclosure derived from the contents defined in the claims and their equivalents.
 1 imaging device, 2 electronic device, 3 application processor, 4 imaging unit, 5 control unit, 6 converter, 7 signal processing unit, 7a processing A unit, 7b AE gain adjustment unit, 7c processing B unit, 7d processing C unit, 7e AWB gain adjustment unit, 7f processing D unit, 7g AE gain adjustment unit, 7h AWB gain adjustment unit, 7i processing E unit, 7j processing F unit, 8 memory, 9 DSP, 10 selector, 11 brightness information detector, 12 optical system, 13 pixel array unit, 14 signal processing unit, 14a processing X unit, 14b AE gain adjustment unit, 14c AWB gain adjustment unit, 14d processing Y unit, 14e processing Z unit, 15 OPD, 20 network, 21 cloud server, 31 first substrate, 32 second substrate, 71 first signal processing unit, 72 second signal processing unit

Claims (18)

  1.  An imaging device comprising:
     a pixel array unit having a plurality of pixels that perform photoelectric conversion;
     a converter that converts an analog pixel signal output from the pixel array unit into digital pixel data;
     a signal processing unit that performs signal processing on the digital pixel data; and
     a brightness information detector that detects, based on the digital pixel data, brightness information incident on the pixel array unit,
     wherein the signal processing unit performs at least a part of the signal processing based on the brightness information detected by the brightness information detector.
  2.  The imaging device according to claim 1, further comprising an information processing unit that performs at least one of a predetermined recognition process and a predetermined detection process based on the data signal-processed by the signal processing unit.
  3.  The imaging device according to claim 2, wherein
     the signal processing unit performs the signal processing in response to an external instruction and, when there is no external instruction, performs the signal processing according to the brightness information detected by the brightness information detector, and
     the information processing unit performs at least one of the recognition process and the detection process based on data on which the signal processing unit has performed the signal processing according to the brightness information detected by the brightness information detector.
  4.  The imaging device according to claim 1, wherein the signal processing unit includes:
     a first signal processing unit that performs first signal processing on the digital pixel data; and
     a second signal processing unit that performs second signal processing on the digital pixel data or on data that has undergone at least a part of the first signal processing,
     the second signal processing unit performs at least a part of the second signal processing based on the brightness information detected by the brightness information detector, and
     the imaging device further comprises an information processing unit that performs at least one of a predetermined recognition process and a predetermined detection process based on output data of the second signal processing unit.
  5.  The imaging device according to claim 4, wherein the first signal processing and the second signal processing include common signal processing, and the second signal processing unit performs at least a part of the common signal processing based on the brightness information detected by the brightness information detector.
  6.  The imaging device according to claim 5, wherein the common signal processing includes brightness adjustment processing of an image captured by the pixel array unit, and the second signal processing unit performs the brightness adjustment processing based on the brightness information detected by the brightness information detector.
  7.  The imaging device according to claim 5, wherein the common signal processing includes white balance adjustment processing of an image captured by the pixel array unit, and the second signal processing unit performs the white balance adjustment processing based on the brightness information detected by the brightness information detector.
  8.  The imaging device according to claim 2, wherein the recognition process includes a process of performing a calculation by giving input data to a calculation model generated by machine learning, and the input data is output data of the signal processing unit.
  9.  The imaging device according to claim 4, wherein the second signal processing unit or the information processing unit performs motion detection processing based on the brightness information detected by the brightness information detector.
  10.  The imaging device according to claim 1, further comprising an arithmetic unit that calculates, based on the brightness information detected by the brightness information detector, a setting value used by the signal processing unit to perform the signal processing.
  11.  The imaging device according to claim 1, comprising:
     a first substrate having the pixel array unit; and
     a second substrate laminated on the first substrate and having the converter, the signal processing unit, and the brightness information detector.
  12.  The imaging device according to claim 11, wherein the first substrate and the second substrate are bonded together by any of a CoC (Chip on Chip) method, a CoW (Chip on Wafer) method, or a WoW (Wafer on Wafer) method.
  13.  The imaging device according to claim 1, further comprising a gain adjusting unit that adjusts the gain of the analog pixel signal based on the brightness information detected by the brightness information detector.
  14.  The imaging device according to claim 13, wherein the gain adjusting unit performs the gain adjustment in response to an external instruction and, when there is no external instruction, performs the gain adjustment based on the brightness information detected by the brightness information detector.
  15.  An electronic device comprising:
     an imaging device that outputs captured image data; and
     a processor that performs predetermined signal processing on the image data, wherein
     the imaging device includes:
     a pixel array unit having a plurality of pixels that perform photoelectric conversion;
     a converter that converts an analog pixel signal output from the pixel array unit into digital pixel data;
     a signal processing unit that performs signal processing on the digital pixel data; and
     a brightness information detector that detects, based on the digital pixel data, brightness information incident on the pixel array unit,
     the signal processing unit performs at least a part of the signal processing based on the brightness information detected by the brightness information detector, and
     the image data signal-processed by the signal processing unit is supplied to the processor.
  16.  The electronic device according to claim 15, wherein the signal processing unit includes:
     a first signal processing unit that performs first signal processing on the digital pixel data; and
     a second signal processing unit that performs second signal processing on the digital pixel data or on data that has undergone at least a part of the first signal processing,
     the second signal processing unit performs at least a part of the second signal processing based on the brightness information detected by the brightness information detector, and
     the electronic device further comprises an information processing unit that performs at least one of a predetermined recognition process and a predetermined detection process based on output data of the second signal processing unit.
  17.  The electronic device according to claim 16, wherein
     the processor has a first operation mode in which the processor sends instructions to the first signal processing unit and performs processing based on the output data of the first signal processing unit, and a second operation mode, with lower power consumption than the first operation mode, in which the processor neither sends instructions to the first signal processing unit nor receives the output data of the first signal processing unit,
     the second signal processing unit or the information processing unit performs motion detection processing based on the brightness information detected by the brightness information detector regardless of the operation mode of the processor, and
     when the processor is in the second operation mode, the second signal processing unit or the information processing unit performs the motion detection processing based on the brightness information detected by the brightness information detector and, when motion is detected by the motion detection processing, returns the processor from the second operation mode to the first operation mode.
  18.  An imaging method comprising:
     a step of performing photoelectric conversion in a pixel array unit and outputting an analog pixel signal;
     a step of converting the analog pixel signal into digital pixel data;
     a step of detecting, based on the digital pixel data, brightness information incident on the pixel array unit; and
     a step of performing signal processing on the digital pixel data, wherein at least a part of the signal processing is performed based on the brightness information.
PCT/JP2020/037944 2019-10-18 2020-10-07 Image capture apparatus, electronic device and image capture method WO2021075321A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019191348A JP2021068950A (en) 2019-10-18 2019-10-18 Imaging apparatus and electronic apparatus
JP2019-191348 2019-10-18

Publications (1)

Publication Number Publication Date
WO2021075321A1 true WO2021075321A1 (en) 2021-04-22

Family

ID=75537490

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/037944 WO2021075321A1 (en) 2019-10-18 2020-10-07 Image capture apparatus, electronic device and image capture method

Country Status (2)

Country Link
JP (1) JP2021068950A (en)
WO (1) WO2021075321A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022257574A1 * 2021-06-07 2022-12-15 Honor Device Co., Ltd. Fusion algorithm of AI automatic white balance and automatic white balance, and electronic device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024009750A1 * 2022-07-06 2024-01-11 Sony Semiconductor Solutions Corp Imaging device and operation method of imaging device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009147977A (en) * 2009-03-23 2009-07-02 Sony Corp Camera system and mobile camera system
JP2010204821A (en) * 2009-03-02 2010-09-16 Hitachi Constr Mach Co Ltd Working machine equipped with periphery monitoring device
JP2014220546A * 2013-04-30 2014-11-20 Canon Inc Image recorder and image recording method
JP2015106860A * 2013-12-02 2015-06-08 Ricoh Co Ltd Monitoring imaging system and program
JP2016171297A * 2015-03-12 2016-09-23 Sony Corp Solid-state imaging device, manufacturing method, and electronic device
JP2017079281A * 2015-10-21 2017-04-27 Sony Semiconductor Solutions Corp Semiconductor device and manufacturing method
WO2019069581A1 * 2017-10-02 2019-04-11 Sony Corp Image processing device and image processing method
WO2019087764A1 * 2017-10-30 2019-05-09 Sony Semiconductor Solutions Corp Backside irradiation type solid-state imaging device, method for manufacturing backside irradiation type solid-state imaging device, imaging device, and electronic apparatus


Also Published As

Publication number Publication date
JP2021068950A (en) 2021-04-30

Similar Documents

Publication Publication Date Title
JP7090666B2 (en) Solid-state image sensor, electronic equipment and control method
JP6689437B2 (en) Stacked light receiving sensor and electronic equipment
US11792551B2 (en) Stacked light receiving sensor and electronic apparatus
WO2021070894A1 (en) Imaging device, electronic apparatus, and imaging method
WO2020027233A1 (en) Imaging device and vehicle control system
JP7423491B2 (en) Solid-state imaging device and vehicle control system
WO2021075321A1 (en) Image capture apparatus, electronic device and image capture method
US20240021646A1 (en) Stacked light-receiving sensor and in-vehicle imaging device
WO2020027161A1 (en) Layered-type light-receiving sensor and electronic device
WO2021075352A1 (en) Image-capturing device and electronic apparatus
WO2021075292A1 (en) Light receiving device, electronic equipment, and light receiving method
WO2020027074A1 (en) Solid-state imaging device and electronic apparatus
TWI840429B (en) Multilayer photosensitive sensor and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20877633

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20877633

Country of ref document: EP

Kind code of ref document: A1