WO2020027161A1 - Stacked light-receiving sensor and electronic device

Stacked light-receiving sensor and electronic device

Info

Publication number
WO2020027161A1
WO2020027161A1 (PCT/JP2019/029909)
Authority
WO
WIPO (PCT)
Prior art keywords
substrate
unit
receiving sensor
image
processing unit
Prior art date
Application number
PCT/JP2019/029909
Other languages
English (en)
Japanese (ja)
Inventor
良仁 浴
Original Assignee
Sony Semiconductor Solutions Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation
Priority to CN201980049297.1A (CN112470462B)
Priority to KR1020217001553A (KR20210029205A)
Priority to US17/251,926 (US11735614B2)
Priority to EP19845060.3A (EP3833007B1)
Priority claimed from JP2019139439A (JP6689437B2)
Publication of WO2020027161A1

Classifications

    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L 27/00 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L 27/14 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L 27/144 Devices controlled by radiation
    • H01L 27/146 Imager structures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/70 SSIS architectures; Circuits associated therewith

Definitions

  • the present disclosure relates to a stacked light receiving sensor and an electronic device.
  • As an imaging apparatus for acquiring a still image or a moving image, there is a flat-type image sensor in which chips such as a sensor chip, a memory chip, and a DSP (Digital Signal Processor) chip are connected in parallel by a plurality of bumps.
  • the present disclosure proposes a stacked light-receiving sensor and an electronic device that can execute higher-level processing in a chip.
  • According to the present disclosure, a stacked light-receiving sensor includes a first substrate and a second substrate bonded to the first substrate. The first substrate includes a pixel array unit in which unit pixels are arranged in a two-dimensional matrix. The second substrate includes a converter that converts an analog pixel signal output from the pixel array unit into digital image data, and a processing unit that performs processing based on a neural network calculation model on data based on the digital image data. At least a part of the converter is disposed on a first side of the second substrate, and the processing unit is disposed on a second side of the second substrate opposite to the first side.
  • FIG. 1 is a block diagram illustrating a schematic configuration example of an imaging device as an electronic apparatus according to the first embodiment.
  • FIG. 2 is a schematic diagram illustrating an example of a chip configuration of the image sensor according to the first embodiment.
  • FIG. 3 is a diagram illustrating a layout example of a first substrate in a first layout example according to the first embodiment.
  • FIG. 4 is a diagram illustrating a layout example of a second substrate in a first layout example according to the first embodiment.
  • FIG. 5 is a diagram illustrating a layout example of a second substrate in a second layout example according to the first embodiment.
  • FIG. 6 is a diagram illustrating a layout example of a second substrate in a third layout example according to the first embodiment.
  • FIG. 7 is a diagram illustrating a layout example of a second substrate in a fourth layout example according to the first embodiment.
  • FIG. 8 is a diagram illustrating a layout example of a second substrate in a fifth layout example according to the first embodiment.
  • FIG. 9 is a diagram illustrating a layout example of a second substrate in a sixth layout example according to the first embodiment.
  • FIG. 10 is a diagram illustrating a layout example of a second substrate in a seventh layout example according to the first embodiment.
  • FIG. 11 is a diagram illustrating a layout example of a second substrate in an eighth layout example according to the first embodiment.
  • FIG. 12 is a diagram illustrating a layout example of a second substrate in a ninth layout example according to the first embodiment.
  • FIG. 13 is a layout diagram illustrating a schematic configuration example of a first substrate in an image sensor according to a second embodiment.
  • FIG. 14 is a schematic diagram illustrating a chip configuration example of the image sensor according to the second embodiment.
  • FIG. 15 is a layout diagram illustrating a schematic configuration example of a first substrate in an image sensor according to a third embodiment.
  • FIG. 16 is a layout diagram illustrating a schematic configuration example of a second substrate in the image sensor according to the third embodiment.
  • FIG. 17 is a schematic diagram illustrating a chip configuration example of the image sensor according to the third embodiment.
  • FIG. 18 is a block diagram illustrating a schematic configuration example of a vehicle control system.
  • FIG. 19 is a diagram illustrating an example of installation positions of imaging units.
  • FIG. 20 is a diagram illustrating an example of a schematic configuration of an endoscopic surgery system.
  • FIG. 21 is a block diagram illustrating an example of a functional configuration of a camera head and a CCU.
  • A block diagram illustrating a schematic configuration example of a diagnosis support system is also provided.
  • FIG. 1 is a block diagram illustrating a schematic configuration example of an imaging apparatus as an electronic apparatus according to the first embodiment.
  • the imaging device 1 includes an image sensor 10 that is a solid-state imaging device, and an application processor 20.
  • The image sensor 10 includes an imaging unit 11, a control unit 12, a converter (hereinafter referred to as an ADC (Analog-to-Digital Converter)) 17, a signal processing unit 13, a DSP (Digital Signal Processor) 14, a memory 15, and a selector (also referred to as an output unit) 16.
  • the control unit 12 controls each unit in the image sensor 10 according to, for example, a user operation or a set operation mode.
  • The imaging unit 11 includes, for example, an optical system 104 including a zoom lens, a focus lens, an aperture, and the like, and a pixel array unit 101 in which unit pixels (unit pixels 101a in FIG. 2) including light receiving elements such as photodiodes are arranged in a two-dimensional matrix. Light incident from the outside passes through the optical system 104 and forms an image on the light receiving surface of the pixel array unit 101 on which the light receiving elements are arranged. Each unit pixel 101a of the pixel array unit 101 converts the light incident on its light receiving element into an electric charge and accumulates the charge corresponding to the amount of incident light in a readable manner.
  • The ADC 17 generates digital image data by converting the analog pixel signal of each unit pixel 101a read from the imaging unit 11 into a digital value, and outputs the generated image data to the signal processing unit 13 and/or the memory 15.
  • the ADC 17 may include a voltage generation circuit that generates a drive voltage for driving the imaging unit 11 from a power supply voltage or the like.
  • the signal processing unit 13 performs various signal processing on digital image data input from the ADC 17 or digital image data read from the memory 15 (hereinafter, referred to as processing target image data). For example, when the image data to be processed is a color image, the signal processing unit 13 converts the format of the image data into YUV image data, RGB image data, or the like. In addition, the signal processing unit 13 performs, for example, processing such as noise removal and white balance adjustment on the image data to be processed as necessary. In addition, the signal processing unit 13 performs various signal processing (also referred to as pre-processing) necessary for the DSP 14 to process the image data to be processed.
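  • The pre-processing steps listed above can be pictured with a minimal sketch; the white-balance gains, the 3x3 box blur used as "noise removal", and the BT.601 conversion below are illustrative assumptions written in Python, not the on-chip implementation of the signal processing unit 13.

```python
import numpy as np

def preprocess(raw_rgb: np.ndarray, wb_gains=(1.8, 1.0, 1.5)) -> np.ndarray:
    """Apply white balance, simple noise removal, and RGB -> YUV conversion."""
    balanced = raw_rgb.astype(np.float32) * np.array(wb_gains, dtype=np.float32)
    # Very simple noise removal: 3x3 box blur applied per channel.
    h, w = balanced.shape[:2]
    padded = np.pad(balanced, ((1, 1), (1, 1), (0, 0)), mode="edge")
    denoised = sum(padded[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)) / 9.0
    # Format conversion: RGB -> YUV using BT.601 coefficients.
    r, g, b = denoised[..., 0], denoised[..., 1], denoised[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return np.stack([y, u, v], axis=-1)

# Hypothetical usage on a small 8-bit RGB frame.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
yuv = preprocess(frame)
```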
  • the DSP 14 executes, for example, a program stored in the memory 15 to perform various processes using a learned model (also called a neural network calculation model) created by machine learning using a deep neural network (DNN).
  • This learned model may be designed on the basis of parameters generated by inputting, to a predetermined machine learning model, learning data in which an input signal corresponding to the output of the pixel array unit 101 is associated with a label for that input signal.
  • the predetermined machine learning model may be a learning model using a multilayer neural network (also referred to as a multilayer neural network model).
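  • As a rough illustration of how such parameters could be set, the toy Python sketch below fits a small two-layer network to labelled input signals by gradient descent; the network size, loss, learning rate, and data are assumptions made purely for illustration and do not reflect how the actual learned model is produced.

```python
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.standard_normal((100, 16)).astype(np.float32)   # input signals
labels = (inputs.sum(axis=1) > 0).astype(np.float32)          # associated labels

w1 = rng.standard_normal((8, 16)) * 0.1                       # parameters to be fitted
w2 = rng.standard_normal(8) * 0.1
for _ in range(200):                                           # gradient-descent loop
    hidden = np.maximum(inputs @ w1.T, 0.0)                    # hidden layer with ReLU
    pred = 1.0 / (1.0 + np.exp(-(hidden @ w2)))                # sigmoid output
    grad_out = (pred - labels) / len(inputs)                   # d(loss)/d(logit)
    grad_hidden = np.outer(grad_out, w2) * (hidden > 0)        # back-propagate through ReLU
    w2 -= 0.5 * hidden.T @ grad_out
    w1 -= 0.5 * grad_hidden.T @ inputs
```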
  • For example, the DSP 14 executes arithmetic processing based on the learned model stored in the memory 15 by, for instance, multiplying the image data by the dictionary coefficients stored in the memory 15.
  • the result (calculation result) obtained by such calculation processing is output to the memory 15 and / or the selector 16.
  • the calculation result may include image data obtained by executing a calculation process using the learned model, and various information (metadata) obtained from the image data.
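  • Read literally, the processing amounts to matrix multiplications between stored coefficients and the image data; the sketch below shows one way such an operation could look in Python, where the layer shapes, coefficient names, and score-style metadata output are assumptions for illustration.

```python
import numpy as np

def dsp_inference(image: np.ndarray, coefficients: dict) -> dict:
    """Multiply the image data by stored coefficients and return metadata."""
    x = image.astype(np.float32).ravel() / 1023.0           # flatten 10-bit pixel data
    hidden = np.maximum(coefficients["w1"] @ x, 0.0)         # first layer + ReLU
    scores = coefficients["w2"] @ hidden                     # second layer: class scores
    return {"scores": scores, "label": int(np.argmax(scores))}

# Hypothetical "dictionary coefficients" standing in for what memory 15 would hold.
rng = np.random.default_rng(0)
coeffs = {"w1": rng.standard_normal((64, 32 * 32)).astype(np.float32),
          "w2": rng.standard_normal((10, 64)).astype(np.float32)}
result = dsp_inference(rng.integers(0, 1024, (32, 32)), coeffs)
```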
  • the DSP 14 may include a memory controller for controlling access to the memory 15.
  • The image data to be processed by the DSP 14 may be image data normally read from the pixel array unit 101, or may be reduced image data whose data size has been reduced by thinning out pixels of the normally read image data.
  • Alternatively, the image data may be read out from the pixel array unit 101 with a smaller data size than usual by performing a readout in which pixels are thinned out.
  • the normal reading here may be reading without skipping pixels.
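  • A thinned-out readout reduces the data handed to the DSP 14 roughly by the square of the skip factor; the short Python sketch below shows the idea with a step of 2, which is an arbitrary illustrative value.

```python
import numpy as np

def thin_out(image: np.ndarray, step: int = 2) -> np.ndarray:
    """Return image data with pixels skipped at the given step in both directions."""
    return image[::step, ::step]

full_frame = np.arange(16).reshape(4, 4)
reduced_frame = thin_out(full_frame)   # 2x2 array: one quarter of the original data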
  • the memory 15 stores the image data output from the ADC 17, the image data signal-processed by the signal processing unit 13, the calculation result obtained by the DSP 14, and the like as necessary.
  • the memory 15 stores the algorithm of the learned model executed by the DSP 14 as a program and dictionary coefficients.
  • The DSP 14 may execute the arithmetic processing by training a learning model, in which the weights of various parameters in the learning model are adjusted using the learning data, by preparing a plurality of learning models and switching the learning model to be used according to the content of the arithmetic processing, or by acquiring a learned model from an external device.
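  • The option of holding several learned models and switching between them can be pictured as below; the task keys and placeholder models are hypothetical and stand in for models that would actually reside in the memory 15 or be acquired from an external device.

```python
# Placeholder "learned models"; real models would be loaded from the memory 15
# or acquired from an external device.
learned_models = {
    "object_detection": lambda image: {"objects": []},
    "scene_classification": lambda image: {"label": "outdoor"},
}

def run_arithmetic_processing(task: str, image):
    """Pick the learned model that matches the requested processing and run it."""
    model = learned_models[task]
    return model(image)
```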
  • The selector 16 selectively outputs the image data output from the DSP 14, the image data stored in the memory 15, or the calculation result, for example, in accordance with a selection control signal from the control unit 12. When the DSP 14 does not process the image data output from the signal processing unit 13 and the selector 16 is to output the image data output from the DSP 14, the selector 16 outputs the image data from the signal processing unit 13 as it is.
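  • Functionally the selector 16 behaves like a multiplexer driven by the control unit 12; the Python sketch below captures that behaviour, with the enum values being assumptions for illustration.

```python
from enum import Enum, auto

class Select(Enum):
    DSP_OUTPUT = auto()
    MEMORY = auto()
    SIGNAL_PROCESSED = auto()

def selector(control: Select, dsp_output, memory_data, processed_image):
    """Forward one of the three inputs according to the selection control signal."""
    if control is Select.DSP_OUTPUT:
        return dsp_output
    if control is Select.MEMORY:
        return memory_data
    return processed_image   # pass the signal-processed image through unchanged
```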
  • the image data and the calculation result output from the selector 16 as described above are input to the application processor 20 that processes display and a user interface.
  • the application processor 20 is configured using, for example, a CPU (Central Processing Unit) and executes an operating system and various application software.
  • the application processor 20 may have functions such as a GPU (Graphics Processing Unit) and a baseband processor.
  • The application processor 20 performs various processes as needed on the input image data and calculation results, presents them to the user, and transmits them to the external cloud server 30 via the predetermined network 40.
  • Various networks such as the Internet, a wired LAN (Local Area Network) or a wireless LAN, a mobile communication network, and Bluetooth (registered trademark) can be applied to the predetermined network 40.
  • The transmission destination of the image data and the calculation result is not limited to the cloud server 30, and may be any of various information processing devices having a communication function, such as a standalone server, a file server that stores various data, or a communication terminal such as a mobile phone.
  • FIG. 2 is a schematic diagram illustrating an example of a chip configuration of the image sensor according to the present embodiment.
  • As shown in FIG. 2, the image sensor 10 has a stacked structure in which a rectangular flat first substrate (die) 100 and a rectangular flat second substrate (die) 120 are bonded together.
  • The size of the first substrate 100 and the size of the second substrate 120 may be the same, for example. Further, the first substrate 100 and the second substrate 120 may each be a semiconductor substrate such as a silicon substrate.
  • On the second substrate 120, the ADC 17, the control unit 12, the signal processing unit 13, the DSP 14, the memory 15, and the selector 16 are arranged.
  • an interface circuit, a driver circuit, and the like may be arranged on the second substrate 120.
  • For bonding the first substrate 100 and the second substrate 120, a so-called CoC (Chip-on-Chip) method may be used, in which the first substrate 100 and the second substrate 120 are each singulated into chips and then bonded to each other.
  • Alternatively, a so-called CoW (Chip-on-Wafer) method may be used, in which one of the first substrate 100 and the second substrate 120 (for example, the first substrate 100) is singulated into chips and the singulated first substrate 100 is bonded to the second substrate 120 before singulation (that is, in a wafer state).
  • A so-called WoW (Wafer-on-Wafer) method may also be used, in which the first substrate 100 and the second substrate 120 are bonded together while both are in a wafer state.
  • As a method for bonding the first substrate 100 and the second substrate 120, for example, plasma bonding or the like can be used.
  • the present invention is not limited to this, and various joining methods may be used.
  • Since the DSP 14 operates as a processing unit that executes operations based on the learned model, if the DSP 14 performs arithmetic processing while the pixel array unit 101 is being reset, while the pixel array unit 101 is being exposed, or while the pixel signal is being read from each unit pixel 101a of the pixel array unit 101, noise (fluctuations in the current or the electric field, etc.) accompanying that processing may be superimposed on the pixel signal.
  • FIGS. 3 and 4 are diagrams for explaining a first layout example according to the present embodiment.
  • FIG. 3 shows a layout example of the first substrate 100
  • FIG. 4 shows a layout example of the second substrate 120.
  • As shown in FIG. 3, on the first substrate 100, the pixel array unit 101 of the imaging unit 11 in the configuration of the image sensor 10 shown in FIG. 1 is arranged.
  • the optical system 104 is provided at a position corresponding to the pixel array unit 101.
  • the pixel array unit 101 is arranged to be shifted toward one side L101 among the four sides L101 to L104 of the first substrate 100.
  • the pixel array unit 101 is arranged such that the center O101 is closer to the side L101 than the center O100 of the first substrate 100.
  • the side L101 may be, for example, the shorter side.
  • the present invention is not limited to this, and the pixel array unit 101 may be arranged to be offset on the longer side.
  • In a region near the side L101 among the four sides of the first substrate 100, in other words, in a region between the side L101 and the pixel array unit 101, a TSV array 102 in which a plurality of through wirings (Through-Silicon Vias; hereinafter referred to as TSVs) penetrating the first substrate 100 are arranged is provided as wiring for electrically connecting each unit pixel 101a of the pixel array unit 101 to the ADC 17 arranged on the second substrate 120.
  • The TSV array 102 may also be provided in a region near one of the two sides L103 and L104 intersecting the side L101 (for example, the side L104, though it may be the side L103), in other words, in a region between that side and the pixel array unit 101.
  • Each of the sides L102 and L103, toward which the pixel array unit 101 is not offset, is provided with a pad array 103 including a plurality of pads arranged linearly.
  • The pads included in the pad array 103 include, for example, pads (also referred to as power supply pins) to which a power supply voltage for analog circuits such as the pixel array unit 101 and the ADC 17 is applied, and pads (also power supply pins) to which a power supply voltage for digital circuits such as the signal processing unit 13, the DSP 14, the memory 15, the selector 16, and the control unit 12 is applied.
  • Each pad is electrically connected to an external power supply circuit or interface circuit via, for example, a wire. It is preferable that the pad array 103 and the TSV array 102 be sufficiently separated from each other so that the influence of signal reflection from the wires connected to the pads in the pad array 103 can be ignored.
  • the memory 15 is divided into two areas, a memory 15A and a memory 15B.
  • the ADC 17 is divided into two areas, an ADC 17A and a DAC (Digital to Analog Converter) 17B.
  • the DAC 17B supplies a reference voltage for AD conversion to the ADC 17A, and is included in a part of the ADC 17 in a broad sense.
  • the selector 16 is also disposed on the second substrate 120.
  • The second substrate 120 is provided with wiring 122 electrically connected to the TSVs of the TSV array 102 penetrating the first substrate 100 (hereinafter, simply referred to as the TSV array 102), and with a pad array 123 in which a plurality of pads electrically connected to the respective pads of the pad array 103 of the first substrate 100 are arranged linearly.
  • For the connection between the TSV array 102 and the wiring 122, for example, a so-called twin-TSV method, in which two TSVs, namely a TSV provided in the first substrate 100 and a TSV provided from the first substrate 100 to the second substrate 120, are connected to each other on the chip outer surface, or a so-called shared-TSV method, in which the connection is made by a common TSV provided from the first substrate 100 to the second substrate 120, can be employed.
  • However, the connection is not limited to these, and various connection forms can be adopted, such as a so-called Cu-Cu bonding method in which copper (Cu) exposed on the bonding surface of the first substrate 100 and copper exposed on the bonding surface of the second substrate 120 are bonded to each other.
  • The connection between each pad in the pad array 103 of the first substrate 100 and each pad in the pad array 123 of the second substrate 120 is made by, for example, wire bonding; however, connection forms such as through holes and castellation may also be used.
  • In the layout example of the second substrate 120, the vicinity of the wiring 122 connected to the TSV array 102 is defined as the upstream side, and the ADC 17A, the signal processing unit 13, and the DSP 14 are arranged in this order from upstream along the flow of the signal read from the pixel array unit 101. That is, the ADC 17A, to which the pixel signal read from the pixel array unit 101 is input first, is arranged near the wiring 122 on the most upstream side, the signal processing unit 13 is arranged next, and the DSP 14 is arranged in the region farthest from the wiring 122.
  • The control unit 12 is arranged, for example, near the wiring 122 on the upstream side. In FIG. 4, the control unit 12 is arranged between the ADC 17A and the signal processing unit 13. With such a layout, it is possible to reduce the signal delay, the signal propagation loss, and the power consumption, and to improve the S/N ratio, when the control unit 12 controls the pixel array unit 101.
  • In addition, the signal pins and power supply pins for the analog circuits are arranged together near the analog circuits (for example, on the lower side in FIG. 4), the signal pins and power supply pins for the remaining digital circuits are arranged together near the digital circuits (for example, on the upper side in FIG. 4), and the power supply pins for the analog circuits and the power supply pins for the digital circuits can thus be sufficiently separated from each other.
  • the DSP 14 is arranged on the opposite side of the ADC 17A, which is the most downstream side.
  • With such an arrangement, the DSP 14 can be disposed in a region that does not overlap the pixel array unit 101 in the stacking direction of the first substrate 100 and the second substrate 120 (hereinafter, simply referred to as the vertical direction).
  • the DSP 14 and the signal processing unit 13 are connected by a part of the DSP 14 or a connection unit 14a formed by a signal line.
  • the selector 16 is arranged, for example, near the DSP 14.
  • When the connection portion 14a is formed by a part of the DSP 14 or by signal lines, a part of the DSP 14 may overlap the pixel array unit 101 in the vertical direction; even in such a case, the intrusion of noise into the pixel array unit 101 can be reduced compared with the case where the entire DSP 14 overlaps the pixel array unit 101 in the vertical direction.
  • the memories 15A and 15B are arranged, for example, so as to surround the DSP 14 from three directions. In this way, by disposing the memories 15A and 15B so as to surround the DSP 14, it is possible to shorten the overall distance while averaging the wiring distance between each memory element and the DSP 14 in the memory 15. This makes it possible to reduce signal delay, signal propagation loss, and power consumption when the DSP 14 accesses the memory 15.
  • the pad array 123 is disposed, for example, at a position on the second substrate 120 corresponding to the pad array 103 of the first substrate 100 in the vertical direction.
  • a pad located near the ADC 17A is used for transmitting a power supply voltage and an analog signal for an analog circuit (mainly, the ADC 17A).
  • On the other hand, pads located near the control unit 12, the signal processing unit 13, the DSP 14, and the memories 15A and 15B are used for propagating a power supply voltage and digital signals for the digital circuits (mainly, the control unit 12, the signal processing unit 13, the DSP 14, and the memories 15A and 15B). With such a pad layout, the length of the wiring connecting each pad to each part can be reduced. This makes it possible to reduce the signal delay, reduce the propagation loss of signals and the power supply voltage, improve the S/N ratio, and reduce the power consumption.
  • the layout example of the first substrate 100 may be the same as the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 5 is a diagram showing a layout example of the second substrate according to the second layout example.
  • the DSP 14 is arranged at the center of the area where the DSP 14 and the memory 15 are arranged.
  • the memory 15 is arranged so as to surround the DSP 14 from four directions.
  • the DSP 14 and the pixel array unit 101 are arranged so as not to overlap in the vertical direction.
  • However, the arrangement is not limited to this, and a part of the DSP 14 may overlap the pixel array unit 101 in the vertical direction. Even in such a case, the entry of noise into the pixel array unit 101 can be reduced compared with the case where the entire DSP 14 overlaps the pixel array unit 101 in the vertical direction.
  • the layout example of the first substrate 100 may be the same as the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 6 is a diagram showing a layout example of the second substrate according to the third layout example.
  • the DSP 14 is arranged adjacent to the signal processing unit 13 in the same layout as the first layout example. According to such a configuration, the signal line from the signal processing unit 13 to the DSP 14 can be shortened. This makes it possible to reduce signal delay, reduce signal and power supply voltage propagation loss, improve S / N ratio, and reduce power consumption.
  • the memory 15 is arranged so as to surround the DSP 14 from three directions. This makes it possible to reduce signal delay, signal propagation loss, and power consumption when the DSP 14 accesses the memory 15.
  • In this arrangement, a part of the DSP 14 overlaps the pixel array unit 101 in the vertical direction; however, even in such a case, the entry of noise into the pixel array unit 101 can be reduced compared with the case where the entire DSP 14 overlaps the pixel array unit 101 in the vertical direction.
  • the layout example of the first substrate 100 may be the same as the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 7 is a diagram showing a layout example of the second substrate according to the fourth layout example.
  • In the fourth layout example, in a layout similar to the third layout example, that is, in a layout in which the DSP 14 is arranged adjacent to the signal processing unit 13, the DSP 14 is arranged at a position away from both of the two TSV arrays 102.
  • the memory 15 is arranged so as to surround the DSP 14 from two directions. This makes it possible to reduce signal delay, signal propagation loss, and power consumption when the DSP 14 accesses the memory 15.
  • Also in this case, a part of the DSP 14 overlaps the pixel array unit 101 in the vertical direction; however, even so, the entry of noise into the pixel array unit 101 can be reduced compared with the case where the entire DSP 14 overlaps the pixel array unit 101 in the vertical direction.
  • the layout example of the first substrate 100 may be the same as the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 8 is a diagram showing a layout example of the second substrate according to the fifth layout example.
  • In the fifth layout example, in a layout similar to the first layout example, that is, in a layout in which the DSP 14 is arranged on the most downstream side, the DSP 14 is arranged at a position away from both of the two TSV arrays 102.
  • the layout example of the first substrate 100 may be the same as the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 9 is a diagram showing a layout example of the second substrate according to the sixth layout example.
  • the sixth layout example has a configuration in which the DSP 14 is sandwiched between memories 15C and 15D divided into two regions from above and below in the drawing.
  • the layout example of the first substrate 100 may be the same as the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 10 is a diagram showing a layout example of the second substrate according to the seventh layout example.
  • the seventh layout example has a configuration in which the memory 15 is sandwiched from above and below by DSPs 14A and 14B divided into two regions.
  • the layout example of the first substrate 100 may be the same as the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 11 is a diagram showing a layout example of the second substrate according to the eighth layout example. As shown in FIG. 11, the eighth layout example has a configuration in which the DSP 14 is sandwiched between memories 15E and 15F divided into two regions from the left and right directions in the drawing.
  • the layout example of the first substrate 100 may be the same as the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 12 is a diagram showing a layout example of the second substrate according to the ninth layout example. As shown in FIG. 12, the ninth layout example has a configuration in which the memory 15 is sandwiched between left and right directions in the drawing by DSPs 14C and 14D divided into two regions.
  • As described above, in the present embodiment, the positional relationship between the pixel array unit 101 and the DSP 14 is adjusted so that the DSP 14 of the second substrate 120 does not overlap the pixel array unit 101 in the stacking direction (vertical direction) of the first substrate 100 and the second substrate 120. Accordingly, the intrusion of noise caused by the signal processing of the DSP 14 into the pixel array unit 101 can be reduced. Therefore, even when the DSP 14 is operated as a processing unit that performs operations based on the learned model, an image with reduced quality deterioration can be obtained.
  • An imaging device as an electronic apparatus according to the second embodiment may be, for example, the same as the imaging device 1 described with reference to FIG. 1 in the first embodiment, and therefore a detailed description thereof is omitted here.
  • FIG. 13 is a layout diagram illustrating a schematic configuration example of the first substrate in the image sensor according to the present embodiment.
  • FIG. 14 is a schematic diagram illustrating a chip configuration example of the image sensor according to the present embodiment.
  • the size of the first substrate 200 is smaller than the size of the second substrate 120.
  • In the present embodiment, the size of the first substrate 200 is reduced in accordance with the size of the pixel array unit 101. By reducing the size of the first substrate 200 in this way, a larger number of first substrates 200 can be manufactured from one semiconductor wafer. Further, the chip size of the image sensor 10 can be further reduced.
  • For bonding the first substrate 200 and the second substrate 120, a CoC (Chip-on-Chip) method, in which the first substrate 200 and the second substrate 120 are each singulated into chips and then bonded to each other, or a CoW (Chip-on-Wafer) method, in which the singulated first substrate 200 is bonded to the second substrate 120 in a wafer state, can be adopted.
  • the layout of the first substrate 200 may be, for example, the same as the layout of the first substrate 100 illustrated in the first embodiment except for the upper part.
  • the layout of the second substrate 120 may be, for example, the same as that of the second substrate 120 illustrated in the first embodiment.
  • The bonding position of the first substrate 200 with respect to the second substrate 120 may be, as in the first embodiment, a position at which at least a part of the pixel array unit 101 does not overlap the DSP 14 of the second substrate 120 in the vertical direction.
  • The imaging device as an electronic apparatus according to the third embodiment may be, for example, the same as the imaging device 1 described with reference to FIG. 1 in the first embodiment, and therefore a detailed description thereof is omitted here.
  • FIG. 15 is a layout diagram illustrating a schematic configuration example of a first substrate in the image sensor according to the present embodiment.
  • FIG. 16 is a layout diagram illustrating a schematic configuration example of the second substrate in the image sensor according to the present embodiment.
  • FIG. 17 is a schematic diagram illustrating a chip configuration example of the image sensor according to the present embodiment.
  • the size of the first substrate 300 is reduced in accordance with the size of the pixel array unit 101. Further, in the present embodiment, the size of the second substrate 320 is reduced to about the same as the size of the first substrate 300. With such a configuration, in the present embodiment, the surplus area of the first substrate 300 can be reduced, so that the chip size of the image sensor 10 is further reduced.
  • the pixel array unit 101 and the DSP 14 overlap in the stacking direction of the first substrate 300 and the second substrate 320 (hereinafter, simply referred to as the vertical direction). For this reason, in some cases, noise caused by the DSP 14 may be superimposed on the pixel signal read from the pixel array unit 101, and the quality of an image acquired by the image sensor 10 may be reduced.
  • the ADC 17A and the DSP 14 are separated from each other. Specifically, for example, the ADC 17A is arranged near one end L321 of the second substrate 320, and the DSP 14 is arranged near the end L322 opposite to the end L321 where the ADC 17A is arranged.
  • the end L321 to which the ADC 17A is close may be the end provided with the wiring 122 connected to the TSV array 102.
  • the vicinity of the wiring 122 connected to the TSV array 102 is set to the upstream side, and along the flow of the signal read from the pixel array unit 101. Since the ADC 17A, the signal processing unit 13, and the DSP 14 are arranged in this order from the upstream, it is possible to reduce the number of wires connecting each unit. Thereby, the transmission load is reduced, and the signal delay and the power consumption can be reduced.
  • In the above embodiments, the technology according to the present disclosure is applied to a solid-state imaging device (the image sensor 10) that acquires a two-dimensional image; however, the application destination of the technology according to the present disclosure is not limited to solid-state imaging devices.
  • the technology according to the present disclosure can be applied to various light receiving sensors such as a ToF (Time of Flight) sensor, an infrared (IR) sensor, and a DVS (Dynamic Vision Sensor). That is, by making the chip structure of the light receiving sensor a stacked type, it is possible to reduce noise included in the sensor result, downsize the sensor chip, and the like.
  • the technology (the present technology) according to the present disclosure can be applied to various products.
  • the technology according to the present disclosure is realized as a device mounted on any type of moving object such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility, an airplane, a drone, a ship, and a robot. You may.
  • FIG. 18 is a block diagram illustrating a schematic configuration example of a vehicle control system that is an example of a mobile object control system to which the technology according to the present disclosure can be applied.
  • Vehicle control system 12000 includes a plurality of electronic control units connected via communication network 12001.
  • the vehicle control system 12000 includes a drive system control unit 12010, a body control unit 12020, an outside information detection unit 12030, an inside information detection unit 12040, and an integrated control unit 12050.
  • In FIG. 18, a microcomputer 12051, an audio/video output unit 12052, and a vehicle-mounted network I/F (Interface) 12053 are illustrated as a functional configuration of the integrated control unit 12050.
  • the drive system control unit 12010 controls the operation of the device related to the drive system of the vehicle according to various programs.
  • For example, the drive system control unit 12010 functions as a control device for a driving force generation device for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, and a braking device that generates the braking force of the vehicle.
  • the body control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs.
  • the body-related control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as a head lamp, a back lamp, a brake lamp, a blinker, and a fog lamp.
  • In this case, radio waves transmitted from a portable device substituting for a key, or signals from various switches, can be input to the body control unit 12020.
  • the body control unit 12020 receives the input of these radio waves or signals, and controls a door lock device, a power window device, a lamp, and the like of the vehicle.
  • Out-of-vehicle information detection unit 12030 detects information external to the vehicle on which vehicle control system 12000 is mounted.
  • an imaging unit 12031 is connected to the outside-of-vehicle information detection unit 12030.
  • the out-of-vehicle information detection unit 12030 causes the imaging unit 12031 to capture an image outside the vehicle, and receives the captured image.
  • the out-of-vehicle information detection unit 12030 may perform an object detection process or a distance detection process of a person, a vehicle, an obstacle, a sign, a character on a road surface, or the like based on the received image.
  • the imaging unit 12031 is an optical sensor that receives light and outputs an electric signal according to the amount of received light.
  • the imaging unit 12031 can output an electric signal as an image or can output the information as distance measurement information.
  • the light received by the imaging unit 12031 may be visible light or non-visible light such as infrared light.
  • the in-vehicle information detection unit 12040 detects information in the vehicle.
  • the in-vehicle information detection unit 12040 is connected to, for example, a driver status detection unit 12041 that detects the status of the driver.
  • The driver state detection unit 12041 includes, for example, a camera that captures an image of the driver, and the in-vehicle information detection unit 12040 may calculate the degree of fatigue or the degree of concentration of the driver, or may determine whether the driver is dozing off, based on the detection information input from the driver state detection unit 12041.
  • the microcomputer 12051 calculates a control target value of the driving force generation device, the steering mechanism or the braking device based on the information on the inside and outside of the vehicle acquired by the outside information detection unit 12030 or the inside information detection unit 12040, and the drive system control unit A control command can be output to 12010.
  • For example, the microcomputer 12051 can perform cooperative control for the purpose of realizing functions of an ADAS (Advanced Driver Assistance System), including vehicle collision avoidance or impact mitigation, following travel based on the inter-vehicle distance, vehicle speed maintenance, vehicle collision warning, and vehicle lane departure warning.
  • The microcomputer 12051 can also perform cooperative control for automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generation device, the steering mechanism, the braking device, and the like based on information about the surroundings of the vehicle acquired by the outside information detection unit 12030 or the inside information detection unit 12040.
  • The microcomputer 12051 can also output a control command to the body control unit 12020 based on the information about the outside of the vehicle acquired by the outside information detection unit 12030.
  • the microcomputer 12051 controls the headlamp according to the position of the preceding vehicle or the oncoming vehicle detected by the outside information detection unit 12030, and performs cooperative control for the purpose of preventing glare such as switching a high beam to a low beam. It can be carried out.
  • the sound image output unit 12052 transmits at least one of a sound signal and an image signal to an output device capable of visually or audibly notifying a passenger of the vehicle or the outside of the vehicle of information.
  • an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices.
  • the display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
  • FIG. 19 is a diagram illustrating an example of an installation position of the imaging unit 12031.
  • In FIG. 19, imaging units 12101, 12102, 12103, 12104, and 12105 are provided as the imaging unit 12031.
  • the imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at positions such as a front nose, a side mirror, a rear bumper, a back door, and an upper part of a windshield in the vehicle interior of the vehicle 12100.
  • the imaging unit 12101 provided on the front nose and the imaging unit 12105 provided above the windshield in the passenger compartment mainly acquire an image in front of the vehicle 12100.
  • the imaging units 12102 and 12103 provided in the side mirror mainly acquire images of the side of the vehicle 12100.
  • the imaging unit 12104 provided in the rear bumper or the back door mainly acquires an image behind the vehicle 12100.
  • the imaging unit 12105 provided above the windshield in the passenger compartment is mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, and the like.
  • FIG. 19 shows an example of the imaging range of the imaging units 12101 to 12104.
  • The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by overlaying image data captured by the imaging units 12101 to 12104, an overhead image of the vehicle 12100 viewed from above can be obtained.
  • At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
  • at least one of the imaging units 12101 to 12104 may be a stereo camera including a plurality of imaging elements or an imaging element having pixels for detecting a phase difference.
  • For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 obtains a distance to each three-dimensional object in the imaging ranges 12111 to 12114 and a temporal change of the distance (relative speed with respect to the vehicle 12100), and can thereby identify, for example, the closest three-dimensional object on the traveling path of the vehicle 12100 as a preceding vehicle.
  • The microcomputer 12051 can set, in advance, an inter-vehicle distance to be maintained with respect to the preceding vehicle, and perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, it is possible to perform cooperative control for automated driving or the like in which the vehicle travels autonomously without depending on the driver's operation.
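  • The follow-travel behaviour can be sketched as a simple controller that keeps a target inter-vehicle distance; the gains, limits, and target gap in the Python sketch below are illustrative assumptions, not values from the disclosure.

```python
def follow_control(distance_m: float, relative_speed_mps: float,
                   target_gap_m: float = 30.0) -> float:
    """Return a commanded acceleration in m/s^2 (negative values brake)."""
    gap_error = distance_m - target_gap_m
    accel = 0.05 * gap_error + 0.5 * relative_speed_mps   # simple PD-style law
    return max(-3.0, min(1.5, accel))                      # clamp to brake/accel limits

# Example: 10 m gap against a 30 m target while closing at 2 m/s -> brake gently.
command = follow_control(distance_m=10.0, relative_speed_mps=-2.0)
```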
  • For example, the microcomputer 12051 classifies three-dimensional object data relating to three-dimensional objects into motorcycles, ordinary vehicles, large vehicles, pedestrians, and other three-dimensional objects such as telephone poles, based on the distance information obtained from the imaging units 12101 to 12104, extracts the data, and uses it for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult to see. The microcomputer 12051 then determines a collision risk indicating the degree of risk of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of a collision, it can perform driving assistance for collision avoidance by outputting a warning to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
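  • One common way to quantify such a collision risk is a time-to-collision estimate; the sketch below uses that formulation, and the threshold is an illustrative assumption rather than the "set value" mentioned in the text.

```python
def collision_possible(distance_m: float, closing_speed_mps: float,
                       ttc_threshold_s: float = 2.0) -> bool:
    """Return True when the estimated time-to-collision falls below a threshold."""
    if closing_speed_mps <= 0.0:          # not closing on the obstacle
        return False
    time_to_collision = distance_m / closing_speed_mps
    return time_to_collision < ttc_threshold_s

if collision_possible(distance_m=8.0, closing_speed_mps=5.0):
    print("warn the driver via the audio speaker 12061 / display unit 12062")
```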
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared light.
  • For example, the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian exists in the captured images of the imaging units 12101 to 12104. Such pedestrian recognition is performed by, for example, a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points indicating the contour of an object to determine whether or not the object is a pedestrian.
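  • The two-step procedure above (feature extraction, then pattern matching on the outline) can be caricatured in Python as below; the gradient-based feature points, the stored template, and the thresholds are all illustrative assumptions and far simpler than a production recognizer.

```python
import numpy as np

def edge_points(image: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Extract a binary map of feature points from the gradient magnitude."""
    gy, gx = np.gradient(image.astype(np.float32))
    return (np.hypot(gx, gy) > threshold).astype(np.float32)

def matches_pedestrian_template(outline: np.ndarray, template: np.ndarray,
                                min_overlap: float = 0.6) -> bool:
    """Crude pattern match: overlap between the outline and a stored template."""
    overlap = (outline * template).sum() / max(template.sum(), 1.0)
    return overlap >= min_overlap
```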
  • When the microcomputer 12051 determines that a pedestrian exists in the captured images of the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so that a rectangular contour line for emphasis is superimposed on the recognized pedestrian.
  • the sound image output unit 12052 may control the display unit 12062 to display an icon or the like indicating a pedestrian at a desired position.
  • the technology according to the present disclosure can be applied to the imaging unit 12031 or the like among the configurations described above.
  • By applying the technology according to the present disclosure to the imaging unit 12031 and the like, the imaging unit 12031 and the like can be reduced in size, so that the interior and exterior of the vehicle 12100 can be designed more easily.
  • In addition, by applying the technology according to the present disclosure to the imaging unit 12031 and the like, a clear image with reduced noise can be obtained, so that a more easily viewable captured image can be provided to the driver. This makes it possible to reduce driver fatigue.
  • the technology (the present technology) according to the present disclosure can be applied to various products.
  • the technology according to the present disclosure may be applied to an endoscopic surgery system.
  • FIG. 20 is a diagram illustrating an example of a schematic configuration of an endoscopic surgery system to which the technology (the present technology) according to the present disclosure may be applied.
  • FIG. 20 illustrates a state in which an operator (doctor) 11131 is performing an operation on a patient 11132 on a patient bed 11133 using the endoscopic operation system 11000.
  • the endoscopic surgery system 11000 includes an endoscope 11100, other surgical tools 11110 such as an insufflation tube 11111 and an energy treatment tool 11112, and a support arm device 11120 that supports the endoscope 11100.
  • a cart 11200 on which various devices for endoscopic surgery are mounted.
  • the endoscope 11100 includes a lens barrel 11101 having a predetermined length from the distal end inserted into the body cavity of the patient 11132, and a camera head 11102 connected to the proximal end of the lens barrel 11101.
  • the endoscope 11100 which is configured as a so-called rigid endoscope having a hard lens barrel 11101 is illustrated.
  • Alternatively, the endoscope 11100 may be configured as a so-called flexible endoscope having a flexible lens barrel.
  • An opening in which an objective lens is fitted is provided at the tip of the lens barrel 11101.
  • A light source device 11203 is connected to the endoscope 11100, and light generated by the light source device 11203 is guided to the distal end of the lens barrel by a light guide extending inside the lens barrel 11101 and is radiated via the objective lens toward the observation target in the body cavity of the patient 11132.
  • the endoscope 11100 may be a direct view scope, a perspective view scope, or a side view scope.
  • An optical system and an image sensor are provided inside the camera head 11102, and the reflected light (observation light) from the observation target is focused on the image sensor by the optical system.
  • the observation light is photoelectrically converted by the imaging element, and an electric signal corresponding to the observation light, that is, an image signal corresponding to the observation image is generated.
  • The image signal is transmitted as RAW data to a camera control unit (CCU: Camera Control Unit) 11201.
  • The CCU 11201 includes a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and the like, and comprehensively controls the operations of the endoscope 11100 and the display device 11202. Further, the CCU 11201 receives an image signal from the camera head 11102 and performs, on the image signal, various image processing for displaying an image based on the image signal, such as development processing (demosaicing processing).
  • the display device 11202 displays an image based on an image signal on which image processing has been performed by the CCU 11201 under the control of the CCU 11201.
  • the light source device 11203 is configured by a light source such as an LED (light emitting diode), for example, and supplies the endoscope 11100 with irradiation light when imaging an operation part or the like.
  • the input device 11204 is an input interface for the endoscopic surgery system 11000.
  • the user can input various information and input instructions to the endoscopic surgery system 11000 via the input device 11204.
  • the user inputs an instruction or the like to change imaging conditions (type of irradiation light, magnification, focal length, and the like) by the endoscope 11100.
  • the treatment instrument control device 11205 controls the driving of the energy treatment instrument 11112 for cauterizing, incising a tissue, sealing a blood vessel, and the like.
  • the insufflation device 11206 is used to inflate the body cavity of the patient 11132 for the purpose of securing the visual field by the endoscope 11100 and securing the working space of the operator.
  • the recorder 11207 is a device that can record various types of information related to surgery.
  • the printer 11208 is a device capable of printing various types of information on surgery in various formats such as text, images, and graphs.
  • the light source device 11203 that supplies the endoscope 11100 with irradiation light at the time of imaging the operation site can be configured by, for example, a white light source including an LED, a laser light source, or a combination thereof.
  • When a white light source is configured by a combination of RGB laser light sources, the output intensity and output timing of each color (each wavelength) can be controlled with high accuracy, so that the white balance of the captured image can be adjusted in the light source device 11203.
  • In this case, it is also possible to capture images corresponding to R, G, and B in a time-division manner by irradiating the observation target with laser light from each of the RGB laser light sources in a time-division manner and controlling the driving of the imaging element of the camera head 11102 in synchronization with the irradiation timing. According to this method, a color image can be obtained without providing a color filter on the imaging element.
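  • The reconstruction step of that time-division scheme is simply the stacking of the three synchronised monochrome frames; the Python sketch below shows it, with the frame size chosen arbitrarily for illustration.

```python
import numpy as np

def combine_time_division_frames(frame_r, frame_g, frame_b) -> np.ndarray:
    """Stack three synchronised monochrome frames into one RGB image."""
    return np.stack([frame_r, frame_g, frame_b], axis=-1)

frames = [np.zeros((480, 640), dtype=np.uint16) for _ in range(3)]
color_image = combine_time_division_frames(*frames)   # shape (480, 640, 3)
```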
  • the driving of the light source device 11203 may be controlled so as to change the intensity of output light at predetermined time intervals.
  • By controlling the driving of the imaging element of the camera head 11102 in synchronization with the timing of the change in light intensity, acquiring images in a time-division manner, and combining the images, an image with a high dynamic range, free from so-called blocked-up shadows and blown-out highlights, can be generated.
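  • One way to picture the combining step is a weighted blend in which near-saturated regions of the brightly lit frame are replaced by scaled-up pixels from the dimly lit frame; the weighting and gain handling below are illustrative assumptions, not the actual synthesis performed by the system.

```python
import numpy as np

def merge_hdr(frame_low_light: np.ndarray, frame_high_light: np.ndarray,
              gain_ratio: float) -> np.ndarray:
    """Blend two frames captured under different illumination intensities."""
    low = frame_low_light.astype(np.float32) * gain_ratio    # scale to a common exposure
    high = frame_high_light.astype(np.float32)
    weight = np.clip(high / max(float(high.max()), 1.0), 0.0, 1.0)
    # Use the low-light frame where the brightly lit frame approaches saturation.
    return (1.0 - weight) * high + weight * low
```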
  • the light source device 11203 may be configured to be able to supply light in a predetermined wavelength band corresponding to special light observation.
  • In special light observation, for example, so-called narrow-band imaging (Narrow Band Imaging) is performed, in which the wavelength dependence of light absorption in body tissue is utilized and light in a band narrower than the irradiation light used in normal observation (that is, white light) is emitted, so that a predetermined tissue such as a blood vessel in the mucosal surface layer is imaged with high contrast.
  • fluorescence observation in which an image is obtained by fluorescence generated by irradiating excitation light may be performed.
  • In fluorescence observation, the body tissue can be irradiated with excitation light to observe fluorescence from the body tissue itself (autofluorescence observation), or a reagent such as indocyanine green (ICG) can be locally injected into the body tissue and the tissue can be irradiated with excitation light corresponding to the fluorescence wavelength of the reagent to obtain a fluorescence image.
  • the light source device 11203 can be configured to be able to supply narrowband light and / or excitation light corresponding to such special light observation.
  • FIG. 21 is a block diagram showing an example of a functional configuration of the camera head 11102 and the CCU 11201 shown in FIG.
  • the camera head 11102 includes a lens unit 11401, an imaging unit 11402, a driving unit 11403, a communication unit 11404, and a camera head control unit 11405.
  • the CCU 11201 includes a communication unit 11411, an image processing unit 11412, and a control unit 11413.
  • the camera head 11102 and the CCU 11201 are communicably connected to each other by a transmission cable 11400.
  • the lens unit 11401 is an optical system provided at a connection with the lens barrel 11101. Observation light taken in from the tip of the lens barrel 11101 is guided to the camera head 11102, and enters the lens unit 11401.
  • the lens unit 11401 is configured by combining a plurality of lenses including a zoom lens and a focus lens.
  • the number of imaging elements constituting the imaging unit 11402 may be one (so-called single-panel type) or plural (so-called multi-panel type).
  • When the imaging unit 11402 is configured as a multi-panel type, for example, an image signal corresponding to each of R, G, and B may be generated by each imaging element, and a color image may be obtained by combining the image signals.
  • the imaging unit 11402 may be configured to include a pair of imaging elements for acquiring right-eye and left-eye image signals corresponding to 3D (dimensional) display. By performing the 3D display, the operator 11131 can more accurately grasp the depth of the living tissue in the operative part.
  • a plurality of lens units 11401 may be provided for each imaging element.
  • the imaging unit 11402 does not necessarily have to be provided in the camera head 11102.
  • the imaging unit 11402 may be provided inside the lens barrel 11101 immediately after the objective lens.
  • the drive unit 11403 is configured by an actuator, and moves the zoom lens and the focus lens of the lens unit 11401 by a predetermined distance along the optical axis under the control of the camera head control unit 11405.
  • the magnification and the focus of the image captured by the imaging unit 11402 can be appropriately adjusted.
  • the communication unit 11404 is configured by a communication device for transmitting and receiving various information to and from the CCU 11201.
  • the communication unit 11404 transmits the image signal obtained from the imaging unit 11402 as RAW data to the CCU 11201 via the transmission cable 11400.
  • the communication unit 11404 receives a control signal for controlling driving of the camera head 11102 from the CCU 11201 and supplies the control signal to the camera head control unit 11405.
  • The control signal includes, for example, information specifying the frame rate of the captured image, information specifying the exposure value at the time of imaging, and/or information specifying the magnification and focus of the captured image, that is, information on imaging conditions.
  • Imaging conditions such as the frame rate, exposure value, magnification, and focus may be appropriately designated by the user, or may be automatically set by the control unit 11413 of the CCU 11201 based on the acquired image signal.
  • In the latter case, so-called AE (Auto Exposure), AF (Auto Focus), and AWB (Auto White Balance) functions are mounted on the endoscope 11100.
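  • As an illustrative aside (not part of the present disclosure), the following Python sketch shows one way a control unit could derive an exposure correction from the mean luminance of an acquired frame when such an AE function is enabled; the target luminance, per-frame step limit, and function name are assumptions introduced here.

    import numpy as np

    TARGET_LUMA = 118.0   # mid-grey target on an 8-bit scale (assumed)
    MAX_STEP = 0.5        # largest EV change applied per frame (assumed)

    def auto_exposure_step(frame: np.ndarray, current_ev: float) -> float:
        """Return an updated exposure value (EV) for the next frame."""
        luma = float(frame.mean())                  # average brightness of the frame
        if luma <= 0.0:
            return current_ev + MAX_STEP            # fully dark frame: open up the exposure
        error_ev = np.log2(TARGET_LUMA / luma)      # EV difference needed to reach the target
        return current_ev + float(np.clip(error_ev, -MAX_STEP, MAX_STEP))
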
  • the camera head control unit 11405 controls the driving of the camera head 11102 based on the control signal from the CCU 11201 received via the communication unit 11404.
  • the communication unit 11411 is configured by a communication device for transmitting and receiving various information to and from the camera head 11102.
  • the communication unit 11411 receives an image signal transmitted from the camera head 11102 via the transmission cable 11400.
  • the communication unit 11411 transmits a control signal for controlling driving of the camera head 11102 to the camera head 11102.
  • the image signal and the control signal can be transmitted by electric communication, optical communication, or the like.
  • the image processing unit 11412 performs various types of image processing on an image signal that is RAW data transmitted from the camera head 11102.
  • The control unit 11413 performs various kinds of control related to imaging of the operative site and the like by the endoscope 11100 and to display of the captured image obtained by such imaging. For example, the control unit 11413 generates a control signal for controlling the driving of the camera head 11102.
  • The control unit 11413 also causes the display device 11202 to display a captured image showing the operative site and the like, based on the image signal subjected to image processing by the image processing unit 11412.
  • At this time, the control unit 11413 may recognize various objects in the captured image using various image recognition techniques. For example, by detecting the shapes of edges, the colors, and the like of objects included in the captured image, the control unit 11413 can recognize surgical tools such as forceps, specific living body sites, bleeding, mist generated when the energy treatment tool 11112 is used, and the like.
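  • The following Python sketch is only a rough illustration of such edge- and colour-based recognition, not the method of the disclosure; it uses OpenCV (4.x) with purely hypothetical colour thresholds to flag candidate regions for instruments and bleeding.

    import cv2
    import numpy as np

    def detect_candidate_regions(bgr: np.ndarray, min_area: int = 500):
        """Return (label, bounding box) pairs for crude tool / bleeding candidates."""
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        # Bright, low-saturation pixels as a crude cue for metallic instruments such as forceps.
        tool_mask = cv2.inRange(hsv, (0, 0, 160), (180, 60, 255))
        # Strongly saturated red pixels as a crude cue for bleeding.
        blood_mask = cv2.inRange(hsv, (0, 120, 60), (10, 255, 255))
        regions = []
        for label, mask in (("tool", tool_mask), ("bleeding", blood_mask)):
            # OpenCV 4.x returns (contours, hierarchy).
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            for c in contours:
                if cv2.contourArea(c) >= min_area:          # keep only sufficiently large regions
                    regions.append((label, cv2.boundingRect(c)))   # (x, y, w, h)
        return regions
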
  • the control unit 11413 may use the recognition result to superimpose and display various types of surgery support information on the image of the operative site.
  • the burden on the operator 11131 can be reduced, and the operator 11131 can reliably perform the operation.
  • the transmission cable 11400 connecting the camera head 11102 and the CCU 11201 is an electric signal cable corresponding to electric signal communication, an optical fiber corresponding to optical communication, or a composite cable thereof.
  • the communication is performed by wire using the transmission cable 11400, but the communication between the camera head 11102 and the CCU 11201 may be performed wirelessly.
  • the technology according to the present disclosure can be applied to, for example, the imaging unit 11402 of the camera head 11102 among the configurations described above.
  • For example, by applying the technology according to the present disclosure to the camera head 11102, the endoscopic surgery system 11000 can be reduced in size.
  • In addition, by applying the technology according to the present disclosure to the camera head 11102 and the like, a clear image with reduced noise can be obtained, so that a captured image that is easier to view can be provided to the operator. This makes it possible to reduce the fatigue of the operator.
  • the endoscopic surgery system has been described as an example, but the technology according to the present disclosure may be applied to, for example, a microscopic surgery system or the like.
  • the technology according to the present disclosure can be applied to various products.
  • the technology according to the present disclosure may be applied to a pathological diagnosis system in which a doctor or the like observes cells or tissues collected from a patient to diagnose a lesion or a support system therefor (hereinafter, referred to as a diagnosis support system).
  • This diagnosis support system may be a WSI (Whole Slide Imaging) system that diagnoses or supports a lesion based on an image acquired using digital pathology technology.
  • FIG. 22 is a diagram illustrating an example of a schematic configuration of a diagnosis support system 5500 to which the technology according to the present disclosure is applied.
  • the diagnosis support system 5500 includes one or more pathology systems 5510. Further, a medical information system 5530 and a derivation device 5540 may be included.
  • Each of the one or more pathology systems 5510 is a system mainly used by a pathologist, and is introduced into, for example, a research laboratory or a hospital.
  • The pathology systems 5510 may each be installed in different hospitals, and each is connected to the medical information system 5530 and the derivation device 5540 via various networks such as a WAN (Wide Area Network, including the Internet), a LAN (Local Area Network), a public line network, or a mobile communication network.
  • Each pathological system 5510 includes a microscope 5511, a server 5512, a display control device 5513, and a display device 5514.
  • The microscope 5511 has the function of an optical microscope, and images an observation target placed on a glass slide to acquire a pathological image as a digital image.
  • the observation target is, for example, a tissue or a cell collected from a patient, and may be a piece of organ, saliva, blood, or the like.
  • The server 5512 stores the pathological image acquired by the microscope 5511 in a storage unit (not shown). When the server 5512 receives a browsing request from the display control device 5513, it retrieves the requested pathological image from the storage unit and sends the retrieved pathological image to the display control device 5513.
  • The display control device 5513 sends the browsing request for a pathological image received from the user to the server 5512. The display control device 5513 then displays the pathological image received from the server 5512 on a display device 5514 that uses liquid crystal, EL (Electro-Luminescence), CRT (Cathode Ray Tube), or the like. The display device 5514 may support 4K or 8K, and the number of display devices is not limited to one; a plurality of display devices may be provided.
  • When the observation target is a solid such as a piece of an organ, the observation target may be, for example, a stained thin section.
  • the thin section may be produced by, for example, thinly cutting a block piece cut out from a specimen such as an organ.
  • the block pieces may be fixed with paraffin or the like.
  • Various stains may be applied to the thin sections, such as general staining that shows the morphology of the tissue, for example HE (Hematoxylin-Eosin) staining, and immunostaining that shows the immune state of the tissue, for example IHC (Immunohistochemistry) staining.
  • At that time, one thin section may be stained using a plurality of different reagents, or two or more thin sections cut out consecutively from the same block piece (also referred to as adjacent thin sections) may be stained using mutually different reagents.
  • the microscope 5511 may include a low-resolution imaging unit for imaging at low resolution and a high-resolution imaging unit for imaging at high resolution.
  • the low-resolution imaging unit and the high-resolution imaging unit may be different optical systems or may be the same optical system. When the optical systems are the same, the resolution of the microscope 5511 may be changed according to the imaging target.
  • the glass slide containing the observation target is placed on a stage located within the angle of view of the microscope 5511.
  • When imaging the observation target, the microscope 5511 first obtains an entire image within the angle of view using the low-resolution imaging unit and specifies the region of the observation target from the obtained entire image. Subsequently, the microscope 5511 divides the region where the observation target exists into a plurality of divided regions of a predetermined size, and sequentially captures each divided region with the high-resolution imaging unit to acquire a high-resolution image of each divided region.
  • the stage may be moved, the imaging optical system may be moved, or both of them may be moved.
  • each divided region may overlap with an adjacent divided region in order to prevent occurrence of an imaging omission region due to unintentional sliding of the glass slide.
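  • As a hedged illustration of how such overlapping divided regions could be enumerated from the bounding box found in the whole image, the following Python sketch generates capture windows; the field-of-view size and overlap width are hypothetical parameters, not values from the disclosure.

    def divided_regions(x0: int, y0: int, width: int, height: int,
                        fov: int = 1024, overlap: int = 64):
        """Yield (x, y, fov, fov) capture windows covering the target region."""
        step = fov - overlap                 # adjacent windows share `overlap` pixels
        y = y0
        while y < y0 + height:
            x = x0
            while x < x0 + width:
                yield (x, y, fov, fov)
                x += step
            y += step

    # Example: a 3000 x 2000 pixel target region whose top-left corner is at (500, 400).
    windows = list(divided_regions(500, 400, 3000, 2000))
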
  • the whole image may include identification information for associating the whole image with the patient. This identification information may be, for example, a character string or a QR code (registered trademark).
  • the high-resolution image acquired by the microscope 5511 is input to the server 5512.
  • The server 5512 divides each high-resolution image into partial images of a smaller size (hereinafter referred to as tile images). For example, the server 5512 divides one high-resolution image into a total of 100 tile images, 10 vertically by 10 horizontally. At this time, if adjacent divided regions overlap, the server 5512 may apply stitching processing to the mutually adjacent high-resolution images using a technique such as template matching. In that case, the server 5512 may generate the tile images by dividing the entire high-resolution image joined by the stitching processing. However, the tile images may also be generated from the high-resolution images before the stitching processing.
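  • As one possible ingredient of such stitching processing, the following Python sketch estimates where a horizontally adjacent high-resolution capture fits against its neighbour by template matching on the overlapping strip (OpenCV); the assumed overlap width and function name are illustrative only, not details of the disclosure.

    import cv2
    import numpy as np

    def horizontal_offset(left: np.ndarray, right: np.ndarray, overlap: int = 64):
        """Return the (x, y) position in `left` where the left edge of `right` matches best."""
        template = right[:, :overlap]                    # strip expected to repeat inside `left`
        result = cv2.matchTemplate(left, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, best = cv2.minMaxLoc(result)            # top-left corner of the best match
        return best                                      # use this offset when pasting `right`
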
  • the server 5512 may generate a tile image of a smaller size by further dividing the tile image. Such generation of a tile image may be repeated until a tile image of a size set as the minimum unit is generated.
  • the server 5512 executes a tile synthesis process of generating one tile image by synthesizing a predetermined number of adjacent tile images for all tile images. This tile synthesizing process can be repeated until one tile image is finally generated.
  • a tile image group having a pyramid structure in which each layer is configured by one or more tile images is generated.
  • the tile image of a certain layer and the tile image of a layer different from this layer have the same number of pixels, but have different resolutions.
  • Specifically, the resolution of a tile image in an upper layer is lower than the resolution of the tile images in the lower layer used for its synthesis (for example, one half when 2 × 2 lower-layer tiles are combined into a single tile with the same number of pixels).
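  • The following Python sketch, using Pillow, illustrates one plausible way to build such a pyramid: a stitched image is cut into fixed-size tiles, and 2 × 2 neighbouring tiles are repeatedly merged and downscaled back to the tile size, so each upper layer keeps the per-tile pixel count at half the resolution. The tile size and the 2 × 2 grouping are assumptions, not details taken from the disclosure.

    from PIL import Image

    TILE = 256    # tile edge length in pixels (assumed)

    def to_tiles(img: Image.Image) -> dict:
        """Split an image into {(col, row): tile} of TILE x TILE tiles, padding the border."""
        cols = (img.width + TILE - 1) // TILE
        rows = (img.height + TILE - 1) // TILE
        padded = Image.new(img.mode, (cols * TILE, rows * TILE))
        padded.paste(img, (0, 0))
        return {(c, r): padded.crop((c * TILE, r * TILE, (c + 1) * TILE, (r + 1) * TILE))
                for c in range(cols) for r in range(rows)}

    def next_level(tiles: dict) -> dict:
        """Combine 2 x 2 adjacent tiles into one upper-layer tile of the same pixel count."""
        cols = max(c for c, _ in tiles) + 1
        rows = max(r for _, r in tiles) + 1
        merged = {}
        for c in range(0, cols, 2):
            for r in range(0, rows, 2):
                canvas = Image.new(tiles[(0, 0)].mode, (2 * TILE, 2 * TILE))
                for dc in range(2):
                    for dr in range(2):
                        if (c + dc, r + dr) in tiles:
                            canvas.paste(tiles[(c + dc, r + dr)], (dc * TILE, dr * TILE))
                merged[(c // 2, r // 2)] = canvas.resize((TILE, TILE))   # half the resolution
        return merged
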
  • the generated pyramid-structured tile image group is stored in a storage unit (not shown) together with identification information (referred to as tile identification information) capable of uniquely identifying each tile image, for example.
  • When the server 5512 receives an acquisition request including tile identification information from another device, the server 5512 transmits the tile image corresponding to that tile identification information to the requesting device.
  • a tile image as a pathological image may be generated for each imaging condition such as a focal length and a staining condition.
  • When tile images are generated for each imaging condition, a specific pathological image may be displayed side by side with another pathological image of the same region that corresponds to an imaging condition different from that of the specific pathological image.
  • The specific imaging condition may be designated by the viewer. When the viewer designates a plurality of imaging conditions, pathological images of the same region corresponding to each of the imaging conditions may be displayed side by side.
  • the server 5512 may store the pyramid-structured tile image group in a storage device other than the server 5512, for example, a cloud server. Further, a part or all of the tile image generation processing as described above may be executed by a cloud server or the like.
  • the display control device 5513 extracts a desired tile image from the pyramid-structured tile image group in response to an input operation from the user, and outputs this to the display device 5514. Through such processing, the user can obtain a feeling as if the user is observing the observation target object while changing the observation magnification. That is, the display control device 5513 functions as a virtual microscope. The virtual observation magnification here actually corresponds to the resolution.
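  • As a hedged sketch of such a virtual microscope, the following Python code maps a requested observation magnification to a pyramid level and lists the tiles covering the current viewport; the base magnification, tile size, and function names are assumptions introduced here.

    import math

    BASE_MAGNIFICATION = 40.0    # magnification represented by the full-resolution level (assumed)
    TILE = 256                   # tile edge length in pixels (assumed)

    def level_for(magnification: float, num_levels: int) -> int:
        """Level 0 is full resolution; higher levels hold lower-resolution tiles."""
        ratio = BASE_MAGNIFICATION / max(magnification, 1e-6)
        level = int(round(math.log2(max(ratio, 1.0))))
        return min(level, num_levels - 1)

    def tiles_in_view(x: int, y: int, w: int, h: int, magnification: float, num_levels: int):
        """Yield (level, col, row) identifiers for the tiles intersecting the viewport."""
        level = level_for(magnification, num_levels)
        span = TILE * (2 ** level)                 # extent of one tile in level-0 coordinates
        for row in range(y // span, (y + h) // span + 1):
            for col in range(x // span, (x + w) // span + 1):
                yield (level, col, row)
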
  • Note that any method may be used to capture the high-resolution images. The divided regions may be captured while repeatedly stopping and moving the stage to acquire a high-resolution image of each region, or the divided regions may be captured while moving the stage at a predetermined speed to acquire high-resolution images on a strip.
  • The process of generating tile images from a high-resolution image is not indispensable. Instead, by changing the resolution of the entire high-resolution image joined by the stitching processing in a stepwise manner, images whose resolution changes stepwise may be generated. Even in this case, it is possible to present to the user, in stages, everything from a low-resolution image of a wide area to a high-resolution image of a narrow area.
  • the medical information system 5530 is a so-called electronic medical record system, and stores information for identifying a patient, information on a patient's disease, examination information and image information used for diagnosis, diagnosis results, and information on diagnosis such as prescription drugs.
  • a pathological image obtained by imaging an observation target of a patient may be temporarily stored via the server 5512, and then displayed on the display device 5514 by the display control device 5513.
  • a pathologist using the pathological system 5510 makes a pathological diagnosis based on the pathological image displayed on the display device 5514.
  • the result of the pathological diagnosis performed by the pathologist is stored in the medical information system 5530.
  • The derivation device 5540 can execute analysis on pathological images. A learning model created by machine learning can be used for this analysis. The derivation device 5540 may derive, as the analysis result, a classification result of a specific region, an identification result of a tissue, or the like. Furthermore, the derivation device 5540 may derive identification results for cells, their number, their positions, luminance information, and the like, as well as scoring information for them. These pieces of information derived by the derivation device 5540 may be displayed on the display device 5514 of the pathology system 5510 as diagnosis support information.
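  • The following Python sketch only illustrates, under assumed names, how a derivation device might apply a trained learning model tile by tile and aggregate the outputs into simple diagnosis support information (per-tile class, score, and overall counts); the model interface and the label set are hypothetical, not part of the disclosure.

    import numpy as np

    CLASSES = ("normal", "tumor")     # hypothetical label set

    def analyse_tiles(tiles: dict, model):
        """`model(array)` is assumed to return per-class probabilities for one tile."""
        results = []
        for (col, row), tile in tiles.items():
            probs = np.asarray(model(np.asarray(tile, dtype=np.float32) / 255.0))
            idx = int(probs.argmax())
            results.append({"tile": (col, row),
                            "class": CLASSES[idx],
                            "score": float(probs[idx])})
        counts = {c: sum(r["class"] == c for r in results) for c in CLASSES}
        return results, counts        # per-tile results plus overall counts as scoring information
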
  • the deriving device 5540 may be a server system including one or more servers (including a cloud server).
  • the derivation device 5540 may be configured to be incorporated in, for example, the display control device 5513 or the server 5512 in the pathology system 5510. That is, various analyzes on the pathological image may be executed in the pathological system 5510.
  • the technology according to the present disclosure can be suitably applied to, for example, the microscope 5511 among the configurations described above.
  • the technology according to the present disclosure can be applied to the low-resolution imaging unit and / or the high-resolution imaging unit of the microscope 5511.
  • By applying the technology according to the present disclosure, it is possible to reduce the size of the low-resolution imaging unit and/or the high-resolution imaging unit, and in turn, the size of the microscope 5511.
  • This facilitates transportation of the microscope 5511, and thus facilitates system introduction, system recombination, and the like.
  • the configuration described above can be applied not only to the diagnosis support system but also to all biological microscopes such as a confocal microscope, a fluorescence microscope, and a video microscope.
  • the observation target may be a biological sample such as a cultured cell, a fertilized egg, or a sperm, a biological material such as a cell sheet or a three-dimensional cell tissue, or a living body such as a zebrafish or a mouse.
  • the observation target object is not limited to a glass slide, and can be observed in a state stored in a well plate, a petri dish, or the like.
  • a moving image may be generated from a still image of the observation target acquired using a microscope.
  • a moving image may be generated from still images captured continuously for a predetermined period, or an image sequence may be generated from still images captured at predetermined intervals.
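  • As a minimal illustration, assuming OpenCV and hypothetical file names, the following Python sketch assembles still images captured at predetermined intervals into a single moving image.

    import cv2

    def stills_to_movie(paths: list, out_path: str = "observation.mp4", fps: int = 10) -> None:
        """Write the still images in `paths` to a single moving-image file."""
        first = cv2.imread(paths[0])
        h, w = first.shape[:2]
        writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
        for p in paths:
            frame = cv2.imread(p)
            writer.write(cv2.resize(frame, (w, h)))   # keep a consistent frame size
        writer.release()
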
  • A stacked light-receiving sensor in which the second substrate includes: a converter that converts an analog pixel signal output from the pixel array unit into digital image data; and a processing unit that performs processing based on a neural network calculation model on data based on the image data, wherein at least a part of the converter is disposed on a first side of the second substrate, and the processing unit is disposed on a second side of the second substrate opposite to the first side.
  • The neural network calculation model is designed based on parameters generated by inputting, to a predetermined machine learning model, learning data in which an input signal corresponding to an output of the pixel array unit is associated with a label for that input signal.
  • The data based on the image data is either the image data read out from the pixel array unit or image data whose data size has been reduced by thinning out pixels of the image data.
  • The stacked light-receiving sensor according to any one of the above (1) to (4), wherein the first substrate further includes, on a third side corresponding to the first side of the second substrate in a state where the first substrate and the second substrate are bonded to each other, a connection wiring for electrically connecting the pixel array unit and the converter.
  • The connection wiring is, for example, a TSV (through silicon via).
  • The stacked light-receiving sensor according to (5), wherein the second substrate has, on the first side, a connection wiring electrically connected to the converter, and the connection wiring of the first substrate and the connection wiring of the second substrate are directly joined to each other by metal joining.
  • The stacked light-receiving sensor according to any one of (1) to (7), wherein the second substrate further includes a signal processing unit that performs signal processing on the image data, and the signal processing unit is disposed between the converter and the processing unit on the second substrate.
  • The stacked light-receiving sensor according to any one of (1) to (8), wherein the second substrate further includes a memory for storing data, and the memory is arranged in a region of the second substrate adjacent to the processing unit.
  • (11) The stacked light-receiving sensor according to (9), wherein the memory is arranged in regions sandwiching the processing unit from two directions.
  • The stacked light-receiving sensor according to (9), wherein the processing unit is disposed on the second substrate in a state of being divided into two regions, and the memory is arranged in a region interposed between the divided processing units.
  • the stacked light-receiving sensor according to (9), wherein the memory stores a program for causing the processing unit to execute the processing.
  • The stacked light-receiving sensor according to any one of (1) to (13), wherein the second substrate further includes a control unit that controls reading of the pixel signal from the pixel array unit, and the control unit is disposed between the converter and the processing unit on the second substrate.
  • The stacked light-receiving sensor according to any one of the above, wherein the size of the surface of the first substrate bonded to the second substrate is substantially the same as the size of the surface of the second substrate bonded to the first substrate.
  • (17) The first substrate and the second substrate are bonded by any one of a CoC (Chip on Chip) method, a CoW (Chip on Wafer) method, and a WoW (Wafer on Wafer) method.
  • The stacked light-receiving sensor according to any one of (5) to (7), wherein the first substrate includes a pad adjacent to at least one of the sides different from the third side.
  • The pad includes a first power supply pad to which a power supply voltage supplied to the converter is applied, and a second power supply pad to which a power supply voltage supplied to the processing unit is applied, and the first power supply pad is disposed closer to the converter than the second power supply pad is.
  • An electronic device including: a stacked light-receiving sensor in which the second substrate includes a converter that converts an analog pixel signal output from the pixel array unit into digital image data and a processing unit that performs processing based on a neural network calculation model on data based on the image data, at least a part of the converter being disposed on a first side of the second substrate and the processing unit being disposed on a second side of the second substrate opposite to the first side; and a processor that executes a predetermined process on image data output from the stacked light-receiving sensor.
  • A stacked light-receiving sensor in which the second substrate includes: a converter that converts an analog pixel signal output from the pixel array unit into digital image data; and a processing unit that performs processing based on a neural network calculation model on data based on the image data, the second substrate being attached to the first substrate such that, in the stacking direction of the first substrate and the second substrate, at least half of the region of the second substrate in which the processing unit is disposed does not overlap with the region of the first substrate in which the pixel array unit is disposed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Power Engineering (AREA)
  • Electromagnetism (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Solid State Image Pick-Up Elements (AREA)

Abstract

The present invention executes higher-level processing within a chip. A stacked light-receiving sensor according to an embodiment includes: a first substrate (100, 200, 300); and a second substrate (120, 320) attached to the first substrate. The first substrate includes a pixel array unit (101) in which a plurality of unit pixels are arranged in a two-dimensional matrix. The second substrate includes: a converter (17) that converts an analog pixel signal output from the pixel array unit into digital image data; and a processing unit (15) that processes data based on the image data, on the basis of a neural network calculation model. At least a part of the converter is disposed on a first side of the second substrate, and the processing unit is disposed on a second side of the second substrate opposite to the first side.
PCT/JP2019/029909 2018-07-31 2019-07-30 Capteur de réception de lumière de type en couches et dispositif électronique WO2020027161A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201980049297.1A CN112470462B (zh) 2018-07-31 2019-07-30 层叠式光接收传感器和电子装置
KR1020217001553A KR20210029205A (ko) 2018-07-31 2019-07-30 적층형 수광 센서 및 전자기기
US17/251,926 US11735614B2 (en) 2018-07-31 2019-07-30 Stacked light-receiving sensor and electronic device
EP19845060.3A EP3833007B1 (fr) 2018-07-31 2019-07-30 Capteur de réception de lumière de type en couches et dispositif électronique

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2018-143973 2018-07-31
JP2018143973 2018-07-31
JP2019-139439 2019-07-30
JP2019139439A JP6689437B2 (ja) 2018-07-31 2019-07-30 積層型受光センサ及び電子機器

Publications (1)

Publication Number Publication Date
WO2020027161A1 true WO2020027161A1 (fr) 2020-02-06

Family

ID=69231871

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/029909 WO2020027161A1 (fr) 2018-07-31 2019-07-30 Capteur de réception de lumière de type en couches et dispositif électronique

Country Status (1)

Country Link
WO (1) WO2020027161A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11706546B2 (en) * 2021-06-01 2023-07-18 Sony Semiconductor Solutions Corporation Image sensor with integrated single object class detection deep neural network (DNN)
US11849238B2 (en) 2021-02-04 2023-12-19 Canon Kabushiki Kaisha Photoelectric conversion apparatus, photoelectric conversion system, moving body

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016163011A (ja) * 2015-03-05 2016-09-05 ソニー株式会社 半導体装置および製造方法、並びに電子機器
WO2018051809A1 (fr) 2016-09-16 2018-03-22 ソニーセミコンダクタソリューションズ株式会社 Dispositif de capture d'image, et appareil électronique
JP2018074445A (ja) * 2016-10-31 2018-05-10 ソニーセミコンダクタソリューションズ株式会社 固体撮像装置およびその信号処理方法、並びに電子機器
JP2018107759A (ja) * 2016-12-28 2018-07-05 ソニーセミコンダクタソリューションズ株式会社 画像処理装置、画像処理方法、及び画像処理システム

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3833007A4

Similar Documents

Publication Publication Date Title
JP7414869B2 (ja) 固体撮像装置、電子機器及び固体撮像装置の制御方法
JP6705044B2 (ja) 積層型受光センサ及び車載撮像装置
US11792551B2 (en) Stacked light receiving sensor and electronic apparatus
WO2020027233A1 (fr) Dispositif d'imagerie et système de commande de véhicule
US11962916B2 (en) Imaging device with two signal processing circuitry partly having a same type of signal processing, electronic apparatus including imaging device, and imaging method
WO2021075321A1 (fr) Appareil de capture d'image, dispositif électronique et procédé de capture d'image
JP7423491B2 (ja) 固体撮像装置及び車両制御システム
WO2020027161A1 (fr) Capteur de réception de lumière de type en couches et dispositif électronique
US20240021646A1 (en) Stacked light-receiving sensor and in-vehicle imaging device
TWI846718B (zh) 積層型受光感測器及電子機器
WO2021075292A1 (fr) Dispositif de réception de lumière, équipement électronique, et procédé de réception de lumière
WO2020027074A1 (fr) Dispositif d'imagerie à semi-conducteur et appareil électronique
US20240080546A1 (en) Imaging apparatus and electronic equipment
TWI840429B (zh) 積層型受光感測器及電子機器

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19845060

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20217001553

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019845060

Country of ref document: EP

Effective date: 20210301