US20210168318A1 - Stacked light-receiving sensor and electronic device


Info

Publication number
US20210168318A1
US20210168318A1 (application US17/262,691; US201917262691A)
Authority
US
United States
Prior art keywords
substrate, vehicle, imaging device, image, pixel array
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/262,691
Inventor
Ryoji Eki
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Semiconductor Solutions Corp
Original Assignee
Sony Semiconductor Solutions Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corp
Priority claimed from PCT/JP2019/030093 (published as WO2020027230A1)
Assigned to SONY SEMICONDUCTOR SOLUTIONS CORPORATION (assignment of assignor's interest; see document for details). Assignor: EKI, RYOJI
Publication of US20210168318A1

Classifications

    • H ELECTRICITY
        • H01 ELECTRIC ELEMENTS
            • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
                • H01L 27/00 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
                    • H01L 27/14 Devices including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
                        • H01L 27/144 Devices controlled by radiation
                            • H01L 27/146 Imager structures
                                • H01L 27/14601 Structural or functional details thereof
                                    • H01L 27/14634 Assemblies, i.e. Hybrid structures
                                    • H01L 27/14636 Interconnect structures
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 5/378; H04N 5/379
                • H04N 7/00 Television systems
                    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
                • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
                    • H04N 23/50 Constructional details
                        • H04N 23/54 Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
                    • H04N 23/60 Control of cameras or camera modules
                        • H04N 23/617 Upgrading or updating of programs or applications for camera control
                • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
                    • H04N 25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
                        • H04N 25/44 Extracting pixel data by partially reading an SSIS array
                    • H04N 25/70 SSIS architectures; Circuits associated therewith
                        • H04N 25/709 Circuitry for control of the power supply
                        • H04N 25/71 Charge-coupled device [CCD] sensors; Charge-transfer registers specially adapted for CCD sensors
                            • H04N 25/75 Circuitry for providing, modifying or processing image signals from the pixel array
                        • H04N 25/76 Addressed sensors, e.g. MOS or CMOS sensors
                            • H04N 25/77 Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components
                                • H04N 25/771 Pixel circuitry comprising storage means other than floating diffusion
                            • H04N 25/78 Readout circuits for addressed sensors, e.g. output amplifiers or A/D converters
                        • H04N 25/79 Arrangements of circuitry being divided between different or multiple substrates, chips or circuit boards, e.g. stacked image sensors
    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 18/00 Pattern recognition
                    • G06F 18/20 Analysing
                        • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
                            • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
                        • G06F 18/24 Classification techniques
                            • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
                                • G06F 18/2413 Classification techniques based on distances to training or reference patterns
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00 Computing arrangements based on biological models
                    • G06N 3/02 Neural networks
                        • G06N 3/04 Architecture, e.g. interconnection topology
                        • G06N 3/08 Learning methods
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 10/00 Arrangements for image or video recognition or understanding
                    • G06V 10/10 Image acquisition
                        • G06V 10/12 Details of acquisition arrangements; Constructional details thereof
                            • G06V 10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
                                • G06V 10/147 Details of sensors, e.g. sensor lenses
                    • G06V 10/70 Arrangements using pattern recognition or machine learning
                        • G06V 10/764 Arrangements using classification, e.g. of video objects
                        • G06V 10/82 Arrangements using neural networks
                • G06V 20/00 Scenes; Scene-specific elements
                    • G06V 20/50 Context or environment of the image
                        • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
                            • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
                                • G06V 20/584 Recognition of vehicle lights or traffic lights
                        • G06V 20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
                            • G06V 20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness

Definitions

  • the present disclosure relates to a stacked light-receiving sensor and an in-vehicle imaging device.
  • Patent Literature 1: WO2018/051809
  • the present disclosure proposes a stacked light-receiving sensor and an in-vehicle imaging device capable of performing more advanced processing within a chip.
  • a stacked light-receiving sensor comprises: a first substrate; a second substrate bonded to the first substrate; and connection wiring attached to the second substrate, the first substrate including a pixel array in which a plurality of unit pixels are arranged in a two-dimensional matrix, the second substrate including a converter configured to convert an analog pixel signal output from the pixel array to digital image data and a processing unit configured to perform a process for data based on the image data, wherein at least a part of the converter is disposed on a first side in the second substrate, the processing unit is disposed on a second side opposite to the first side in the second substrate, and the connection wiring is attached to a side other than the second side in the second substrate.
  • FIG. 1 is a block diagram illustrating an overall configuration example of an imaging device as an electronic device according to a first embodiment.
  • FIG. 2 is a diagram illustrating a chip configuration example of an image sensor according to the first embodiment.
  • FIG. 3 is a diagram illustrating a layout example of a first substrate in a first layout example according to the first embodiment.
  • FIG. 4 is a diagram illustrating a layout example of a second substrate in the first layout example according to the first embodiment.
  • FIG. 5 is a diagram illustrating a layout example of the second substrate in a second layout example according to the first embodiment.
  • FIG. 6 is a diagram illustrating a layout example of the second substrate in a third layout example according to the first embodiment.
  • FIG. 7 is a diagram illustrating a layout example of the second substrate in a fourth layout example according to the first embodiment.
  • FIG. 8 is a diagram illustrating a layout example of the second substrate in a fifth layout example according to the first embodiment.
  • FIG. 9 is a diagram illustrating a layout example of the second substrate in a sixth layout example according to the first embodiment.
  • FIG. 10 is a diagram illustrating a layout example of the second substrate in a seventh layout example according to the first embodiment.
  • FIG. 11 is a diagram illustrating a layout example of the second substrate in an eighth layout example according to the first embodiment.
  • FIG. 12 is a diagram illustrating a layout example of the second substrate in a ninth layout example according to the first embodiment.
  • FIG. 13 is a layout diagram illustrating an overall configuration example of the first substrate in an image sensor according to a second embodiment.
  • FIG. 14 is a diagram illustrating a chip configuration example of the image sensor according to the second embodiment.
  • FIG. 15 is a layout diagram illustrating an overall configuration example of the first substrate in an image sensor according to a third embodiment.
  • FIG. 16 is a layout diagram illustrating an overall configuration example of the second substrate in the image sensor according to the third embodiment.
  • FIG. 17 is a diagram illustrating a chip configuration example of the image sensor according to the third embodiment.
  • FIG. 18 is a diagram illustrating an overall configuration example of an imaging device according to a fourth embodiment.
  • FIG. 19 is a diagram illustrating an attachment example of the imaging device according to a first example of the fourth embodiment.
  • FIG. 20 is a diagram illustrating an attachment example of the imaging device according to a second example of the fourth embodiment.
  • FIG. 21 is a block diagram illustrating an example of the overall configuration of a vehicle control system.
  • FIG. 22 is a diagram illustrating an example of the installation position of a vehicle exterior information detector and an imager.
  • FIG. 23 is a diagram illustrating an example of the overall configuration of an endoscopic surgery system.
  • FIG. 24 is a block diagram illustrating an example of the functional configuration of a camera head and a CCU.
  • FIG. 25 is a block diagram illustrating an example of the overall configuration of a diagnostic assistance system.
  • FIG. 1 is a block diagram illustrating an overall configuration example of an imaging device as an electronic device according to the first embodiment.
  • an imaging device 1 includes an image sensor 10 that is a solid-state imaging device and an application processor 20 .
  • the image sensor 10 includes an imager 11 , a controller 12 , a converter (analog-to-digital converter, hereinafter referred to as ADC) 17 , a signal processor 13 , a digital signal processor (DSP) 14 , a memory 15 , and a selector (also referred to as output module) 16 .
  • the controller 12 controls each part in the image sensor 10 in accordance with a user's operation or a set operation mode, for example.
  • the imager 11 includes, for example, an optical system 104 including a zoom lens, a focus lens, and an aperture, and a pixel array 101 having a configuration in which unit pixels (unit pixels 101 a in FIG. 2 ) including light-receiving elements such as photodiodes are arranged in a two-dimensional matrix. External incident light passes through the optical system 104 to form an image on a light-receiving surface that is an array of light-receiving elements in the pixel array 101 . Each unit pixel 101 a in the pixel array 101 converts light incident on its light-receiving element into electricity to accumulate charge in accordance with the quantity of incident light so that the charge can be read out.
  • the ADC 17 converts an analog pixel signal for each unit pixel 101 a read from the imager 11 to a digital value to generate digital image data and outputs the generated image data to the signal processor 13 and/or the memory 15 .
  • the ADC 17 may include a voltage generating circuit that generates a drive voltage for driving the imager 11 from power supply voltage and the like.
  • the signal processor 13 performs a variety of signal processing for digital image data input from the ADC 17 or digital image data read from the memory 15 (hereinafter referred to as process target image data). For example, when the process target image data is a color image, the signal processor 13 converts the format of this image data to YUV image data, RGB image data, or the like. The signal processor 13 performs, for example, processing such as noise removal and white balance adjustment for the process target image data, if necessary. In addition, the signal processor 13 performs a variety of signal processing (also referred to as pre-processing) for the process target image data in order for the DSP 14 to process the image data.
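  • as a purely illustrative sketch (not part of the patent), the pre-processing chain described above (noise removal, white balance adjustment, and conversion to YUV format) could look like the following; the gray-world white balance, the box filter, and the function names are assumptions made for illustration:

```python
import numpy as np

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 RGB image (floats in [0, 1]) to YUV (BT.601)."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return rgb @ m.T

def white_balance_gray_world(rgb: np.ndarray) -> np.ndarray:
    """Gray-world white balance: scale each channel toward the global mean."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    return np.clip(rgb * (means.mean() / means), 0.0, 1.0)

def denoise_box(rgb: np.ndarray, k: int = 3) -> np.ndarray:
    """Simple box-filter noise removal (a stand-in for the real filter)."""
    pad = k // 2
    padded = np.pad(rgb, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(rgb)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + rgb.shape[0], dx:dx + rgb.shape[1]]
    return out / (k * k)

def preprocess(rgb: np.ndarray) -> np.ndarray:
    """Pre-processing chain ahead of the DSP's computation process."""
    return rgb_to_yuv(white_balance_gray_world(denoise_box(rgb)))
```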
  • the DSP 14 executes, for example, a computer program stored in the memory 15 to function as a processing unit that performs a variety of processing using a pre-trained model (also referred to as neural network calculation model) created by machine learning using a deep neural network (DNN).
  • This pre-trained model may be designed based on parameters generated by inputting, to a predetermined machine learning model, training data in which an input signal corresponding to output of the pixel array 101 is associated with a label for the input signal.
  • the predetermined machine learning model may be a learning model using a multi-layer neural network (also referred to as multi-layer neural network model).
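  • for concreteness, a minimal sketch of creating such a pre-trained model is shown below: a small multi-layer neural network whose parameters are generated from training data pairing an input signal with a label. The network size, loss, and training loop are illustrative assumptions, not the patent's design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: flattened "pixel-array output" signals with labels.
X = rng.random((256, 64))           # 256 samples of a 64-dim input signal
y = rng.integers(0, 2, size=256)    # label associated with each input signal

# Two-layer network: the "parameters generated" by machine learning.
W1 = rng.normal(0, 0.1, (64, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1));  b2 = np.zeros(1)

def forward(x):
    h = np.maximum(0, x @ W1 + b1)          # ReLU hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output
    return h, p.ravel()

lr = 0.1
for _ in range(200):                        # plain gradient-descent loop
    h, p = forward(X)
    g = (p - y)[:, None] / len(X)           # d(loss)/d(logit) for BCE loss
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (h > 0)                 # backprop through the ReLU
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# The trained (W1, b1, W2, b2) play the role of the dictionary
# coefficients stored in the memory 15 and executed later by the DSP 14.
```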
  • the DSP 14 performs a computation process based on the pre-trained model stored in the memory 15 to perform a process of combining image data with a dictionary coefficient stored in the memory 15 .
  • the result obtained through such a computation process (computation result) is output to the memory 15 and/or the selector 16 .
  • the computation result may include image data obtained by performing a computation process using the pre-trained model and a variety of information (metadata) obtained from the image data.
  • a memory controller for controlling access to the memory 15 may be embedded in the DSP 14 .
  • the image data to be processed by the DSP 14 may be image data normally read out from the pixel array 101 or may be image data having a data size reduced by decimating pixels of the image data normally read out.
  • the image data to be processed may be image data read out in a data size smaller than normal obtained by performing readout from the pixel array 101 with pixels decimated.
  • normal readout may be readout without decimating pixels.
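  • the decimated readout just described can be pictured with a short sketch (illustrative only; the decimation factor and function name are assumed):

```python
import numpy as np

def read_out(pixel_array: np.ndarray, decimate: int = 1) -> np.ndarray:
    """Normal readout (decimate=1) or decimated readout that skips pixels.

    Decimating by N keeps every Nth row and column, so the data size the
    DSP must process shrinks by roughly a factor of N*N.
    """
    return pixel_array[::decimate, ::decimate]

frame = np.arange(16).reshape(4, 4)
print(read_out(frame).shape)              # (4, 4): normal readout
print(read_out(frame, decimate=2).shape)  # (2, 2): a quarter of the data
```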
  • the memory 15 stores image data output from the ADC 17 , image data subjected to signal processing by the signal processor 13 , the computation result obtained from the DSP 14 , and the like, if necessary.
  • the memory 15 stores an algorithm of the pre-trained model to be executed by the DSP 14 , in the form of a computer program and a dictionary coefficient.
  • the DSP 14 can perform the computation process described above by training a learning model by changing the weights of a variety of parameters in the learning model using training data, by preparing a plurality of learning models and changing a learning model to be used in accordance with a computation process, or by acquiring a pre-trained learning model from an external device.
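  • as a concrete (and purely assumed) illustration of preparing a plurality of learning models and switching among them per computation process, a simple registry keyed by process name could look like this:

```python
# Illustrative only: a registry that lets DSP-side code switch the learning
# model used for a given computation process (names and structure assumed).
from typing import Callable, Dict

models: Dict[str, Callable[[object], object]] = {}

def register(name: str, model: Callable[[object], object]) -> None:
    """Make a (pre-)trained model available under a process name."""
    models[name] = model

def run(process: str, data: object) -> object:
    """Dispatch a computation process to whichever model is registered."""
    return models[process](data)

# Two stand-in "models"; in the text these would be pre-trained networks,
# possibly acquired from an external device.
register("object_detection", lambda d: ("boxes", d))
register("scene_classification", lambda d: ("label", d))
print(run("object_detection", "frame-0"))  # ('boxes', 'frame-0')
```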
  • the selector 16 , for example, selectively outputs image data output from the DSP 14 , or image data or a computation result stored in the memory 15 , in accordance with a select control signal from the controller 12 .
  • alternatively, the selector 16 may output the image data output from the signal processor 13 as it is.
  • the image data or the computation result output from the selector 16 is input to the application processor 20 that processes display and user interface.
  • the application processor 20 is configured, for example, with a central processing unit (CPU) and executes an operating system and a variety of application software.
  • This application processor 20 may be equipped with functions such as a graphics processing unit (GPU) and a baseband processor.
  • the application processor 20 performs a variety of processes for the input image data or the computation result as necessary, or performs display to users, or transmits the input image data or the computation result to an external cloud server 30 through a predetermined network 40 .
  • a variety of networks such as the Internet, a wired local area network (LAN) or a wireless LAN, a mobile communication network, or Bluetooth (registered trademark) can be applied to the predetermined network 40 .
  • the image data or the computation result may be transmitted not only to the cloud server 30 but also to a variety of information processing devices (systems) having a communication function, such as a server operating on its own, a file server storing a variety of data, and a communication terminal such as a mobile phone.
  • FIG. 2 is a diagram illustrating a chip configuration of the image sensor according to the present embodiment.
  • the image sensor 10 has a stack structure in which a first substrate (die) 100 shaped like a quadrangular flat plate and a second substrate (die) 120 similarly shaped like a quadrangular flat plate are bonded together.
  • the first substrate 100 and the second substrate 120 may have the same size, for example.
  • the first substrate 100 and the second substrate 120 each may be a semiconductor substrate such as a silicon substrate.
  • the pixel array 101 of the imager 11 is provided in the first substrate 100 .
  • a part or the whole of the optical system 104 may be provided on a chip in the first substrate 100 .
  • in the second substrate 120 , in the configuration of the image sensor 10 illustrated in FIG. 1 , the ADC 17 , the controller 12 , the signal processor 13 , the DSP 14 , the memory 15 , and the selector 16 are arranged. A not-illustrated interface circuit, driver circuit, and the like may also be arranged in the second substrate 120 .
  • the first substrate 100 and the second substrate 120 may be bonded together by chip-on-chip (CoC) technology in which the first substrate 100 and the second substrate 120 are individually diced into chips, and these diced first substrate 100 and second substrate 120 are bonded together, or by chip-on-wafer (CoW) technology in which one of the first substrate 100 and the second substrate 120 (for example, the first substrate 100 ) is diced into a chip, and the diced first substrate 100 is bonded to the second substrate 120 before dicing (that is, in a wafer state), or by wafer-on-wafer (WoW) technology in which the first substrate 100 and the second substrate 120 both in a wafer state are bonded together.
  • plasma joining can be used as a joining process between the first substrate 100 and the second substrate 120 .
  • the present invention is not limited thereto and a variety of joining processes may be used.
  • when the DSP 14 operates as a processing unit that performs a computation process based on a pre-trained model as described above, its operation algorithm is implemented in software by running computer programs. Operation algorithms for pre-trained models are updated day by day. It is therefore difficult to grasp in advance, for example, at which timing the DSP 14 performing a computation process based on a pre-trained model performs a process or at which timing a process of the DSP 14 peaks.
  • when the DSP 14 operates as a processing unit that performs computation based on a pre-trained model in a chip configuration in which the pixel array 101 is mounted on the first substrate 100 and the DSP 14 is mounted on the second substrate 120 , if the DSP 14 starts a computation process, or a process in the DSP 14 reaches a peak, during resetting of the pixel array 101 , during exposure of the pixel array 101 , or during readout of a pixel signal from each unit pixel 101 a of the pixel array 101 , noise is superimposed on the pixel signal read out from the pixel array 101 , and consequently the quality of the image acquired by the image sensor 10 deteriorates.
  • the present embodiment then reduces intrusion of noise resulting from the signal processing by the DSP 14 into the pixel array 101 , by adjusting the positional relation between the pixel array 101 and the DSP 14 . Accordingly, an image with less deterioration in quality can be acquired even when the DSP 14 operates as a processing unit that performs computation based on a pre-trained model.
  • the positional relation between the pixel array 101 and the DSP 14 according to the present embodiment will now be described in detail below with reference to the drawings.
  • the positional relation between the pixel array 101 and the DSP 14 will be described by taking several examples of the layout (also referred to as floor map) of layers (the first substrate 100 and the second substrate 120 ).
  • FIG. 3 and FIG. 4 are diagrams for explaining a first layout example according to the present embodiment.
  • FIG. 3 illustrates a layout example of the first substrate 100
  • FIG. 4 illustrates a layout example of the second substrate 120 .
  • the pixel array 101 of the imager 11 is provided in the first substrate 100 .
  • when the optical system 104 is mounted on the first substrate 100 , it is provided at a position corresponding to the pixel array 101 .
  • the pixel array 101 is provided off-center to one side L 101 among four sides L 101 to L 104 of the first substrate 100 .
  • the pixel array 101 is provided such that its center O 101 is more proximate to the side L 101 than the center O 100 of the first substrate 100 .
  • the side L 101 may be, for example, a shorter side.
  • the present invention is not limited thereto, and the pixel array 101 may be provided off-center to a longer side.
  • a TSV array 102 is provided, in which a plurality of through silicon vias (hereinafter referred to as TSVs) passing through the first substrate 100 are arranged as wiring for electrically connecting each unit pixel 101 a in the pixel array 101 to the ADC 17 provided in the second substrate 120 .
  • the TSV array 102 is provided in proximity to the side L 101 proximate to the pixel array 101 to ensure a space for each part such as the ADC 17 in the second substrate 120 .
  • the TSV array 102 may also be provided in a region proximate to one side L 104 (or may be the side L 103 ) of two sides L 103 and L 104 intersecting the side L 101 , in other words, in a region between the side L 104 (or the side L 103 ) and the pixel array 101 .
  • a pad array 103 having a plurality of pads arranged linearly is provided on each of the sides L 102 and L 103 , on which the pixel array 101 is not provided off-center, among four sides L 101 to L 104 of the first substrate 100 .
  • the pads included in the pad array 103 include, for example, a pad (also referred to as power supply pin) receiving power supply voltage for analog circuits such as the pixel array 101 and the ADC 17 , a pad (also referred to as power supply pin) receiving power supply voltage for digital circuits such as the signal processor 13 , the DSP 14 , the memory 15 , the selector 16 , and the controller 12 , a pad (also referred to as signal pin) for interfaces such as a mobile industry processor interface (MIPI) and a serial peripheral interface (SPI), and a pad (also referred to as signal pin) for input/output of clock and data.
  • Each pad is electrically connected to, for example, an external power supply circuit or an interface circuit through a wire. It is preferable that each pad array 103 and the TSV array 102 are sufficiently spaced apart to such a degree that influences of reflection of signals from the wire connected to each pad in the pad array 103 can be ignored.
  • in the second substrate 120 , in the configuration of the image sensor 10 illustrated in FIG. 1 , the ADC 17 , the controller 12 , the signal processor 13 , the DSP 14 , and the memory 15 are arranged.
  • the memory 15 is divided into two regions: a memory 15 A and a memory 15 B.
  • the ADC 17 is divided into two regions: an ADC 17 A and a digital-to-analog converter (DAC) 17 B.
  • the DAC 17 B supplies a reference voltage for AD conversion to the ADC 17 A and, broadly speaking, is included as part of the ADC 17 .
  • the selector 16 is also provided on the second substrate 120 .
  • the second substrate 120 also has wiring 122 in contact with and electrically connected to the TSVs in the TSV array 102 passing through the first substrate 100 (hereinafter simply referred to as TSV array 102 ), and a pad array 123 , in which a plurality of pads electrically connected to the pads in the pad array 103 of the first substrate 100 are arranged linearly.
  • for electrical connection between the TSV array 102 and the wiring 122 , for example, twin TSV technology, in which two TSVs, namely a TSV provided in the first substrate 100 and a TSV provided from the first substrate 100 to the second substrate 120 , are connected with the chip facing out, or shared TSV technology, in which a shared TSV provided from the first substrate 100 to the second substrate 120 provides connection, can be employed.
  • other connection modes include Cu—Cu bonding, in which copper (Cu) exposed on the joint surface of the first substrate 100 and Cu exposed on the joint surface of the second substrate 120 are joined.
  • the connection mode between the pads in the pad array 103 of the first substrate 100 and the pads in the pad array 123 of the second substrate 120 is, for example, wire bonding.
  • alternatively, connection modes such as through holes and castellation may be employed.
  • the ADC 17 A, the signal processor 13 , and the DSP 14 are arranged in order from the upstream side along the flow of a signal read out from the pixel array 101 , where the upstream side is the vicinity of the wiring 122 connected to the TSV array 102 . That is, the ADC 17 A to which a pixel signal read out from the pixel array 101 is initially input is provided in the vicinity of the wiring 122 on the most upstream side, next the signal processor 13 is provided, and the DSP 14 is provided in a region farthest from the wiring 122 .
  • Such a layout in which the ADC 17 to the DSP 14 are arranged from the upstream side along the flow of a signal can shorten the wiring connecting the parts. This layout leads to reduction in signal delay, reduction in signal propagation loss, improvement of the S/N ratio, and lower power consumption.
  • the controller 12 is provided, for example, in the vicinity of the wiring 122 on the upstream side. In FIG. 4 , the controller 12 is provided between the ADC 17 A and the signal processor 13 .
  • Such a layout leads to reduction in signal delay, reduction in signal propagation loss, improvement of the S/N ratio, and lower power consumption when the controller 12 controls the pixel array 101 .
  • the signal pin and the power supply pin for analog circuits can be collectively arranged in the vicinity of the analog circuits (for example, in the lower side of FIG. 4 ), the remaining signal pin and power supply pin for digital circuits can be collectively arranged in the vicinity of the digital circuits (for example, in the upper side of FIG. 4 ), or the power supply pin for analog circuits and the power supply pin for digital circuits can be sufficiently spaced apart from each other.
  • the DSP 14 , which is on the most downstream side, is provided on the side opposite to the ADC 17 A.
  • the DSP 14 can be provided in a region not overlapping with the pixel array 101 in the stacking direction of the first substrate 100 and the second substrate 120 (hereinafter simply referred to as top-bottom direction).
  • the DSP 14 and the signal processor 13 are connected by an interconnect 14 a configured with a part of the DSP 14 or a signal line.
  • the selector 16 is provided, for example, in the vicinity of the DSP 14 .
  • when the interconnect 14 a is configured with a part of the DSP 14 , the DSP 14 may partially overlap with the pixel array 101 in the top-bottom direction. Even in such a case, intrusion of noise into the pixel array 101 can be reduced compared with when the whole of the DSP 14 overlaps with the pixel array 101 in the top-bottom direction.
  • Memories 15 A and 15 B are arranged, for example, so as to surround the DSP 14 from three directions. In such an arrangement of the memories 15 A and 15 B surrounding the DSP 14 , the distance of wiring between each memory element in the memory 15 and the DSP 14 can be averaged while the distance can be reduced as a whole. Consequently, signal delay, signal propagation loss, and power consumption can be reduced when the DSP 14 accesses the memory 15 .
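  • to see why surrounding the DSP 14 with memory averages and shortens the wiring, compare the spread of Manhattan distances in a one-sided layout and a surrounding layout; this is a toy geometric model with assumed unit-grid positions, not the actual floor plan:

```python
import numpy as np

def wiring_stats(dsp_xy, memory_cells):
    """Mean and worst-case Manhattan wiring distance from the DSP to memory."""
    d = np.abs(np.asarray(memory_cells) - np.asarray(dsp_xy)).sum(axis=1)
    return float(d.mean()), float(d.max())

# Nine memory cells in a strip on one side of the DSP (at the origin) ...
one_sided = [(x, 3) for x in range(-4, 5)]
# ... versus the same nine cells wrapped around three sides of the DSP.
surrounding = ([(x, 3) for x in range(-2, 3)]
               + [(-3, y) for y in (1, 2)] + [(3, y) for y in (1, 2)])

print(wiring_stats((0, 0), one_sided))    # (5.22..., 7.0): longer, more spread
print(wiring_stats((0, 0), surrounding))  # (4.33..., 5.0): shorter, more uniform
```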
  • the pad array 123 is provided, for example, at a position on the second substrate 120 corresponding to the pad array 103 of the first substrate 100 in the top-bottom direction.
  • a pad positioned in the vicinity of the ADC 17 A is used for propagation of power supply voltage for analog circuits (mainly the ADC 17 A) or an analog signal.
  • a pad positioned in the vicinity of the controller 12 , the signal processor 13 , the DSP 14 , or the memories 15 A and 15 B is used for propagation of power supply voltage for digital circuits (mainly, the controller 12 , the signal processor 13 , the DSP 14 , the memories 15 A and 15 B) and a digital signal.
  • Such a pad layout can reduce the distance of wiring connecting the pads to the parts. This layout leads to reduction in signal delay, reduction in propagation loss of signals and power supply voltage, improvement of the S/N ratio, and lower power consumption.
  • the layout example of the first substrate 100 may be similar to the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 5 is a diagram illustrating a layout example of the second substrate in the second layout example.
  • the DSP 14 is provided at the center of a region in which the DSP 14 and the memory 15 are arranged.
  • the memory 15 is provided so as to surround the DSP 14 from four directions.
  • the distance of wiring between each memory element in the memory 15 and the DSP 14 can be further averaged while the distance can be further reduced as a whole. Consequently, signal delay, signal propagation loss, and power consumption can be further reduced when the DSP 14 accesses the memory 15 .
  • the DSP 14 and the pixel array 101 are arranged so as not to be superimposed on each other in the top-bottom direction.
  • the present invention is not limited thereto, and the DSP 14 may be partially superimposed on the pixel array 101 in the top-bottom direction. Even in such a case, compared with when the whole of the DSP 14 is superimposed on the pixel array 101 in the top-bottom direction, intrusion of noise into the pixel array 101 can be reduced.
  • the other layout may be similar to the first layout example and is not further elaborated here.
  • the layout example of the first substrate 100 may be similar to the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 6 is a diagram illustrating a layout example of the second substrate in the third layout example.
  • the DSP 14 is provided adjacent to the signal processor 13 .
  • the signal line from the signal processor 13 to the DSP 14 can be shortened. This layout leads to reduction in signal delay, reduction in propagation loss of signals and power supply voltage, improvement of the S/N ratio, and lower power consumption.
  • the memory 15 is provided so as to surround the DSP 14 from three directions. Consequently, signal delay, signal propagation loss, and power consumption can be reduced when the DSP 14 accesses the memory 15 .
  • the DSP 14 is partially superimposed on the pixel array 101 in the top-bottom direction. Even in such a case, compared with when the whole of the DSP 14 is superimposed on the pixel array 101 in the top-bottom direction, intrusion of noise into the pixel array 101 can be reduced.
  • the other layout may be similar to the other layout examples and is not further elaborated here.
  • the layout example of the first substrate 100 may be similar to the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 7 is a diagram illustrating a layout example of the second substrate in the fourth layout example.
  • in the fourth layout example, in a layout similar to the third layout example, that is, a layout in which the DSP 14 is provided adjacent to the signal processor 13 , the DSP 14 is provided at a position far from both of the two TSV arrays 102 .
  • the memory 15 is provided so as to surround the DSP 14 from two directions. Consequently, signal delay, signal propagation loss, and power consumption can be reduced when the DSP 14 accesses the memory 15 .
  • the DSP 14 is partially superimposed on the pixel array 101 in the top-bottom direction. Even in such a case, compared with when the whole of the DSP 14 is superimposed on the pixel array 101 in the top-bottom direction, intrusion of noise into the pixel array 101 can be reduced.
  • the other layout may be similar to the other layout examples and is not further elaborated here.
  • the layout example of the first substrate 100 may be similar to the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 8 is a diagram illustrating a layout example of the second substrate in the fifth layout example.
  • in the fifth layout example, in a layout similar to the first layout example, that is, a layout in which the DSP 14 is provided on the most downstream side, the DSP 14 is provided at a position far from both of the two TSV arrays 102 .
  • because the ADC 17 A to the DSP 14 can be arranged more faithfully along the signal flow, the signal line from the signal processor 13 to the DSP 14 can be further shortened. As a result, signal delay, signal propagation loss, and power consumption can be further reduced.
  • the other layout may be similar to the other layout examples and is not further elaborated here.
  • the layout example of the first substrate 100 may be similar to the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 9 is a diagram illustrating a layout example of the second substrate in the sixth layout example. As illustrated in FIG. 9 , in the sixth layout example, the DSP 14 is sandwiched in the top-bottom direction in the drawing between memories 15 C and 15 D divided into two regions.
  • the distance of wiring between each memory element in the memory 15 and the DSP 14 can be averaged while the distance can be reduced as a whole. Consequently, signal delay, signal propagation loss, and power consumption can be further reduced when the DSP 14 accesses the memory 15 .
  • the other layout may be similar to the first layout example and is not further elaborated here.
  • the layout example of the first substrate 100 may be similar to the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 10 is a diagram illustrating a layout example of the second substrate in the seventh layout example. As illustrated in FIG. 10 , in the seventh layout example, the memory 15 is sandwiched in the top-bottom direction in the drawing between DSPs 14 A and 14 B divided into two regions.
  • the distance of wiring between each memory element in the memory 15 and the DSP 14 can be averaged while the distance can be reduced as a whole. Consequently, signal delay, signal propagation loss, and power consumption can be further reduced when the DSP 14 accesses the memory 15 .
  • the other layout may be similar to the first layout example and is not further elaborated here.
  • the layout example of the first substrate 100 may be similar to the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 11 is a diagram illustrating a layout example of the second substrate in the eighth layout example. As illustrated in FIG. 11 , in the eighth layout example, the DSP 14 is sandwiched in the left-right direction in the drawing between memories 15 E and 15 F divided into two regions.
  • the distance of wiring between each memory element in the memory 15 and the DSP 14 can be averaged while the distance can be reduced as a whole. Consequently, signal delay, signal propagation loss, and power consumption can be further reduced when the DSP 14 accesses the memory 15 .
  • the other layout may be similar to the first layout example and is not further elaborated here.
  • the layout example of the first substrate 100 may be similar to the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 12 is a diagram illustrating a layout example of the second substrate in the ninth layout example. As illustrated in FIG. 12 , in the ninth layout example, the memory 15 is sandwiched in the left-right direction in the drawing between DSPs 14 C and 14 D divided into two regions.
  • the distance of wiring between each memory element in the memory 15 and the DSP 14 can be averaged while the distance can be reduced as a whole. Consequently, signal delay, signal propagation loss, and power consumption can be further reduced when the DSP 14 accesses the memory 15 .
  • the other layout may be similar to the first layout example and is not further elaborated here.
  • the positional relation between the pixel array 101 and the DSP 14 is adjusted such that at least a part of the DSP 14 of the second substrate 120 is not superimposed on the pixel array 101 in the stacking direction (the top-bottom direction) of the first substrate 100 and the second substrate 120 .
  • This configuration can reduce intrusion of noise resulting from signal processing by the DSP 14 into the pixel array 101 and therefore can provide an image with less deteriorated quality even when the DSP 14 operates as a processing unit that performs computation based on a pre-trained model.
  • An imaging device as an electronic device according to the second embodiment may be similar to, for example, the imaging device 1 described in the first embodiment with reference to FIG. 1 , which is hereby referred to and will not be further elaborated.
  • FIG. 13 is a layout diagram illustrating an overall configuration example of the first substrate in the image sensor according to the present embodiment.
  • FIG. 14 is a diagram illustrating a chip configuration example of the image sensor according to the present embodiment.
  • the size of a first substrate 200 is smaller than the size of the second substrate 120 .
  • the size of the first substrate 200 is reduced in accordance with the size of the pixel array 101 .
  • many first substrates 200 can be fabricated from a single semiconductor wafer.
  • the chip size of the image sensor 10 can be further reduced.
  • for bonding the first substrate 200 and the second substrate 120 , chip-on-chip (CoC) technology, in which the first substrate 200 and the second substrate 120 are individually diced into chips and then bonded, or chip-on-wafer (CoW) technology, in which the diced first substrate 200 is bonded to the second substrate 120 in a wafer state, can be used.
  • the layout of the first substrate 200 may be similar to, for example, the layout of the first substrate 100 illustrated in the first embodiment, excluding the upper portion.
  • the layout of the second substrate 120 may be similar to, for example, the second substrate 120 illustrated in the first embodiment.
  • the bonding place of the first substrate 200 to the second substrate 120 may be a position where at least a part of the pixel array 101 does not overlap the DSP 14 of the second substrate 120 in the top-bottom direction, in the same manner as in the first embodiment.
  • An imaging device as an electronic device according to the third embodiment may be similar to, for example, the imaging device 1 described in the first embodiment with reference to FIG. 1 , which is hereby referred to and will not be further elaborated.
  • FIG. 15 is a layout diagram illustrating an overall configuration example of the first substrate in the image sensor according to the present embodiment.
  • FIG. 16 is a layout diagram illustrating an overall configuration example of the second substrate in the image sensor according to the present embodiment.
  • FIG. 17 is a diagram illustrating a chip configuration example of the image sensor according to the present embodiment.
  • the size of a first substrate 300 is reduced in accordance with the size of the pixel array 101 .
  • the size of a second substrate 320 is reduced to the same degree as the size of the first substrate 300 .
  • a surplus region of the first substrate 300 can be reduced, and the chip size of the image sensor 10 can be further reduced accordingly.
  • the pixel array 101 and the DSP 14 are superimposed on each other in the stacking direction of the first substrate 300 and the second substrate 320 (hereinafter simply referred to as top-bottom direction). Because of this, noise resulting from the DSP 14 may be superimposed on a pixel signal read out from the pixel array 101 in some cases and may reduce the quality of an image acquired by the image sensor 10 .
  • the ADC 17 A and the DSP 14 are spaced apart from each other. Specifically, for example, the ADC 17 A is provided closer to one end L 321 of the second substrate 320 , while the DSP 14 is provided closer to an end L 322 on the side opposite to the end L 321 at which the ADC 17 A is disposed.
  • the end L 321 proximate to the ADC 17 A may be an end on which the wiring 122 connected to the TSV array 102 is provided.
  • the ADC 17 A, the signal processor 13 , and the DSP 14 are arranged in order from the upstream side along the flow of a signal read out from the pixel array 101 , where the upstream side is the vicinity of the wiring 122 connected to the TSV array 102 , in the same manner as in the foregoing embodiments.
  • the wiring connecting the parts therefore can be shortened. Consequently, reduction in signal delay, reduction in signal propagation loss, improvement in the S/N ratio, and lower power consumption can be achieved.
  • the ADC 17 A and the DSP 14 are spaced apart from each other, thereby reducing intrusion of noise produced in the DSP 14 into the ADC 17 A. Consequently, deterioration in quality of an image acquired by the image sensor 10 can be suppressed.
  • in the fourth embodiment, the imaging device 1 is mounted on a vehicle, that is, the imaging device 1 is an in-vehicle camera, by way of example.
  • the imaging device 1 is not necessarily attached to a vehicle but may be attached to various equipment, devices, and locations.
  • FIG. 18 is a diagram illustrating an overall configuration example of an imaging device according to the present embodiment.
  • note that the first substrate 100 in the image sensor 10 is not illustrated in FIG. 18 .
  • the chip of the image sensor 10 may be connected to a circuit board 400 provided with the application processor 20 and the like, for example, through connection wiring that is flexible and deformable, such as a flexible cable 402 .
  • one end of the flexible cable 402 may be connected to a side of the second substrate 120 other than the side to which the ADC 17 A is proximate, for example, the side to which the DSP 14 is proximate.
  • This configuration can reduce the wiring length from a data output end in the DSP 14 or the memory 15 provided in vicinity thereof to the flexible cable 402 , thereby suppressing size increase of the chip of the image sensor 10 and facilitating designing of this wiring layout. Reducing the wiring length can lead to suppression of signal delay, reduction in signal propagation loss, and lower power consumption.
  • FIG. 19 is a diagram illustrating an attachment example of the imaging device according to the first example of the present embodiment.
  • when the imaging device 1 is mounted as a front camera on a vehicle 410 , the imaging device 1 is provided, for example, in the vicinity of a windshield 411 inside the vehicle 410 , at a location that does not obstruct the field of view of a driver D.
  • the image sensor 10 is inclined at about 90° to the front direction of the vehicle 410 , for example, such that a light-receiving surface of the pixel array 101 of the first substrate 100 faces substantially the front side of the vehicle 410 .
  • the circuit board 400 connected to the image sensor 10 through the flexible cable 402 is installed, for example, such that its main plane is substantially horizontal.
  • the main plane of the circuit board 400 may be, for example, a surface parallel to the surface provided with an electronic circuit such as the application processor 20 .
  • the horizontal direction may be a horizontal direction in the interior space of the vehicle 410 .
  • the image sensor 10 and the circuit board 400 may be arranged, for example, such that the circuit board 400 is positioned below the image sensor 10 and at least the front-side end of the circuit board 400 protrudes toward the front direction of the vehicle relative to the front-side end of the image sensor 10 .
  • the imaging device 1 can be provided in closer proximity to the windshield 411 inclined toward the front direction of the vehicle 410 , so that the space occupied by the imaging device 1 can be reduced.
  • the respective end portions of the image sensor 10 and the circuit board 400 are arranged in proximity to each other so as to form an L shape, and the end portions arranged in proximity to each other are connected by the flexible cable 402 , whereby the length of the flexible cable 402 can be reduced.
  • the image sensor 10 is installed such that the ADC 17 A in the second substrate 120 is positioned on the upper side in the vertical direction and the DSP 14 is positioned on the lower side in the vertical direction.
  • the flexible cable 402 is attached so as to connect the lower-side end of the image sensor 10 and the rear-side end of the circuit board 400 .
  • the application processor 20 in the circuit board 400 is provided, for example, on the front side of the vehicle 410 relative to the image sensor 10 .
  • FIG. 20 is a diagram illustrating an installation example of the imaging device according to the second example of the present embodiment.
  • when the imaging device 1 is mounted as a rear camera on the vehicle 410 , the imaging device 1 is provided, for example, in the vicinity of a rear window 412 inside the vehicle 410 , at a location that does not obstruct the field of view of the driver D through a rearview mirror.
  • in a state in which the imaging device 1 is attached to the inside of the vehicle 410 , the image sensor 10 is inclined at about 90° to the rear direction of the vehicle 410 , for example, such that the light-receiving surface of the pixel array 101 of the first substrate 100 faces substantially the rear side of the vehicle 410 .
  • the circuit board 400 connected to the image sensor 10 through the flexible cable 402 is installed, for example, such that its main plane is substantially parallel to the light-receiving surface of the image sensor 10 .
  • the flexible cable 402 may connect the upper ends of the image sensor 10 and the circuit board 400 in the installed state, or may connect the lower ends thereof, or may connect the side ends thereof.
  • the embodiment is not limited to such an attachment example.
  • the image sensor 10 and the circuit board 400 may be arranged, for example, such that the circuit board 400 is positioned below the image sensor 10 and at least the rear-side end of the circuit board 400 protrudes to the rear direction of the vehicle relative to the rear-side end of the image sensor 10 , similarly to when the imaging device 1 is installed as a front camera in the first example.
  • the imaging device 1 can be provided in closer proximity to the rear window 412 inclined toward the rear direction of the vehicle 410 , so that the space occupied by the imaging device 1 can be reduced.
  • one end of the flexible cable 402 is connected to a side of the second substrate 120 other than the side to which the ADC 17 A is proximate, for example, the side to which the DSP 14 is proximate.
  • This configuration can reduce the wiring length from a data output end in the DSP 14 or the memory 15 provided in vicinity thereof to the flexible cable 402 , thereby suppressing size increase of the chip of the image sensor 10 and facilitating designing of this wiring layout. Reducing the wiring length can lead to suppression of signal delay, reduction in signal propagation loss, and lower power consumption.
  • the image sensor 10 and the circuit board 400 are not necessarily connected by a flexible and deformable connection cable, such as the flexible cable 402 , as described above but may be connected using a connection part that does not have flexibility, such as solder balls, bonding pads, and connection pins.
  • the technique according to the present disclosure is applied to a solid-state imaging device (image sensor 10 ) that acquires a two-dimensional image.
  • the application of the technique according to the present disclosure is not limited to a solid-state imaging device.
  • the technique according to the present disclosure can be applied to a variety of light-receiving sensors such as Time of Flight (ToF) sensors, infrared (IR) sensors, and dynamic vision sensors (DVS). That is, when the chip structure of light-receiving sensors is of the stacked type, reduction of noise included in sensor results and miniaturization of sensor chips can be achieved.
  • the technique according to the present disclosure (the present technique) is applicable to a variety of products.
  • the technique according to the present disclosure may be implemented as a device mounted on any type of movable bodies, such as automobiles, electric vehicles, hybrid electric vehicles, motorcycles, bicycles, personal mobility devices, airplanes, drones, vessels and ships, and robots.
  • FIG. 21 is a block diagram illustrating an example of the overall configuration of a vehicle control system that is an example of a movable body control system to which the technique according to the present disclosure is applicable.
  • a vehicle control system 12000 includes a plurality of electronic control units connected through a communication network 12001 .
  • the vehicle control system 12000 includes a drive control unit 12010 , a body control unit 12020 , a vehicle exterior information detection unit 12030 , a vehicle interior information detection unit 12040 , and a central control unit 12050 .
  • As a functional configuration of the central control unit 12050 , a microcomputer 12051 , a sound image output module 12052 , and an in-vehicle network I/F (interface) 12053 are illustrated.
  • the drive control unit 12010 controls operation of devices related to a drive system of a vehicle in accordance with a variety of computer programs.
  • the drive control unit 12010 functions as a control device for a drive force generating device for generating drive force of the vehicle, such as an internal combustion engine or a drive motor, a drive force transmission mechanism for transmitting drive force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, and a braking device for generating braking force of the vehicle.
  • the body control unit 12020 controls operation of a variety of devices installed in the vehicle body in accordance with a variety of computer programs.
  • the body control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or a variety of lamps such as head lamps, rear lamps, brake lamps, turn signals, and fog lamps.
  • The body control unit 12020 may receive radio waves transmitted from a portable device that substitutes for a key, or signals from a variety of switches.
  • the body control unit 12020 accepts input of the radio waves or signals and controls a door lock device, a power window device, a lamp, and the like of the vehicle.
  • the vehicle exterior information detection unit 12030 detects information on the outside of the vehicle equipped with the vehicle control system 12000 .
  • an imager 12031 is connected to the vehicle exterior information detection unit 12030 .
  • the vehicle exterior information detection unit 12030 allows the imager 12031 to capture an image of the outside of the vehicle and receives the captured image.
  • the vehicle exterior information detection unit 12030 may perform an object detection process or a distance detection process for persons, vehicles, obstacles, signs, or characters on roads, based on the received image.
  • The imager 12031 is an optical sensor that receives light and outputs an electrical signal corresponding to the quantity of the received light.
  • The imager 12031 may output the electrical signal as an image or as distance measurement information.
  • Light received by the imager 12031 may be visible light or invisible light such as infrared rays.
  • the vehicle interior information detection unit 12040 detects information on the inside of the vehicle.
  • the vehicle interior information detection unit 12040 is connected to, for example, a driver state detector 12041 that detects a state of the driver.
  • the driver state detector 12041 includes, for example, a camera for taking an image of the driver, and the vehicle interior information detection unit 12040 may calculate the degree of fatigue or the degree of concentration of the driver or may determine whether the driver falls asleep, based on detection information input from the driver state detector 12041 .
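The disclosure does not prescribe how the degree of fatigue or drowsiness is computed. Purely as an illustrative sketch, a PERCLOS-style heuristic could flag drowsiness when the driver's eyes are closed for a large fraction of a sliding window of frames; the window length, the threshold, and the upstream eye-state detector are all hypothetical.

```python
from collections import deque

class DrowsinessEstimator:
    """Toy PERCLOS-style estimate: fraction of recent frames with eyes closed."""

    def __init__(self, window_frames: int = 300, alarm_ratio: float = 0.3):
        self.flags = deque(maxlen=window_frames)  # 1 = eyes closed in that frame
        self.alarm_ratio = alarm_ratio            # hypothetical alarm threshold

    def update(self, eyes_closed: bool) -> bool:
        """Feed one frame's eye state; return True once the driver seems drowsy."""
        self.flags.append(1 if eyes_closed else 0)
        if len(self.flags) < self.flags.maxlen:
            return False  # not enough history collected yet
        return sum(self.flags) / len(self.flags) >= self.alarm_ratio
```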
  • the microcomputer 12051 can compute a control target value for the drive force generating device, the steering mechanism, or the braking device, based on information on the inside and outside of the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040 , and output a control command to the drive control unit 12010 .
  • the microcomputer 12051 can perform coordination control for the purpose of function implementation of advanced driver assistance systems (ADAS), including collision avoidance or shock mitigation of the vehicle, car-following drive based on the distance between vehicles, vehicle speed-keeping drive, vehicle collision warning, or lane departure warning.
  • the microcomputer 12051 can perform coordination control for the purpose of, for example, autonomous driving, in which the drive force generating device, the steering mechanism, or the braking device is controlled based on information on the surroundings of the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040 to enable autonomous driving without depending on the operation by the driver.
  • the microcomputer 12051 can output a control command to the body control unit 12020 , based on information on the outside of the vehicle acquired by the vehicle exterior information detection unit 12030 .
  • The microcomputer 12051 can perform coordination control for the purpose of preventing dazzle, for example, by controlling the head lamps in accordance with the position of a vehicle ahead or an oncoming vehicle detected by the vehicle exterior information detection unit 12030 , switching from high beams to low beams.
  • the sound image output module 12052 transmits an output signal of at least one of sound and image to an output device capable of visually or aurally giving information to a passenger in the vehicle or the outside of the vehicle.
  • An audio speaker 12061 , a display 12062 , and an instrument panel 12063 are illustrated as the output device.
  • the display 12062 may include, for example, at least one of an on-board display and a head-up display.
  • FIG. 22 is a diagram illustrating an example of the installation position of the imager 12031 .
  • imagers 12101 , 12102 , 12103 , 12104 , and 12105 are provided as the imager 12031 .
  • The imagers 12101 , 12102 , 12103 , 12104 , and 12105 are provided, for example, at positions such as the front nose, the side mirrors, the rear bumper, and the back door of a vehicle 12100 , and at an upper portion of the front glass inside the vehicle.
  • the imager 12101 provided at the front nose and the imager 12105 provided at the upper portion of the front glass inside the vehicle mainly acquire an image in front of the vehicle 12100 .
  • the imagers 12102 and 12103 provided at the side mirrors mainly acquire images on the sides of the vehicle 12100 .
  • the imager 12104 provided at the rear bumper or the back door mainly acquires an image behind the vehicle 12100 .
  • the imager 12105 provided at the upper portion of the front glass in the vehicle interior is mainly used for detecting a vehicle ahead, pedestrians, obstacles, traffic signs, road signs, traffic lanes, and the like.
  • FIG. 22 illustrates an example of the imaging ranges of the imagers 12101 to 12104 .
  • An imaging range 12111 indicates the imaging range of the imager 12101 provided at the front nose; imaging ranges 12112 and 12113 indicate the imaging ranges of the imagers 12102 and 12103 provided at the side mirrors; and an imaging range 12114 indicates the imaging range of the imager 12104 provided at the rear bumper or the back door.
  • a bird's eye view of the vehicle 12100 viewed from above can be obtained by superimposing image data captured by the imagers 12101 to 12104 .
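As a sketch of how such superimposition could be realized (the disclosure does not specify the method), each camera image can be warped onto a common ground plane with a per-camera homography and the warped images averaged where they overlap; the homography matrices are assumed given here and would in practice come from extrinsic camera calibration.

```python
import numpy as np
import cv2  # OpenCV; availability assumed for this sketch

def birds_eye_view(images, homographies, out_size=(800, 800)):
    """Warp each camera image onto a common ground plane and superimpose them.

    images: frames from the imagers 12101 to 12104 (HxWx3, uint8)
    homographies: one 3x3 matrix per camera mapping image pixels to the
        ground plane; placeholders here, from extrinsic calibration in practice
    """
    canvas = np.zeros((out_size[1], out_size[0], 3), np.float32)
    weight = np.zeros((out_size[1], out_size[0], 1), np.float32)
    for img, H in zip(images, homographies):
        warped = cv2.warpPerspective(img, H, out_size).astype(np.float32)
        mask = (warped.sum(axis=2, keepdims=True) > 0).astype(np.float32)
        canvas += warped * mask          # accumulate the overlapping views
        weight += mask
    return (canvas / np.maximum(weight, 1.0)).astype(np.uint8)  # average overlaps
```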
  • At least one of the imagers 12101 to 12104 may have a function of acquiring distance information.
  • at least one of the imagers 12101 to 12104 may be a stereo camera including a plurality of image sensors or may be an image sensor having a pixel for phase difference detection.
  • The microcomputer 12051 can obtain the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (the speed relative to the vehicle 12100 ), based on the distance information obtained from the imagers 12101 to 12104 , and can thereby extract, as a vehicle ahead, the closest three-dimensional object on the travel path of the vehicle 12100 that is traveling at a predetermined speed (for example, 0 km/h or higher) in substantially the same direction as the vehicle 12100 .
  • The microcomputer 12051 can preset an inter-vehicle distance to be kept behind the vehicle ahead and perform, for example, automatic braking control (including car-following stop control) and automatic acceleration control (including car-following startup control). In this way, coordination control can be performed for the purpose of, for example, autonomous driving in which the vehicle runs autonomously without depending on the operation of the driver.
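A minimal sketch of the two items above, assuming per-object distance and speed are already available from the distance information; the lane test, the speed threshold, the headway target, and the simple proportional control law are illustrative assumptions, not the patent's method.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Track:
    distance_m: float    # distance from the distance information
    speed_kmh: float     # derived from the temporal change of the distance
    in_own_lane: bool    # whether the object lies on the own path of travel

def select_vehicle_ahead(tracks: List[Track],
                         min_speed_kmh: float = 0.0) -> Optional[Track]:
    """Closest in-lane object moving forward at or above the preset speed."""
    ahead = [t for t in tracks if t.in_own_lane and t.speed_kmh >= min_speed_kmh]
    return min(ahead, key=lambda t: t.distance_m) if ahead else None

def follow_command(lead: Optional[Track], target_gap_m: float = 40.0,
                   gain: float = 0.5) -> float:
    """Tiny proportional law: positive = speed up, negative = brake."""
    if lead is None:
        return 0.0                       # nothing ahead: hold current speed
    return gain * (lead.distance_m - target_gap_m)  # illustrative control only
```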
  • The microcomputer 12051 can classify three-dimensional object data into categories such as "two-wheel vehicle", "standard-sized vehicle", "heavy vehicle", "pedestrian", "utility pole", and "any other three-dimensional object", based on the distance information obtained from the imagers 12101 to 12104 , extract the data, and use the extracted data for automatic avoidance of obstacles.
  • The microcomputer 12051 identifies each obstacle in the surroundings of the vehicle 12100 as an obstacle visible to the driver of the vehicle 12100 or as an obstacle that is hardly visible to the driver.
  • the microcomputer 12051 determines a collision risk indicating the degree of risk of collision with each obstacle and, when the collision risk is equal to or higher than a setting value and there is a possibility of collision, outputs an alarm to the driver through the audio speaker 12061 or the display 12062 , or performs forced deceleration or avoidance steering through the drive control unit 12010 , thereby implementing drive assistance for collision avoidance.
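For illustration only, the collision-risk branching described above could use an inverse time-to-collision score; the thresholds (the "setting value") and the mapping to an alarm or forced deceleration are hypothetical.

```python
def collision_risk(distance_m: float, closing_speed_ms: float) -> float:
    """Inverse time-to-collision as a simple risk score (higher = more urgent)."""
    if closing_speed_ms <= 0.0:          # gap is opening: no collision course
        return 0.0
    return closing_speed_ms / max(distance_m, 0.1)

def assistance_action(risk: float, warn_level: float = 0.25,
                      brake_level: float = 0.5) -> str:
    """Map the risk score to a driver-assistance response."""
    if risk >= brake_level:              # e.g. time-to-collision under 2 s
        return "forced_deceleration"     # via the drive control unit 12010
    if risk >= warn_level:               # e.g. time-to-collision under 4 s
        return "driver_alarm"            # via speaker 12061 / display 12062
    return "none"
```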
  • At least one of the imagers 12101 to 12104 may be an infrared camera that detects infrared rays.
  • the microcomputer 12051 can recognize a pedestrian by determining whether a pedestrian exists in the image captured by the imagers 12101 to 12104 .
  • Recognition of pedestrians is performed, for example, through a procedure of extracting feature points in the images captured by the imagers 12101 to 12104 serving as infrared cameras and a procedure of performing pattern matching with a series of feature points indicating the outline of an object to determine whether the object is a pedestrian.
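A sketch of that two-step procedure, with ORB feature points and brute-force descriptor matching standing in for whatever feature extractor and pattern matcher the system actually uses; the thresholds are illustrative.

```python
import cv2
import numpy as np

def looks_like_pedestrian(ir_image: np.ndarray, outline_template: np.ndarray,
                          min_matches: int = 20) -> bool:
    """Step 1: extract feature points from the infrared frame; step 2: match
    them against a pedestrian-outline template and decide by the match count.
    Both inputs are single-channel uint8 images."""
    orb = cv2.ORB_create()
    _, desc_img = orb.detectAndCompute(ir_image, None)
    _, desc_tpl = orb.detectAndCompute(outline_template, None)
    if desc_img is None or desc_tpl is None:
        return False                     # no feature points found
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_tpl, desc_img)
    good = [m for m in matches if m.distance < 50]   # illustrative threshold
    return len(good) >= min_matches
```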
  • When a pedestrian is recognized, the sound image output module 12052 controls the display 12062 such that a rectangular outline for highlighting the recognized pedestrian is superimposed on the displayed image.
  • the sound image output module 12052 may control the display 12062 such that an icon indicating a pedestrian appears at a desired position.
  • the technique according to the present disclosure is applicable to the imager 12031 and the like in the configuration described above.
  • By applying the technique according to the present disclosure, miniaturization of the imager 12031 and the like can be achieved, which facilitates the design of the interior and the exterior of the vehicle 12100 .
  • a clear image with reduced noise can be acquired to provide a driver with a more visible image. Consequently, the driver's fatigue can be alleviated.
  • the technique according to the present disclosure (the present technique) is applicable to a variety of products.
  • the technique according to the present disclosure may be applied to an endoscopic surgery system.
  • FIG. 23 is a diagram illustrating an example of the overall configuration of an endoscopic surgery system to which the technique according to the present disclosure (the present technique) is applicable.
  • FIG. 23 illustrates a situation in which an operator (doctor) 11131 uses an endoscopic surgery system 11000 to perform an operation on a patient 11132 on a patient bed 11133 .
  • the endoscopic surgery system 11000 includes an endoscope 11100 , other surgical instruments 11110 such as an insufflation tube 11111 and an energy treatment tool 11112 , a support arm device 11120 supporting the endoscope 11100 , and a cart 11200 carrying a variety of devices for endoscopic surgery.
  • the endoscope 11100 includes a barrel 11101 having a region of a predetermined length from its tip end to be inserted into the body cavity of the patient 11132 , and a camera head 11102 connected to the base end of the barrel 11101 .
  • In the illustrated example, the endoscope 11100 is configured as a rigid endoscope having a rigid barrel 11101 .
  • The endoscope 11100 may instead be configured as a flexible endoscope having a flexible barrel.
  • The tip end of the barrel 11101 has an opening in which an objective lens is fitted.
  • a light source device 11203 is connected to the endoscope 11100 . Light generated by the light source device 11203 is propagated to the tip end of the barrel through a light guide extending inside the barrel 11101 and irradiates an observation target in the body cavity of the patient 11132 through the objective lens.
  • the endoscope 11100 may be a forward-viewing endoscope or may be a forward-oblique viewing endoscope or a side-viewing endoscope.
  • An optical system and an image sensor are provided inside the camera head 11102 .
  • Reflected light (observation light) from an observation target is collected by the optical system onto the image sensor.
  • the observation light is converted to electricity by the image sensor to generate an electrical signal corresponding to the observation light, that is, an image signal corresponding to an observation image.
  • the image signal is transmitted as RAW data to a camera control unit (CCU) 11201 .
  • the CCU 11201 is configured with a central processing unit (CPU), a graphics processing unit (GPU), or the like to centrally control the operation of the endoscope 11100 and a display device 11202 .
  • the CCU 11201 receives an image signal from the camera head 11102 and performs a variety of image processing on the image signal, for example, a development process (demosaicing) for displaying an image based on the image signal.
  • the display device 11202 displays an image based on the image signal subjected to image processing by the CCU 11201 , under the control of the CCU 11201 .
  • the light source device 11203 is configured with, for example, a light source such as a light emitting diode (LED) and supplies the endoscope 11100 with radiation light in imaging a surgery site.
  • An input device 11204 is an input interface with the endoscopic surgery system 11000 .
  • the user can input a variety of information and instructions to the endoscopic surgery system 11000 through the input device 11204 .
  • For example, the user inputs an instruction to change the imaging conditions of the endoscope 11100 (the kind of radiation light, magnification, focal length, etc.).
  • a treatment tool control device 11205 controls actuation of the energy treatment tool 11112 for cauterization of tissues, incision, or sealing of blood vessels.
  • An insufflator 11206 feeds gas into the body cavity through the insufflation tube 11111 to insufflate the body cavity of the patient 11132 in order to ensure the field of view with the endoscope 11100 and ensure a working space for the operator.
  • a recorder 11207 is a device capable of recording a variety of information on surgery.
  • a printer 11208 is a device capable of printing a variety of information on surgery in a variety of forms such as text, image, or graph.
  • the light source device 11203 that supplies the endoscope 11100 with radiation light in imaging a surgery site can be configured with, for example, a white light source such as an LED, a laser light source, or a combination thereof.
  • When a white light source is configured with a combination of RGB laser light sources, the output power and the output timing of each color (each wavelength) can be controlled accurately, and, therefore, the white balance of the captured image can be adjusted in the light source device 11203 .
  • an observation target is irradiated time-divisionally with laser light from each of the RGB laser light sources, and actuation of the image sensor in the camera head 11102 is controlled in synchronization with the radiation timing, whereby an image corresponding to each of R, G, and B can be captured time-divisionally.
  • a color image can be obtained even without color filters in the image sensor.
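A sketch of this time-divisional capture, assuming two hypothetical driver hooks `fire_laser` and `capture_frame`; three monochrome exposures, one per laser wavelength and synchronized to its radiation timing, are stacked into an RGB frame without any color filter.

```python
import numpy as np

def capture_color_time_divisional(fire_laser, capture_frame):
    """Capture R, G, B frames in sequence with a monochrome (filterless) sensor.

    fire_laser(color): hypothetical hook turning on one of the RGB lasers
    capture_frame():   hypothetical hook returning one HxW monochrome frame
    """
    planes = []
    for color in ("R", "G", "B"):
        fire_laser(color)               # irradiate the target with one wavelength
        planes.append(capture_frame())  # exposure synchronized to that color
    return np.stack(planes, axis=-1)    # HxWx3 color image, no color filters
```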
  • The actuation of the light source device 11203 may be controlled such that the intensity of the output light is changed at predetermined time intervals.
  • The actuation of the image sensor in the camera head 11102 is controlled in synchronization with the timing of the intensity change to acquire images time-divisionally, and the images are combined to generate an image with a high dynamic range, free from blocked-up shadows and blown-out highlights.
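One common way to realize such combination (an illustrative choice, not necessarily the one intended here) is a weighted merge, in linear light, of frames captured at known relative exposures:

```python
import numpy as np

def combine_hdr(frames, exposures):
    """Merge frames taken while the light intensity/exposure is varied.

    frames: list of HxW float arrays in [0, 1], short to long exposure
    exposures: relative exposure of each frame (e.g. [0.25, 1.0, 4.0])
    The hat-shaped weights favor mid-range pixels, so blown-out highlights
    and blocked-up shadows are each covered by a better-exposed frame.
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, exp in zip(frames, exposures):
        w = 1.0 - np.abs(img - 0.5) * 2.0   # weight is 0 at the extremes
        acc += w * (img / exp)              # back to linear radiance
        wsum += w
    return acc / np.maximum(wsum, 1e-6)
```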
  • the light source device 11203 may be configured to supply light in a predetermined wavelength band corresponding to specific light observation.
  • In specific light observation, for example, narrow band imaging is performed, which uses the wavelength dependency of light absorption in body tissues and applies light in a band narrower than that of the radiation light in normal observation (that is, white light) to capture a high-contrast image of predetermined tissues such as blood vessels in the outermost surface of the mucosa.
  • fluorescence observation may be performed in which an image is acquired by fluorescence generated by radiation of excitation light.
  • In fluorescence observation, for example, excitation light is applied to body tissues and fluorescence from the body tissues is observed (autofluorescence imaging), or a reagent such as indocyanine green (ICG) is locally injected into body tissues and excitation light corresponding to the fluorescence wavelength of the reagent is applied to the body tissues to obtain a fluorescence image.
  • the light source device 11203 may be configured to supply narrow-band light and/or excitation light corresponding to such specific light observation.
  • FIG. 24 is a block diagram illustrating an example of the functional configuration of the camera head 11102 and the CCU 11201 illustrated in FIG. 23 .
  • the camera head 11102 includes a lens unit 11401 , an imager 11402 , a driver 11403 , a communication module 11404 , and a camera head controller 11405 .
  • the CCU 11201 includes a communication module 11411 , an image processor 11412 , and a controller 11413 .
  • the camera head 11102 and the CCU 11201 are connected to communicate with each other through a transmission cable 11400 .
  • the lens unit 11401 is an optical system provided at a connection portion to the barrel 11101 . Observation light taken in from the tip end of the barrel 11101 is propagated to the camera head 11102 and enters the lens unit 11401 .
  • the lens unit 11401 is configured with a combination of a plurality of lenses including a zoom lens and a focus lens.
  • the imager 11402 may be configured with one image sensor (called single sensor-type) or a plurality of image sensors (called multi sensor-type).
  • In the multi sensor-type, for example, image signals corresponding to R, G, and B may be generated by the respective image sensors and combined to produce a color image.
  • the imager 11402 may have a pair of image sensors for acquiring image signals for right eye and for left eye corresponding to three-dimensional (3D) display.
  • 3D display enables the operator 11131 to more accurately grasp the depth of living tissues in a surgery site.
  • In the multi sensor-type, a plurality of lens units 11401 may be provided corresponding to the respective image sensors.
  • the imager 11402 is not necessarily provided in the camera head 11102 .
  • the imager 11402 may be provided immediately behind the objective lens inside the barrel 11101 .
  • the driver 11403 is configured with an actuator and moves the zoom lens and the focus lens of the lens unit 11401 by a predetermined distance along the optical axis under the control of the camera head controller 11405 .
  • the magnification and the focal point of an image captured by the imager 11402 thus can be adjusted as appropriate.
  • the communication module 11404 is configured with a communication device for transmitting/receiving a variety of information to/from the CCU 11201 .
  • the communication module 11404 transmits an image signal obtained from the imager 11402 as RAW data to the CCU 11201 through the transmission cable 11400 .
  • the communication module 11404 receives a control signal for controlling actuation of the camera head 11102 from the CCU 11201 and supplies the received signal to the camera head controller 11405 .
  • the control signal includes, for example, information on imaging conditions, such as information specifying a frame rate of the captured images, information specifying an exposure value in imaging, and/or information specifying a magnification and a focal point of the captured image.
  • The imaging conditions such as frame rate, exposure value, magnification, and focal point may be specified as appropriate by the user or may be automatically set by the controller 11413 of the CCU 11201 based on the acquired image signal.
  • In the latter case, the endoscope 11100 is equipped with an auto exposure (AE) function, an auto focus (AF) function, and an auto white balance (AWB) function.
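For illustration, the imaging-condition payload of such a control signal could be modeled as a small record; the field names below are hypothetical, and a `None` field would mean the corresponding auto function (AE, AF, AWB) decides the value on the CCU side.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImagingConditions:
    """Hypothetical payload of the CCU-to-camera-head control signal."""
    frame_rate_fps: Optional[float] = None   # None = auto / leave unchanged
    exposure_value: Optional[float] = None   # None = auto exposure (AE)
    magnification: Optional[float] = None    # zoom lens target
    focal_point: Optional[float] = None      # None = auto focus (AF)

# Example: the user fixes the frame rate; AE/AF fill in the rest automatically.
cmd = ImagingConditions(frame_rate_fps=60.0)
```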
  • the camera head controller 11405 controls actuation of the camera head 11102 , based on a control signal received from the CCU 11201 through the communication module 11404 .
  • the communication module 11411 is configured with a communication device for transmitting/receiving a variety of information to/from the camera head 11102 .
  • the communication module 11411 receives an image signal transmitted from the camera head 11102 through the transmission cable 11400 .
  • the communication module 11411 transmits a control signal for controlling actuation of the camera head 11102 to the camera head 11102 .
  • the image signal and the control signal can be transmitted via electrical communication or optical communication.
  • the image processor 11412 performs a variety of image processing on the image signal that is RAW data transmitted from the camera head 11102 .
  • the controller 11413 performs a variety of control on imaging of a surgery site and the like by the endoscope 11100 and display of a captured image obtained by imaging of a surgery site and the like. For example, the controller 11413 generates a control signal for controlling actuation of the camera head 11102 .
  • The controller 11413 displays a captured image visualizing a surgery site and the like on the display device 11202 , based on the image signal subjected to image processing by the image processor 11412 . In doing so, the controller 11413 may recognize a variety of objects in the captured image using a variety of image recognition techniques. For example, the controller 11413 can detect the shape of edges, colors, and the like of objects included in the captured image to recognize a surgical instrument such as forceps, a specific living body site, bleeding, and mist in use of the energy treatment tool 11112 . When displaying the captured image on the display device 11202 , the controller 11413 may use the recognition result to superimpose a variety of surgery assisting information on the image of the surgery site. Surgery assisting information superimposed and presented to the operator 11131 can alleviate the burden on the operator 11131 and enable the operator 11131 to proceed with the surgery reliably.
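Purely as an illustration of the edge- and color-based recognition and overlay described above (not the CCU 11201's actual algorithm), metallic instruments could be located by their low color saturation combined with strong edges, and bounding boxes drawn as surgery assisting information:

```python
import cv2
import numpy as np

def highlight_instruments(frame_bgr: np.ndarray) -> np.ndarray:
    """Find low-saturation (metallic) regions with strong edges and draw
    bounding boxes over them as surgery-assisting information."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    low_sat = cv2.inRange(hsv[..., 1], 0, 60)          # metallic = unsaturated
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    mask = cv2.bitwise_and(low_sat, cv2.dilate(edges, np.ones((5, 5), np.uint8)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    out = frame_bgr.copy()
    for c in contours:
        if cv2.contourArea(c) > 500:                   # illustrative size gate
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(out, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return out
```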
  • the transmission cable 11400 connecting the camera head 11102 and the CCU 11201 is an electrical signal cable corresponding to communication of electrical signals, an optical fiber corresponding to optical communication, or a composite cable thereof.
  • the transmission cable 11400 is used for wired communication.
  • communication between the camera head 11102 and the CCU 11201 may be wireless.
  • the technique according to the present disclosure is applicable to, for example, the imager 11402 and the like in the camera head 11102 among the configurations described above.
  • By applying the technique according to the present disclosure, the camera head 11102 and the like can be miniaturized, resulting in a more compact endoscopic surgery system 11000 .
  • When the technique according to the present disclosure is applied to the camera head 11102 and the like, a clear image with reduced noise can be acquired to provide the operator with a more visible image. Consequently, the operator's fatigue can be alleviated.
  • the technique according to the present disclosure may be applied to, for example, a microscopic surgery system.
  • the technique according to the present disclosure is applicable to a variety of products.
  • The technique according to the present disclosure may be applied to a pathology diagnosis system with which a doctor diagnoses a lesion by observing cells and tissues sampled from a patient, and to an assistance system therefor (hereinafter referred to as a diagnostic assistance system).
  • This diagnostic assistance system may be a whole slide imaging (WSI) system for diagnosing pathological changes based on an image acquired using digital pathology technology, and assisting the diagnosis.
  • FIG. 25 is a diagram illustrating an example of the overall configuration of a diagnostic assistance system 5500 to which the technique according to the present disclosure is applied.
  • the diagnostic assistance system 5500 includes one or more pathology systems 5510 .
  • the diagnostic assistance system 5500 may further include a medical information system 5530 and a derivation device 5540 .
  • Each of one or more pathology systems 5510 is a system mainly used by pathologists and introduced into, for example, a research laboratory or a hospital.
  • the pathology systems 5510 may be introduced into different hospitals and are connected to the medical information system 5530 and the derivation device 5540 through a variety of networks such as wide area networks (WANs) (including the Internet), local area networks (LAN), public networks, and mobile communication networks.
  • Each pathology system 5510 includes a microscope 5511 , a server 5512 , a display control device 5513 , and a display device 5514 .
  • the microscope 5511 has the function of an optical microscope and captures an image of an observation target on a glass slide to acquire a pathological image that is a digital image.
  • the observation target is, for example, tissues or cells sampled from a patient and may be a piece of organ, saliva, or blood.
  • the server 5512 stores and saves the pathological image acquired by the microscope 5511 in a not-illustrated storage unit.
  • the server 5512 searches the not-illustrated storage unit for a pathological image and sends the retrieved pathological image to the display control device 5513 .
  • the display control device 5513 sends an inspection request for a pathological image accepted from the user to the server 5512 .
  • the display control device 5513 then displays the pathological image accepted from the server 5512 on the display device 5514 using liquid crystal, electro-luminescence (EL), cathode ray tube (CRT), or the like.
  • The display device 5514 may support 4K or 8K, and one or more display devices 5514 may be provided.
  • When the observation target is solid matter such as a piece of organ, the observation target may be, for example, a stained slice of that matter.
  • the slice may be prepared, for example, by slicing a block cut out from a specimen such as an organ.
  • When sliced, the block may be fixed with, for example, paraffin.
  • In staining the slice, a variety of staining methods can be employed, such as general staining like hematoxylin-eosin (HE) staining for revealing the form of tissue, and immunostaining such as immunohistochemistry (IHC) staining for identifying the immune state of tissue.
  • one slice may be stained using different kinds of reagents, or two or more slices (also referred to as adjacent slices) continuously cut out from the same block may be stained using different reagents.
  • the microscope 5511 may include a low-resolution imager for capturing an image at low resolution and a high-resolution imager for capturing an image at high resolution.
  • the low-resolution imager and the high-resolution imager may be different optical systems or the same optical system. In the case of the same optical system, the microscope 5511 may have a resolution changed according to an imaging target.
  • In imaging, a glass slide holding the observation target is placed on a stage positioned within the angle of view of the microscope 5511 .
  • the microscope 5511 first acquires the entire image in the angle of view using the low-resolution imager and specifies the region of the observation target from the acquired entire image. Subsequently, the microscope 5511 divides the region including the observation target into a plurality of division regions with a predetermined size and successively captures images of the division regions using the high-resolution imager to acquire high-resolution images of the division regions.
  • In capturing the division regions, the stage may be moved, the imaging optical system may be moved, or both may be moved.
  • Each division region may overlap the adjacent division regions in order to prevent an imaging-missed region from occurring due to unintended slippage of the glass slide.
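A sketch of this two-pass scan, assuming the low-resolution pass yields a bounding box of the observation target in stage coordinates; the overlap margin below implements the guard against imaging-missed regions just described.

```python
from typing import List, Tuple

Region = Tuple[int, int, int, int]  # (x0, y0, x1, y1) in stage coordinates

def division_regions(bbox: Region, tile_w: int, tile_h: int,
                     overlap: int) -> List[Region]:
    """Split the observation-target bounding box (found in the low-resolution
    pass) into overlapping division regions for the high-resolution imager."""
    assert 0 <= overlap < min(tile_w, tile_h)  # overlap must be a small margin
    x0, y0, x1, y1 = bbox
    step_x, step_y = tile_w - overlap, tile_h - overlap
    regions = []
    y = y0
    while y < y1:
        x = x0
        while x < x1:
            regions.append((x, y, x + tile_w, y + tile_h))
            x += step_x
        y += step_y
    return regions

# For each region, the stage and/or the imaging optics would be moved before
# triggering the high-resolution imager (hypothetical driver objects):
# for x0, y0, _, _ in division_regions((0, 0, 40000, 30000), 2048, 2048, 128):
#     stage.move_to(x0, y0); frame = high_res_imager.capture()
```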
  • the entire image may include identification information for associating the entire image with the patient. Examples of the identification information include a character string and a QR code (registered trademark).
  • the high-resolution images acquired by the microscope 5511 are input to the server 5512 .
  • The server 5512 divides each high-resolution image into partial images (hereinafter referred to as tile images) with a smaller size. For example, the server 5512 vertically and horizontally divides one high-resolution image into 10×10 = 100 tile images in total. In doing so, if adjacent division regions overlap, the server 5512 may perform a stitching process on the high-resolution images adjacent to each other using such technology as template matching. In this case, the server 5512 may divide the stitched high-resolution images as a whole to generate tile images. However, the generation of tile images from a high-resolution image may precede the stitching process.
  • the server 5512 may further divide a tile image to generate tile images with a smaller size. The generation of such tile images may be repeated until a tile image set as a minimum unit is generated.
  • Upon generating tile images of the minimum unit, the server 5512 executes a tile combining process for all the tile images to combine a predetermined number of adjacent tile images and generate one tile image. This tile combining process may be repeated until finally one tile image is generated.
  • a tile image group having a pyramid structure is generated, in which each layer is configured with one or more tile images.
  • A tile image on a certain layer and a tile image on a different layer have the same pixel count, but their resolutions differ. For example, when 2×2 = 4 tile images are combined into one tile image on a higher layer, the resolution of the tile image on the higher layer is half the resolution of the tile images on the lower layer used in combining.
  • The level of detail of the observation target appearing on the display device can be changed according to the layer to which the tile image to be displayed belongs. For example, when a tile image on the lowest layer is used, a narrow region of the observation target can be displayed in detail, and when a tile image on a higher layer is used, a wider region of the observation target is displayed more coarsely.
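A minimal sketch of this pyramid build-up, in an equivalent whole-image formulation: cut the image into fixed-size tile images, then repeatedly halve the resolution and re-tile until a single tile remains, so every layer keeps the same pixel count per tile. The tile size and the plain decimation (standing in for proper 2×2 averaging) are illustrative.

```python
import numpy as np

TILE = 256  # illustrative tile size in pixels; every layer keeps this size

def to_tiles(image: np.ndarray) -> dict:
    """Divide an image into TILE x TILE partial images (tile images)."""
    h, w = image.shape[:2]
    return {(y // TILE, x // TILE): image[y:y + TILE, x:x + TILE]
            for y in range(0, h, TILE) for x in range(0, w, TILE)}

def build_pyramid(image: np.ndarray) -> list:
    """Layer 0 holds full-resolution tiles; each higher layer re-tiles the
    image at half resolution, until one tile covers the whole target."""
    layers = [to_tiles(image)]
    while len(layers[-1]) > 1:
        image = image[::2, ::2]          # halve the resolution by decimation
        layers.append(to_tiles(image))   # same pixel count per tile, coarser
    return layers
```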
  • the generated tile image group having a pyramid structure is, for example, stored in a not-illustrated storage unit together with identification information uniquely identifying each tile image (referred to as tile identification information).
  • Upon receiving from another device an acquisition request that includes tile identification information, the server 5512 transmits the tile image corresponding to the tile identification information to that device.
  • the tile image that is a pathological image may be generated for each imaging condition such as focal length and staining condition.
  • A certain pathological image may be displayed side by side with another pathological image of the same region captured under a different imaging condition.
  • the certain imaging condition may be designated by an inspector. When the inspector designates a plurality of imaging conditions, pathological images in the same region corresponding to the respective imaging conditions may be displayed side by side.
  • the server 5512 may store the tile image group having a pyramid structure in a storage device other than the server 5512 , for example, a cloud server.
  • a part or the whole of the tile image generating process as described above may be performed, for example, by a cloud server.
  • the display control device 5513 extracts a desired tile image from the tile image group having a pyramid structure in accordance with input operation from the user and outputs the same to the display device 5514 . Through such a process, the user can attain a sense of viewing the observation target while changing the observation magnification. That is, the display control device 5513 functions as a virtual microscope.
  • the virtual observation magnification here actually corresponds to a resolution.
  • High-resolution images can be captured by any method.
  • A high-resolution image may be acquired by capturing images of the division regions while the stage is repeatedly stopped and moved, or a high-resolution image in strip form may be acquired by capturing images of the division regions while the stage is moved at a predetermined speed.
  • The process of generating tile images from a high-resolution image is not an essential configuration; the resolution of the stitched high-resolution images as a whole may instead be changed stepwise to generate an image whose resolution changes stepwise. Even in this case, images ranging from a low-resolution image of a wide area to a high-resolution image of a narrow area can be presented to the user stepwise.
  • the medical information system 5530 is an electronic health record system and stores information related to diagnosis, such as information identifying patients, disease information of patients, examination information and image information used in diagnosis, diagnosis results, and prescribed drugs. For example, a pathological image obtained by imaging an observation target of a patient may be saved once through the server 5512 and thereafter displayed on the display device 5514 by the display control device 5513 . The pathologist using the pathology system 5510 conducts pathology diagnosis based on the pathological image appearing on the display device 5514 . The result of pathology diagnosis conducted by the pathologist is stored in the medical information system 5530 .
  • the derivation device 5540 may perform analysis of a pathological image. In this analysis, a learning model created by machine learning can be used. The derivation device 5540 may derive the classification result of a certain region, the identification result of tissues, and the like, as the analysis result. The derivation device 5540 may further derive the identification result such as cell information, count, position, and brightness information, scoring information therefor, and the like. These pieces of information derived by the derivation device 5540 may be displayed as diagnostic assistance information on the display device 5514 of the pathology system 5510 .
  • the derivation device 5540 may be a server system including one or more servers (including a cloud server).
  • the derivation device 5540 may be a configuration incorporated into, for example, the display control device 5513 or the server 5512 in the pathology system 5510 . That is, a variety of analysis for a pathological image may be performed in the pathology system 5510 .
  • the technique according to the present disclosure may be preferably applied, for example, to the microscope 5511 among the configurations described above.
  • the technique according to the present disclosure can be applied to the low-resolution imager and/or the high-resolution imager in the microscope 5511 .
  • the application of the technique according to the present disclosure to the low-resolution imager and/or the high-resolution imager leads to miniaturization of the low-resolution imager and/or the high-resolution imager, and thus miniaturization of the microscope 5511 .
  • the miniaturization facilitates transportation of the microscope 5511 and thereby facilitates system introduction and system replacement.
  • When the technique according to the present disclosure is applied to the low-resolution imager and/or the high-resolution imager, a part or the whole of the process from acquisition of a pathological image to analysis of the pathological image can be performed on the fly in the microscope 5511 , so that diagnostic assistance information can be output more promptly and appropriately.
  • the configuration described above is not limited to a diagnostic assistance system and may be applied generally to biological microscopes such as a confocal microscope, a fluorescent microscope, and a video microscope.
  • The observation target may be a biological sample such as cultured cells, fertilized eggs, or sperm, a biological material such as a cell sheet or a three-dimensional cell tissue, or a living body such as a zebrafish or a mouse.
  • the observation target may be observed in a microplate or a petri dish, rather than on a glass slide.
  • a moving image may be generated from still images of the observation target acquired using the microscope.
  • a moving image may be generated from still images captured successively for a predetermined period of time, or an image sequence may be generated from still images captured at predetermined intervals.
  • In that case, dynamic features of the observation target, such as the motion (pulsation, expansion, migration, and the like) of cancer cells, nerve cells, cardiac muscle tissues, sperm, and the like, and the division process of cultured cells and fertilized eggs, can be analyzed using machine learning.
  • The present technique may also employ the following configurations.
  • (1) A stacked light-receiving sensor comprising: a first substrate; a second substrate bonded to the first substrate; and connection wiring attached to the second substrate, the first substrate including a pixel array in which a plurality of unit pixels are arranged in a two-dimensional matrix, the second substrate including a converter configured to convert an analog pixel signal output from the pixel array to digital image data and a processing unit configured to perform a process for data based on the image data, wherein at least a part of the converter is disposed on a first side in the second substrate, the processing unit is disposed on a second side opposite to the first side in the second substrate, and the connection wiring is attached to a side other than the second side in the second substrate.
  • (2) The stacked light-receiving sensor according to (1), wherein the connection wiring is a flexible cable.
  • (3) An in-vehicle imaging device comprising a solid-state imaging device, a circuit board provided with an information processing device, and connection wiring connecting the solid-state imaging device and the circuit board, the solid-state imaging device including a first substrate and a second substrate bonded to the first substrate, the first substrate including a pixel array in which a plurality of unit pixels are arranged in a two-dimensional matrix, the second substrate including a converter configured to convert an analog pixel signal output from the pixel array to digital image data and a processing unit configured to perform a process for data based on the image data, wherein at least a part of the converter is disposed on a first side in the second substrate, the processing unit is disposed on a second side opposite to the first side in the second substrate, the connection wiring is attached to a side other than the second side in the second substrate and attached to a third side of the circuit board, the solid-state imaging device is installed inside a vehicle such that a light-receiving surface of the pixel array faces a front direction of the vehicle and the first side is positioned on a lower side in a vertical direction, and the circuit board is installed such that at least an end on a front side of the vehicle in the circuit board protrudes to the front direction of the vehicle relative to an end on the front side of the vehicle of the solid-state imaging device and a main plane is parallel or substantially parallel to a horizontal plane.
  • (4) The in-vehicle imaging device according to (3), wherein the connection wiring is attached to the side on the lower side in the vertical direction in the second substrate of the solid-state imaging device.
  • (5) The in-vehicle imaging device according to (3) or (4), wherein the connection wiring is attached to an end on a rear side of the vehicle in the circuit board.
  • (6) The in-vehicle imaging device according to (4) or (5), wherein the information processing device is positioned on the front side of the vehicle relative to the solid-state imaging device.
  • (7) An in-vehicle imaging device comprising a solid-state imaging device, a circuit board provided with an information processing device, and connection wiring connecting the solid-state imaging device and the circuit board, the solid-state imaging device including a first substrate and a second substrate bonded to the first substrate, the first substrate including a pixel array in which a plurality of unit pixels are arranged in a two-dimensional matrix, the second substrate including a converter configured to convert an analog pixel signal output from the pixel array to digital image data and a processing unit configured to perform a process for data based on the image data, wherein at least a part of the converter is disposed on a first side in the second substrate, the processing unit is disposed on a second side opposite to the first side in the second substrate, the connection wiring is attached to a side other than the second side in the second substrate and attached to a third side of the circuit board, and the solid-state imaging device is installed inside a vehicle such that a light-receiving surface of the pixel array faces a rear direction of the vehicle.
  • (8) The in-vehicle imaging device according to (7), wherein the circuit board is installed inside the vehicle to be positioned on a surface side opposite to the light-receiving surface in the solid-state imaging device.


Abstract

Advanced processing is performed in a chip. A stacked light-receiving sensor according to an embodiment includes a first substrate (100, 200, 300), a second substrate (120, 320) bonded to the first substrate, and connection wiring (402) bonded to the second substrate. The first substrate includes a pixel array (101) in which a plurality of unit pixels are arranged in a two-dimensional matrix. The second substrate includes a converter (17A) configured to convert an analog pixel signal output from the pixel array to digital image data and a processing unit (14) configured to perform a process for data based on the image data. At least a part of the converter is disposed on a first side in the second substrate. The processing unit is disposed on a second side opposite to the first side in the second substrate. The connection wiring is attached to a side other than the second side in the second substrate.

Description

    FIELD
  • The present disclosure relates to a stacked light-receiving sensor and an in-vehicle imaging device.
  • BACKGROUND
  • Conventionally, as imaging devices that acquire still images and moving images, there exist flat-type image sensors in which chips such as a sensor chip, a memory chip, and a digital signal processor (DSP) chip are connected in parallel through a plurality of bumps.
  • In recent years, one-chip image sensors having a stack structure in which a plurality of dies are stacked have been developed for the purpose of miniaturization of imaging devices.
  • CITATION LIST Patent Literature
  • Patent Literature 1: WO2018/051809
  • SUMMARY Technical Problem
  • In recent years, more advanced processing within image sensor chips has been desired, for example, in terms of diversifying and speeding up image processing and protecting personal information.
  • The present disclosure proposes a stacked light-receiving sensor and an in-vehicle imaging device capable of performing more advanced processing in a chip.
  • Solution to Problem
  • To solve the above-described problem, a stacked light-receiving sensor according to the present disclosure comprises: a first substrate; a second substrate bonded to the first substrate; and connection wiring attached to the second substrate, the first substrate including a pixel array in which a plurality of unit pixels are arranged in a two-dimensional matrix, the second substrate including a converter configured to convert an analog pixel signal output from the pixel array to digital image data and a processing unit configured to perform a process for data based on the image data, wherein at least a part of the converter is disposed on a first side in the second substrate, the processing unit is disposed on a second side opposite to the first side in the second substrate, and the connection wiring is attached to a side other than the second side in the second substrate.
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1 is a block diagram illustrating an overall configuration example of an imaging device as an electronic device according to a first embodiment.
  • FIG. 2 is a diagram illustrating a chip configuration example of an image sensor according to the first embodiment.
  • FIG. 3 is a diagram illustrating a layout example of a first substrate in a first layout example according to the first embodiment.
  • FIG. 4 is a diagram illustrating a layout example of a second substrate in the first layout example according to the first embodiment.
  • FIG. 5 is a diagram illustrating a layout example of the second substrate in a second layout example according to the first embodiment.
  • FIG. 6 is a diagram illustrating a layout example of the second substrate in a third layout example according to the first embodiment.
  • FIG. 7 is a diagram illustrating a layout example of the second substrate in a fourth layout example according to the first embodiment.
  • FIG. 8 is a diagram illustrating a layout example of the second substrate in a fifth layout example according to the first embodiment.
  • FIG. 9 is a diagram illustrating a layout example of the second substrate in a sixth layout example according to the first embodiment.
  • FIG. 10 is a diagram illustrating a layout example of the second substrate in a seventh layout example according to the first embodiment.
  • FIG. 11 is a diagram illustrating a layout example of the second substrate in an eighth layout example according to the first embodiment.
  • FIG. 12 is a diagram illustrating a layout example of the second substrate in a ninth layout example according to the first embodiment.
  • FIG. 13 is a layout diagram illustrating an overall configuration example of the first substrate in an image sensor according to a second embodiment.
  • FIG. 14 is a diagram illustrating a chip configuration example of the image sensor according to the second embodiment.
  • FIG. 15 is a layout diagram illustrating an overall configuration example of the first substrate in an image sensor according to a third embodiment.
  • FIG. 16 is a layout diagram illustrating an overall configuration example of the second substrate in the image sensor according to the third embodiment.
  • FIG. 17 is a diagram illustrating a chip configuration example of the image sensor according to the third embodiment.
  • FIG. 18 is a diagram illustrating an overall configuration example of an imaging device according to a fourth embodiment.
  • FIG. 19 is a diagram illustrating an attachment example of the imaging device according to a first example of the fourth embodiment.
  • FIG. 20 is a diagram illustrating an attachment example of the imaging device according to a second example of the fourth embodiment.
  • FIG. 21 is a block diagram illustrating an example of the overall configuration of a vehicle control system.
  • FIG. 22 is a diagram illustrating an example of the installation position of a vehicle exterior information detector and an imager.
  • FIG. 23 is a diagram illustrating an example of the overall configuration of an endoscopic surgery system.
  • FIG. 24 is a block diagram illustrating an example of the functional configuration of a camera head and a CCU.
  • FIG. 25 is a block diagram illustrating an example of the overall configuration of a diagnostic assistance system.
  • DESCRIPTION OF EMBODIMENTS
  • Embodiments of the present disclosure will be described in detail below with reference to the drawings. In the following embodiments, the same parts are denoted by the same reference signs and an overlapping description will be omitted.
  • The present disclosure will be described in the order of items below.
  • 1. First Embodiment
  • 1.1 Overall Configuration Example of Imaging Device
  • 1.2 Chip Configuration Example of Image Sensor Chip
  • 1.3 Technical Problem of Image Sensor Chip Equipped with Processing Unit Performing Computation Based on Pre-Trained Model
  • 1.4 Noise Reduction Method
  • 1.4.1 First Layout Example
  • 1.4.1.1 Layout Example of First Substrate
  • 1.4.1.2 Layout Example of Second Substrate
  • 1.4.2 Second Layout Example
  • 1.4.3 Third Layout Example
  • 1.4.4 Fourth Layout Example
  • 1.4.5 Fifth Layout Example
  • 1.4.6 Sixth Layout Example
  • 1.4.7 Seventh Layout Example
  • 1.4.8 Eighth Layout Example
  • 1.4.9 Ninth Layout Example
  • 1.5 Operation Effects
  • 2. Second Embodiment
  • 2.1 Chip Configuration Example of Image Sensor Chip
  • 2.2 Operation Effects
  • 3. Third Embodiment
  • 3.1 Chip Configuration Example of Image Sensor Chip
  • 3.2 Operation Effects
  • 4. Fourth Embodiment
  • 4.1 Configuration Example
  • 4.2 Attachment Example
  • 4.2.1 First Example
  • 4.2.2 Second Example
  • 4.3 Operation Effects
  • 5. Application to Other Sensors
  • 6. Application to Movable Body
  • 7. Application to Endoscopic Surgery System
  • 8. Application to Whole Slide Imaging (WSI) System
  • 1. First Embodiment
  • First of all, a first embodiment will be described in detail with reference to the drawings.
  • 1.1 Overall Configuration Example of Imaging Device
  • FIG. 1 is a block diagram illustrating an overall configuration example of an imaging device as an electronic device according to the first embodiment. As illustrated in FIG. 1, an imaging device 1 includes an image sensor 10 that is a solid-state imaging device and an application processor 20. The image sensor 10 includes an imager 11, a controller 12, a converter (analog-to-digital converter, hereinafter referred to as ADC) 17, a signal processor 13, a digital signal processor (DSP) 14, a memory 15, and a selector (also referred to as output module) 16.
  • The controller 12 controls each part in the image sensor 10 in accordance with user's operation or a set operation mode, for example.
  • The imager 11 includes, for example, an optical system 104 including a zoom lens, a focus lens, and an aperture, and a pixel array 101 having a configuration in which unit pixels (unit pixels 101 a in FIG. 2) including light-receiving elements such as photodiodes are arranged in a two-dimensional matrix. External incident light passes through the optical system 104 to form an image on a light-receiving surface that is an array of light-receiving elements in the pixel array 101. Each unit pixel 101 a in the pixel array 101 converts light incident on its light-receiving element into electricity to accumulate charge in accordance with the quantity of incident light so that the charge can be read out.
  • The ADC 17 converts an analog pixel signal for each unit pixel 101 a read from the imager 11 to a digital value to generate digital image data and outputs the generated image data to the signal processor 13 and/or the memory 15. The ADC 17 may include a voltage generating circuit that generates a drive voltage for driving the imager 11 from power supply voltage and the like.
  • The signal processor 13 performs a variety of signal processing for digital image data input from the ADC 17 or digital image data read from the memory 15 (hereinafter referred to as process target image data). For example, when the process target image data is a color image, the signal processor 13 converts the format of this image data to YUV image data, RGB image data, or the like. The signal processor 13 performs, for example, processing such as noise removal and white balance adjustment for the process target image data, if necessary. In addition, the signal processor 13 performs a variety of signal processing (also referred to as pre-processing) for the process target image data in order for the DSP 14 to process the image data.
  • The DSP 14 executes, for example, a computer program stored in the memory 15 to function as a processing unit that performs a variety of processing using a pre-trained model (also referred to as neural network calculation model) created by machine learning using a deep neural network (DNN). This pre-trained model (neural network calculation model) may be designed based on parameters generated by inputting, to a predetermined machine learning model, training data in which an input signal corresponding to output of the pixel array 101 is associated with a label for the input signal. The predetermined machine learning model may be a learning model using a multi-layer neural network (also referred to as multi-layer neural network model).
  • For example, the DSP 14 performs a computation process based on the pre-trained model stored in the memory 15 to perform a process of combining image data with a dictionary coefficient stored in the memory 15. The result obtained through such a computation process (computation result) is output to the memory 15 and/or the selector 16. The computation result may include image data obtained by performing a computation process using the pre-trained model and a variety of information (metadata) obtained from the image data. A memory controller for controlling access to the memory 15 may be embedded in the DSP 14.
  • The image data to be processed by the DSP 14 may be image data normally read out from the pixel array 101 or may be image data having a data size reduced by decimating pixels of the image data normally read out. Alternatively, the image data to be processed may be image data read out in a data size smaller than normal obtained by performing readout from the pixel array 101 with pixels decimated. As used herein “normal readout” may be readout without decimating pixels.
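As a toy illustration only (the disclosure fixes no framework or network architecture), the DSP-side flow of decimating the read-out frame and running it through a stored model might look as follows; the decimation factor and the single fully connected layer, standing in for the pre-trained DNN and its dictionary coefficient, are assumptions.

```python
import numpy as np

def decimate(image: np.ndarray, factor: int = 2) -> np.ndarray:
    """Reduce data size by keeping every `factor`-th pixel (decimated readout)."""
    return image[::factor, ::factor]

def dsp_inference(image: np.ndarray, weights: np.ndarray,
                  bias: np.ndarray) -> int:
    """Stand-in for the DSP computation: one fully connected layer over the
    flattened, decimated frame, with `weights` playing the role of the
    dictionary coefficient stored in the memory 15."""
    x = decimate(image).astype(np.float32).ravel() / 255.0
    logits = weights @ x + bias          # shapes: (classes, pixels) @ (pixels,)
    return int(logits.argmax())          # e.g. a class label output as metadata
```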
  • The memory 15 stores image data output from the ADC 17, image data subjected to signal processing by the signal processor 13, the computation result obtained from the DSP 14, and the like, if necessary. The memory 15 stores an algorithm of the pre-trained model to be executed by the DSP 14, in the form of a computer program and a dictionary coefficient.
  • The DSP 14 can perform the computation process described above by training a learning model, adjusting the weights of a variety of parameters in the learning model using training data; by preparing a plurality of learning models and switching the learning model to be used in accordance with the computation process; or by acquiring a pre-trained learning model from an external device.
  • The selector 16, for example, selectively outputs image data output from the DSP 14, or image data or a computation result stored in the memory 15, in accordance with a select control signal from the controller 12. When the DSP 14 passes the image data output from the signal processor 13 through without processing it and the selector 16 selects the output of the DSP 14, the selector 16 outputs the image data from the signal processor 13 as it is.
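  • The switching behavior of the selector 16 can be summarized as follows in Python; the control-signal values and the memory layout are assumptions made for illustration only.

```python
def select_output(ctrl, dsp_output, sp_image, memory):
    """Sketch of the selector 16 switching among its inputs.

    ctrl models the select control signal from the controller 12.
    When the DSP 14 has passed the signal processor's image through
    unprocessed, dsp_output is simply that image as it is.
    """
    if ctrl == "dsp":
        return dsp_output                       # image data or metadata
    if ctrl == "memory":
        return memory.get("computation_result") or memory.get("image")
    return sp_image                             # signal-processed image
```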
  • As described above, the image data or the computation result output from the selector 16 is input to the application processor 20, which handles display and the user interface. The application processor 20 is configured, for example, with a central processing unit (CPU) and executes an operating system and a variety of application software. The application processor 20 may be equipped with functions such as a graphics processing unit (GPU) or a baseband processor. The application processor 20 performs a variety of processes on the input image data or computation result as necessary, presents it to the user, or transmits it to an external cloud server 30 through a predetermined network 40.
  • A variety of networks, such as the Internet, a wired or wireless local area network (LAN), a mobile communication network, or Bluetooth (registered trademark), can serve as the predetermined network 40. The image data or the computation result may be transmitted not only to the cloud server 30 but also to a variety of information processing devices (systems) having a communication function, such as a standalone server, a file server storing a variety of data, or a communication terminal such as a mobile phone.
  • 1.2 Chip Configuration Example of Image Sensor Chip
  • An example of the chip configuration of the image sensor 10 illustrated in FIG. 1 will now be described in detail below with reference to the drawings.
  • FIG. 2 is a diagram illustrating a chip configuration of the image sensor according to the present embodiment. As illustrated in FIG. 2, the image sensor 10 has a stack structure in which a first substrate (die) 100 shaped like a quadrangular flat plate and a second substrate (die) 120 similarly shaped like a quadrangular flat plate are bonded together.
  • The first substrate 100 and the second substrate 120 may have the same size, for example. The first substrate 100 and the second substrate 120 each may be a semiconductor substrate such as a silicon substrate.
  • In the first substrate 100, in the configuration of the image sensor 10 illustrated in FIG. 1, the pixel array 101 of the imager 11 is provided. A part or the whole of the optical system 104 may be provided on a chip in the first substrate 100.
  • In the second substrate 120, in the configuration of the image sensor 10 illustrated in FIG. 1, the ADC 17, the controller 12, the signal processor 13, the DSP 14, the memory 15, and the selector 16 are arranged. A not-illustrated interface circuit, driver circuit, and the like may be arranged in the second substrate 120.
  • The first substrate 100 and the second substrate 120 may be bonded together by chip-on-chip (CoC) technology, in which the first substrate 100 and the second substrate 120 are individually diced into chips and the diced chips are then bonded together; by chip-on-wafer (CoW) technology, in which one of the substrates (for example, the first substrate 100) is diced into a chip and the diced chip is bonded to the other substrate before dicing (that is, in a wafer state); or by wafer-on-wafer (WoW) technology, in which the first substrate 100 and the second substrate 120, both in a wafer state, are bonded together.
  • For example, plasma joining can be used as a joining process between the first substrate 100 and the second substrate 120. However, the present invention is not limited thereto and a variety of joining processes may be used.
  • 1.3 Technical Problem of Image Sensor Chip Equipped with Processing Unit Performing Computation Based on Pre-Trained Model
  • When the DSP 14 operates as a processing unit that performs a computation process based on a pre-trained model as described above, its operation algorithm is implemented in software by running computer programs. Moreover, operation algorithms for pre-trained models are updated day by day. It is therefore difficult to know in advance, for example, at which timing the DSP 14 performing a computation process based on a pre-trained model performs a process or at which timing its processing load peaks.
  • As illustrated in FIG. 2, consider a chip configuration in which the pixel array 101 is mounted on the first substrate 100 and the DSP 14 is mounted on the second substrate 120, with the DSP 14 operating as a processing unit that performs computation based on a pre-trained model. If the DSP 14 starts a computation process, or its processing reaches a peak, during resetting of the pixel array 101, during exposure of the pixel array 101, or during readout of a pixel signal from each unit pixel 101 a of the pixel array 101, noise is superimposed on the pixel signal read out from the pixel array 101, and the quality of the image acquired by the image sensor 10 consequently deteriorates.
  • The present embodiment then reduces intrusion of noise resulting from the signal processing by the DSP 14 into the pixel array 101, by adjusting the positional relation between the pixel array 101 and the DSP 14. Accordingly, an image with less deterioration in quality can be acquired even when the DSP 14 operates as a processing unit that performs computation based on a pre-trained model.
  • 1.4 Noise Reduction Method
  • The positional relation between the pixel array 101 and the DSP 14 according to the present embodiment will now be described in detail below with reference to the drawings. In the following, the positional relation between the pixel array 101 and the DSP 14 will be described by taking several examples of the layout (also referred to as floor map) of layers (the first substrate 100 and the second substrate 120).
  • 1.4.1 First Layout Example
  • FIG. 3 and FIG. 4 are diagrams for explaining a first layout example according to the present embodiment. FIG. 3 illustrates a layout example of the first substrate 100, and FIG. 4 illustrates a layout example of the second substrate 120.
  • 1.4.1.1 Layout Example of First Substrate
  • As illustrated in FIG. 3, in the first substrate 100, in the configuration of the image sensor 10 illustrated in FIG. 1, the pixel array 101 of the imager 11 is provided. When a part or the whole of the optical system 104 is mounted on the first substrate 100, it is provided at a position corresponding to the pixel array 101.
  • The pixel array 101 is provided off-center to one side L101 among four sides L101 to L104 of the first substrate 100. In other words, the pixel array 101 is provided such that its center O101 is more proximate to the side L101 than the center O100 of the first substrate 100. When the surface having the pixel array 101 in the first substrate 100 is rectangular, the side L101 may be, for example, a shorter side. However, the present invention is not limited thereto, and the pixel array 101 may be provided off-center to a longer side.
  • In a region proximate to the side L101 among the four sides of the pixel array 101, in other words, in the region between the side L101 and the pixel array 101, a TSV array 102 is provided, in which a plurality of through-silicon vias (hereinafter referred to as TSVs) passing through the first substrate 100 are arranged as wiring for electrically connecting each unit pixel 101 a in the pixel array 101 to the ADC 17 provided in the second substrate 120. Providing the TSV array 102 in proximity to the side L101, to which the pixel array 101 is also proximate, helps ensure space in the second substrate 120 for parts such as the ADC 17.
  • The TSV array 102 may also be provided in a region proximate to one side L104 (or may be the side L103) of two sides L103 and L104 intersecting the side L101, in other words, in a region between the side L104 (or the side L103) and the pixel array 101.
  • A pad array 103 having a plurality of pads arranged linearly is provided on each of the sides L102 and L103, on which the pixel array 101 is not provided off-center, among four sides L101 to L104 of the first substrate 100. The pads included in the pad array 103 include, for example, a pad (also referred to as power supply pin) receiving power supply voltage for analog circuits such as the pixel array 101 and the ADC 17, a pad (also referred to as power supply pin) receiving power supply voltage for digital circuits such as the signal processor 13, the DSP 14, the memory 15, the selector 16, and the controller 12, a pad (also referred to as signal pin) for interfaces such as a mobile industry processor interface (MIPI) and a serial peripheral interface (SPI), and a pad (also referred to as signal pin) for input/output of clock and data. Each pad is electrically connected to, for example, an external power supply circuit or an interface circuit through a wire. It is preferable that each pad array 103 and the TSV array 102 are sufficiently spaced apart to such a degree that influences of reflection of signals from the wire connected to each pad in the pad array 103 can be ignored.
  • 1.4.1.2 Layout Example of Second Substrate
  • On the other hand, as illustrated in FIG. 4, in the second substrate 120, in the configuration of the image sensor 10 illustrated in FIG. 1, the ADC 17, the controller 12, the signal processor 13, the DSP 14, and the memory 15 are arranged. In the first layout example, the memory 15 is divided into two regions: a memory 15A and a memory 15B. Similarly, the ADC 17 is divided into two regions: an ADC 17A and a digital-to-analog converter (DAC) 17B. The DAC 17B supplies a reference voltage for AD conversion to the ADC 17A and is, broadly speaking, part of the ADC 17. Although not illustrated in FIG. 4, the selector 16 is also provided on the second substrate 120.
  • The second substrate 120 also has wiring 122 in contact with and electrically connected to the TSVs in the TSV array 102 passing through the first substrate 100 (hereinafter simply referred to as TSV array 102), and a pad array 123, in which a plurality of pads electrically connected to the pads in the pad array 103 of the first substrate 100 are arranged linearly.
  • For the connection between the TSV array 102 and the wiring 122, for example, the following technologies can be employed: twin TSV technology, in which two TSVs, namely, a TSV provided in the first substrate 100 and a TSV provided from the first substrate 100 to the second substrate 120, are connected to each other on the outer surface of the chip; or shared TSV technology, in which a single TSV provided from the first substrate 100 to the second substrate 120 provides the connection. However, the present invention is not limited thereto, and a variety of connection modes can be employed. Examples include Cu—Cu bonding, in which copper (Cu) exposed on the joint surface of the first substrate 100 is joined to Cu exposed on the joint surface of the second substrate 120.
  • The connection mode between the pads in the pad array 103 on the first substrate 100 and the pads in the pad array 123 of the second substrate 120 is, for example, wire bonding. However, the present invention is not limited thereto, and connection modes such as through holes and castellation may be employed.
  • In a layout example of the second substrate 120, for example, the ADC 17A, the signal processor 13, and the DSP 14 are arranged in order from the upstream side along the flow of a signal read out from the pixel array 101, where the upstream side is the vicinity of the wiring 122 connected to the TSV array 102. That is, the ADC 17A to which a pixel signal read out from the pixel array 101 is initially input is provided in the vicinity of the wiring 122 on the most upstream side, next the signal processor 13 is provided, and the DSP 14 is provided in a region farthest from the wiring 122. Such a layout in which the ADC 17 to the DSP 14 are arranged from the upstream side along the flow of a signal can shorten the wiring connecting the parts. This layout leads to reduction in signal delay, reduction in signal propagation loss, improvement of the S/N ratio, and lower power consumption.
  • The controller 12 is provided, for example, in the vicinity of the wiring 122 on the upstream side. In FIG. 4, the controller 12 is provided between the ADC 17A and the signal processor 13. Such a layout leads to reduction in signal delay, reduction in signal propagation loss, improvement of the S/N ratio, and lower power consumption when the controller 12 controls the pixel array 101. Advantageously, the signal pin and the power supply pin for analog circuits can be collectively arranged in the vicinity of the analog circuits (for example, in the lower side of FIG. 4), the remaining signal pin and power supply pin for digital circuits can be collectively arranged in the vicinity of the digital circuits (for example, in the upper side of FIG. 4), or the power supply pin for analog circuits and the power supply pin for digital circuits can be sufficiently spaced apart from each other.
  • In the layout illustrated in FIG. 4, the DSP 14, which is on the most downstream side, is provided on the side opposite to the ADC 17A. With such a layout, the DSP 14 can be provided in a region not overlapping with the pixel array 101 in the stacking direction of the first substrate 100 and the second substrate 120 (hereinafter simply referred to as the top-bottom direction).
  • In this way, in the configuration in which the pixel array 101 and the DSP 14 are not superimposed in the top-bottom direction, intrusion of noise produced due to signal processing by the DSP 14 into the pixel array 101 can be reduced. As a result, even when the DSP 14 operates as a processing unit that performs computation based on a pre-trained model, intrusion of noise resulting from signal processing by the DSP 14 into the pixel array 101 can be reduced, and consequently, an image with less deterioration in quality can be acquired.
  • The DSP 14 and the signal processor 13 are connected by an interconnect 14 a configured with a part of the DSP 14 or a signal line. The selector 16 is provided, for example, in the vicinity of the DSP 14. When the interconnect 14 a is a part of the DSP 14, the DSP 14 may partially overlap with the pixel array 101 in the top-bottom direction. However, even in such a case, compared with when the whole of the DSP 14 is superimposed on the pixel array 101 in the top-bottom direction, intrusion of noise into the pixel array 101 can be reduced.
  • Memories 15A and 15B are arranged, for example, so as to surround the DSP 14 from three directions. In such an arrangement of the memories 15A and 15B surrounding the DSP 14, the distance of wiring between each memory element in the memory 15 and the DSP 14 can be averaged while the distance can be reduced as a whole. Consequently, signal delay, signal propagation loss, and power consumption can be reduced when the DSP 14 accesses the memory 15.
  • The pad array 123 is provided, for example, at a position on the second substrate 120 corresponding to the pad array 103 of the first substrate 100 in the top-bottom direction. Here, among the pads included in the pad array 123, a pad positioned in the vicinity of the ADC 17A is used for propagation of power supply voltage for analog circuits (mainly the ADC 17A) or an analog signal. On the other hand, a pad positioned in the vicinity of the controller 12, the signal processor 13, the DSP 14, or the memories 15A and 15B is used for propagation of power supply voltage for digital circuits (mainly, the controller 12, the signal processor 13, the DSP 14, the memories 15A and 15B) and a digital signal. Such a pad layout can reduce the distance of wiring connecting the pads to the parts. This layout leads to reduction in signal delay, reduction in propagation loss of signals and power supply voltage, improvement of the S/N ratio, and lower power consumption.
  • 1.4.2 Second Layout Example
  • A second layout example will now be described. In the second layout example, the layout example of the first substrate 100 may be similar to the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 5 is a diagram illustrating a layout example of the second substrate in the second layout example. As illustrated in FIG. 5, in the second layout example, in a layout similar to the first layout example, the DSP 14 is provided at the center of a region in which the DSP 14 and the memory 15 are arranged. In other words, in the second layout example, the memory 15 is provided so as to surround the DSP 14 from four directions.
  • In such an arrangement of the memories 15A and 15B surrounding the DSP 14 from four directions, the distance of wiring between each memory element in the memory 15 and the DSP 14 can be further averaged while the distance can be further reduced as a whole. Consequently, signal delay, signal propagation loss, and power consumption can be further reduced when the DSP 14 accesses the memory 15.
  • In FIG. 5, the DSP 14 and the pixel array 101 are arranged so as not to be superimposed on each other in the top-bottom direction. However, the present invention is not limited thereto, and the DSP 14 may be partially superimposed on the pixel array 101 in the top-bottom direction. Even in such a case, compared with when the whole of the DSP 14 is superimposed on the pixel array 101 in the top-bottom direction, intrusion of noise into the pixel array 101 can be reduced.
  • The other layout may be similar to the first layout example and is not further elaborated here.
  • 1.4.3 Third Layout Example
  • A third layout example will now be described. In the third layout example, the layout example of the first substrate 100 may be similar to the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 6 is a diagram illustrating a layout example of the second substrate in the third layout example. As illustrated in FIG. 6, in the third layout example, in a layout similar to the first layout example, the DSP 14 is provided adjacent to the signal processor 13. In such a configuration, the signal line from the signal processor 13 to the DSP 14 can be shortened. This layout leads to reduction in signal delay, reduction in propagation loss of signals and power supply voltage, improvement of the S/N ratio, and lower power consumption.
  • In the third layout example, the memory 15 is provided so as to surround the DSP 14 from three directions. Consequently, signal delay, signal propagation loss, and power consumption can be reduced when the DSP 14 accesses the memory 15.
  • In the third layout example, the DSP 14 is partially superimposed on the pixel array 101 in the top-bottom direction. Even in such a case, compared with when the whole of the DSP 14 is superimposed on the pixel array 101 in the top-bottom direction, intrusion of noise into the pixel array 101 can be reduced.
  • The other layout may be similar to the other layout examples and is not further elaborated here.
  • 1.4.4 Fourth Layout Example
  • A fourth layout example will now be described. In the fourth layout example, the layout example of the first substrate 100 may be similar to the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 7 is a diagram illustrating a layout example of the second substrate in the fourth layout example. As illustrated in FIG. 7, in the fourth layout example, in a layout similar to the third layout example, that is, in a layout in which the DSP 14 is provided adjacent to the signal processor 13, the DSP 14 is provided at a position far from both of two TSV arrays 102.
  • In such an arrangement of the DSP 14 at a position far from both of the two TSV arrays 102, the parts from the ADC 17A to the DSP 14 can be arranged more faithfully along the signal flow, so the signal line from the signal processor 13 to the DSP 14 can be further shortened. As a result, signal delay, signal propagation loss, and power consumption can be further reduced.
  • In the fourth layout example, the memory 15 is provided so as to surround the DSP 14 from two directions. Consequently, signal delay, signal propagation loss, and power consumption can be reduced when the DSP 14 accesses the memory 15.
  • Also in the fourth layout example, the DSP 14 is partially superimposed on the pixel array 101 in the top-bottom direction. Even in such a case, compared with when the whole of the DSP 14 is superimposed on the pixel array 101 in the top-bottom direction, intrusion of noise into the pixel array 101 can be reduced.
  • The other layout may be similar to the other layout examples and is not further elaborated here.
  • 1.4.5 Fifth Layout Example
  • A fifth layout example will now be described. In the fifth layout example, the layout example of the first substrate 100 may be similar to the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 8 is a diagram illustrating a layout example of the second substrate in the fifth layout example. As illustrated in FIG. 8, in the fifth layout example, in a layout similar to the first layout example, that is, in a layout in which the DSP 14 is provided on the most downstream side, the DSP 14 is provided at a position far from both of two TSV arrays 102.
  • Even in such a layout, the parts from the ADC 17A to the DSP 14 can be arranged more faithfully along the signal flow, so the signal line from the signal processor 13 to the DSP 14 can be further shortened. As a result, signal delay, signal propagation loss, and power consumption can be further reduced.
  • The other layout may be similar to the other layout examples and is not further elaborated here.
  • 1.4.6 Sixth Layout Example
  • A sixth layout example will now be described. In the sixth layout example, the layout example of the first substrate 100 may be similar to the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 9 is a diagram illustrating a layout example of the second substrate in the sixth layout example. As illustrated in FIG. 9, in the sixth layout example, the DSP 14 is sandwiched in the top-bottom direction in the drawing between memories 15C and 15D divided into two regions.
  • In such an arrangement of the memories 15C and 15D sandwiching the DSP 14, the distance of wiring between each memory element in the memory 15 and the DSP 14 can be averaged while the distance can be reduced as a whole. Consequently, signal delay, signal propagation loss, and power consumption can be further reduced when the DSP 14 accesses the memory 15.
  • The other layout may be similar to the first layout example and is not further elaborated here.
  • 1.4.7 Seventh Layout Example
  • A seventh layout example will now be described. In the seventh layout example, the layout example of the first substrate 100 may be similar to the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 10 is a diagram illustrating a layout example of the second substrate in the seventh layout example. As illustrated in FIG. 10, in the seventh layout example, the memory 15 is sandwiched in the top-bottom direction in the drawing between DSPs 14A and 14B divided into two regions.
  • In such an arrangement of the DSPs 14A and 14B sandwiching the memory 15, the distance of wiring between each memory element in the memory 15 and the DSP 14 can be averaged while the distance can be reduced as a whole. Consequently, signal delay, signal propagation loss, and power consumption can be further reduced when the DSP 14 accesses the memory 15.
  • The other layout may be similar to the first layout example and is not further elaborated here.
  • 1.4.8 Eighth Layout Example
  • An eighth layout example will now be described. In the eighth layout example, the layout example of the first substrate 100 may be similar to the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 11 is a diagram illustrating a layout example of the second substrate in the eighth layout example. As illustrated in FIG. 11, in the eighth layout example, the DSP 14 is sandwiched in the left-right direction in the drawing between memories 15E and 15F divided into two regions.
  • In such an arrangement of the memories 15E and 15F sandwiching the DSP 14, the distance of wiring between each memory element in the memory 15 and the DSP 14 can be averaged while the distance can be reduced as a whole. Consequently, signal delay, signal propagation loss, and power consumption can be further reduced when the DSP 14 accesses the memory 15.
  • The other layout may be similar to the first layout example and is not further elaborated here.
  • 1.4.9 Ninth Layout Example
  • A ninth layout example will now be described. In the ninth layout example, the layout example of the first substrate 100 may be similar to the layout example described with reference to FIG. 3 in the first layout example.
  • FIG. 12 is a diagram illustrating a layout example of the second substrate in the ninth layout example. As illustrated in FIG. 12, in the ninth layout example, the memory 15 is sandwiched in the left-right direction in the drawing between DSPs 14C and 14D divided into two regions.
  • In such an arrangement of the DSPs 14C and 14D sandwiching the memory 15, the distance of wiring between each memory element in the memory 15 and the DSP 14 can be averaged while the distance can be reduced as a whole. Consequently, signal delay, signal propagation loss, and power consumption can be further reduced when the DSP 14 accesses the memory 15.
  • The other layout may be similar to the first layout example and is not further elaborated here.
  • 1.5 Operation Effects
  • As described above, according to the present embodiment, the positional relation between the pixel array 101 and the DSP 14 is adjusted such that at least a part of the DSP 14 of the second substrate 120 is not superimposed on the pixel array 101 in the stacking direction (the top-bottom direction) of the first substrate 100 and the second substrate 120. This configuration can reduce intrusion of noise resulting from signal processing by the DSP 14 into the pixel array 101 and therefore can provide an image with less deteriorated quality even when the DSP 14 operates as a processing unit that performs computation based on a pre-trained model.
  • 2. Second Embodiment
  • A second embodiment will now be described in detail with reference to the drawings. In the following description, a configuration similar to the first embodiment is denoted by the same reference sign and an overlapping description thereof is omitted.
  • An imaging device as an electronic device according to the second embodiment may be similar to, for example, the imaging device 1 described in the first embodiment with reference to FIG. 1; that description is referred to here and is not repeated.
  • 2.1 Chip Configuration Example of Image Sensor Chip
  • An example of the chip configuration of an image sensor according to the present embodiment will now be described in detail below with reference to the drawings. FIG. 13 is a layout diagram illustrating an overall configuration example of the first substrate in the image sensor according to the present embodiment. FIG. 14 is a diagram illustrating a chip configuration example of the image sensor according to the present embodiment.
  • As illustrated in FIG. 13 and FIG. 14, in the present embodiment, the size of a first substrate 200 is smaller than the size of the second substrate 120. For example, the size of the first substrate 200 is reduced in accordance with the size of the pixel array 101. With such size reduction of the first substrate 200, many first substrates 200 can be fabricated from a single semiconductor wafer. Furthermore, the chip size of the image sensor 10 can be further reduced.
  • For the bonding between the first substrate 200 and the second substrate 120, chip-on-chip (CoC) technology in which the first substrate 200 and the second substrate 120 are individually diced into chips and then bonded, or chip-on-wafer (CoW) technology in which the diced first substrate 200 is bonded to the second substrate 120 in a wafer state can be employed.
  • The layout of the first substrate 200 may be similar to, for example, the layout of the first substrate 100 illustrated in the first embodiment, excluding the upper portion. The layout of the second substrate 120 may be similar to, for example, the second substrate 120 illustrated in the first embodiment. The bonding place of the first substrate 200 to the second substrate 120 may be a position where at least a part of the pixel array 101 does not overlap the DSP 14 of the second substrate 120 in the top-bottom direction, in the same manner as in the first embodiment.
  • 2.2 Operation Effects
  • As described above, even when the first substrate 200 is downsized in accordance with the size of the pixel array 101, intrusion of noise resulting from signal processing by the DSP 14 into the pixel array 101 can be reduced, in the same manner as in the first embodiment. Consequently, an image with less deterioration in quality can be acquired even when the DSP 14 operates as a processing unit that performs computation based on a pre-trained model. The other configuration (including the layout example of the second substrate 120) and effects may be similar to those of the first embodiment and will not be further elaborated here.
  • 3. Third Embodiment
  • A third embodiment will now be described in detail with reference to the drawings. In the following description, a configuration similar to the first or second embodiment is denoted by the same reference sign and an overlapping description thereof is omitted.
  • An imaging device as an electronic device according to the third embodiment may be similar to, for example, the imaging device 1 described in the first embodiment with reference to FIG. 1; that description is referred to here and is not repeated.
  • 3.1 Chip Configuration Example of Image Sensor Chip
  • An example of the chip configuration of an image sensor according to the present embodiment will now be described in detail below with reference to the drawings. FIG. 15 is a layout diagram illustrating an overall configuration example of the first substrate in the image sensor according to the present embodiment. FIG. 16 is a layout diagram illustrating an overall configuration example of the second substrate in the image sensor according to the present embodiment. FIG. 17 is a diagram illustrating a chip configuration example of the image sensor according to the present embodiment.
  • As illustrated in FIG. 15 to FIG. 17, in the present embodiment, the size of a first substrate 300 is reduced in accordance with the size of the pixel array 101. In the present embodiment, the size of a second substrate 320 is reduced to the same degree as the size of the first substrate 300. With such a configuration, in the present embodiment, a surplus region of the first substrate 300 can be reduced, and the chip size of the image sensor 10 can be further reduced accordingly.
  • However, in the present embodiment, the pixel array 101 and the DSP 14 are superimposed on each other in the stacking direction of the first substrate 300 and the second substrate 320 (hereinafter simply referred to as top-bottom direction). Because of this, noise resulting from the DSP 14 may be superimposed on a pixel signal read out from the pixel array 101 in some cases and may reduce the quality of an image acquired by the image sensor 10.
  • In the present embodiment, therefore, the ADC 17A and the DSP 14 are spaced apart from each other. Specifically, for example, the ADC 17A is provided closer to one end L321 of the second substrate 320, while the DSP 14 is provided closer to an end L322 on the side opposite to the end L321 at which the ADC 17A is disposed.
  • With such an arrangement, intrusion of noise produced in the DSP 14 into the ADC 17A can be reduced, thereby suppressing deterioration in the quality of an image acquired by the image sensor 10. The end L321 proximate to the ADC 17A may be the end on which the wiring 122 connected to the TSV array 102 is provided.
  • With such an arrangement, for example, the ADC 17A, the signal processor 13, and the DSP 14 are arranged in order from the upstream side along the flow of a signal read out from the pixel array 101, where the upstream side is the vicinity of the wiring 122 connected to the TSV array 102, in the same manner as in the foregoing embodiments. The wiring connecting the parts therefore can be shortened. Consequently, reduction in signal delay, reduction in signal propagation loss, improvement in the S/N ratio, and lower power consumption can be achieved.
  • 3.2 Operation Effects
  • As described above, when the first substrate 300 and the second substrate 320 are downsized in accordance with the size of the pixel array 101, the ADC 17A and the DSP 14 are spaced apart from each other, thereby reducing intrusion of noise produced in the DSP 14 into the ADC 17A. Consequently, deterioration in the quality of an image acquired by the image sensor 10 can be suppressed.
  • The other configuration and effects are similar to those of the foregoing embodiments and will not be further elaborated here.
  • 4. Fourth Embodiment
  • In a fourth embodiment, a specific configuration example of the image sensor 10 and the imaging device 1 according to the foregoing embodiments and an attachment example thereof will be described. In the following description, an example based on the first embodiment is illustrated. However, the present embodiment may be based on any of the other embodiments instead of the first embodiment. In the present embodiment, the imaging device 1 is mounted on a vehicle, that is, the imaging device 1 is an in-vehicle camera, by way of example. However, the imaging device 1 is not necessarily attached to a vehicle and may be attached to a variety of equipment, devices, and locations.
  • 4.1 Configuration Example
  • FIG. 18 is a diagram illustrating an overall configuration example of an imaging device according to the present embodiment. In FIG. 18, for simplicity of explanation, the first substrate 100 in the image sensor 10 is not illustrated.
  • As illustrated in FIG. 18, the chip of the image sensor 10 according to the foregoing embodiments may be connected to a circuit board 400 provided with the application processor 20 and the like, for example, through connection wiring that is flexible and deformable, such as a flexible cable 402.
  • In this case, one end of the flexible cable 402 may be connected to a side other than the side on which the ADC 17A in the second substrate 120 is provided in proximity, for example, to the side on which the DSP 14 is provided in proximity. This configuration can reduce the wiring length from a data output end in the DSP 14, or in the memory 15 provided in the vicinity thereof, to the flexible cable 402, thereby suppressing size increase of the chip of the image sensor 10 and facilitating the design of this wiring layout. Reducing the wiring length can lead to suppression of signal delay, reduction in signal propagation loss, and lower power consumption.
  • 4.2 Attachment Example
  • Some of attachment examples of the imaging device 1 according to the present embodiment will now be described.
  • 4.2.1 First Example
  • First, an attachment example in which the imaging device 1 is mounted as a front camera for capturing an image in front of a vehicle will be described as a first example. FIG. 19 is a diagram illustrating an attachment example of the imaging device according to the first example of the present embodiment.
  • As illustrated in FIG. 19, when the imaging device 1 is mounted as a front camera on a vehicle 410, the imaging device 1 is provided, for example, in the vicinity of a windshield 411 inside the vehicle 410, at a location that does not interrupt the field of view of a driver D.
  • In a state in which the imaging device 1 is attached to the inside of the vehicle 410, the image sensor 10 is inclined at about 90° to the front direction of the vehicle 410, for example, such that a light-receiving surface of the pixel array 101 of the first substrate 100 faces substantially the front side of the vehicle 410. On the other hand, the circuit board 400 connected to the image sensor 10 through the flexible cable 402 is installed, for example, such that its main plane is substantially horizontal. The main plane of the circuit board 400 may be, for example, a surface parallel to the surface provided with an electronic circuit such as the application processor 20. As used herein the horizontal direction may be a horizontal direction in the interior space of the vehicle 410.
  • The image sensor 10 and the circuit board 400 may be arranged, for example, such that the circuit board 400 is positioned below the image sensor 10 and at least the front-side end of the circuit board 400 protrudes toward the front direction of the vehicle relative to the front-side end of the image sensor 10. Thus, the imaging device 1 can be provided in closer proximity to the windshield 411 inclined toward the front direction of the vehicle 410, so that the space occupied by the imaging device 1 can be reduced.
  • In this case, the respective end portions of the image sensor 10 and the circuit board 400 are arranged in proximity to each other so as to form an L shape, and the end portions thus arranged in proximity are connected by the flexible cable 402, whereby the length of the flexible cable 402 can be reduced.
  • In such an arrangement, for example, when the flexible cable 402 is connected to the end of the second substrate 120 on which the DSP 14 is provided in proximity, the image sensor 10 is installed such that the ADC 17A in the second substrate 120 is positioned on the upper side in the vertical direction and the DSP 14 is positioned on the lower side in the vertical direction. The flexible cable 402 is attached so as to connect the lower-side end of the image sensor 10 and the rear-side end of the circuit board 400. Furthermore, the application processor 20 in the circuit board 400 is provided, for example, on the front side of the vehicle 410 relative to the image sensor 10.
  • 4.2.2 Second Example
  • Next, an attachment example in which the imaging device 1 is mounted as a rear camera for capturing an image behind a vehicle will be described as a second example. FIG. 20 is a diagram illustrating an installation example of the imaging device according to the second example of the present embodiment.
  • As illustrated in FIG. 20, when the imaging device 1 is mounted as a rear camera on the vehicle 410, the imaging device 1 is provided, for example, in the vicinity of a rear window 412 inside the vehicle 410, at a location that does not interrupt the field of view of a driver D through a rearview mirror.
  • In a state in which the imaging device 1 is attached to the inside of the vehicle 410, the image sensor 10 is inclined at about 90° to the rear direction of the vehicle 410, for example, such that the light-receiving surface of the pixel array 101 of the first substrate 100 faces substantially the rear side of the vehicle 410. On the other hand, the circuit board 400 connected to the image sensor 10 through the flexible cable 402 is installed, for example, such that its main plane is substantially parallel to the light-receiving surface of the image sensor 10. The flexible cable 402 may connect the upper ends of the image sensor 10 and the circuit board 400 in the installed state, or may connect the lower ends thereof, or may connect the side ends thereof.
  • However, the embodiment is not limited to such an attachment example. Even when the imaging device 1 is installed as a rear camera, the image sensor 10 and the circuit board 400 may be arranged, for example, such that the circuit board 400 is positioned below the image sensor 10 and at least the rear-side end of the circuit board 400 protrudes to the rear direction of the vehicle relative to the rear-side end of the image sensor 10, similarly to when the imaging device 1 is installed as a front camera in the first example. Thus, the imaging device 1 can be provided in closer proximity to the rear window 412 inclined toward the rear direction of the vehicle 410, so that the space occupied by the imaging device 1 can be reduced.
  • 4.3 Operation Effects
  • As described above, in the present embodiment, one end of the flexible cable 402 is connected to a side other than the side on which the ADC 17A in the second substrate 120 is provided in proximity, for example, to the side on which the DSP 14 is provided in proximity. This configuration can reduce the wiring length from a data output end in the DSP 14, or in the memory 15 provided in the vicinity thereof, to the flexible cable 402, thereby suppressing size increase of the chip of the image sensor 10 and facilitating the design of this wiring layout. Reducing the wiring length can lead to suppression of signal delay, reduction in signal propagation loss, and lower power consumption.
  • The image sensor 10 and the circuit board 400 are not necessarily connected by a flexible and deformable connection cable, such as the flexible cable 402, as described above but may be connected using a connection part that does not have flexibility, such as solder balls, bonding pads, and connection pins.
  • Other configurations and effects may be similar to those in the foregoing embodiments and will not be further elaborated here.
  • 5. Application to Other Sensors
  • In the foregoing embodiments, the technique according to the present disclosure is applied to a solid-state imaging device (image sensor 10) that acquires a two-dimensional image. However, the application of the technique according to the present disclosure is not limited to a solid-state imaging device. For example, the technique according to the present disclosure can be applied to a variety of light-receiving sensors such as Time of Flight (ToF) sensors, infrared (IR) sensors, and dynamic vision sensors (DVS). That is, when the chip structure of light-receiving sensors is of the stacked type, reduction of noise included in sensor results and miniaturization of sensor chips can be achieved.
  • 6. Application to Movable Body
  • The technique according to the present disclosure (the present technique) is applicable to a variety of products. For example, the technique according to the present disclosure may be implemented as a device mounted on any type of movable bodies, such as automobiles, electric vehicles, hybrid electric vehicles, motorcycles, bicycles, personal mobility devices, airplanes, drones, vessels and ships, and robots.
  • FIG. 21 is a block diagram illustrating an example of the overall configuration of a vehicle control system that is an example of a movable body control system to which the technique according to the present disclosure is applicable.
  • A vehicle control system 12000 includes a plurality of electronic control units connected through a communication network 12001. In the example illustrated in FIG. 21, the vehicle control system 12000 includes a drive control unit 12010, a body control unit 12020, a vehicle exterior information detection unit 12030, a vehicle interior information detection unit 12040, and a central control unit 12050. As a functional configuration of the central control unit 12050, a microcomputer 12051, a sound image output module 12052, and an in-vehicle network I/F (interface) 12053 are illustrated.
  • The drive control unit 12010 controls operation of devices related to a drive system of a vehicle in accordance with a variety of computer programs. For example, the drive control unit 12010 functions as a control device for a drive force generating device for generating drive force of the vehicle, such as an internal combustion engine or a drive motor, a drive force transmission mechanism for transmitting drive force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, and a braking device for generating braking force of the vehicle.
  • The body control unit 12020 controls operation of a variety of devices installed in the vehicle body in accordance with a variety of computer programs. For example, the body control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or a variety of lamps such as head lamps, rear lamps, brake lamps, turn signals, and fog lamps. In this case, the body control unit 12020 may receive radio waves transmitted from a portable device that substitutes for a key, or signals from a variety of switches. The body control unit 12020 accepts input of these radio waves or signals and controls the door lock device, the power window device, the lamps, and the like of the vehicle.
  • The vehicle exterior information detection unit 12030 detects information on the outside of the vehicle equipped with the vehicle control system 12000. For example, an imager 12031 is connected to the vehicle exterior information detection unit 12030. The vehicle exterior information detection unit 12030 allows the imager 12031 to capture an image of the outside of the vehicle and receives the captured image. The vehicle exterior information detection unit 12030 may perform an object detection process or a distance detection process for persons, vehicles, obstacles, signs, or characters on roads, based on the received image.
  • The imager 12031 is an optical sensor that receives light and outputs an electrical signal corresponding to the quantity of the received light. The imager 12031 may output the electrical signal as an image or as distance measurement information. The light received by the imager 12031 may be visible light or invisible light such as infrared rays.
  • The vehicle interior information detection unit 12040 detects information on the inside of the vehicle. The vehicle interior information detection unit 12040 is connected to, for example, a driver state detector 12041 that detects the state of the driver. The driver state detector 12041 includes, for example, a camera for taking an image of the driver, and the vehicle interior information detection unit 12040 may calculate the degree of fatigue or the degree of concentration of the driver, or may determine whether the driver is dozing off, based on detection information input from the driver state detector 12041.
  • The microcomputer 12051 can compute a control target value for the drive force generating device, the steering mechanism, or the braking device, based on information on the inside and outside of the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, and output a control command to the drive control unit 12010. For example, the microcomputer 12051 can perform coordination control for the purpose of function implementation of advanced driver assistance systems (ADAS), including collision avoidance or shock mitigation of the vehicle, car-following drive based on the distance between vehicles, vehicle speed-keeping drive, vehicle collision warning, or lane departure warning.
  • The microcomputer 12051 can perform coordination control for the purpose of, for example, autonomous driving, in which the drive force generating device, the steering mechanism, or the braking device is controlled based on information on the surroundings of the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040 to enable autonomous driving without depending on the operation by the driver.
  • The microcomputer 12051 can output a control command to the body control unit 12020, based on information on the outside of the vehicle acquired by the vehicle exterior information detection unit 12030. For example, the microcomputer 12051 can perform coordination control for the purpose of preventing glare, by controlling the head lamps in accordance with the position of a vehicle ahead or an oncoming vehicle detected by the vehicle exterior information detection unit 12030, for example, switching from high beams to low beams.
  • The sound image output module 12052 transmits an output signal of at least one of sound and image to an output device capable of visually or aurally giving information to a passenger in the vehicle or the outside of the vehicle. In the example in FIG. 21, an audio speaker 12061, a display 12062, and an instrument panel 12063 are illustrated as the output device. The display 12062 may include, for example, at least one of an on-board display and a head-up display.
  • FIG. 22 is a diagram illustrating an example of the installation position of the imager 12031.
  • In FIG. 22, imagers 12101, 12102, 12103, 12104, and 12105 are provided as the imager 12031.
  • The imagers 12101, 12102, 12103, 12104, and 12105 are provided, for example, at positions such as the front nose, the side mirrors, the rear bumper, and the back door of a vehicle 12100, and at an upper portion of the windshield inside the vehicle. The imager 12101 provided at the front nose and the imager 12105 provided at the upper portion of the windshield inside the vehicle mainly acquire an image in front of the vehicle 12100. The imagers 12102 and 12103 provided at the side mirrors mainly acquire images on the sides of the vehicle 12100. The imager 12104 provided at the rear bumper or the back door mainly acquires an image behind the vehicle 12100. The imager 12105 provided at the upper portion of the windshield in the vehicle interior is mainly used for detecting a vehicle ahead, pedestrians, obstacles, traffic signs, road signs, traffic lanes, and the like.
  • FIG. 22 illustrates an example of the imaging ranges of the imagers 12101 to 12104. An imaging range 12111 indicates the imaging range of the imager 12101 provided at the front nose, imaging ranges 12112 and 12113 indicate the imaging ranges of the imagers 12102 and 12103 provided at the side mirrors, and an imaging range 12114 indicates the imaging range of the imager 12104 provided at the rear bumper or the back door. For example, a bird's eye view of the vehicle 12100 viewed from above can be obtained by superimposing image data captured by the imagers 12101 to 12104.
  • At least one of the imagers 12101 to 12104 may have a function of acquiring distance information. For example, at least one of the imagers 12101 to 12104 may be a stereo camera including a plurality of image sensors or may be an image sensor having a pixel for phase difference detection.
  • For example, the microcomputer 12051 can obtain the distance to a three-dimensional object within the imaging ranges 12111 to 12114 and a temporal change of this distance (relative speed to the vehicle 12100), based on distance information obtained from the imagers 12101 to 12104, to specifically extract a three-dimensional object closest to the vehicle 12100 on the path of travel and traveling at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100, as a vehicle ahead. In addition, the microcomputer 12051 can preset a distance between vehicles to be kept behind a vehicle ahead and perform, for example, automatic braking control (including car-following stop control) and automatic speed-up control (including car-following startup control). In this way, coordination control can be performed for the purpose of, for example, autonomous driving in which the vehicle runs autonomously without depending on the operation by the driver.
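  • As an illustration of this extraction step, the Python sketch below computes relative speed from two successive distance samples and picks the closest object whose own speed is at or above the threshold; the data structure, sampling interval, and thresholds are all assumptions made for illustration.

```python
def find_vehicle_ahead(tracks, ego_speed_kmh, dt, min_speed_kmh=0.0):
    """Pick the closest object ahead moving roughly with the ego vehicle.

    `tracks` maps object ids to two successive distance samples in
    metres, taken `dt` seconds apart; structure and thresholds are
    illustrative only.
    """
    ahead = None
    for obj_id, (d_prev, d_now) in tracks.items():
        closing_kmh = (d_prev - d_now) / dt * 3.6      # relative speed
        obj_speed_kmh = ego_speed_kmh - closing_kmh    # object's own speed
        if obj_speed_kmh >= min_speed_kmh and (ahead is None or d_now < ahead[0]):
            ahead = (d_now, obj_id, closing_kmh)
    return ahead  # (distance, id, closing speed) of the vehicle ahead

# Object 7 is 41.5 m away and closing at 18 km/h -> selected.
ahead = find_vehicle_ahead({7: (42.0, 41.5), 9: (80.0, 80.4)},
                           ego_speed_kmh=60.0, dt=0.1)
```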
  • For example, the microcomputer 12051 can classify three-dimensional object data into "two-wheel vehicle", "standard-sized vehicle", "heavy vehicle", "pedestrian", "utility pole", or "any other three-dimensional object", based on the distance information obtained from the imagers 12101 to 12104, extract the data, and use the extracted data for automatic avoidance of obstacles. For example, the microcomputer 12051 identifies an obstacle in the surroundings of the vehicle 12100 as an obstacle visible to the driver of the vehicle 12100 or as an obstacle hardly visible to the driver. The microcomputer 12051 then determines a collision risk indicating the degree of risk of collision with each obstacle and, when the collision risk is equal to or higher than a setting value and there is a possibility of collision, outputs an alarm to the driver through the audio speaker 12061 or the display 12062, or performs forced deceleration or avoidance steering through the drive control unit 12010, thereby implementing drive assistance for collision avoidance.
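  • One common way to quantify such a collision risk is time-to-collision (TTC), sketched below; the threshold is an assumed setting value and does not come from this disclosure.

```python
def collision_risk(distance_m, closing_speed_ms, ttc_threshold_s=2.0):
    """Flag an obstacle when time-to-collision falls below a set value.

    Time-to-collision (TTC) is one conventional proxy for the
    "collision risk" described above; the threshold is an assumption.
    """
    if closing_speed_ms <= 0:          # not closing in on the obstacle
        return False, float("inf")
    ttc = distance_m / closing_speed_ms
    return ttc < ttc_threshold_s, ttc

warn, ttc = collision_risk(18.0, 12.0)   # warn is True: TTC = 1.5 s
```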
  • At least one of the imagers 12101 to 12104 may be an infrared camera that detects infrared rays. For example, the microcomputer 12051 can recognize a pedestrian by determining whether a pedestrian exists in the image captured by the imagers 12101 to 12104. Such recognition of pedestrians is performed, for example, through the procedure of extracting feature points in the image captured by the imagers 12101 to 12104 serving as an infrared camera and the procedure of performing pattern matching with a series of feature points indicating the outline of an object to determine whether the object is a pedestrian. When the microcomputer 12051 determines that a pedestrian exists in the image captured by the imagers 12101 to 12104 and recognizes a pedestrian, the sound image output module 12052 controls the display 12062 such that a rectangular outline for highlighting the recognized pedestrian is superimposed. The sound image output module 12052 may control the display 12062 such that an icon indicating a pedestrian appears at a desired position.
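  • The pattern-matching step can be pictured as sliding a pedestrian-outline template over the infrared image and scoring each window by normalized correlation. A crude Python sketch, with an assumed threshold and template:

```python
import numpy as np

def match_outline(ir_image, template, threshold=0.8):
    """Report windows whose normalized correlation with a
    pedestrian-outline template exceeds a threshold.

    A crude stand-in for the feature-point / pattern-matching
    procedure; the threshold and template are assumptions.
    """
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    hits = []
    h, w = ir_image.shape
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            win = ir_image[y:y + th, x:x + tw]
            wn = (win - win.mean()) / (win.std() + 1e-9)
            if (wn * t).mean() > threshold:
                hits.append((x, y, tw, th))   # rectangle to superimpose
    return hits
```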
  • An example of the vehicle control system to which the technique according to the present disclosure is applicable has been described above. The technique according to the present disclosure is applicable to the imager 12031 and the like in the configuration described above. When the technique according to the present disclosure is applied to the imager 12031 and the like, miniaturization of the imager 12031 and the like can be achieved, thereby facilitating design of the interior and the exterior of the vehicle 12100. When the technique according to the present disclosure is applied to the imager 12031 and the like, a clear image with reduced noise can be acquired to provide a driver with a more visible image. Consequently, the driver's fatigue can be alleviated.
  • 7. Application to Endoscopic Surgery System
  • The technique according to the present disclosure (the present technique) is applicable to a variety of products. For example, the technique according to the present disclosure may be applied to an endoscopic surgery system.
  • FIG. 23 is a diagram illustrating an example of the overall configuration of an endoscopic surgery system to which the technique according to the present disclosure (the present technique) is applicable.
  • FIG. 23 illustrates a situation in which an operator (doctor) 11131 uses an endoscopic surgery system 11000 to perform an operation on a patient 11132 on a patient bed 11133. As illustrated in the drawing, the endoscopic surgery system 11000 includes an endoscope 11100, other surgical instruments 11110 such as an insufflation tube 11111 and an energy treatment tool 11112, a support arm device 11120 supporting the endoscope 11100, and a cart 11200 carrying a variety of devices for endoscopic surgery.
  • The endoscope 11100 includes a barrel 11101 having a region of a predetermined length from its tip end to be inserted into the body cavity of the patient 11132, and a camera head 11102 connected to the base end of the barrel 11101. In the example illustrated in the drawing, the endoscope 11100 is configured as a so-called rigid scope having a rigid barrel 11101. However, the endoscope 11100 may instead be configured as a so-called flexible scope having a flexible barrel.
  • The tip end of the barrel 11101 has an opening with an objective lens fitted therein. A light source device 11203 is connected to the endoscope 11100. Light generated by the light source device 11203 is guided to the tip end of the barrel through a light guide extending inside the barrel 11101 and irradiates an observation target in the body cavity of the patient 11132 through the objective lens. The endoscope 11100 may be a forward-viewing, forward-oblique viewing, or side-viewing endoscope.
  • An optical system and an image sensor are provided inside the camera head 11102. Reflected light (observation light) from the observation target is collected by the optical system onto the image sensor. The observation light is photoelectrically converted by the image sensor to generate an electrical signal corresponding to the observation light, that is, an image signal corresponding to the observation image. The image signal is transmitted as RAW data to a camera control unit (CCU) 11201.
  • The CCU 11201 is configured with a central processing unit (CPU), a graphics processing unit (GPU), or the like to centrally control the operation of the endoscope 11100 and a display device 11202. The CCU 11201 receives an image signal from the camera head 11102 and performs a variety of image processing on the image signal, for example, a development process (demosaicing) for displaying an image based on the image signal.
  • The display device 11202 displays an image based on the image signal subjected to image processing by the CCU 11201, under the control of the CCU 11201.
  • The light source device 11203 is configured with, for example, a light source such as a light emitting diode (LED) and supplies the endoscope 11100 with radiation light in imaging a surgery site.
  • An input device 11204 is an input interface with the endoscopic surgery system 11000. The user can input a variety of information and instructions to the endoscopic surgery system 11000 through the input device 11204. For example, the user inputs an instruction to change the imaging conditions by the endoscope 11100 (the kind of radiation light, magnification, focal length, etc.).
  • A treatment tool control device 11205 controls actuation of the energy treatment tool 11112 for cauterization or incision of tissues, sealing of blood vessels, or the like. An insufflator 11206 feeds gas into the body cavity of the patient 11132 through the insufflation tube 11111 to insufflate the body cavity in order to secure the field of view of the endoscope 11100 and a working space for the operator. A recorder 11207 is a device capable of recording a variety of information on the surgery. A printer 11208 is a device capable of printing a variety of information on the surgery in a variety of forms such as text, images, or graphs.
  • The light source device 11203 that supplies the endoscope 11100 with radiation light in imaging a surgery site can be configured with, for example, a white light source such as an LED, a laser light source, or a combination thereof. When a white light source is configured with a combination of RGB laser light sources, the output power and the output timing of each color (each wavelength) can be controlled accurately, and, therefore, the white balance of the captured image can be adjusted in the light source device 11203. In this case, an observation target is irradiated time-divisionally with laser light from each of the RGB laser light sources, and actuation of the image sensor in the camera head 11102 is controlled in synchronization with the radiation timing, whereby an image corresponding to each of R, G, and B can be captured time-divisionally. According to this method, a color image can be obtained even without color filters in the image sensor.
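  • A minimal sketch of this time-divisional capture, assuming a hypothetical capture_frame(channel) read from the camera head synchronized with each laser pulse (the function name and interface are assumptions, not from the disclosure):

    import numpy as np

    def capture_color_image(capture_frame):
        """capture_frame(c) -> 2-D uint8 frame taken under laser color c."""
        r = capture_frame("R")  # sensor read synchronized with the R pulse
        g = capture_frame("G")  # ... with the G pulse
        b = capture_frame("B")  # ... with the B pulse
        return np.dstack([r, g, b])  # H x W x 3 color image, no color filters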
  • The actuation of the light source device 11203 may be controlled such that the intensity of the output light is changed at predetermined time intervals. By controlling the actuation of the image sensor in the camera head 11102 in synchronization with the timing of the intensity changes to acquire images time-divisionally, and combining those images, an image with a high dynamic range, free from blocked-up shadows and blown-out highlights, can be generated.
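  • The disclosure does not name a merging algorithm; one conventional choice for fusing frames captured at alternating intensities (an assumption made here for illustration) is Mertens exposure fusion, available in OpenCV:

    import cv2
    import numpy as np

    def merge_brackets(frames):
        """frames: list of uint8 images taken at different light intensities."""
        fused = cv2.createMergeMertens().process(frames)  # float32 in [0, 1]
        return np.clip(fused * 255, 0, 255).astype(np.uint8)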
  • The light source device 11203 may be configured to supply light in a predetermined wavelength band corresponding to specific light observation. In specific light observation, for example, narrow band imaging is performed, which uses the wavelength dependency of light absorption in body tissues and applies light in a narrow band, compared with radiation light (that is, white light) in normal observation, to capture a high-contrast image of predetermined tissues such as blood vessels in the outermost surface of mucosa. Alternatively, in specific light observation, fluorescence observation may be performed in which an image is acquired by fluorescence generated by radiation of excitation light. In fluorescence observation, for example, excitation light is applied to body tissues and fluorescence from the body tissues is observed (autofluorescence imaging), or a reagent such as indocyanine green (ICG) is locally injected to body tissues and excitation light corresponding to the fluorescence wavelength of the reagent is applied to the body tissues to obtain a fluorescence image. The light source device 11203 may be configured to supply narrow-band light and/or excitation light corresponding to such specific light observation.
  • FIG. 24 is a block diagram illustrating an example of the functional configuration of the camera head 11102 and the CCU 11201 illustrated in FIG. 23.
  • The camera head 11102 includes a lens unit 11401, an imager 11402, a driver 11403, a communication module 11404, and a camera head controller 11405. The CCU 11201 includes a communication module 11411, an image processor 11412, and a controller 11413. The camera head 11102 and the CCU 11201 are connected to communicate with each other through a transmission cable 11400.
  • The lens unit 11401 is an optical system provided at a connection portion to the barrel 11101. Observation light taken in from the tip end of the barrel 11101 is propagated to the camera head 11102 and enters the lens unit 11401. The lens unit 11401 is configured with a combination of a plurality of lenses including a zoom lens and a focus lens.
  • The imager 11402 may be configured with one image sensor (a so-called single-sensor type) or a plurality of image sensors (a so-called multi-sensor type). When the imager 11402 is of the multi-sensor type, for example, image signals corresponding to R, G, and B may be generated by the respective image sensors and combined to produce a color image. Alternatively, the imager 11402 may have a pair of image sensors for acquiring right-eye and left-eye image signals for three-dimensional (3D) display. The 3D display enables the operator 11131 to grasp the depth of living tissues in the surgery site more accurately. When the imager 11402 is of the multi-sensor type, a plurality of lens units 11401 may be provided corresponding to the respective image sensors.
  • The imager 11402 is not necessarily provided in the camera head 11102. For example, the imager 11402 may be provided immediately behind the objective lens inside the barrel 11101.
  • The driver 11403 is configured with an actuator and moves the zoom lens and the focus lens of the lens unit 11401 by a predetermined distance along the optical axis under the control of the camera head controller 11405. The magnification and the focal point of an image captured by the imager 11402 thus can be adjusted as appropriate.
  • The communication module 11404 is configured with a communication device for transmitting/receiving a variety of information to/from the CCU 11201. The communication module 11404 transmits an image signal obtained from the imager 11402 as RAW data to the CCU 11201 through the transmission cable 11400.
  • The communication module 11404 receives a control signal for controlling actuation of the camera head 11102 from the CCU 11201 and supplies the received signal to the camera head controller 11405. The control signal includes, for example, information on imaging conditions, such as information specifying a frame rate of the captured images, information specifying an exposure value in imaging, and/or information specifying a magnification and a focal point of the captured image.
  • The imaging conditions such as the frame rate, exposure value, magnification, and focal point may be specified by the user as appropriate or may be set automatically by the controller 11413 of the CCU 11201 based on the acquired image signal. In the latter case, the endoscope 11100 is equipped with an auto exposure (AE) function, an auto focus (AF) function, and an auto white balance (AWB) function.
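  • A toy sketch of the kind of AE loop such a controller could run, nudging the exposure value toward a target mean brightness. The names and constants are assumptions for illustration only:

    TARGET_MEAN = 118.0  # assumed mid-gray target for 8-bit frames
    GAIN = 0.02          # assumed control-loop gain

    def next_exposure(current_ev: float, frame) -> float:
        """Return an updated exposure value from the latest frame
        (frame is a NumPy array of pixel intensities)."""
        error = TARGET_MEAN - float(frame.mean())
        return current_ev + GAIN * error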
  • The camera head controller 11405 controls actuation of the camera head 11102, based on a control signal received from the CCU 11201 through the communication module 11404.
  • The communication module 11411 is configured with a communication device for transmitting/receiving a variety of information to/from the camera head 11102. The communication module 11411 receives an image signal transmitted from the camera head 11102 through the transmission cable 11400.
  • The communication module 11411 transmits a control signal for controlling actuation of the camera head 11102 to the camera head 11102. The image signal and the control signal can be transmitted via electrical communication or optical communication.
  • The image processor 11412 performs a variety of image processing on the image signal that is RAW data transmitted from the camera head 11102.
  • The controller 11413 performs a variety of control on imaging of a surgery site and the like by the endoscope 11100 and display of a captured image obtained by imaging of a surgery site and the like. For example, the controller 11413 generates a control signal for controlling actuation of the camera head 11102.
  • The controller 11413 displays a captured image visualizing the surgery site and the like on the display device 11202, based on the image signal subjected to image processing by the image processor 11412. In doing so, the controller 11413 may recognize a variety of objects in the captured image using a variety of image recognition techniques. For example, by detecting the shapes of edges, colors, and the like of objects included in the captured image, the controller 11413 can recognize a surgical instrument such as forceps, a specific living body site, bleeding, mist during use of the energy treatment tool 11112, and the like. When displaying the captured image on the display device 11202, the controller 11413 may use the recognition results to superimpose a variety of surgery assisting information on the image of the surgery site. Superimposing surgery assisting information and presenting it to the operator 11131 can alleviate the burden on the operator 11131 and enable the operator 11131 to proceed with the surgery reliably.
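  • A minimal sketch of edge-based recognition of the kind mentioned above (Canny edge detection plus contour extraction). It only yields candidate outlines; classifying a contour as forceps, bleeding, or mist would require a further classifier and is not shown:

    import cv2

    def candidate_outlines(image_bgr, low=50, high=150):
        """Return candidate object contours found in the captured image."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, low, high)
        contours, _hierarchy = cv2.findContours(
            edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return contours  # superimpose with cv2.drawContours for display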
  • The transmission cable 11400 connecting the camera head 11102 and the CCU 11201 is an electrical signal cable corresponding to communication of electrical signals, an optical fiber corresponding to optical communication, or a composite cable thereof.
  • In the example illustrated in the drawing, the transmission cable 11400 is used for wired communication. However, communication between the camera head 11102 and the CCU 11201 may be wireless.
  • An example of the endoscopic surgery system to which the technique according to the present disclosure is applicable has been described above. The technique according to the present disclosure is applicable to, for example, the imager 11402 and the like in the camera head 11102 among the configurations described above. Applying the technique according to the present disclosure to the camera head 11102 allows the camera head 11102 and the like to be miniaturized, making the endoscopic surgery system 11000 more compact. It also allows a clear image with reduced noise to be acquired, providing the operator with an easier-to-view image and thereby alleviating the operator's fatigue.
  • Although the endoscopic surgery system has been described here by way of example, the technique according to the present disclosure may be applied to, for example, a microscopic surgery system.
  • 8. Application to Whole Slide Imaging (WSI) System
  • The technique according to the present disclosure is applicable to a variety of products. For example, the technique according to the present disclosure may be applied to a pathology diagnosis system in which doctors observe cells and tissues sampled from patients to diagnose pathological changes, or to an assistance system therefor (hereinafter referred to as a diagnostic assistance system). The diagnostic assistance system may be a whole slide imaging (WSI) system that diagnoses pathological changes, or assists in such diagnosis, based on images acquired using digital pathology technology.
  • FIG. 25 is a diagram illustrating an example of the overall configuration of a diagnostic assistance system 5500 to which the technique according to the present disclosure is applied. As illustrated in FIG. 25, the diagnostic assistance system 5500 includes one or more pathology systems 5510. The diagnostic assistance system 5500 may further include a medical information system 5530 and a derivation device 5540.
  • Each of the one or more pathology systems 5510 is a system mainly used by pathologists and introduced into, for example, a research laboratory or a hospital. The pathology systems 5510 may be introduced into different hospitals and are connected to the medical information system 5530 and the derivation device 5540 through a variety of networks such as wide area networks (WANs) (including the Internet), local area networks (LANs), public networks, and mobile communication networks.
  • Each pathology system 5510 includes a microscope 5511, a server 5512, a display control device 5513, and a display device 5514.
  • The microscope 5511 has the function of an optical microscope and captures an image of an observation target on a glass slide to acquire a pathological image that is a digital image. The observation target is, for example, tissues or cells sampled from a patient and may be a piece of organ, saliva, or blood.
  • The server 5512 stores and saves the pathological image acquired by the microscope 5511 in a not-illustrated storage unit. When accepting an inspection request from the display control device 5513, the server 5512 searches the not-illustrated storage unit for a pathological image and sends the retrieved pathological image to the display control device 5513.
  • The display control device 5513 sends an inspection request for a pathological image, accepted from the user, to the server 5512. The display control device 5513 then displays the pathological image accepted from the server 5512 on the display device 5514, which uses liquid crystal, electro-luminescence (EL), a cathode-ray tube (CRT), or the like. The display device 5514 may support 4K or 8K resolution, and one or more display devices 5514 may be provided.
  • Here, when the observation target is a solid matter such as a piece of an organ, the observation target may be, for example, a stained slice. The slice may be prepared, for example, by slicing a block cut out from a specimen such as an organ. Before slicing, the block may be fixed with, for example, paraffin.
  • In staining the slice, a variety of staining can be employed, such as common staining such as hematoxylin-eosin (HE) staining for defining the form of tissue, and immunostaining such as immunohistochemistry (IHC) staining for identifying the immune state of tissue. In doing so, one slice may be stained using different kinds of reagents, or two or more slices (also referred to as adjacent slices) continuously cut out from the same block may be stained using different reagents.
  • The microscope 5511 may include a low-resolution imager for capturing images at low resolution and a high-resolution imager for capturing images at high resolution. The low-resolution imager and the high-resolution imager may use different optical systems or the same optical system. In the case of the same optical system, the resolution of the microscope 5511 may be changed according to the imaging target.
  • A glass slide holding the observation target is placed on a stage positioned within the angle of view of the microscope 5511. The microscope 5511 first acquires an entire image within the angle of view using the low-resolution imager and specifies the region of the observation target from the acquired entire image. Subsequently, the microscope 5511 divides the region including the observation target into a plurality of division regions of a predetermined size and successively captures images of the division regions using the high-resolution imager to acquire high-resolution images of the division regions. In switching between target division regions, the stage may be moved, the imaging optical system may be moved, or both may be moved. Adjacent division regions may overlap each other in order to prevent an imaging-missed region caused by unintended slippage of the glass slide, as in the sketch below. The entire image may include identification information for associating the entire image with the patient. Examples of the identification information include a character string and a QR code (registered trademark).
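  • A sketch of generating overlapping division regions for the high-resolution scan; the tile size and overlap are illustrative values, not taken from the disclosure:

    def division_regions(width, height, tile=1024, overlap=64):
        """Yield (x, y, w, h) capture windows covering the target region,
        each overlapping its neighbors so no region is missed."""
        step = tile - overlap
        for y in range(0, max(height - overlap, 1), step):
            for x in range(0, max(width - overlap, 1), step):
                yield x, y, min(tile, width - x), min(tile, height - y)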
  • The high-resolution images acquired by the microscope 5511 are input to the server 5512. The server 5512 divides each high-resolution image into partial images of a smaller size (hereinafter referred to as tile images). For example, the server 5512 divides one high-resolution image vertically and horizontally into 10×10 = 100 tile images. In doing so, if adjacent division regions overlap, the server 5512 may perform a stitching process on the high-resolution images adjacent to each other, using a technique such as template matching. In this case, the server 5512 may generate tile images by dividing the stitched high-resolution image as a whole. Alternatively, the generation of tile images from the high-resolution images may precede the stitching process.
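  • A sketch of the 10×10 tiling step described above, assuming for brevity that the image dimensions are divisible by 10:

    import numpy as np

    def split_into_tiles(image: np.ndarray, n: int = 10):
        """Divide one high-resolution image into n x n tile images."""
        h, w = image.shape[:2]
        th, tw = h // n, w // n
        return [image[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
                for r in range(n) for c in range(n)]  # 100 tiles for n = 10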
  • The server 5512 may further divide a tile image to generate tile images of a smaller size. The generation of such tile images may be repeated until tile images of the size set as the minimum unit are generated.
  • After generating the tile images of the minimum unit, the server 5512 executes a tile combining process, applied to all the tile images, of combining a predetermined number of adjacent tile images to generate one tile image. This tile combining process may be repeated until finally a single tile image is generated. Through such a process, a tile image group with a pyramid structure is generated, in which each layer is configured with one or more tile images. In this pyramid structure, a tile image on a certain layer and a tile image on a different layer have the same pixel count, but their resolutions differ. For example, when 2×2 = 4 tile images are combined into one tile image on the next higher layer, the resolution of the higher-layer tile image is half the resolution of the lower-layer tile images used in the combination.
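  • A sketch of the pyramid construction: each higher layer is generated at half the resolution of the layer below, so that combining 2×2 tiles yields one tile with the same pixel count. Simple 2×2 mean pooling is assumed here as the downsampling method (the disclosure does not specify one), and even image dimensions are assumed for brevity:

    import numpy as np

    def next_layer(layer: np.ndarray) -> np.ndarray:
        """Half-resolution image for the layer above (2x2 mean pooling)."""
        h, w = layer.shape
        return layer.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def build_pyramid(base: np.ndarray, min_side: int = 256):
        """Return [finest, ..., coarsest] layers of the tile pyramid."""
        layers = [base]
        while min(layers[-1].shape) > 2 * min_side:
            layers.append(next_layer(layers[-1]))
        return layers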
  • When such a tile image group with a pyramid structure is constructed, the level of detail of the observation target appearing on the display device can be changed according to the layer to which the displayed tile image belongs. For example, when tile images on the lowest layer are used, a narrow region of the observation target is displayed in detail, and as tile images on higher layers are used, a wider region of the observation target is displayed more coarsely.
  • The generated tile image group with a pyramid structure is stored, for example, in a not-illustrated storage unit together with identification information uniquely identifying each tile image (referred to as tile identification information). When accepting an acquisition request for a tile image including tile identification information from another device (for example, the display control device 5513 or the derivation device 5540), the server 5512 transmits the tile image corresponding to the tile identification information to the requesting device.
  • The tile images constituting a pathological image may be generated for each imaging condition, such as focal length and staining condition. When tile images are generated for each imaging condition, a certain pathological image may be displayed side by side with another pathological image of the same region corresponding to a different imaging condition. The imaging condition may be designated by an inspector. When the inspector designates a plurality of imaging conditions, pathological images of the same region corresponding to the respective imaging conditions may be displayed side by side.
  • The server 5512 may store the tile image group having a pyramid structure in a storage device other than the server 5512, for example, a cloud server. A part or the whole of the tile image generating process as described above may be performed, for example, by a cloud server.
  • The display control device 5513 extracts a desired tile image from the tile image group with a pyramid structure in accordance with an input operation from the user and outputs it to the display device 5514. Through such a process, the user can attain the sense of observing the target while changing the observation magnification; that is, the display control device 5513 functions as a virtual microscope. The virtual observation magnification here corresponds, in fact, to a resolution.
  • High-resolution images may be captured by any method. High-resolution images of the division regions may be acquired while the stage is repeatedly stopped and moved, or strip-shaped high-resolution images may be acquired while the stage is moved at a predetermined speed. The process of generating tile images from a high-resolution image is not an essential configuration; instead, the resolution of the stitched high-resolution image as a whole may be changed stepwise to generate images whose resolution changes stepwise. Also in this case, a wide-area low-resolution image through a narrow-area high-resolution image can be presented to the user stepwise.
  • The medical information system 5530 is an electronic health record system and stores information related to diagnosis, such as information identifying patients, disease information of patients, examination information and image information used in diagnosis, diagnosis results, and prescribed drugs. For example, a pathological image obtained by imaging an observation target of a patient may be saved once through the server 5512 and thereafter displayed on the display device 5514 by the display control device 5513. The pathologist using the pathology system 5510 conducts pathology diagnosis based on the pathological image appearing on the display device 5514. The result of pathology diagnosis conducted by the pathologist is stored in the medical information system 5530.
  • The derivation device 5540 may perform analysis of a pathological image. A learning model created by machine learning can be used for this analysis. The derivation device 5540 may derive, as the analysis result, the classification result of a specific region, the identification result of tissues, and the like. The derivation device 5540 may further derive identification results such as cell information, counts, positions, and brightness information, as well as scoring information for them. These pieces of information derived by the derivation device 5540 may be displayed on the display device 5514 of the pathology system 5510 as diagnostic assistance information.
  • The derivation device 5540 may be a server system including one or more servers (including a cloud server). The derivation device 5540 may be a configuration incorporated into, for example, the display control device 5513 or the server 5512 in the pathology system 5510. That is, a variety of analysis for a pathological image may be performed in the pathology system 5510.
  • The technique according to the present disclosure may be preferably applied, for example, to the microscope 5511 among the configurations described above. Specifically, the technique according to the present disclosure can be applied to the low-resolution imager and/or the high-resolution imager in the microscope 5511. The application of the technique according to the present disclosure to the low-resolution imager and/or the high-resolution imager leads to miniaturization of the low-resolution imager and/or the high-resolution imager, and thus miniaturization of the microscope 5511. The miniaturization facilitates transportation of the microscope 5511 and thereby facilitates system introduction and system replacement. In addition, when the technique according to the present disclosure is applied to the low-resolution imager and/or the high-resolution imager, a part or the whole of the process from acquisition of a pathological image to analysis of the pathological image can be performed on the fly in the microscope 5511, so that diagnostic assistance information can be output more promptly and appropriately.
  • The configuration described above is not limited to a diagnostic assistance system and may be applied generally to biological microscopes such as confocal microscopes, fluorescence microscopes, and video microscopes. Here, the observation target may be a biological sample such as cultured cells, fertilized eggs, or sperm, a biological material such as a cell sheet or a three-dimensional cell tissue, or a living body such as a zebrafish or a mouse. The observation target may be observed in a microplate or a petri dish, rather than on a glass slide.
  • A moving image may be generated from still images of the observation target acquired using the microscope. For example, a moving image may be generated from still images captured successively for a predetermined period, or an image sequence may be generated from still images captured at predetermined intervals. With a moving image generated from still images, dynamic features of the observation target can be analyzed using machine learning, such as motions including the pulsation, expansion, and migration of cancer cells, nerve cells, cardiac muscle tissue, sperm, and the like, and the division process of cultured cells and fertilized eggs.
  • Although embodiments of the present disclosure have been described above, the technical scope of the present disclosure is not limited to the foregoing embodiments as they are and may be susceptible to various modifications without departing from the spirit of the present disclosure. The constituent elements in different embodiments and modifications may be combined as appropriate.
  • The effects in the embodiments described in the present description are only by way of illustration and are not intended to be limitative, and any other effects may be achieved.
  • The foregoing embodiments may be used singly or may be used in combination with other embodiments.
  • The present technique may employ the configuration as follows.
  • (1)
  • A stacked light-receiving sensor comprising:
  • a first substrate;
  • a second substrate bonded to the first substrate; and
  • connection wiring attached to the second substrate,
  • the first substrate including a pixel array in which a plurality of unit pixels are arranged in a two-dimensional matrix,
  • the second substrate including
      • a converter configured to convert an analog pixel signal output from the pixel array to digital image data and
      • a processing unit configured to perform a process for data based on the image data, wherein
  • at least a part of the converter is disposed on a first side in the second substrate,
  • the processing unit is disposed on a second side opposite to the first side in the second substrate, and
  • the connection wiring is attached to a side other than the second side in the second substrate.
  • (2)
  • The stacked light-receiving sensor according to (1), wherein the connection wiring is a flexible cable.
  • (3)
  • The stacked light-receiving sensor according to (1) or (2), wherein the processing unit performs the process based on a neural network calculation model for data based on the image data.
  • (4)
  • An in-vehicle imaging device comprising a solid-state imaging device, a circuit board provided with an information processing device, and connection wiring connecting the solid-state imaging device and the circuit board,
  • the solid-state imaging device including
      • a first substrate and
      • a second substrate bonded to the first substrate, the first substrate including a pixel array in which a plurality of unit pixels are arranged in a two-dimensional matrix,
  • the second substrate including
      • a converter configured to convert an analog pixel signal output from the pixel array to digital image data and
      • a processing unit configured to perform a process for data based on the image data, wherein
  • at least a part of the converter is disposed on a first side in the second substrate,
  • the processing unit is disposed on a second side opposite to the first side in the second substrate,
  • the connection wiring is attached to a side other than the second side in the second substrate and attached to a third side of the circuit board,
  • the solid-state imaging device is installed inside a vehicle such that a light-receiving surface of the pixel array faces a front direction of the vehicle and the first side is positioned on a lower side in a vertical direction, and
  • the circuit board is installed such that at least an end on a front side of the vehicle in the circuit board protrudes to the front direction of the vehicle relative to an end on the front side of the vehicle of the solid-state imaging device and a main plane is parallel or substantially parallel to a horizontal plane.
  • (5)
  • The in-vehicle imaging device according to (4), wherein
  • one end of the connection wiring is attached to the side on the lower side in the vertical direction in the second substrate of the solid-state imaging device, and
  • another end of the connection wiring is attached to an end on a rear side of the vehicle in the circuit board.
  • (6)
  • The in-vehicle imaging device according to (4) or (5), wherein the information processing device is positioned on the front side of the vehicle relative to the solid-state imaging device.
  • (7)
  • An in-vehicle imaging device comprising a solid-state imaging device, a circuit board provided with an information processing device, and connection wiring connecting the solid-state imaging device and the circuit board,
  • the solid-state imaging device including
      • a first substrate and
      • a second substrate bonded to the first substrate,
  • the first substrate including a pixel array in which a plurality of unit pixels are arranged in a two-dimensional matrix,
  • the second substrate including
      • a converter configured to convert an analog pixel signal output from the pixel array to digital image data, and
      • a processing unit configured to perform a process for data based on the image data, wherein
  • at least a part of the converter is disposed on a first side in the second substrate,
  • the processing unit is disposed on a second side opposite to the first side in the second substrate,
  • the connection wiring is attached to a side other than the second side in the second substrate and attached to a third side of the circuit board, and
  • the solid-state imaging device is installed inside a vehicle such that a light-receiving surface of the pixel array faces a rear direction of the vehicle.
  • (8)
  • The in-vehicle imaging device according to (7), wherein the circuit board is installed inside the vehicle to be positioned on a surface side opposite to the light-receiving surface in the solid-state imaging device.
  • REFERENCE SIGNS LIST
      • 1 imaging device
      • 10 image sensor
      • 11 imager
      • 12 controller
      • 13 signal processor
      • 14, 14A, 14B, 14C, 14D DSP (machine learning unit)
      • 14a interconnect
      • 15, 15A, 15B, 15C, 15D, 15E, 15F memory
      • 16 selector
      • 17, 17A ADC
      • 17B DAC
      • 20 application processor
      • 30 cloud server
      • 40 network
      • 100, 200, 300 first substrate
      • 101 pixel array
      • 101a unit pixel
      • 102 TSV array
      • 103 pad array
      • 104 optical system
      • 120, 320 second substrate
      • 400 circuit board
      • 402 flexible cable
      • 410 vehicle
      • 411 windshield
      • 412 rear window
      • L101 to L104 side
      • O100 center of first substrate
      • O101 center of pixel array

Claims (8)

1. A stacked light-receiving sensor comprising:
a first substrate;
a second substrate bonded to the first substrate; and
connection wiring attached to the second substrate,
the first substrate including a pixel array in which a plurality of unit pixels are arranged in a two-dimensional matrix,
the second substrate including
a converter configured to convert an analog pixel signal output from the pixel array to digital image data and
a processing unit configured to perform a process for data based on the image data, wherein
at least a part of the converter is disposed on a first side in the second substrate,
the processing unit is disposed on a second side opposite to the first side in the second substrate, and
the connection wiring is attached to a side other than the second side in the second substrate.
2. The stacked light-receiving sensor according to claim 1, wherein the connection wiring is a flexible cable.
3. The stacked light-receiving sensor according to claim 1, wherein the processing unit performs the process based on a neural network calculation model for data based on the image data.
4. An in-vehicle imaging device comprising a solid-state imaging device, a circuit board provided with an information processing device, and connection wiring connecting the solid-state imaging device and the circuit board,
the solid-state imaging device including
a first substrate and
a second substrate bonded to the first substrate,
the first substrate including a pixel array in which a plurality of unit pixels are arranged in a two-dimensional matrix,
the second substrate including
a converter configured to convert an analog pixel signal output from the pixel array to digital image data and
a processing unit configured to perform a process for data based on the image data, wherein
at least a part of the converter is disposed on a first side in the second substrate,
the processing unit is disposed on a second side opposite to the first side in the second substrate,
the connection wiring is attached to a side other than the second side in the second substrate and attached to a third side of the circuit board,
the solid-state imaging device is installed inside a vehicle such that a light-receiving surface of the pixel array faces a front direction of the vehicle and the first side is positioned on a lower side in a vertical direction, and
the circuit board is installed such that at least an end on a front side of the vehicle in the circuit board protrudes to the front direction of the vehicle relative to an end on the front side of the vehicle of the solid-state imaging device and a main plane is parallel or substantially parallel to a horizontal plane.
5. The in-vehicle imaging device according to claim 4, wherein
one end of the connection wiring is attached to the side on the lower side in the vertical direction in the second substrate of the solid-state imaging device, and
another end of the connection wiring is attached to an end on a rear side of the vehicle in the circuit board.
6. The in-vehicle imaging device according to claim 4, wherein the information processing device is positioned on the front side of the vehicle relative to the solid-state imaging device.
7. An in-vehicle imaging device comprising a solid-state imaging device, a circuit board provided with an information processing device, and connection wiring connecting the solid-state imaging device and the circuit board,
the solid-state imaging device including
a first substrate and
a second substrate bonded to the first substrate,
the first substrate including a pixel array in which a plurality of unit pixels are arranged in a two-dimensional matrix,
the second substrate including
a converter configured to convert an analog pixel signal output from the pixel array to digital image data, and
a processing unit configured to perform a process for data based on the image data, wherein
at least a part of the converter is disposed on a first side in the second substrate,
the processing unit is disposed on a second side opposite to the first side in the second substrate,
the connection wiring is attached to a side other than the second side in the second substrate and attached to a third side of the circuit board, and
the solid-state imaging device is installed inside a vehicle such that a light-receiving surface of the pixel array faces a rear direction of the vehicle.
8. The in-vehicle imaging device according to claim 7, wherein the circuit board is installed inside the vehicle to be positioned on a surface side opposite to the light-receiving surface in the solid-state imaging device.