WO2024135137A1 - Imaging device - Google Patents

Imaging device

Info

Publication number
WO2024135137A1
Authority
WO
WIPO (PCT)
Prior art keywords
router
data
routers
light receiving
pixel
Application number
PCT/JP2023/040305
Other languages
English (en)
Japanese (ja)
Inventor
晋 宝玉
Original Assignee
Sony Semiconductor Solutions Corporation (ソニーセミコンダクタソリューションズ株式会社)
Application filed by Sony Semiconductor Solutions Corporation
Publication of WO2024135137A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70 SSIS architectures; Circuits associated therewith
    • H04N25/76 Addressed sensors, e.g. MOS or CMOS sensors
    • H04N25/77 Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components
    • H04N25/772 Pixel circuitry comprising A/D, V/T, V/F, I/T or I/F converters
    • H04N25/779 Circuitry for scanning or addressing the pixel array
    • H04N25/79 Arrangements of circuitry being divided between different or multiple substrates, chips or circuit boards, e.g. stacked image sensors

Definitions

  • This disclosure relates to an imaging device that utilizes Spiking Neural Network (SNN) hardware.
  • SNNs use spike signals as the means of transmitting information, and are a form of NN in which spike signals can be processed asynchronously: each neuron maintains an intermediate state that changes according to its input spikes. Owing to these characteristics, SNNs are said to improve processing speed and reduce power consumption compared with conventional NNs. Signal processing using SNNs is disclosed, for example, in Patent Document 1.
  • in the present disclosure, a single-level or multi-level router connects a pixel array section having multiple light-receiving pixels to a multiprocessor section having multiple processors configured by SNN hardware. This reduces congestion in data transmission from the pixel array to the multiprocessor compared with connecting the pixel array and the multiprocessor via a conventional serializer.
  • FIG. 1 is a diagram illustrating an example of a schematic configuration of an imaging device according to a first embodiment of the present disclosure.
  • FIG. 2 is a diagram illustrating an example of a layered configuration of the imaging device illustrated in FIG.
  • FIG. 3 is a diagram illustrating an example of a functional block of the light receiving pixel illustrated in FIG.
  • FIG. 4 is a diagram illustrating an example of a functional block of the router on the sensor unit side illustrated in FIG.
  • FIG. 5 is a diagram illustrating an example of functional blocks of a router on the processor unit side illustrated in FIG.
  • FIG. 6 is a diagram illustrating an example of functional blocks of the processor illustrated in FIG.
  • FIG. 7 is a block diagram showing a modified example of the stacked structure of the imaging device shown in FIG.
  • FIG. 8 is a diagram illustrating an example of functional blocks of the router on the light receiving pixel side illustrated in FIG.
  • FIG. 9 is a diagram illustrating an example of a schematic configuration of an imaging device according to the second embodiment of the present disclosure.
  • FIG. 10 is a diagram illustrating an example of a layered configuration of the imaging device illustrated in FIG.
  • FIG. 11 is a diagram illustrating an example of functional blocks of the router illustrated in FIG.
  • FIG. 12 is a diagram illustrating an example of functional blocks of the processor illustrated in FIG.
  • FIG. 13 is a diagram illustrating an example of functional blocks of the pixel array unit illustrated in FIGS.
  • FIG. 14 is a diagram showing an example of time series values of the membrane potential in the membrane potential memory unit shown in FIGS.
  • FIG. 15 is a diagram illustrating an example of transmission data.
  • FIG. 16 is a diagram illustrating an example of transmission data.
  • FIG. 17 is a diagram illustrating an example of functional blocks of the processor unit illustrated in FIG.
  • FIG. 18 is a diagram showing a modified example of the functional blocks of the router shown in FIG.
  • FIG. 19 is a diagram showing a modified example of the functional blocks of the router shown in FIG.
  • FIG. 20 is a diagram showing an example of the transmission prohibition table shown in FIG.
  • FIG. 21 is a diagram showing a modified example of the functional blocks of the router shown in FIG.
  • FIG. 22 is a diagram showing a modified example of the functional blocks of the router shown in FIG.
  • FIGS. 23A, 23B, 23C, and 23D are schematic diagrams showing whether data output is possible or not, determined based on the control signal ctl3.
  • FIG. 24 is a diagram showing a modification of the functional blocks of the cell of FIG.
  • FIG. 25 is a diagram showing a modified example of the functional blocks of the cell of FIG.
  • FIG. 26 is a diagram showing an example of the operating state of a plurality of neurons in a processor.
  • FIG. 27 is a diagram illustrating an example of data bypass within a processor.
  • FIG. 28 is a diagram showing a modification of the schematic configuration of the imaging device shown in FIG.
  • FIGS. 29A and 29B are diagrams illustrating an example of an output image from the router of the imaging device illustrated in FIG. 28 and an image obtained after the output image is decoded.
  • FIG. 30 is a diagram showing a modification of the schematic configuration of the imaging device shown in FIG.
  • FIG. 31 is a diagram illustrating a modification of the schematic configuration of the imaging device shown in FIG.
  • FIG. 32 is a diagram showing a modification of the schematic configuration of the imaging device shown in FIG.
  • FIG. 33 is a diagram illustrating a modification of the schematic configuration of the imaging device shown in FIG.
  • FIG. 34 is a diagram showing a modification of the schematic configuration of the imaging device shown in FIG.
  • FIG. 1 shows a schematic configuration example of an imaging device 1000 according to an embodiment of the present disclosure.
  • the imaging device 1000 includes a sensor unit 100 and a processor unit 200.
  • the sensor unit 100 includes a pixel array unit 110 and a router 120.
  • the processor unit 200 includes a core array unit 210.
  • the core array unit 210 includes a plurality of cores C arranged two-dimensionally. Each core C includes a processor 211 and a router 212.
  • the core array unit 210 includes a plurality of processors 211 arranged two-dimensionally and a plurality of routers 212 arranged two-dimensionally.
  • one of the routers 212 is assigned to each processor 211.
  • the two-dimensionally arranged processors 211 correspond to a specific example of a "multiprocessor" according to an embodiment of the present disclosure.
  • the processor 211 is configured by SNN (Spiking Neural Network) hardware.
  • the pixel array unit 110 and each processor 211 are directly connected via routers 120 and 212 .
  • the pixel array unit 110 has a plurality of light receiving pixels P arranged two-dimensionally.
  • the light receiving pixels P include, for example, a CMOS (Complementary Metal Oxide Semiconductor) element, an EVS (Event-Based Vision Sensor) element, or a SPAD (Single Photon Avalanche Diode) element.
  • the light receiving pixels P generate detection signals by detecting light incident from the outside.
  • the pixel array unit 110 may, for example, directly transmit the generated detection signals to the router 120.
  • the pixel array unit 110 may, for example, digitize the detection signals using an ADC (Analog to Digital Converter) or a counter in the pixel array unit 110, and transmit the resulting digital signals to the router 120.
  • the pixel array unit 110 transmits data (for example, the detection signals or the digital signals) generated based on the light detection at the light receiving pixels P to the router 120 as pixel data Dp.
  • the detection signal or the digital signal corresponds to a so-called spike signal.
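As a rough illustration of the digitization step, a detection signal can be modeled as passing through an n-bit quantizer before being sent to the router as pixel data Dp; the reference voltage and bit width below are assumed example parameters, not values from the disclosure.

```python
# Toy model of digitizing a detection signal into pixel data Dp with an
# n-bit ADC; v_ref and bits are assumed example parameters.
def adc(v, v_ref=1.0, bits=8):
    code = int(v / v_ref * (2 ** bits - 1))
    return max(0, min(code, 2 ** bits - 1))  # clamp to the ADC range

mid_code = adc(0.5)  # roughly mid-scale input
```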
  • the pixel array section 110 may transmit pixel data Dp obtained from each light receiving pixel P to the router 120 based on control data ctl2 from the router 120.
  • This control data ctl2 includes, for example, data for each light receiving pixel P regarding whether data output from the pixel array section 110 is required.
  • the pixel array section 110 may determine whether or not it is required to output pixel data Dp obtained from each light receiving pixel P based on the control data ctl2, and transmit the pixel data Dp of the light receiving pixel P determined to require output to the router 120.
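The output gating driven by control data ctl2 can be sketched as follows; the per-pixel dictionary data shapes are assumed for illustration and are not the patented circuit.

```python
# Sketch (assumed data shapes) of gating pixel output with control data
# ctl2: ctl2 flags, per light receiving pixel, whether output from the
# pixel array section is required; only flagged pixels send pixel data Dp.
def select_outputs(pixel_data, ctl2):
    """pixel_data: {pixel_addr: Dp}; ctl2: {pixel_addr: output required?}."""
    return {addr: dp for addr, dp in pixel_data.items()
            if ctl2.get(addr, False)}

dp = {(0, 0): 12, (0, 1): 7, (1, 0): 3}
ctl2_flags = {(0, 0): True, (0, 1): False, (1, 0): True}
out = select_outputs(dp, ctl2_flags)
```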
  • when the router 120 acquires pixel data Dp from the pixel array unit 110, it references the pixel address corresponding to the acquired pixel data Dp to acquire the address of the neuron to which the pixel data Dp is to be sent.
  • the router 120 is provided with a routing table, and the router 120 acquires the address of the neuron to which the pixel data Dp is to be sent based on the routing table.
  • the router 120 may generate time data (timestamp) when the pixel data Dp was acquired, if necessary.
  • the router 120 transmits transmission data DA, which includes the acquired address and pixel data Dp, to each core C.
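A minimal sketch of how router 120 might package pixel data Dp into transmission data DA, assuming a dictionary-based routing table keyed by pixel address and an optional timestamp; all names and addresses here are hypothetical.

```python
import time

# Hypothetical sketch of building transmission data DA: look up the
# destination neuron address for the pixel address in a routing table
# (TBL), attach the pixel data Dp, and optionally add time data.
routing_table = {              # pixel address -> destination neuron (assumed)
    (0, 0): ("core_0_0", 5),
    (0, 1): ("core_0_1", 2),
}

def make_transmission_data(pixel_addr, pixel_data, with_timestamp=True):
    da = {"dest": routing_table[pixel_addr], "data": pixel_data}
    if with_timestamp:
        da["t"] = time.monotonic()  # time data for when Dp was acquired
    return da

da = make_transmission_data((0, 1), pixel_data=1)
```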
  • the router 120 may determine the destination of the transmission data DA based on control data ctl1 from the core array unit 210.
  • This control data ctl1 includes, for example, data on the operating state of each neuron in the core array unit 210.
  • the state of a neuron includes, for example, data on whether it is waiting for processing (busy) or not.
  • the router 120 may transmit data about the operating state of the digital converters included in the pixel array unit 110 (for example, the digital converters included in the readout units 113, 116, and 119 described below) to each router 212 in the core array unit 210.
  • This operating state may include, for example, whether the digital converter is operating, the bit width of the digital conversion, and the operating timing or operating frequency of the digital converter.
  • a plurality of routers 212 are arranged two-dimensionally in the core array unit 210.
  • the router 212 determines the destination of the transmission data DA by referring to the address included in the transmission data DA acquired from the router 120.
  • the router 212 is provided with a routing table, and determines the destination of the transmission data DA based on the routing table.
  • the router 212 may determine the destination of the transmission data DA based on the address included in the transmission data DA acquired from the router 120 and data on the operating state (the operating state of the digital converter included in the pixel array unit 110) acquired from the router 120.
  • the router 212 transmits the transmission data DA to the determined destination. If the destination is a neuron in the processor 211 of the router's own core, the router 212 delivers the transmission data DA to that processor 211; if the destination is a neuron in the processor 211 of an adjacent core, the router 212 forwards the transmission data DA to the router 212 of that adjacent core.
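One way such a forwarding decision could work on a two-dimensional router array is dimension-order (XY) routing; the direction names and policy below are illustrative assumptions, since the disclosure does not specify a particular routing algorithm.

```python
# Hedged sketch of a mesh-routing decision like the one router 212
# performs: deliver "down" to the local processor when the destination
# core is this router's core, otherwise forward toward it, resolving the
# x coordinate first (dimension-order routing, one common choice).
def route(my_xy, dest_xy):
    mx, my = my_xy
    dx, dy = dest_xy
    if (mx, my) == (dx, dy):
        return "down"                      # to the processor in this core
    if dx != mx:
        return "east" if dx > mx else "west"
    return "south" if dy > my else "north"
```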
  • the processor 211 performs signal processing using SNN on the pixel data included in the transmission data DA acquired from the router 212 corresponding to the processor 211.
  • the core array unit 210 outputs data Dout obtained by signal processing in the processor 211 to the outside.
  • the router 212 generates data about the operating state of the neuron in the processor 211 corresponding to the router 212 based on data (spike signals and addresses) obtained from the LIF unit 211d (described below) in the processor 211 corresponding to the router 212.
  • the router 212 may transmit the generated data (data about the operating state of the neuron in the processor 211 corresponding to the router 212) to the router 120.
  • the sensor chip 1000A is provided with one or more pad electrodes PE1 for each light receiving pixel P. Each pad electrode PE1 is provided on the surface of the sensor chip 1000A opposite the light receiving surface, and is connected to wiring through which transmission data DA is transmitted.
  • the SNN chip 1000B is provided with one or more pad electrodes PE2 for each light receiving pixel P. Each pad electrode PE2 is provided on the surface of the SNN chip 1000B, and is connected to the input terminal of the router 120.
  • the sensor chip 1000A and the SNN chip 1000B are stacked with the pad electrodes PE1 and PE2 overlapping each other.
  • the input terminal of the router 120 is connected to each light receiving pixel P, and the output terminal of the router 120 is connected to each core C (router 212).
  • the router 120 is provided at a location opposite to the location where multiple routers 212 are provided.
  • the router provided between the pixel array unit 110 and each processor 211 is a multi-layer (two-layer) structure consisting of a first-layer router 120 and multiple second-layer routers 212.
  • FIGS. 3A to 3C show examples of the configuration of a light receiving pixel P.
  • the light receiving pixel P may include a CMOS element.
  • the light receiving pixel P includes, for example, a photoelectric conversion unit 111 and a charge storage unit 112 as shown in FIG. 3A.
  • the photoelectric conversion unit 111 includes, for example, a photodiode, and performs photoelectric conversion on light incident on the light receiving surface of the sensor chip 1000A to generate a charge according to the amount of light received.
  • the charge storage unit 112 includes, for example, a transfer transistor electrically connected to the photodiode, and a floating diffusion that temporarily holds the charge output from the photodiode via the transfer transistor.
  • the charge storage unit 112 outputs, for example, a voltage signal according to the level of the charge held in the charge storage unit 112 as a detection signal.
  • the light receiving pixel P may further include a readout unit 113, for example, as shown in FIG. 3A.
  • the readout unit 113 includes a digital converter that digitally converts a voltage signal (detection signal) corresponding to the level of the charge stored in the charge storage unit 112, and an output circuit that outputs a digital signal (pixel data Dp) obtained by digital conversion.
  • the pixel array unit 110 may further include a row readout circuit that reads raster data including one row of pixel data Dp from the light receiving pixels P for each pixel row, and transmits the read raster data to the router 120.
  • the row readout circuit sequentially outputs the digital signals (pixel data Dp) obtained from the light receiving pixels P for each pixel row.
  • Visible light image data (for example, RGB image data) is generated from the digital signals obtained from the light receiving pixels P.
  • the light receiving pixel P may include an EVS element.
  • the light receiving pixel P includes, for example, a photoelectric conversion unit 114 and a subtraction unit 115, as shown in FIG. 3B.
  • the photoelectric conversion unit 114 includes, for example, a photodiode, and performs photoelectric conversion on the light incident on the light receiving surface of the sensor chip 1000A to generate a charge according to the amount of light received.
  • the subtraction unit 115 includes, for example, a buffer and a sample and hold circuit. The buffer holds a voltage signal according to the level of the charge output from the photodiode.
  • the sample and hold circuit samples the signal supplied from the buffer, holds the sampled signal, and then outputs a signal according to the difference between the signal supplied from the buffer and the signal held in the sample and hold circuit as a detection signal.
  • the subtraction unit 115 functions like a memory in the light receiving pixel P.
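The sample-and-hold plus subtraction behavior can be sketched as follows; the class and method names are assumed for illustration.

```python
# Rough sketch of the subtraction unit's behavior in an EVS pixel:
# hold a sampled reference signal, then output the difference between
# the current buffered signal and that reference as the detection signal.
class SubtractionUnit:
    def __init__(self):
        self.held = 0.0

    def sample(self, v):
        """Sample-and-hold the signal currently supplied by the buffer."""
        self.held = v

    def detect(self, v):
        """Detection signal = current buffered signal minus held signal."""
        return v - self.held

s = SubtractionUnit()
s.sample(0.5)     # reference level held "like a memory"
d = s.detect(0.8)  # change since the last sample
```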
  • the light receiving pixel P may further include a readout unit 116, for example, as shown in FIG. 3B.
  • the readout unit 116 includes a digital converter that digitally converts the signal (detection signal) output from the subtraction unit 115, and an output circuit that outputs a digital signal (pixel data Dp) obtained by digital conversion.
  • the pixel array unit 110 may further include a row readout circuit that reads out raster data including one row's worth of pixel data Dp from the light receiving pixels P for each pixel row, and transmits the read data to the router 120.
  • the row readout circuit sequentially outputs the digital signals (pixel data Dp) obtained from the light receiving pixels P for each pixel row.
  • EVS image data is generated by the digital signals obtained from the light receiving pixels P.
  • the light receiving pixel P may include a SPAD element.
  • the light receiving pixel P includes, for example, a SPAD section 117 and a pulse detection section 118, as shown in FIG. 3C.
  • the SPAD section 117 includes, for example, a SPAD element, operates in Geiger mode, and generates an avalanche current when a photon is incident in a state where a negative bias voltage equal to or greater than the breakdown voltage is applied between the anode and cathode of the SPAD element.
  • the pulse detection section 118 includes, for example, a quench resistor connected in series to the SPAD section 117, and an inverter connected to a connection node between the SPAD section 117 and the quench resistor.
  • the inverter outputs a high-level signal when the voltage of the connection node is lower than a predetermined threshold voltage (i.e., when it is at a low level).
  • the inverter outputs a low-level signal when the voltage of the connection node is equal to or greater than a predetermined threshold voltage (i.e., when it is at a high level).
  • the pulse detection section 118 functions as a digital converter that outputs a digital signal as a detection signal.
  • the light receiving pixel P may further include a readout unit 119, for example, as shown in FIG. 3C.
  • the readout unit 119 includes a counter that counts the signal output from the pulse detection unit 118 and outputs a signal (pixel data Dp) based on the count value.
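The pulse-detection inverter and the counter described above can be sketched as follows; the voltage values and threshold are assumed, and counting rising edges of the inverter output stands in for the counter circuit.

```python
# Sketch (assumed values) of SPAD pulse detection and counting: the
# inverter outputs high (1) while the connection-node voltage is below
# the threshold, and the counter counts rising edges of that output,
# each edge corresponding to one detected avalanche pulse.
def inverter(v_node, v_th):
    return 1 if v_node < v_th else 0

def count_pulses(voltages, v_th):
    count, prev = 0, 0
    for v in voltages:
        out = inverter(v, v_th)
        if out and not prev:   # rising edge of the digital signal
            count += 1
        prev = out
    return count

n = count_pulses([3.3, 0.2, 3.3, 0.1, 0.2, 3.3], v_th=1.0)
```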
  • the pixel array unit 110 may further include a row readout circuit that reads out raster data including one row's worth of pixel data Dp from the light receiving pixels P for each pixel row, and transmits the readout raster data to the router 120.
  • the row readout circuit sequentially outputs multiple digital signals obtained from the multiple light receiving pixels P for each pixel row. Visible light image data (e.g., RGB image data) is generated by the multiple digital signals obtained from the multiple light receiving pixels P.
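The row-wise readout can be sketched as a generator that emits one row of pixel data Dp per readout cycle, so transmission to the router can begin before the whole array is read; the frame layout is an assumed example.

```python
# Sketch of row-wise raster readout (data shapes assumed): yield one
# (row index, row of pixel data Dp) pair per cycle instead of waiting
# for readout from all light receiving pixels.
def raster_rows(frame):
    for row_index, row in enumerate(frame):
        yield row_index, list(row)

frame = [[1, 2, 3], [4, 5, 6]]
first_row = next(raster_rows(frame))
```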
  • the readout units 113, 116, and 119 or the row readout circuit may receive control data (control data ctl2) from the router 120.
  • the readout units 113, 116, and 119 or the row readout circuit may output pixel data Dp to the router 120 based on the control data (control data ctl2) from the router 120.
  • the pixel array section 110 has a row readout circuit.
  • one or more pad electrodes PE1 may be connected to the row readout circuit.
  • the input terminal of the router 120 is connected to the row readout circuit, and the output terminal of the router 120 is connected to each core C.
  • the digital converter provided in the light-receiving pixel P may be provided within the row readout circuit.
  • FIG. 4 shows an example of a schematic configuration of the router 120.
  • the router 120 has, for example, an input port 121, a FIFO (first-in first-out) memory unit 122, a destination address determination unit 123, an arbiter 124, and an output port 125, as shown in Fig. 4.
  • the input port 121 is electrically connected to each pad electrode PE2, and outputs a plurality of pixel signals Dp transmitted from the pixel array section 110 to the FIFO memory section 122.
  • the FIFO memory section 122 temporarily stores the plurality of pixel signals Dp input from the input port 121.
  • the FIFO memory section 122 sequentially outputs the plurality of pixel signals Dp stored in the FIFO memory section 122 under the control of the arbiter 124.
  • the destination address determination section 123 has a routing table (TBL), and acquires the address of the neuron to which each pixel data Dp is to be transmitted based on the TBL.
  • the destination address determination section 123 associates the pixel signal Dp with the acquired address and outputs it to the arbiter 124.
  • the arbiter 124 arbitrates requests for output of pixel signals Dp supplied from each of the multiple light receiving pixels P, and outputs a response based on the arbitration result (i.e., permission/prohibition of output of pixel signals Dp) to the output port 125.
  • the arbiter 124 may perform arbitration according to control data ctl1, for example.
  • the arbiter 124 may determine the destination of the transmission data DA based on the control data ctl1, for example.
  • the control data ctl1 includes data on the operating state of each neuron in the core array unit 210, for example.
  • the arbiter 124 may, for example, output the control data ctl2 generated based on the arbitration result to the pixel array unit 110.
  • the arbiter 124 may, for example, generate the control data ctl2 based on the control data ctl1 and output the generated control data ctl2 to the pixel array unit 110.
  • the control data ctl2 includes, for example, data for each light receiving pixel P regarding whether or not data output from the pixel array unit 110 is required.
  • the output port 125 transmits the transmission data DA to each core C.
  • the configuration of the router 120 is not limited to the configuration shown in FIG. 4, as long as it has functions such as an arbiter that arbitrates requests, buffering using FIFO memory, and output destination selection using a TBL.
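A round-robin policy is one common way to implement such an arbiter (the disclosure does not mandate a particular policy); the sketch below grants one pending output request per cycle with rotating priority, so every input port is eventually served.

```python
# Illustrative round-robin arbiter, an assumed policy for arbitrating
# the output requests of the pixel signals: grant one request per
# cycle, rotating priority starting after the last granted port.
class RoundRobinArbiter:
    def __init__(self, n_ports):
        self.n = n_ports
        self.last = -1          # index of the most recently granted port

    def grant(self, requests):
        """requests: list of bools; return the granted port, or None."""
        for i in range(1, self.n + 1):
            port = (self.last + i) % self.n
            if requests[port]:
                self.last = port
                return port
        return None

arb = RoundRobinArbiter(4)
g1 = arb.grant([True, False, True, False])   # priority starts at port 0
g2 = arb.grant([True, False, True, False])   # rotates past port 0
```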
  • FIG. 5 shows an example of a schematic configuration of the router 212.
  • the router 212 has an input port 212a, a FIFO memory unit 212b, a destination address determination unit 212c, an arbiter 212d, and an output port 212e.
  • the input port 212a of the router 212 is connected to the output port 125 of the router 120.
  • the input port 212a of the router 212 is connected to the output port 212e of four adjacent routers 212.
  • the output port 212e of the router 212 is connected to the input port 212a of four adjacent routers 212.
  • the output port 212e of the router 212 is connected to the processor 211 in the common core C.
  • the input port 212a has four ports (eastIN, southIN, westIN, northIN), one port (UpIN), and one port (DownIN).
  • the four ports (eastIN, southIN, westIN, northIN) are connected to the output ports 212e of four adjacent routers 212.
  • One port (UpIN) is connected to the output port 125 of the router 120.
  • One port (DownIN) is connected to the processor 211 in the common core C.
  • the input port 212a outputs the input transmission data DA to the FIFO memory unit 212b.
  • the FIFO memory unit 212b temporarily stores the transmission data DA input from the input port 212a.
  • the FIFO memory unit 212b outputs the transmission data DA stored in the FIFO memory unit 212b according to the control of the arbiter 212d.
  • the destination address determination unit 212c has a routing table (TBL) and obtains the address of the neuron to which the transmission data DA is to be sent based on the TBL.
  • the destination address determination unit 212c associates the obtained address with the pixel signal Dp and outputs it to the arbiter 212d.
  • the arbiter 212d arbitrates requests for output of multiple pieces of transmission data DA input to the input port 212a, and outputs a response based on the arbitration result (i.e., permission/prohibition of output of the transmission data DA) to the output port 212e.
  • the arbiter 212d may, for example, output control data ctl1 generated based on the arbitration result to the router 120.
  • the control data ctl1 includes, for example, the operating state of each neuron in the core array unit 210.
  • the output port 212e outputs the transmission data DA to the input ports 212a of the four adjacent routers 212, the input port 121 of the router 120, or the processor 211 in the common core C.
  • the configuration of the router 212 is not limited to the configuration shown in FIG. 5, as long as it has functions such as an arbiter that arbitrates requests, buffering using FIFO memory, and output destination selection using a TBL.
  • FIG. 6 shows an example of a functional block of the processor 211.
  • the processor 211 is configured by SNN hardware.
  • the processor 211 performs signal processing using SNN on pixel data Dp included in the transmission data DA received from the router 212.
  • the processor 211 has a neuron I/O unit 211a, a product-sum operation unit 211b, a weight storage memory unit 211c, a membrane potential memory unit 211d, and an LIF (Leaky integrate-and-fire) unit 21d.
  • the configuration of the processor 211 is not limited to the configuration shown in FIG. 6.
  • the neuron I/O unit 211a outputs pixel data Dp contained in the transmission data DA received from the router 212 as a spike signal to the product-sum calculation unit 211b.
  • the neuron I/O unit 211a outputs the pixel data Dp received from the router 212 to the product-sum calculation unit 211b in association with the neuron destination address received from the router 212.
  • the product-sum calculation unit 211b multiplies the spike signal input from the neuron I/O unit 211a by a predetermined weight value set for each neuron destination address, and performs a product-sum calculation to add up the number of input spikes for each neuron destination address.
  • the product-sum calculation unit 211b stores the calculation results thus obtained in the membrane potential memory unit 211d via the neuron I/O unit 211a.
  • the weight value is stored in the weight storage memory unit 211c.
  • the neuron I/O unit 211a stores the value (result of the product-sum operation) for each neuron destination address received from the product-sum operation unit 211b as a membrane potential in the membrane potential memory unit 211d.
  • This membrane potential is what is called an intermediate state.
  • an intermediate state is defined for each neuron, and changes based on the input from the product-sum operation unit 211b via the neuron I/O unit 211a.
  • the LIF unit 211d performs leaky integration and firing processing.
  • the LIF unit 211d multiplies the membrane potential stored in the membrane potential memory unit 211d by a predetermined membrane time constant, thereby causing a temporal change (leakage) in the membrane potential.
  • the LIF unit 211d further outputs a spike signal to the router 212 when one or more values of the intermediate state exceed a predetermined threshold.
  • the LIF unit 211d outputs the spike signal to the router 212 in association with the address of the neuron whose intermediate state exceeds the predetermined threshold.
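The product-sum, leaky integration, and firing steps above can be sketched as follows; the weight values, membrane time constant, threshold, and reset-to-zero behavior are assumed example choices, not values from the disclosure.

```python
# Hedged sketch of one SNN update combining the product-sum and LIF
# steps: leak the membrane potential by a membrane time constant tau,
# add the weighted input spikes, and emit a spike (and reset) when the
# membrane potential exceeds the threshold.
weights = {"n0": 0.5}                    # weight storage memory (assumed)

def lif_step(v, spikes, tau=0.8, threshold=1.0):
    """v: membrane potential; spikes: input spike addresses this step."""
    v = v * tau + sum(weights[a] for a in spikes)  # leak + product-sum
    if v > threshold:
        return 0.0, True                 # fire and reset (assumed reset)
    return v, False

v, fired = 0.0, []
for step in range(4):
    v, f = lif_step(v, ["n0"])           # one input spike per step
    fired.append(f)
```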
  • the pixel array unit 110 and each processor 211 are directly connected via routers 120 and 212. This makes it possible to reduce congestion (the destination of spikes concentrating at one location at the same time) in data transmission from the pixel array unit 110 to each processor 211, compared to when the pixel array and multiprocessor are connected via a conventional serializer. As a result, it is possible to achieve a further improvement in processing speed in signal processing using an SNN.
  • the router 120 is connected to multiple light receiving pixels P or row readout circuits and multiple processors 211. This makes it possible to reduce congestion in data transmission from the pixel array unit 110 to each processor 211, compared to when the pixel array and multiprocessor are connected via a conventional serializer. As a result, it is possible to achieve a further improvement in processing speed in signal processing using an SNN. Note that when a row readout circuit is provided, it is not necessary to wait for readout from all light receiving pixels P when outputting data from the pixel array unit 110. Therefore, it is possible to reduce congestion, compared to when reading out from all light receiving pixels P.
  • the digital signal obtained by the above-mentioned digital converter is transmitted to multiple processors 211 via routers 120 and 212.
  • the routers 120, 212 are provided with FIFO memories 122, 212b and arbiters 124, 212d, and multiple digital signals stored in the FIFO memories 122, 212b are sequentially transmitted to the multiple processors 211 under the control of the arbiters 124, 212d.
  • This makes it possible to reduce congestion in data transmission from the pixel array unit 110 to each processor 211, compared to the conventional case where the pixel array and the multiprocessor are connected via a serializer. As a result, it is possible to achieve a further improvement in processing speed in signal processing using an SNN.
  • a first-level router 120 is provided that is assigned to a plurality of light-receiving pixels P, and a plurality of second-level routers 212 are provided, one for each processor 211.
  • the router 120 is connected to each light receiving pixel P or row readout circuit and each router 212.
  • data on the operating state of the digital converter described above is transmitted by the router 120 to each router 212.
  • each router 212 determines the destination of the digital signal obtained by the digital converter described above based on the data on the operating state obtained from the router 120.
  • congestion in data transmission from the pixel array unit 110 to each processor 211 can be reduced compared to when the pixel array and the multiprocessor are connected via a conventional serializer. Therefore, a further improvement in processing speed can be achieved in signal processing using an SNN.
  • data on the operating state of neurons in processor 211 is transmitted to router 120 by router 212 corresponding to processor 211.
  • This allows the router 120 to determine the destination of the data obtained from the light-receiving pixels P based on the data from the router 212.
  • congestion in data transmission from pixel array unit 110 to each processor 211 can be reduced compared to the conventional case where a pixel array and a multiprocessor are connected via a serializer. Therefore, a further improvement in processing speed can be achieved in signal processing using an SNN.
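The operating-state-based destination choice described above can be sketched as follows: the first-level router receives, from each second-level router, a report of how busy its processor is, and steers data toward the least congested candidate. The occupancy encoding and the minimum-selection policy are assumptions for illustration only.

```python
def choose_destination(candidate_ids, occupancy):
    """Pick, among candidate processors, the one whose second-level router
    reports the lowest FIFO occupancy (operating-state data).
    Unreported candidates are treated as idle (occupancy 0)."""
    return min(candidate_ids, key=lambda i: occupancy.get(i, 0))

# Occupancy as reported by the second-level routers (illustrative values).
reported = {0: 5, 1: 2, 2: 7}
dest = choose_destination([0, 1, 2], reported)
```

Steering around busy processors in this way is what reduces head-of-line blocking compared to a fixed serializer path.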
  • the sensor chip 1000A and the SNN chip 1000B are stacked with the pad electrodes PE1 and PE2 overlapping each other. This makes it possible to shorten the data transmission distance from the pixel array 110 to each processor 211 compared to when the pixel array and the multiprocessor are connected via a conventional serializer. As a result, it is possible to achieve a further improvement in the processing speed in signal processing using SNN.
  • the pad electrode PE1 is provided on the surface of the sensor chip 1000A opposite the light receiving surface, and is connected to a wiring that outputs the digital signal obtained by the above-mentioned digital converter.
  • the pad electrode PE2 is provided on the surface of the SNN chip 1000B, and is connected to the input end of the router 120.
  • FIG. 7 shows a modified example of the stacked structure of the imaging device 1000 shown in FIG. 2.
  • the router 120 may be configured, for example, by a plurality of routers 120A arranged two-dimensionally as shown in FIG. 7.
  • the multiple light receiving pixels P are divided into multiple groups (first groups), and the multiple light receiving pixels P divided into each first group constitute the pixel array section 110A.
  • the pixel array section 110 is composed of multiple pixel array sections 110A arranged two-dimensionally.
  • the multiple cores C are divided into multiple groups (second groups), and the multiple cores C divided into each second group constitute the core array section 210A.
  • the core array section 210A is composed of multiple cores C arranged two-dimensionally.
  • the routers 120A are assigned to the pixel array units 110A one by one, and further assigned to the core array units 210A one by one.
  • the routers 120A are connected to the corresponding pixel array units 110A (each light receiving pixel P or row readout circuit) and to the corresponding core array units 210A (each router 212).
  • the routers 120A are further connected to the adjacent routers 120A.
  • the routers 120A are provided at locations opposite to the locations where the corresponding pixel array units 110A (multiple light receiving pixels P) are provided, and are provided at locations opposite to the locations where the corresponding core array units 210A (multiple routers 212) are provided.
  • the routers provided between the pixel array unit 110 and each processor 211 are multiple hierarchical (two hierarchical) layers, consisting of multiple routers 120A in the first layer and multiple routers 212 in the second layer.
  • the input port 121 of each router 120A is further connected to the output ports 125 of the four adjacent routers 120A.
  • the output port 125 of each router 120A is connected to the input port 121 of the four adjacent routers 120A.
  • the sensor chip 1000A is provided with one or more pad electrodes PE1 for each light receiving pixel P. Each pad electrode PE1 is provided on the surface of the sensor chip 1000A opposite the light receiving surface, and is connected to a wiring that outputs the transmission data DA.
  • the SNN chip 1000B is provided with one or more pad electrodes PE2 for each light receiving pixel P. Each pad electrode PE2 is provided on the surface of the SNN chip 1000B. Each pad electrode PE2 provided in the pixel array section 110A is connected to the input terminal of the router 120A corresponding to the pixel array section 110A.
  • the sensor chip 1000A and the SNN chip 1000B are stacked with the pad electrodes PE1 and PE2 overlapping each other.
  • FIG. 8 shows an example of the schematic configuration of router 120A.
  • router 120A has an input port 121, a FIFO memory unit 122, a destination address determination unit 123, an arbiter 124, and an output port 125.
  • the input port 121 is electrically connected to each corresponding pad electrode PE2, and outputs a plurality of pixel signals Dp transmitted from the corresponding pixel array unit 110A to the FIFO memory unit 122.
  • the input port 121 further outputs a plurality of pixel signals Dp transmitted from the output ports 125 of the four adjacent routers 120A to the FIFO memory unit 122.
  • the FIFO memory unit 122 temporarily stores multiple pixel signals Dp input from the input port 121.
  • the FIFO memory unit 122 sequentially outputs the multiple pixel signals Dp stored in the FIFO memory unit 122 under the control of the arbiter 124.
  • the destination address determination unit 123 has a routing table (TBL) and obtains the address of the neuron to which each piece of pixel data Dp is to be sent based on the TBL.
  • the destination address determination unit 123 associates the pixel signal Dp with the obtained address and outputs it to the arbiter 124.
  • the arbiter 124 arbitrates requests for output of pixel signals Dp supplied from each of the multiple light receiving pixels P, and outputs a response based on the arbitration result (i.e., permission/prohibition of output of pixel signals Dp) to the output port 125.
  • the arbiter 124 may perform arbitration according to control data ctl1, for example.
  • the arbiter 124 may determine the destination of the transmission data DA based on the control data ctl1, for example.
  • the arbiter 124 may, for example, output the control data ctl2 generated based on the arbitration result to the pixel array unit 110.
  • the arbiter 124 may, for example, generate the control data ctl2 based on the control data ctl1 and output the generated control data ctl2 to the pixel array unit 110.
  • the output port 125 transmits the transmission data DA to each core C and to the input port 121 of the adjacent router 120A.
  • the configuration of the router 120A is not limited to the configuration shown in FIG. 8, as long as it has functions such as an arbiter that arbitrates requests, buffering using a FIFO memory, and output destination selection using a TBL.
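The TBL-based output-destination selection performed by the destination address determination unit can be pictured as a simple lookup that attaches a neuron address to each pixel signal before arbitration. The table contents and the packet layout below are illustrative assumptions, not taken from the patent.

```python
# Illustrative routing table (TBL): pixel address -> (core, neuron) address.
TBL = {
    (0, 0): ("core0", 17),
    (0, 1): ("core0", 18),
    (1, 0): ("core2", 3),
}

def determine_destination(pixel_addr, pixel_signal, tbl):
    """Destination address determination: look up the destination neuron
    for a pixel and associate it with the pixel signal, as the unit 123
    does before handing the result to the arbiter 124."""
    return {"dst": tbl[pixel_addr], "src": pixel_addr, "Dp": pixel_signal}

packet = determine_destination((1, 0), 42, TBL)
```

Associating the address at the router, rather than at the pixel, keeps the pixel array free of routing state.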
  • a plurality of routers 120A in the first hierarchical layer are provided, one for each of a set of the first group and the second group, and a plurality of routers 212 in the second hierarchical layer are provided, one for each of a set of the first group and the second group.
  • This allows data transmission for each of a set of the first group and the second group.
  • congestion in data transmission from the pixel array unit 110 to each processor 211 can be reduced compared to when the pixel array unit and the multiprocessor unit are connected via a serializer as in the conventional case. Therefore, a further improvement in processing speed can be achieved in signal processing using an SNN.
  • the router 120A is connected to each light receiving pixel P of the corresponding first group and each router 212 of the corresponding second group, and is further connected to multiple routers 120A adjacent to the corresponding first group.
  • Each router 212 is connected to the corresponding router 120A and the corresponding processor 211, and is further connected to multiple adjacent routers 212.
  • data on the operating state of the digital converter described above is transmitted by the router 120A to each router 212.
  • the destination of the digital signal obtained by the digital converter described above is determined by the router 212. This makes it possible to transmit data according to the degree of congestion of each processor 211. As a result, it is possible to achieve a further improvement in the processing speed in signal processing using an SNN.
  • data on the operating state of neurons in processor 211 is transmitted to router 120A by router 212 corresponding to processor 211.
  • router 120A determines the destination of data obtained from light-receiving pixel P based on the data from router 212.
  • congestion in data transmission from pixel array unit 110 to each processor 211 can be reduced compared to the conventional case where the pixel array and multiprocessor are connected via a serializer. Therefore, a further improvement in processing speed can be achieved in signal processing using SNN.
  • FIG. 9 shows a schematic configuration example of an imaging device 2000 according to a second embodiment of the present disclosure.
  • the imaging device 2000 includes a sensor unit 300 and a processor unit 400.
  • the sensor unit 300 includes a pixel array unit 310.
  • the processor unit 400 includes a core array unit 410.
  • the core array unit 410 includes a plurality of cores C arranged two-dimensionally.
  • Each core C includes a processor 211 and a router 412.
  • the core array unit 410 includes a plurality of processors 211 arranged two-dimensionally and a plurality of routers 412 arranged two-dimensionally.
  • the plurality of processors 211 arranged two-dimensionally correspond to a specific example of a "multiprocessor" according to an embodiment of the present disclosure.
  • the processor 211 is configured by SNN hardware.
  • the pixel array unit 310 and each processor 211 are directly connected via a plurality of routers 412.
  • the pixel array section 310 has a plurality of light receiving pixels P arranged two-dimensionally.
  • the light receiving pixels P include, for example, a CMOS element, an EVS element, or a SPAD element.
  • the light receiving pixels P generate detection signals by detecting light incident from the outside.
  • the pixel array section 310 may, for example, directly transmit the generated detection signals to the plurality of routers 412.
  • the pixel array section 310 may, for example, digitize the detection signals using an ADC or a counter in the pixel array section 310, and transmit the digital signals thus obtained to the plurality of routers 412.
  • the pixel array section 310 transmits data (for example, the detection signals or the digital signals) generated based on the light detection at the light receiving pixels P as pixel data Dp to the plurality of routers 412.
  • the detection signals or the digital signals correspond to so-called spike signals.
  • the pixel array unit 310 may transmit pixel data Dp obtained from each light receiving pixel P to the router 412 based on control data ctl2 from the multiple routers 412.
  • This control data ctl2 includes, for example, data for each light receiving pixel P regarding whether data output from the pixel array unit 310 is required.
  • the pixel array unit 310 may determine whether or not it is required to output the pixel data Dp obtained from each light receiving pixel P based on the control data ctl2, and transmit the pixel data Dp of the light receiving pixel P determined to require output to the router 412.
  • When the router 412 acquires pixel data Dp from the pixel array unit 310, it references the pixel address corresponding to the acquired pixel data Dp to acquire the address of the neuron to which the pixel data Dp is to be sent.
  • the router 412 is provided with a routing table, and uses the routing table to acquire the address of the neuron to which the pixel data Dp is to be sent. If necessary, the router 412 may generate time data (timestamp) when the pixel data Dp was acquired.
  • the router 412 transmits transmission data DA including the acquired address and pixel data Dp. If the destination is a neuron in the processor 211 corresponding to the router 412, the router 412 transmits the transmission data DA to the processor 211 corresponding to the router 412. If the destination is a neuron in the processor 211 corresponding to the router 412 adjacent to the router 412, the router 412 transmits the transmission data DA to the processor 211 corresponding to the router 412 adjacent to the router 412.
  • the processor 211 performs signal processing using SNN on the pixel data Dp included in the transmission data DA acquired from the router 412.
  • the core array unit 410 outputs data Dout obtained by signal processing in the processor 211 to the outside.
  • the router 412 determines the destination of the digital signal obtained by the digital converter included in the pixel array unit 310 (for example, the digital converter included in the readout units 113, 116, and 119) based on data about the operating state of the digital converter.
  • This operating state includes, for example, whether the digital converter is operating, the bit width of the digital conversion, and the operating timing or operating frequency of the digital converter.
  • the router 412 generates data on the operating state of the neuron in the processor 211 corresponding to the router 412 based on the data (spike signal and address) obtained from the LIF unit 211d in the processor 211 corresponding to the router 412.
  • the router 412 may generate control data ctl2 based on the generated data (data on the operating state of the neuron in the processor 211 corresponding to the router 412) and transmit the generated control data ctl2 to the pixel array unit 310.
  • FIG. 10 shows an example of a layered structure of the imaging device 2000.
  • the pixel array section 310 is formed, for example, by a sensor chip 2000A as shown in FIG. 10. In the sensor chip 2000A, the pixel array section 310 is formed on a semiconductor substrate.
  • the processor section 400 is formed, for example, by an SNN chip 2000B as shown in FIG. 10. In the SNN chip 2000B, the processor section 400 is formed on a semiconductor substrate.
  • the sensor chip 2000A is provided with one or more pad electrodes PE1 for each light receiving pixel P.
  • Each pad electrode PE1 is provided on the surface of the sensor chip 2000A opposite the light receiving surface, and is connected to wiring that outputs pixel data Dp.
  • the SNN chip 2000B is provided with one or more pad electrodes PE2 for each light receiving pixel P.
  • Each pad electrode PE2 is provided on the surface of the SNN chip 2000B, and is connected to the input terminal of the router 412.
  • the sensor chip 2000A and the SNN chip 2000B are stacked with the pad electrodes PE1 and PE2 overlapping each other.
  • the multiple light receiving pixels P are divided into multiple groups (first groups), and the multiple light receiving pixels P divided into each first group constitute the pixel array section 310A.
  • the pixel array section 310 is composed of multiple pixel array sections 310A arranged two-dimensionally.
  • multiple routers 412 are arranged two-dimensionally, and multiple processors 211 are arranged two-dimensionally.
  • the routers 412 are assigned one by one to the pixel array units 310A, and further assigned one by one to the processors 211.
  • the router 412 is connected to the corresponding pixel array unit 310A (each light receiving pixel P) and to the corresponding processor 211.
  • the router 412 is further connected to adjacent routers 412.
  • the router 412 is provided at a location opposite the location where the corresponding pixel array unit 310A (multiple light receiving pixels P) is provided, and is provided at a location opposite the location where the corresponding processor 211 is provided.
  • the router 412 may be provided at a location opposite the location adjacent to the location where the corresponding processor 211 is provided.
  • the router provided between the pixel array unit 310 and each processor 211 is a single layer (one layer) composed of multiple routers 412.
  • the input port 412a of each router 412 is further connected to the output port 412e of the four adjacent routers 412.
  • the output port 412e of each router 412 is connected to the input port 412a of the four adjacent routers 412.
  • FIG. 11 shows an example of a schematic configuration of the router 412.
  • the router 412 has an input port 412a, a FIFO memory unit 412b, a destination address determination unit 412c, an arbiter 412d, and an output port 412e.
  • the input port 412a of the router 412 is connected to each light receiving pixel P (or row readout circuit) of the corresponding pixel array 310A.
  • the input port 412a of the router 412 is connected to the output port 412e of four adjacent routers 412.
  • the output port 412e of the router 412 is connected to the input port 412a of four adjacent routers 412.
  • the output port 412e of the router 412 is connected to the processor 211 in the common core C.
  • the input port 412a has four ports (eastIN, southIN, westIN, northIN), one port (PxIN), and one port (localIN).
  • the four ports (eastIN, southIN, westIN, northIN) are connected to the output ports 412e of the four adjacent routers 412.
  • One port (PxIN) is connected to the corresponding pixel array unit 310A.
  • One port (localIN) is connected to the processor 211 in the common core C.
  • the input port 412a outputs the input pixel data Dp to the FIFO memory unit 412b.
  • the FIFO memory unit 412b temporarily stores multiple pixel signals Dp input from the input port 412a.
  • the FIFO memory unit 412b outputs the multiple pixel signals Dp stored in the FIFO memory unit 412b according to the control of the arbiter 412d.
  • the destination address determination unit 412c has a routing table (TBL) and obtains the address of the neuron to which each pixel data Dp is to be sent based on the TBL.
  • the destination address determination unit 412c associates the obtained address with the pixel signal Dp and outputs it to the arbiter 412d.
  • the arbiter 412d arbitrates requests for output of pixel signals Dp supplied from each of the multiple light-receiving pixels P, and outputs a response based on the arbitration result (i.e., permission/prohibition of output of pixel signals Dp) to the output port 412e.
  • the arbiter 412d may, for example, output control data ctl1 generated based on the arbitration result to the arbiter 412d of an adjacent router 412.
  • the control data ctl1 includes, for example, the operating state of each neuron in the core array unit 410.
  • the output port 412e outputs the pixel signal Dp to either the input port 412a of the four adjacent routers 412 or the processor 211 in the common core C.
  • the configuration of the router 412 is not limited to the configuration shown in FIG. 11, as long as it has functions such as an arbiter that arbitrates requests, buffering using a FIFO memory, and output destination selection using a TBL.
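One common way to realize forwarding among the four adjacent routers described above is dimension-order (X-then-Y) routing; the sketch below selects an output port by comparing the current router's mesh coordinates with the destination's. The XY policy and the coordinate convention (east = +x, south = +y) are assumptions for illustration, not taken from the patent.

```python
def route_port(cur, dst):
    """Return the output port ('east', 'west', 'south', 'north', or
    'local') through which the router at mesh position `cur` forwards a
    packet toward `dst`. 'local' delivers to the processor 211 in the
    same core C, mirroring the localIN/localOUT port naming."""
    (cx, cy), (dx, dy) = cur, dst
    if dx > cx:
        return "east"    # move along +x first (dimension order)
    if dx < cx:
        return "west"
    if dy > cy:
        return "south"   # then along +y
    if dy < cy:
        return "north"
    return "local"       # arrived: hand off to the co-located processor
```

Dimension-order routing is deadlock-free on a mesh and needs no per-packet state, which suits the small per-core routers described here.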
  • FIG. 12 shows an example of a functional block of the processor 211.
  • the processor 211 is composed of SNN hardware.
  • the processor 211 performs signal processing using SNN on the pixel signal Dp received from the router 412.
  • the processor 211 has a neuron I/O unit 211a, a product-sum operation unit 211b, a weight storage memory unit 211c, a membrane potential memory unit 211d, and an LIF unit 211d.
  • the configuration of the processor 211 is not limited to the configuration shown in FIG. 12.
  • the neuron I/O unit 211a outputs the pixel signal Dp received from the router 412 to the product-sum calculation unit 211b as a spike signal.
  • the product-sum calculation unit 211b performs a product-sum calculation by multiplying the spike signal input from the neuron I/O unit 211a by a predetermined weight value set for each neuron destination address and adding up the number of input spikes for each neuron destination address.
  • the product-sum calculation unit 211b stores the calculation results thus obtained in the membrane potential memory unit 211d via the neuron I/O unit 211a.
  • the above weight values are stored in the weight storage memory unit 211c.
  • the neuron I/O unit 211a receives the value (result of the product-sum operation) for each neuron destination address from the product-sum operation unit 211b and stores it as a membrane potential in the membrane potential memory unit 211d.
  • This membrane potential is what is called an intermediate state.
  • the LIF unit 211d performs leaky integration and firing processing.
  • the LIF unit 211d multiplies the membrane potential stored in the membrane potential memory unit 211d by a predetermined membrane time constant, thereby causing a temporal change (leakage) in the membrane potential.
  • the LIF unit 211d further outputs a spike signal to the router 412 when one or more values of the intermediate state exceed a predetermined threshold.
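The leaky integrate-and-fire steps above (weighted integration into a membrane potential, leakage by a membrane time constant, and threshold firing) can be sketched as follows. The time constant, threshold, and reset-to-zero behavior are illustrative assumptions; the patent does not fix these constants.

```python
class LIFNeuron:
    """Illustrative leaky integrate-and-fire neuron: the membrane
    potential (the intermediate state held in the membrane potential
    memory) leaks by a membrane time constant each step, integrates the
    weighted spike input, and fires when a threshold is crossed."""

    def __init__(self, tau=0.5, threshold=1.0):
        self.v = 0.0              # membrane potential (intermediate state)
        self.tau = tau            # membrane time constant (leak per step)
        self.threshold = threshold

    def step(self, weighted_input):
        """One time step: leak, integrate, and fire if above threshold."""
        self.v = self.v * self.tau + weighted_input
        if self.v >= self.threshold:
            self.v = 0.0          # reset after firing (assumed behavior)
            return 1              # spike signal out to the router
        return 0

n = LIFNeuron(tau=0.5, threshold=1.0)
spikes = [n.step(x) for x in [0.4, 0.4, 0.9, 0.0]]
```

The third input pushes the leaked potential over threshold, so only the third step emits a spike.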
  • one router 412 is assigned to each processor 211.
  • the pixel signal Dp obtained in the pixel array unit 310 is transmitted to the processor 211 via the router 412.
  • congestion in data transmission from the pixel array unit 310 to each processor 211 can be reduced compared to when the pixel array and the multiprocessor are connected via a conventional serializer. Therefore, it is possible to achieve a further improvement in processing speed in signal processing using an SNN.
  • the router 412 is connected to each light receiving pixel P or row readout circuit of the corresponding pixel array 310A and to the corresponding processor 211, and is further connected to multiple adjacent routers 412. This makes it possible to reduce congestion in data transmission from the pixel array unit 310 to each processor 211, compared to when the pixel array and the multiprocessor are connected via a conventional serializer. As a result, it is possible to achieve a further improvement in processing speed in signal processing using an SNN.
  • the destination of the digital signal obtained by the digital converter is determined by the router 412 based on data about the operating state of the digital converter. This allows data transmission according to the degree of congestion of each processor 211. As a result, it is possible to achieve a further improvement in the processing speed in signal processing using SNN.
  • control data ctl2 is generated based on data about the operating state of neurons in the processor 211, and the generated control data ctl2 is transmitted from the router 412 to the pixel array unit 310.
  • congestion in data transmission from the pixel array unit 310 to each processor 211 can be reduced compared to when the pixel array and multiprocessor are connected via a conventional serializer. Therefore, a further improvement in processing speed can be achieved in signal processing using an SNN.
  • Fig. 13 shows an example of functional blocks of the pixel array units 110, 110A, and 310A.
  • the pixel array units 110, 110A, and 310A may include, for example, a pixel array circuit 101, a vertical scanning circuit 102, and a row readout circuit 103, as shown in Fig. 13.
  • the pixel array circuit 101 has a plurality of light-receiving pixels P arranged two-dimensionally in a matrix, for example, as shown in FIG. 13.
  • a vertical signal line is arranged for each pixel column, and a horizontal signal line is arranged for each pixel row.
  • the vertical scanning circuit 102 selects a number of light receiving pixels P for each row via a number of horizontal signal lines, and outputs the signals generated by each light receiving pixel P for one row to the row readout circuit 103 via a number of vertical signal lines.
  • the row readout circuit 103 has an ADC 104 and a horizontal scanning circuit 105, for example, as shown in FIG. 13.
  • the ADC 104 has a number of ADCs 104a, one for each vertical signal line, and each ADC 104a digitally converts the signals acquired from the light receiving pixels P via the vertical signal line.
  • the horizontal scanning circuit 105 sequentially outputs a number of digital signals (pixel data Dp) obtained from each ADC 104a for each pixel row.
  • the horizontal scanning circuit 105 outputs raster data including a number of pixel data Dp for one row to the routers 120, 120A, and 412.
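The row-by-row readout described above can be pictured as follows: the vertical scan selects one pixel row at a time, one ADC per column digitizes that row in parallel, and the horizontal scan emits the digitized row as raster data. The 8-bit quantizer below stands in for the ADCs 104a and is an illustrative assumption.

```python
def read_out(pixel_array, digitize):
    """Row readout sketch (cf. Fig. 13): the vertical scanning circuit
    selects one pixel row at a time, one ADC per column digitizes that
    row, and the horizontal scanning circuit emits it as raster data."""
    rasters = []
    for row in pixel_array:                        # vertical scanning circuit 102
        digital_row = [digitize(v) for v in row]   # ADCs 104a, one per column
        rasters.append(digital_row)                # horizontal scanning circuit 105
    return rasters

# Illustrative 2x2 pixel array with analog values in [0, 1] and an
# assumed 8-bit quantizer in place of a real ADC.
raster = read_out([[0.1, 0.9], [0.7, 0.2]], digitize=lambda v: int(v * 255))
```

Because each row is emitted as soon as its column ADCs finish, downstream routers need not wait for the whole frame before forwarding data.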
  • Each router 212, 412 transmits data obtained from the routers 120, 120A or the pixel array section 310A, for example, to the corresponding processor 211 using a specified communication standard.
  • Each router 212, 412 transmits, for example, a plurality of pixel data Dp for one frame (all light-receiving pixels P) obtained from the routers 120, 120A or the pixel array section 310A to a plurality of processors 211 together with a frame start (FS) and a frame end (FE).
  • Each router 212, 412 outputs, for example, the FS, a plurality of pixel data Dp for one frame (all light-receiving pixels P), and the FE in that order.
  • the LIF unit 211d disables leakage (decrement) over time in the intermediate state during the leaky integrate-and-fire process from FS to FE. This disablement does not apply to the decrement performed when input data arrives through a negative connection (negative synaptic weight).
  • the LIF unit 211d may disable decrement for the period from FS to FE, for example, as shown in FIG. 14.
  • the period from FS to FE is expressed as a decrement-disabled period ΔX. This allows each neuron to process one frame of pixel data Dp at the same time.
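The decrement-disabled period can be sketched as a membrane integration in which the leak is applied per step only outside the FS..FE window. The token values and the leak factor below are illustrative assumptions.

```python
def integrate_frame(stream, tau=0.5):
    """Integrate a token stream in which 'FS' and 'FE' bracket one frame
    of pixel values. Outside the FS..FE window the membrane potential
    leaks (is multiplied by tau) before each input; inside the window
    the decrement is disabled, so all pixel data of the frame are
    integrated on equal footing."""
    v, in_frame = 0.0, False
    for tok in stream:
        if tok == "FS":
            in_frame = True       # enter the decrement-disabled period
        elif tok == "FE":
            in_frame = False      # leave the decrement-disabled period
        else:
            if not in_frame:
                v *= tau          # leak only outside the frame window
            v += tok
    return v

# Within FS..FE, 0.25 + 0.25 accumulate without decay between them.
framed = integrate_frame(["FS", 0.25, 0.25, "FE"])
```

Without the FS/FE bracketing, earlier pixels of a frame would be attenuated more than later ones, biasing the frame result.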
  • FIG. 15 shows an example of data transmission.
  • Each router 120, 120A transmits, for example, a plurality of pixel data Dp obtained from each light receiving pixel P in the pixel array unit 110, 110A to the corresponding router 212 using a predetermined communication standard.
  • Each router 120, 120A transmits, for example, a plurality of pixel data Dp obtained from each light receiving pixel P in the pixel array unit 110, 110A to the corresponding router 212 together with a frame start (FS) and a frame end (FE).
  • Each router 120, 120A may, for example, sequentially transmit an FS, a plurality of raster data, and an FE to the router 212 as shown in FIG. 15.
  • the router 212 may obtain the address of the neuron to which the data obtained from the routers 120 and 120A is to be transmitted, for example, based on the TBL. Next, the router 212 may transmit the obtained address to the processor 211 in association with the FS, the plurality of pixel data Dp for one frame (all light receiving pixels P), and the FE, as shown in FIG. 15.
  • FIG. 16 shows an example of data transmission.
  • each router 120, 120A may sequentially transmit FS and the raster data of the first row, the raster data of each row from the second row to the (last row - 1) row, and the raster data of the last row and FE to the router 212.
  • the router 212 may obtain the address of the neuron to which the data obtained from the router 120, 120A is to be transmitted, for example, based on the TBL.
  • the router 212 may transmit the obtained address to the processor 211 in association with the FS and each pixel data Dp of the first pixel row, each pixel data Dp of the second row to the (last row - 1) row, and the FE and each pixel data Dp of the last row, as shown in FIG. 16.
  • When the processor 211 acquires FS, one frame's worth of pixel data Dp, and FE in the data format shown in FIG. 15 or FIG. 16, for example, it disables decrement during the period from when FS is acquired until when FE is acquired (decrement-disabled period ΔX). This allows each processor 211 to process one frame's worth of multiple pixel data Dp at the same time.
  • [Variation B] FIG. 17 shows a modified example of the functional blocks of the processor unit 200.
  • the processor unit 200 may have, for example, a core array unit 210, a GlobalFS distribution unit 220, and a GlobalFE distribution unit 230, as shown in FIG. 17.
  • the GlobalFS distribution unit 220 transmits an FS to all processors 211.
  • the GlobalFS distribution unit 220 transmits an FS to all processors 211 before the input of the pixel data Dp of one frame (all light receiving pixels P) to the processors 211 starts (i.e., before the processors 211 start reading data).
  • when the GlobalFS distribution unit 220 receives a signal (hereinafter referred to as an "input start signal") indicating the start of input of the pixel data Dp of one frame (all light receiving pixels P) to the processors 211 from the routers 120, 120A, it transmits an FS to all processors 211.
  • the routers 120, 120A transmit the input start signal to the GlobalFS distribution unit 220, for example, immediately before the input of the pixel data Dp of one frame (all light receiving pixels P) to the processors 211 starts.
  • the GlobalFE distribution unit 230 transmits an FE to all processors 211.
  • the GlobalFE distribution unit 230 transmits an FE to all processors 211 after the input of the pixel data Dp of one frame (all light receiving pixels P) to the processors 211 is completed (i.e., after the data is read by the processors 211 is completed).
  • when the GlobalFE distribution unit 230 receives a signal indicating the completion of input of the pixel data Dp of one frame (all light receiving pixels P) to the processors 211 (hereinafter referred to as the "input completion signal") from the routers 120, 120A, it transmits an FE to all processors 211.
  • the router 120, 120A transmits the input completion signal to the GlobalFE distribution unit 230 at the same time (or immediately after) the input of the pixel data Dp of one frame (all light receiving pixels P) to the processors 211 is completed.
  • Fig. 18 shows a modified example of the functional blocks of the router 120A shown in Fig. 8.
  • Fig. 19 shows a modified example of the functional blocks of the router 412 shown in Fig. 11.
  • the pixel array unit 110, 310 is composed of a plurality of pixel array units 110A, 310A, and pixel data Dp obtained from the pixel array unit 110A, 310A is input to the input port 121, 412a.
  • the pixel data Dp input from the pixel array unit 110A, 310A is input directly to the destination address determination unit 123, 412c without passing through the FIFO memory unit 122, 412b.
  • the destination address determination unit 123, 412c determines whether or not to output the pixel data Dp obtained from the pixel array unit 110A, 310A based on the transmission prohibition data 127, 412g obtained from the transmission prohibition control unit 126, 412f.
  • the destination address determination unit 123, 412c compares the address obtained based on the routing table (TBL) with the transmission prohibition data 127, 412g obtained from the transmission prohibition control unit 126, 412f.
  • the transmission prohibition data 127, 412g is data in which, for example, "1" indicating transmission possible or "0" indicating transmission prohibited is associated with each address, as shown in FIG. 20(A).
  • the transmission prohibition control unit 126, 412f generates transmission prohibition data 127, 412g based on data about the operating state of the neuron in the processor 211 corresponding to the router 212, 412.
  • the transmission prohibition control unit 126, 412f stores the generated transmission prohibition data 127, 412g in a specified memory.
  • when the address obtained for the pixel data Dp corresponds to transmission prohibited, the destination address determination unit 123, 412c prohibits the output of the pixel data Dp obtained from the pixel array unit 110A, 310A.
  • when the address corresponds to transmission possible, the destination address determination unit 123, 412c outputs the pixel data Dp obtained from the pixel array unit 110A, 310A to the arbiter 124, 412d.
  • the destination address determination unit 123, 412c permits output to addresses associated with "1" in the transmission prohibition data 127, 412g and prohibits output to addresses associated with "0" in the transmission prohibition data 127, 412g.
  • the transmission prohibition control unit 126, 412f may reset the transmission prohibition data 127, 412g after a predetermined time has elapsed.
  • the destination address determination unit 123, 412c may redirect the output destination of the pixel data Dp obtained from the pixel array unit 110A, 310A to a neuron whose address is marked transmission permitted in the transmission prohibition data 127, 412g.
  • This detour-destination neuron has the function of auxiliarily representing the attention data corresponding to the neuron for which transmission is prohibited.
  • the processor 211 may output the data indicated by the neuron for which transmission is prohibited in association with the data indicated by the detour-destination neuron (attention data). This makes it possible to provide new forms of information processing.
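The destination filtering and detour behavior described in the bullets above can be sketched roughly as follows. This is a hypothetical Python illustration under stated assumptions: the function and variable names (`route_pixel_data`, `prohibition_data`, and so on) are illustrative and do not come from the patent, and the prohibition table follows the "1" = transmission permitted / "0" = transmission prohibited convention of FIG. 20(A).

```python
def route_pixel_data(pixel_data, routing_table, prohibition_data):
    """Return (destination address, data) pairs to pass to the arbiter.

    pixel_data:       maps a source address to its pixel value
    routing_table:    maps a source address to its destination address (the TBL)
    prohibition_data: maps a destination address to 1 (permitted) / 0 (prohibited)
    """
    forwarded = []
    for src, value in pixel_data.items():
        dst = routing_table[src]
        if prohibition_data.get(dst, 1) == 1:
            # destination is transmission permitted: forward as-is
            forwarded.append((dst, value))
        else:
            # detour: redirect to an address still marked transmission permitted;
            # that neuron auxiliarily represents the prohibited neuron's data
            detour = next((a for a, ok in prohibition_data.items() if ok == 1), None)
            if detour is not None:
                forwarded.append((detour, value))
            # if no permitted address exists, the data is simply not output
    return forwarded
```

For example, with destination "n1" prohibited, its pixel data is detoured to the still-permitted "n0".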
  • Fig. 21 shows a modified example of the functional blocks of the router 120A shown in Fig. 18.
  • Fig. 22 shows a modified example of the functional blocks of the router 412 shown in Fig. 19.
  • the transmission prohibition control units 126, 412f may transmit a control signal ctl3 generated based on the transmission prohibition data 127, 412g to at least one of the vertical scanning circuit 102 and the horizontal scanning circuit 105, for example, as shown in Figs. 21 and 22.
  • the vertical scanning circuit 102 may determine, based on the control signal ctl3 input from the transmission prohibition control unit 126, 412f, whether or not to select (i.e., output data from) each line in the pixel array circuit 101. The vertical scanning circuit 102 may select each line except the lines for which selection (data output) is prohibited.
  • the horizontal scanning circuit 105 may determine whether or not to output each line of pixel data Dp obtained from the pixel array circuit 101 based on the control signal ctl3 input from the transmission prohibition control units 126 and 412f.
  • the horizontal scanning circuit 105 may output each pixel data Dp except for pixel data Dp for which data output is prohibited.
  • Figs. 23(A), 23(B), 23(C), and 23(D) are schematic diagrams showing whether data output is possible based on the control signal ctl3.
  • the pixel array unit 110A, 310A may determine whether data output is possible from each light-receiving pixel P in the pixel array circuit 101 for each line based on the control signal ctl3. At this time, the pixel array unit 110A, 310A outputs pixel data Dp for each line except for lines for which selection (data output) is prohibited, for example, as shown in FIG. 23(A).
  • the pixel array unit 110A, 310A may determine whether data output is possible from the pixel array circuit 101 for each light-receiving pixel P based on the control signal ctl3. At this time, the pixel array units 110A and 310A may output each pixel data Dp except for the pixel data Dp for which data output is prohibited, as shown in, for example, Figures 23(B), 23(C), and 23(D).
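As a rough model of the per-line and per-pixel output control in Figs. 23(A) to 23(D), the control signal ctl3 can be treated as a set of prohibited row indices (vertical scanning, line-by-line) or prohibited (row, column) coordinates (horizontal scanning, pixel-by-pixel). A minimal sketch; the names `read_out`, `prohibited_rows`, and `prohibited_pixels` are assumptions for illustration only:

```python
def read_out(frame, prohibited_rows=(), prohibited_pixels=()):
    """Return (row, col, value) for every pixel whose output is not prohibited.

    frame:             2-D list of pixel values from the pixel array circuit
    prohibited_rows:   row indices whose selection (data output) is prohibited
    prohibited_pixels: (row, col) coordinates whose data output is prohibited
    """
    out = []
    for r, line in enumerate(frame):
        if r in prohibited_rows:              # vertical scanning: skip the whole line
            continue
        for c, v in enumerate(line):
            if (r, c) in prohibited_pixels:   # horizontal scanning: skip one pixel
                continue
            out.append((r, c, v))
    return out
```

Passing only `prohibited_rows` models Fig. 23(A) (line-level control); passing only `prohibited_pixels` models the pixel-level control of Figs. 23(B) to 23(D).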
  • Fig. 24 shows a modified example of the functional blocks of cell C in Fig. 6.
  • Fig. 25 shows a modified example of the functional blocks of cell C in Fig. 12.
  • the processor 211 may further include a counter 211f, a transmission prohibition control unit 211g, and transmission prohibition data 211h, for example, as shown in Figs. 24 and 25.
  • the counter 211f counts the number of spike signals input from the neuron I/O unit 211a for each neuron destination address.
  • the transmission prohibition control unit 211g writes the neuron destination address corresponding to the count number that exceeds the predetermined threshold into the transmission prohibition data 211h.
  • the transmission prohibition data 211h is data in which either "1" indicating transmission permitted or "0" indicating transmission prohibited is associated with each address.
  • the router 212, 412 determines whether or not to output pixel data Dp obtained from the router 120, 120A or pixel array unit 310A based on the transmission prohibition data 211h.
  • the router 212, 412 compares, for example, an address obtained based on a routing table (TBL) with the transmission prohibition data 211h.
  • If the address corresponds to transmission prohibited in the transmission prohibition data 211h, the router 212, 412 prohibits the output of pixel data Dp obtained from the router 120, 120A or pixel array unit 310A.
  • Otherwise, the router 212, 412 outputs the pixel data Dp obtained from the router 120, 120A or pixel array unit 310A to the processor 211.
  • the transmission prohibition control unit 211g may reset the transmission prohibition data 211h after a predetermined time has elapsed.
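The interaction of the counter 211f, the transmission prohibition control unit 211g, and the timed reset described above can be sketched as follows. This is a hypothetical model, not the patent's implementation: the class name, the threshold, and the reset period are illustrative assumptions, and time is modeled as discrete ticks.

```python
class TransmissionProhibitionControl:
    """Sketch of counter-driven transmission prohibition (counter 211f + unit 211g)."""

    def __init__(self, threshold, reset_period):
        self.threshold = threshold        # count above which an address is prohibited
        self.reset_period = reset_period  # "predetermined time" before reset
        self.counts = {}                  # spikes counted per destination address
        self.prohibition = {}             # transmission prohibition data (0 = prohibited)
        self.elapsed = 0

    def on_spike(self, address):
        # count the spike; mark the address prohibited once the threshold is exceeded
        self.counts[address] = self.counts.get(address, 0) + 1
        if self.counts[address] > self.threshold:
            self.prohibition[address] = 0

    def tick(self):
        # reset the prohibition data after the predetermined time has elapsed
        self.elapsed += 1
        if self.elapsed >= self.reset_period:
            self.prohibition.clear()
            self.counts.clear()
            self.elapsed = 0

    def is_permitted(self, address):
        # addresses absent from the table default to transmission permitted
        return self.prohibition.get(address, 1) == 1
```

An address that receives more spikes than the threshold becomes transmission prohibited, and becomes transmission permitted again once the reset period elapses.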
  • the destination address determination unit 212c, 412c may divert the pixel data Dp obtained from the router 120, 120A or the pixel array unit 310A to a neuron whose address is marked transmission permitted in the transmission prohibition data 211h.
  • This diversion-destination neuron has the function of auxiliarily representing the attention data corresponding to the neuron for which transmission is prohibited.
  • the processor 211 may output the data indicated by the neuron for which transmission is prohibited in association with the data indicated by the diversion-destination neuron (attention data). This makes it possible to provide new forms of information processing.
  • the processor 211 may change the weight value in the weight storage memory unit 211c based on the time difference value (the difference between the count values for each predetermined period) of the count values stored in the counter 211f.
  • the degree of increase in the count value can be adjusted, and therefore the frequency of detours can be adjusted.
  • congestion in data transmission from the routers 212 and 412 to the processor 211 can be reduced. Therefore, a further improvement in the processing speed can be achieved in signal processing using an SNN.
  • FIG. 26 shows an example of the operation state of a plurality of neurons in the cell C of FIG. 6, FIG. 12, FIG. 24, and FIG. 25.
  • FIG. 26 illustrates the operation state of each neuron in the membrane potential memory unit 211e.
  • some neurons (hereinafter, "main neurons") correspond to the plurality of light receiving pixels P in the pixel array units 110A and 310A.
  • the plurality of neurons other than the main neurons correspond to the detouring destination neurons of the main neurons.
  • In FIG. 26, a plurality of neurons whose operation state is transmission prohibited and a plurality of neurons whose operation state is transmission enabled are illustrated.
  • the plurality of neurons whose operation state is transmission prohibited correspond to the main neurons.
  • the plurality of neurons whose operation state is transmission enabled correspond to the detouring destination neurons.
  • the destination address determination unit 212c, 412c may route pixel data Dp obtained from the router 120, 120A or pixel array unit 310A to a plurality of detour-destination neurons when the operating state of the corresponding main neuron is transmission prohibited.
  • Figures 27(A), 27(B), and 27(C) show an example of a method for bypassing pixel data Dp obtained from router 120, 120A or pixel array unit 310A.
  • the router 212, 412 changes the destination (transfer destination) of the pixel data Dp obtained from the router 120, 120A or the pixel array unit 310A to a neuron (a detouring neuron) whose address corresponds to a transmission permitted address.
  • the multiple neurons at the destination correspond to a processing pipeline at the destination.
  • the multiple neurons at the detouring destination correspond to a processing pipeline at the detouring destination.
  • the router 212, 412 outputs, to the processor 211, transmission data DA in which the pixel data Dp obtained from the router 120, 120A or the pixel array unit 310A is associated with the destination and detour-destination addresses.
  • the neuron I/O unit 211a outputs pixel data Dp contained in the transmission data DA received from the routers 212 and 412 as a spike signal to the product-sum calculation unit 211b.
  • the neuron I/O unit 211a outputs the pixel data Dp received from the router 212 to the product-sum calculation unit 211b in association with the address of the detouring destination received from the router 212.
  • the product-sum calculation unit 211b multiplies the spike signal input from the neuron I/O unit 211a by a predetermined weight value set for each destination address, and performs a product-sum calculation to add the number of input spikes for each detouring destination address.
  • the product-sum calculation unit 211b stores the calculation results thus obtained in the membrane potential memory unit 211d via the neuron I/O unit 211a.
  • the weight value is stored in the weight storage memory unit 211c.
  • the neuron I/O unit 211a stores the value (result of the product-sum calculation) for each address of the detour destination received from the product-sum calculation unit 211b as a membrane potential in the membrane potential memory unit 211d.
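The product-sum step described above (multiply each incoming spike by the weight set for its destination address, then accumulate the result into that address's membrane potential) can be sketched in a few lines. A hypothetical illustration; the function name, the dictionary representation of the weight storage memory and membrane potential memory, and the weight values in the example are assumptions, not the patent's data structures.

```python
def accumulate_spikes(spikes, weights, membrane):
    """Accumulate weighted spikes into per-address membrane potentials.

    spikes:   list of destination addresses, one entry per input spike
    weights:  weight storage memory, keyed by destination address
    membrane: membrane potential memory, keyed by destination address (updated in place)
    """
    for addr in spikes:
        # multiply the spike (value 1) by the weight for its address and add it
        membrane[addr] = membrane.get(addr, 0.0) + weights.get(addr, 0.0)
    return membrane
```

Two spikes to an address with weight 0.5 and one spike to an address with weight 1.0 thus yield membrane potentials of 1.0 each.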
  • the routers 212, 412 generate data on the operating state of the neurons in the processor 211 corresponding to the router 212, 412 based on the data (spike signals and addresses) obtained from the LIF unit 211d in the processor 211.
  • the routers 212, 412 generate (update) transmission prohibition data 127, 412g based on the generated data on the operating state of the neurons in the processor 211.
  • the router 212, 412 determines the status (transmission prohibited or transmission possible) of the address corresponding to the transmission prohibition based on the generated (updated) transmission prohibition data 127, 412g.
  • the router 212, 412 compares the address corresponding to the transmission prohibition with the transmission prohibition data 127, 412g obtained from the transmission prohibition control unit 126, 412f. If the address corresponding to the transmission prohibition corresponds to the transmission prohibition in the transmission prohibition data 127, 412g obtained from the transmission prohibition control unit 126, 412f, the router 212, 412, for example, continues to set the status of the address corresponding to the transmission prohibition to transmission prohibited.
  • Otherwise, the router 212, 412 changes the status of the address corresponding to the transmission prohibition to transmission possible.
  • the router 212, 412 may transmit the processing result of the processing pipeline of the detouring destination to each processing pipeline of the transmission destination, for example, as shown in FIG. 27(A).
  • the router 212, 412 may transmit the processing result of the processing pipeline of the detouring destination to one processing pipeline of the transmission destination, for example, as shown in FIG. 27(B).
  • the router 212, 412 may output data on the processing result at the detouring destination as a control signal ctl, for example, as shown in FIG. 27(C).
  • pixel data Dp obtained from routers 120, 120A or pixel array unit 310A is diverted to multiple neurons at the diverting destination. This makes it possible to reduce congestion in data transmission from pixel array unit 310 to each processor 211. Therefore, it is possible to achieve a further improvement in processing speed in signal processing using an SNN.
  • Fig. 28 shows a modified example of the imaging device 1000 of Fig. 1.
  • the pixel array section 110 may be configured to include a plurality of light receiving pixels P1 arranged two-dimensionally and a plurality of light receiving pixels P2 arranged two-dimensionally, for example, as shown in Fig. 28.
  • the plurality of light receiving pixels P1 and the plurality of light receiving pixels P2 are arranged alternately in the row direction and the column direction, for example.
  • the light receiving pixel P1 may include, for example, a CMOS element or a SPAD element.
  • the first pixel array consisting of a plurality of light receiving pixels P1 detects light in the visible wavelength band incident from the outside at the plurality of light receiving pixels P1, and outputs a plurality of digital signals (pixel data Dp1) to the router 120.
  • the first pixel array outputs a plurality of digital signals (pixel data Dp1) to the router 120 at a predetermined period Ta.
  • When the router 120 acquires a plurality of pixel data Dp1 from the pixel array unit 110, it transmits transmission data DA including the acquired plurality of pixel data Dp1 to each core C, and outputs the acquired plurality of pixel data Dp1 to the encoder 510 as digital visible light image data Iout1 (for example, RGB image data).
  • the light receiving pixel P2 may include, for example, an EVS element.
  • the second pixel array consisting of a plurality of light receiving pixels P2 detects light in the visible wavelength band incident from the outside with the plurality of light receiving pixels P2, and outputs a plurality of digital signals (pixel data Dp2).
  • the second pixel array outputs a plurality of digital signals (pixel data Dp2) to the router 120 at a predetermined period Tb (< Ta).
  • When the router 120 acquires the plurality of pixel data Dp2 from the pixel array section 110, it transmits transmission data DA including the acquired plurality of pixel data Dp2 to each core C, and outputs the acquired plurality of pixel data Dp2 to the encoder 510 as digital EVS image data Iout2.
  • the pixel array unit 110 outputs visible light image data Iout1 (e.g., RGB image data) to the router 120, and then outputs one or more EVS image data Iout2 to the router 120.
  • the pixel array unit 110 outputs, for example, visible light image data Iout1 (e.g., RGB image data) and one or more EVS image data Iout2 to the router 120 at a period Ta.
  • the encoder 510 encodes the input visible light image data Iout1 (e.g., RGB image data) and outputs the resulting feature image data C1 to the transmitter 520.
  • the encoder 510 encodes the input EVS image data Iout2 and outputs the resulting feature image data C2 to the transmitter 520.
  • the information processing device 3000 includes a receiving unit 3100 capable of communicating with the transmitting unit 520, and a decoder 3200 that decodes the feature amount image data C1 and feature amount image data C2 acquired by the receiving unit 3100.
  • the decoder 3200 generates restored visible light image data Iout1' (e.g., RGB image data) by decoding the feature amount image data C1.
  • the decoder 3200 generates restored EVS image data Iout2' by decoding the feature amount image data C2.
  • the information processing device 3000 further includes a data processing unit 3300 that processes the data generated by the decoder 3200. For example, as shown in FIG. 29, the data processing unit 3300 generates complementary visible light image data Iout2'' based on the restored visible light image data Iout1' and the restored EVS image data Iout2'.
  • the complementary visible light image data Iout2'' is data that complements the multiple restored visible light image data Iout1' that are generated periodically.
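One plausible way to read the complementing step above is that an intermediate visible-light frame is approximated by applying per-pixel EVS event polarities (+1 brighter, -1 darker, 0 no event) to the most recent restored RGB frame. The patent does not specify this algorithm; the sketch below, including the function name and the unit step size, is purely an assumption for illustration.

```python
def complement_frame(rgb_frame, evs_events, step=1):
    """Approximate an intermediate visible-light frame from RGB + EVS data.

    rgb_frame:  2-D list of restored visible-light intensities (Iout1')
    evs_events: 2-D list of the same shape holding event polarities (Iout2')
    step:       assumed intensity change per event (illustrative)
    """
    return [
        [pixel + step * polarity for pixel, polarity in zip(row, ev_row)]
        for row, ev_row in zip(rgb_frame, evs_events)
    ]
```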
  • the pixel array section 110 is configured to include a plurality of light receiving pixels P1 arranged two-dimensionally, and a plurality of light receiving pixels P2 arranged two-dimensionally. This makes it possible to periodically output visible light image data Iout1 obtained from the plurality of light receiving pixels P1, and one or more EVS image data Iout2 obtained from the plurality of light receiving pixels P2, to the router 120. As a result, it is possible to reduce the amount of data transmitted from the imaging device 1000 to the information processing device 3000.
  • Fig. 30 shows a modified example of the imaging device 1000 of Fig. 1.
  • the pixel array unit 110 may be provided with an encoder 530 and a communication unit 540 instead of the encoder 510 and the transmission unit 520, as shown in Fig. 30.
  • the encoder 530 encodes the data Dout3 obtained by the core array unit 210 based on the visible light image data Iout1, and outputs the resulting feature image data C3 to the communication unit 540.
  • the encoder 530 encodes the data Dout4 obtained by the core array unit 210 based on the EVS image data Iout2, and outputs the resulting feature image data C4 to the communication unit 540.
  • the information processing device 3000 includes a receiving unit 3400 capable of communicating with the communication unit 540, and a decoder 3500 that decodes the feature amount image data C3 and feature amount image data C4 acquired by the receiving unit 3400.
  • the decoder 3500 generates restored visible light image data Iout3' (e.g., RGB image data) by decoding the feature amount image data C3.
  • the decoder 3500 generates restored EVS image data Iout4' by decoding the feature amount image data C4.
  • the information processing device 3000 further includes a data processing unit 3300 that processes the data generated by the decoder 3500.
  • the data processing unit 3300 generates complementary visible light image data Iout4'' based on, for example, the restored visible light image data Iout3' and the restored EVS image data Iout4'.
  • the complementary visible light image data Iout4'' is data that complements the multiple restored visible light image data Iout3' that are generated periodically.
  • the pixel array section 110 is configured to include a plurality of light receiving pixels P1 arranged two-dimensionally, and a plurality of light receiving pixels P2 arranged two-dimensionally. This makes it possible to periodically output visible light image data Iout3 obtained from the plurality of light receiving pixels P1, and one or more EVS image data Iout4 obtained from the plurality of light receiving pixels P2, to the router 120. As a result, the amount of data transmitted from the imaging device 1000 to the information processing device 3000 can be reduced.
  • [Variation I] Fig. 31 shows a modified example of the imaging device 1000 of Fig. 1.
  • the pixel array section 110 may further include an encoder 530 and a communication section 540, for example, as shown in FIG. 31.
  • the visible light image data Iout1 obtained from the plurality of light receiving pixels P1 and one or more EVS image data Iout2 obtained from the plurality of light receiving pixels P2 can be periodically output to the router 120.
  • the visible light image data Iout3 obtained from the plurality of light receiving pixels P1 and one or more EVS image data Iout4 obtained from the plurality of light receiving pixels P2 can be periodically output to the router 120.
  • the amount of data transmission from the imaging device 1000 to the information processing device 3000 can be reduced.
  • Fig. 32 shows a modified example of the imaging device 1000 of Fig. 7.
  • the pixel array section 110A may be configured to include a plurality of light receiving pixels P1 arranged two-dimensionally and a plurality of light receiving pixels P2 arranged two-dimensionally, for example, as shown in Fig. 32.
  • the plurality of light receiving pixels P1 and the plurality of light receiving pixels P2 are arranged alternately in the row direction and the column direction, for example.
  • the light receiving pixel P1 may include, for example, a CMOS element or a SPAD element.
  • the first pixel array consisting of a plurality of light receiving pixels P1 detects light in the visible wavelength band incident from the outside at the plurality of light receiving pixels P1, and outputs a plurality of digital signals (pixel data Dp1) to the router 120A.
  • the first pixel array outputs a plurality of digital signals (pixel data Dp1) to the router 120A at a predetermined period Ta.
  • When the router 120A acquires a plurality of pixel data Dp1 from the pixel array section 110A, it transmits transmission data DA including the acquired plurality of pixel data Dp1 to the router 210A, and outputs the acquired plurality of pixel data Dp1 to the encoder 510 as digital visible light image data Iout1 (for example, RGB image data).
  • the light receiving pixel P2 may include, for example, an EVS element.
  • the second pixel array consisting of a plurality of light receiving pixels P2 detects light in the visible wavelength band incident from the outside at the plurality of light receiving pixels P2 and outputs a plurality of digital signals (pixel data Dp2).
  • the second pixel array outputs a plurality of digital signals (pixel data Dp2) to the router 120A at a predetermined period Tb (< Ta).
  • When the router 120A acquires the plurality of pixel data Dp2 from the pixel array section 110A, it transmits transmission data DA including the acquired plurality of pixel data Dp2 to the router 210A and outputs the acquired plurality of pixel data Dp2 to the encoder 510 as digital EVS image data Iout2.
  • the pixel array unit 110A outputs visible light image data Iout1 (e.g., RGB image data) to the router 120A, and then outputs one or more EVS image data Iout2 to the router 120A.
  • the pixel array unit 110A outputs, for example, visible light image data Iout1 (e.g., RGB image data) and one or more EVS image data Iout2 to the router 120A at a period Ta.
  • the encoder 510 encodes the input visible light image data Iout1 (e.g., RGB image data) and outputs the resulting feature image data C1 to the transmitter 520.
  • the encoder 510 encodes the input EVS image data Iout2 and outputs the resulting feature image data C2 to the transmitter 520.
  • the information processing device 3000 includes a receiving unit 3100 capable of communicating with the transmitting unit 520, and a decoder 3200 that decodes the feature amount image data C1 and feature amount image data C2 acquired by the receiving unit 3100.
  • the decoder 3200 generates restored visible light image data Iout1' (e.g., RGB image data) by decoding the feature amount image data C1.
  • the decoder 3200 generates restored EVS image data Iout2' by decoding the feature amount image data C2.
  • the information processing device 3000 further includes a data processing unit 3300 that processes the data generated by the decoder 3200. For example, as shown in FIG. 29, the data processing unit 3300 generates complementary visible light image data Iout2'' based on the restored visible light image data Iout1' and the restored EVS image data Iout2'.
  • the complementary visible light image data Iout2'' is data that complements the multiple restored visible light image data Iout1' that are generated periodically.
  • the pixel array section 110A is configured to include a plurality of light receiving pixels P1 arranged two-dimensionally, and a plurality of light receiving pixels P2 arranged two-dimensionally. This makes it possible to periodically output visible light image data Iout1 obtained from the plurality of light receiving pixels P1, and one or more EVS image data Iout2 obtained from the plurality of light receiving pixels P2, to the router 120A. As a result, the amount of data transmitted from the imaging device 1000 to the information processing device 3000 can be reduced.
  • Fig. 33 shows a modified example of the imaging device 1000 of Fig. 1.
  • the pixel array unit 110A may be provided with an encoder 530 and a communication unit 540 instead of the encoder 510 and the transmission unit 520, as shown in Fig. 33.
  • the encoder 530 encodes the data Dout3 obtained by the core array unit 210A based on the visible light image data Iout1, and outputs the resulting feature image data C3 to the communication unit 540.
  • the encoder 530 encodes the data Dout4 obtained by the core array unit 210A based on the EVS image data Iout2, and outputs the resulting feature image data C4 to the communication unit 540.
  • the information processing device 3000 includes a receiving unit 3400 capable of communicating with the communication unit 540, and a decoder 3500 that decodes the feature amount image data C3 and feature amount image data C4 acquired by the receiving unit 3400.
  • the decoder 3500 generates restored visible light image data Iout3' (e.g., RGB image data) by decoding the feature amount image data C3.
  • the decoder 3500 generates restored EVS image data Iout4' by decoding the feature amount image data C4.
  • the information processing device 3000 further includes a data processing unit 3300 that processes the data generated by the decoder 3500.
  • the data processing unit 3300 generates complementary visible light image data Iout4'' based on, for example, the restored visible light image data Iout3' and the restored EVS image data Iout4'.
  • the complementary visible light image data Iout4'' is data that complements the multiple restored visible light image data Iout3' that are generated periodically.
  • the pixel array section 110A is configured to include a plurality of light receiving pixels P1 arranged two-dimensionally, and a plurality of light receiving pixels P2 arranged two-dimensionally. This makes it possible to periodically output visible light image data Iout3 obtained from the plurality of light receiving pixels P1, and one or more EVS image data Iout4 obtained from the plurality of light receiving pixels P2, to the router 120A. As a result, the amount of data transmitted from the imaging device 1000 to the information processing device 3000 can be reduced.
  • FIG. 34 shows a modified example of the imaging device 1000 of FIG. 1.
  • the pixel array section 110A may further include an encoder 530 and a communication section 540, for example, as shown in FIG. 34.
  • the visible light image data Iout1 obtained from the plurality of light receiving pixels P1 and one or more EVS image data Iout2 obtained from the plurality of light receiving pixels P2 can be periodically output to the router 120A.
  • the visible light image data Iout3 obtained from the plurality of light receiving pixels P1 and one or more EVS image data Iout4 obtained from the plurality of light receiving pixels P2 can be periodically output to the router 120A.
  • the amount of data transmission from the imaging device 1000 to the information processing device 3000 can be reduced.
  • Fig. 35 shows an example of use of the imaging device 1000 according to the above embodiment and its modified examples.
  • the imaging device 1000 described above can be used in various cases for sensing light such as visible light, infrared light, ultraviolet light, and X-rays, for example, as described below.
  • - Devices for taking images for viewing such as digital cameras and mobile devices with camera functions.
  • - Devices for traffic purposes such as in-vehicle sensors that take images of the front, rear, surroundings, and interior of a car for safe driving such as automatic stopping and for recognizing the driver's state, surveillance cameras that monitor moving vehicles and roads, and distance measuring sensors that measure distances between vehicles.
  • - Devices for home appliances such as televisions, refrigerators, and air conditioners that take images of users' gestures and operate the equipment according to those gestures.
  • - Devices for medical and healthcare purposes such as endoscopes and devices that take images of blood vessels by receiving infrared light.
  • - Devices for security purposes such as surveillance cameras for crime prevention and cameras for person authentication.
  • - Devices for beauty purposes such as skin measuring devices that take images of the skin and microscopes that take images of the scalp.
  • - Devices for sports purposes such as action cameras and wearable cameras for sports purposes.
  • - Devices for agriculture such as cameras for monitoring the condition of fields and crops.
  • the technology according to the present disclosure can be applied to various products.
  • the technology according to the present disclosure may be realized as a device mounted on any type of moving body such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
  • FIG. 36 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile object control system to which the technology disclosed herein can be applied.
  • the vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001.
  • the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an outside vehicle information detection unit 12030, an inside vehicle information detection unit 12040, and an integrated control unit 12050.
  • Also shown as functional components of the integrated control unit 12050 are a microcomputer 12051, an audio/video output unit 12052, and an in-vehicle network I/F (interface) 12053.
  • the drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle according to various programs.
  • the drive system control unit 12010 functions as a control device for a drive force generating device for generating the drive force of the vehicle, such as an internal combustion engine or a drive motor, a drive force transmission mechanism for transmitting the drive force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, and a braking device for generating a braking force for the vehicle.
  • the body system control unit 12020 controls the operation of various devices installed in the vehicle body according to various programs.
  • the body system control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, tail lamps, brake lamps, turn signals, and fog lamps.
  • radio waves or signals from various switches transmitted from a portable device that replaces a key can be input to the body system control unit 12020.
  • the body system control unit 12020 accepts the input of these radio waves or signals and controls the vehicle's door lock device, power window device, lamps, etc.
  • the outside-vehicle information detection unit 12030 detects information outside the vehicle equipped with the vehicle control system 12000.
  • the image capturing unit 12031 is connected to the outside-vehicle information detection unit 12030.
  • the outside-vehicle information detection unit 12030 causes the image capturing unit 12031 to capture images outside the vehicle and receives the captured images.
  • the outside-vehicle information detection unit 12030 may perform object detection processing or distance detection processing for people, cars, obstacles, signs, or characters on the road surface based on the received images.
  • the imaging unit 12031 is an optical sensor that receives light and outputs an electrical signal according to the amount of light received.
  • the imaging unit 12031 can output the electrical signal as an image, or as distance measurement information.
  • the light received by the imaging unit 12031 may be visible light, or may be invisible light such as infrared light.
  • the in-vehicle information detection unit 12040 detects information inside the vehicle.
  • A driver state detection unit 12041 that detects the state of the driver is connected to the in-vehicle information detection unit 12040.
  • the driver state detection unit 12041 includes, for example, a camera that captures an image of the driver, and the in-vehicle information detection unit 12040 may calculate the driver's degree of fatigue or concentration based on the detection information input from the driver state detection unit 12041, or may determine whether the driver is dozing off.
  • the microcomputer 12051 can calculate the control target values of the driving force generating device, steering mechanism, or braking device based on the information inside and outside the vehicle acquired by the outside vehicle information detection unit 12030 or the inside vehicle information detection unit 12040, and output a control command to the drive system control unit 12010.
  • the microcomputer 12051 can perform cooperative control aimed at realizing the functions of an ADAS (Advanced Driver Assistance System), including vehicle collision avoidance or impact mitigation, following driving based on the distance between vehicles, maintaining vehicle speed, vehicle collision warning, or vehicle lane departure warning.
  • the microcomputer 12051 can also control the driving force generating device, steering mechanism, braking device, etc. based on information about the surroundings of the vehicle acquired by the outside vehicle information detection unit 12030 or the inside vehicle information detection unit 12040, thereby performing cooperative control aimed at automatic driving, which allows the vehicle to travel autonomously without relying on the driver's operation.
  • the microcomputer 12051 can also output control commands to the body system control unit 12020 based on information outside the vehicle acquired by the outside-vehicle information detection unit 12030. For example, the microcomputer 12051 can control the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the outside-vehicle information detection unit 12030, and perform cooperative control aimed at preventing glare, such as switching high beams to low beams.
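The glare-prevention control described above reduces to a simple decision rule. The sketch below is an illustrative toy, not the disclosed implementation; the function name, object fields, and distance threshold are all hypothetical:

```python
def select_headlamp_mode(detected_vehicles, glare_distance_m=150.0):
    """Hedged sketch: switch to low beam when a preceding or oncoming
    vehicle reported by the outside-vehicle information detection unit
    lies within a glare-causing range (threshold is illustrative)."""
    for v in detected_vehicles:
        if v["kind"] in ("preceding", "oncoming") and v["distance_m"] < glare_distance_m:
            return "low_beam"
    return "high_beam"
```

In the actual system this decision would be issued by the microcomputer 12051 as a control command to the body system control unit 12020.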
  • the audio/image output unit 12052 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying the occupants of the vehicle or people outside the vehicle of information.
  • an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are exemplified as output devices.
  • the display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
  • FIG. 37 shows an example of the installation position of the imaging unit 12031.
  • the vehicle 12100 has imaging units 12101, 12102, 12103, 12104, and 12105 as the imaging unit 12031.
  • the imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at the front nose, side mirrors, rear bumper, back door, and the top of the windshield inside the vehicle cabin of the vehicle 12100.
  • the imaging unit 12101 provided at the front nose and the imaging unit 12105 provided at the top of the windshield inside the vehicle cabin mainly acquire images of the front of the vehicle 12100.
  • the imaging units 12102 and 12103 provided at the side mirrors mainly acquire images of the sides of the vehicle 12100.
  • the imaging unit 12104 provided at the rear bumper or back door mainly acquires images of the rear of the vehicle 12100.
  • the images of the front acquired by the imaging units 12101 and 12105 are mainly used to detect preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, etc.
  • FIG. 37 shows an example of the imaging ranges of the imaging units 12101 to 12104.
  • Imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose
  • imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively
  • imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or back door.
  • an overhead image of the vehicle 12100 viewed from above is obtained by superimposing the image data captured by the imaging units 12101 to 12104.
  • At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
  • at least one of the imaging units 12101 to 12104 may be a stereo camera consisting of multiple imaging elements, or an imaging element having pixels for detecting phase differences.
  • the microcomputer 12051 can obtain the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the change in this distance over time (relative speed with respect to the vehicle 12100) based on the distance information obtained from the imaging units 12101 to 12104, and can extract, as a preceding vehicle, the closest three-dimensional object on the path of the vehicle 12100 that is traveling in approximately the same direction as the vehicle 12100 at a predetermined speed (e.g., 0 km/h or faster). Furthermore, the microcomputer 12051 can set, in advance, the inter-vehicle distance to be maintained from the preceding vehicle, and perform automatic braking control (including follow-up stop control) and automatic acceleration control (including follow-up start control). In this way, cooperative control can be performed for the purpose of automatic driving, in which the vehicle travels autonomously without relying on the driver's operation.
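The preceding-vehicle extraction described above combines two steps: estimating relative speed from successive distance samples, and selecting the nearest on-path object moving in roughly the same direction. A minimal sketch, assuming hypothetical object fields (`on_path`, `speed_kmh`, `distance_m`) rather than any actual interface of the system:

```python
def relative_speed_mps(d_prev_m, d_curr_m, dt_s):
    """Relative speed from two distance samples; positive means the
    gap to the object is closing."""
    return (d_prev_m - d_curr_m) / dt_s

def extract_preceding_vehicle(objects, min_speed_kmh=0.0):
    """Hedged sketch: pick the closest object on the host vehicle's
    path traveling at or above a predetermined speed (e.g. 0 km/h)."""
    candidates = [o for o in objects
                  if o["on_path"] and o["speed_kmh"] >= min_speed_kmh]
    return min(candidates, key=lambda o: o["distance_m"]) if candidates else None
```

The selected object would then serve as the target for follow-up braking and acceleration control.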
  • the microcomputer 12051 classifies and extracts three-dimensional object data on three-dimensional objects, such as two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, based on the distance information obtained from the imaging units 12101 to 12104, and can use the data to automatically avoid obstacles.
  • the microcomputer 12051 classifies obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult for the driver to see.
  • the microcomputer 12051 determines the collision risk, which indicates the risk of collision with each obstacle, and when the collision risk is equal to or exceeds a set value and there is a possibility of a collision, it can provide driving assistance for collision avoidance by outputting an alarm to the driver via the audio speaker 12061 or the display unit 12062, or by forcibly decelerating or steering the vehicle to avoid a collision via the drive system control unit 12010.
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
  • the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian is present in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on the series of feature points indicating the contour of an object to determine whether or not it is a pedestrian.
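The two-step procedure above (feature extraction, then pattern matching on the contour) can be illustrated with a toy matcher. This is a hedged sketch, not the disclosed algorithm: it compares a series of contour feature points against a template by normalized correlation, with an illustrative threshold:

```python
import numpy as np

def match_contour(feature_points, template, threshold=0.8):
    """Toy pattern-matching step: score a series of contour feature
    points against a pedestrian template by normalized correlation
    (function name and threshold are illustrative assumptions)."""
    a = np.asarray(feature_points, dtype=float).ravel()
    b = np.asarray(template, dtype=float).ravel()
    if a.shape != b.shape:
        return False
    # Zero-mean, unit-variance normalization before correlation.
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    score = float(np.dot(a, b) / a.size)
    return score >= threshold
```

A real system would match against many templates (or a learned detector) per candidate contour rather than a single fixed one.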
  • the audio/image output unit 12052 controls the display unit 12062 to superimpose a rectangular contour line for emphasis on the recognized pedestrian.
  • the audio/image output unit 12052 may also control the display unit 12062 to display an icon or the like indicating a pedestrian at a desired position.
  • the technology disclosed herein can be applied to the imaging unit 12031 of the configurations described above.
  • the imaging device mounted on the vehicle can increase the processing speed of captured images.
  • the vehicle control system 12000 can quickly achieve functions such as vehicle collision avoidance or collision mitigation, following driving based on the distance between vehicles, vehicle speed maintenance driving, vehicle collision warning, and vehicle lane departure warning.
  • the present technology can be configured as follows. According to the present technology configured as described below, it is possible to increase the processing speed of captured images.
    (1) An imaging device comprising: a pixel array in which a plurality of light receiving pixels are arranged two-dimensionally; a multiprocessor in which a plurality of processors configured with SNN (Spiking Neural Network) hardware are arranged two-dimensionally; and a single-layer or multi-layer router connected to the pixel array and the multiprocessor.
    (2) The imaging device according to (1), wherein the single-layer or multi-layer router is connected to the plurality of light receiving pixels and the plurality of processors.
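The core configuration — pixels feeding two-dimensionally arranged SNN processors through a router layer — can be pictured with a toy address-mapping sketch. All names and the tiling scheme below are hypothetical illustrations, not the disclosed circuit:

```python
class SingleLayerRouter:
    """Toy sketch of a single-layer router forwarding digitized pixel
    events to SNN processors arranged in a grid, by mapping each pixel
    to the processor tile covering its region (illustrative only)."""
    def __init__(self, grid_w, grid_h, tile):
        self.grid_w, self.grid_h, self.tile = grid_w, grid_h, tile

    def route(self, pixel_x, pixel_y, value):
        # Clamp to the grid so edge pixels still map to a processor.
        px = min(pixel_x // self.tile, self.grid_w - 1)
        py = min(pixel_y // self.tile, self.grid_h - 1)
        return (px, py, value)
```

In the disclosed device the routers may also carry state information in the reverse direction (e.g. neuron or converter operating states), as the later clauses describe.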
  • The imaging device according to (2), wherein the light receiving pixel has a digital converter that digitizes a signal obtained by light detection at the light receiving pixel, and the single-layer or multi-layer router transmits the digital signals obtained by the digital converter to the processors.
  • The imaging device according to any one of (1) to (4), wherein the pixel array has a readout circuit that reads out the light receiving pixels row by row, and the single-layer or multi-layer router is connected to the readout circuit and to the processors.
  • the light receiving pixel or the readout circuit has a digital converter that digitizes a signal obtained by light detection at the light receiving pixel
  • the single-layer or multi-layer router transmits the digital signals obtained from the light-receiving pixels together with a frame start and a frame end to the processors;
  • the multiple hierarchical routers are provided; the plurality of hierarchical routers include a first router in a first hierarchy assigned to the plurality of light receiving pixels, and a plurality of second routers in a second hierarchy, each assigned to one of the processors,
  • the imaging device according to any one of (1) to (5), wherein the first router is connected to each of the second routers.
  • The imaging device according to (10), wherein the light receiving pixel has a digital converter that digitizes a signal obtained by light detection at the light receiving pixel; the first router transmits data about the operating state of the digital converter to each of the second routers; and the second router determines a destination of the digital signal obtained by the digital converter based on the data regarding the operating state of the digital converter obtained from the first router.
  • The imaging device according to (9), wherein the pixel array includes a readout circuit for each of the first groups, the readout circuit reading out the light receiving pixels row by row, and the first router is connected to the readout circuit and to each of the second routers.
  • the light receiving pixel or the readout circuit has a digital converter that digitizes a signal obtained by light detection at the light receiving pixel,
  • the first router transmits data about the operational status of the digital converter to each of the second routers;
  • the second router transmits data on the operating state of the neuron in the processor corresponding to the second router to the first router;
  • when the light receiving pixels are divided into a plurality of first groups and the processors are divided into a plurality of second groups, the multiple hierarchical routers include a plurality of first routers in a first hierarchy, each assigned to a pair of one first group and one second group, and a plurality of second routers in a second hierarchy;
  • the imaging device according to any one of (1) to (5), wherein the plurality of second routers are assigned to each of the processors.
  • Each of the first routers is connected to each of the light receiving pixels in the corresponding first group and each of the second routers in the corresponding second group, and is further connected to a plurality of the first routers adjacent to the corresponding first group;
  • the light receiving pixel has a digital converter that digitizes a signal obtained by light detection at the light receiving pixel,
  • the first router transmits data about the operational status of the digital converter to each of the second routers;
  • the pixel array includes a readout circuit for each of the first groups, the readout circuit being configured to read out the light receiving pixels on a row-by-row basis;
  • Each of the first routers is connected to the readout circuit of the corresponding first group and to each of the second routers of the corresponding second group, and is further connected to a plurality of the first routers adjacent to the corresponding first group;
  • the light receiving pixel or the readout circuit has a digital converter that digitizes a signal obtained by light detection at the light receiving pixel,
  • the first router transmits data about the operational status of the digital converter to each of the second routers;
  • the second router transmits data on the operating state of the neuron in the processor corresponding to the second router to the first router;
  • the imaging device according to any one of (15) to (19), wherein the first router determines a destination of the data obtained from the light receiving pixels based on the data from the second router.
  • the single-layer router is provided; the single-layer router includes a plurality of first routers, each assigned to one of the first groups when the plurality of light receiving pixels are divided into a plurality of first groups;
  • the imaging device according to any one of (1) to (5), wherein the plurality of first routers are assigned to each of the processors.
  • The imaging device according to (22), wherein the light receiving pixel has a digital converter that digitizes a signal obtained by light detection at the light receiving pixel, and the first router determines a destination of the digital signal obtained by the digital converter based on data on an operating state of the digital converter.
  • The imaging device according to (21), wherein the pixel array has a readout circuit that reads out the light receiving pixels row by row, and each of the first routers is connected to the readout circuit of the corresponding first group and to the corresponding processor, and is further connected to a plurality of adjacent first routers.
  • the light receiving pixel or the readout circuit has a digital converter that digitizes a signal obtained by light detection at the light receiving pixel,
  • the imaging device according to any one of (24) to (30), wherein the first router determines a destination of the digital signal obtained by the digital converter based on an operation state of the digital converter.
  • the first router generates control data based on an operation state of a neuron in the processor corresponding to the first router, and transmits the generated control data to the pixel array;
  • the imaging device according to any one of (1) to (26), wherein the first chip and the second chip are stacked with the first pad electrode and the second pad electrode overlapping each other.
  • the light receiving pixel has a digital converter that digitizes a signal obtained by light detection at the light receiving pixel; the first pad electrode is provided on a surface of the first chip opposite to a light receiving surface, and is connected to a wiring through which a digital signal obtained by the digital converter is output;
  • the pixel array has a readout circuit that reads out the light receiving pixels row by row,
  • the light receiving pixel or the readout circuit has a digital converter that digitizes a signal obtained by light detection at the light receiving pixel,
  • the first pad electrode is provided on a surface of the first chip opposite to a light receiving surface, and is connected to a wiring through which a digital signal obtained by the digital converter is output;
  • the imaging device according to (27), wherein the second pad electrode is provided on a surface of the second chip and is connected to an input end of the single-layer or multi-layer router.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Solid State Image Pick-Up Elements (AREA)

Abstract

An imaging device according to one aspect of the present invention comprises: a pixel array unit including a plurality of light receiving pixels arranged two-dimensionally; a multiprocessor unit including a plurality of processors that are arranged two-dimensionally and configured with SNN hardware; and a single layer or a plurality of layers of routers connected to the pixel array unit and the multiprocessor unit.
PCT/JP2023/040305 2022-12-23 2023-11-08 Dispositif d'imagerie WO2024135137A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022207044 2022-12-23
JP2022-207044 2022-12-23

Publications (1)

Publication Number Publication Date
WO2024135137A1 true WO2024135137A1 (fr) 2024-06-27

Family

ID=91588490

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/040305 WO2024135137A1 (fr) 2022-12-23 2023-11-08 Dispositif d'imagerie

Country Status (1)

Country Link
WO (1) WO2024135137A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021013048A (ja) * 2019-07-03 2021-02-04 公立大学法人会津大学 3次元ネットワークオンチップによるスパイキングニューラルネットワーク
WO2021210389A1 (fr) * 2020-04-14 2021-10-21 ソニーグループ株式会社 Système de reconnaissance d'objet et équipement électronique
JP2022509754A (ja) * 2018-11-01 2022-01-24 ブレインチップ,インコーポレイテッド 改良されたスパイキングニューラルネットワーク

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022509754A (ja) * 2018-11-01 2022-01-24 ブレインチップ,インコーポレイテッド 改良されたスパイキングニューラルネットワーク
JP2021013048A (ja) * 2019-07-03 2021-02-04 公立大学法人会津大学 3次元ネットワークオンチップによるスパイキングニューラルネットワーク
WO2021210389A1 (fr) * 2020-04-14 2021-10-21 ソニーグループ株式会社 Système de reconnaissance d'objet et équipement électronique

Similar Documents

Publication Publication Date Title
US11582406B2 (en) Solid-state image sensor and imaging device
US11950009B2 (en) Solid-state image sensor
US11336860B2 (en) Solid-state image capturing device, method of driving solid-state image capturing device, and electronic apparatus
US11632510B2 (en) Solid-state imaging device and electronic device
US20210218923A1 (en) Solid-state imaging device and electronic device
US11711633B2 (en) Imaging device, imaging system, and imaging method
WO2022130888A1 (fr) Dispositif de capture d'image
US20240236519A1 (en) Imaging device, electronic device, and light detecting method
US11381764B2 (en) Sensor element and electronic device
WO2018139187A1 (fr) Dispositif de capture d'images à semi-conducteurs, son procédé de commande et dispositif électronique
WO2024135137A1 (fr) Dispositif d'imagerie
WO2020105301A1 (fr) Élément d'imagerie à semi-conducteurs et dispositif d'imagerie
WO2022054742A1 (fr) Élément de capture d'image et dispositif de capture d'image
US20240177485A1 (en) Sensor device and semiconductor device
US20230308779A1 (en) Information processing device, information processing system, information processing method, and information processing program
KR20240035570A (ko) 고체 촬상 디바이스 및 고체 촬상 디바이스 작동 방법
WO2020090459A1 (fr) Dispositif d'imagerie à semi-conducteur et équipement électronique
WO2018211985A1 (fr) Élément d'imagerie, procédé de commande d'élément d'imagerie, dispositif d'imagerie, et appareil électronique
US20240078803A1 (en) Information processing apparatus, information processing method, computer program, and sensor apparatus
WO2023189279A1 (fr) Appareil de traitement de signal, appareil d'imagerie et procédé de traitement de signal
WO2024135094A1 (fr) Dispositif photodétecteur, et procédé de commande de dispositif photodétecteur
WO2023243222A1 (fr) Dispositif d'imagerie
WO2024209800A1 (fr) Élément de photodétection et dispositif électronique
WO2023074177A1 (fr) Dispositif d'imagerie
US20240089637A1 (en) Imaging apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23906508

Country of ref document: EP

Kind code of ref document: A1